In 2023, during our sophomore year, we sat in our Spanish 3 class testing ChatGPT’s newest text-to-voice feature. As the AI spoke, it sounded eerily human. Every hitched breath, pause, and stutter stunned us. We had never realized a machine could sound more like a person than we did.
We fear how dangerous AI may become as it progresses. The history of artificial intelligence dates back to 1936, when Alan Turing introduced the concept of the Turing machine. Now, many people have grown accustomed to using AI in their everyday lives. Voice-activated virtual assistants such as Siri and Alexa, for example, have been seamlessly integrated into daily routines, and AI is now being used to visually enhance movies.
According to the BlackBerry Research and Intelligence Team, Hollywood studios “are already using visual effects powered by AI in a myriad of creative ways, such as recreating famous historical figures or de-aging an older actor.” AI’s capabilities raise questions about the ethical boundaries of digital manipulation.
As AI improves and becomes more accessible to the public, many have grown concerned about the risk of identity theft. Numerous political figures and celebrities have had their voices cloned to fabricate false statements and spread misinformation.
According to NPR, “a deep fake video of President Volodymyr Zelensky appeared to show the Ukrainian leader calling on his soldiers to lay down their arms and surrender.”
This shows how dangerous AI can be when used unethically. Many high-profile people have suffered at the hands of those who misuse it, leading the broader public to worry about their own safety.
The misuse of AI by those in power deepens our concern about how easily manipulated information can spread, and it erodes what little trust society still places in authorities.
Former President Donald Trump has embraced AI-generated images in his presidential campaign, reposting fake photos of Taylor Swift appearing to endorse him. The images carry the caption, “Taylor wants you to vote for Donald Trump,” to which he replied, “I accept!”
As the world changes, we must educate ourselves so we don’t fall behind. Many schools across the country have already begun incorporating AI into their classrooms to teach students how to use it.
Recently, District 204 developed a curriculum to achieve this, allowing students and teachers to use ChatGPT-4 for assignments and lessons. The adjustment has been slow, and some teachers still hesitate to take advantage of it.
“I don’t think that, from an education standpoint, we really fully know or understand it yet, even as an adult for me,” social studies teacher Elizabeth Molla said.
There are concerns about whether students can apply AI to their work without misusing it. Promoting these tools without considering how easily they can be abused seems short-sighted.
“The fear is, how do we use it and use it effectively,” Molla said. “It’s like this new kind of gray line that we’re going to hopefully walk together.”
Yes, AI can help students with assignments, self-grading, and personalized study plans. But with minimal effort, students can also exploit these systems to bypass learning altogether. Ultimately, learning to use AI properly will be a team effort built on mutual trust.