Are We Ready For AI?

Artificial intelligence has been around since the 1950s and has improved dramatically since then. The term "artificial intelligence" was coined in 1956, and the closely related field of machine learning emerged shortly afterward. Early programs were limited to narrow tasks like playing checkers and proving simple theorems. By the mid-to-late 1960s, AI had developed into something humans could converse with: Joseph Weizenbaum created the first "chatterbot" (later shortened to "chatbot"). He called it ELIZA, and it played the role of a mock psychotherapist.

In 2021, OpenAI released DALL-E, an image-generating model. It is one of many tools, alongside websites and apps like Reface and DeepFakes Web, that let ordinary users create manipulated images and videos known as "deepfakes." These range from placing celebrities in advertisements they never agreed to, to putting minors in sexually explicit photos. AI fascinates people because it gives a non-living thing something close to a personality. But as interested as we are in AI, as a society we have proven that we are not ready for it, and we have already used it to violate people.

At the beginning of 2024, singer-songwriter Taylor Swift became the subject of several kinds of fake content. According to a New York Times article by Tiffany Hsu and Yiwen Lu, Swift's face and voice were edited into a video advertising free cookware sets. In late January, sexually explicit AI-generated images of Swift were posted on X, formerly known as Twitter. Many other celebrities, including Oprah Winfrey, Martha Stewart, Tom Hanks, and MrBeast, have been targeted in similar incidents. Also in late January, an Australian TV network was caught editing a photo of female lawmaker Georgie Purcell to make it more revealing. A Washington Post article by Frances Vinall reported that Purcell posted on X the next day, saying she was uncomfortable with the image and that she "can't imagine this happening to a male MP." Sadly, celebrities and politicians are so accustomed to libel and misrepresentation online that it was not much of a surprise when this happened to them.

It is upsetting, though not surprising, that teenage girls have been targeted as well. The internet has never been a particularly safe place for teenage girls, and AI has not made it any safer; the online sexual exploitation of teenagers, and of people in general, has gotten much worse. In March 2024, NBC News reported that middle school students in Beverly Hills, California, had been making and distributing AI-generated sexually explicit images of their peers. School officials were made aware of the images in the last week of February and determined that they placed real students' faces on AI-generated nude bodies.

Students have also misused artificial intelligence by turning in AI-generated work. ChatGPT is the tool students most commonly use to cheat. AI-generated assignments have become so widespread that teachers have had to set rules about their use and run submitted work through AI-detection software.

We have not had access to AI for long, yet we have already found ways to misuse it. There is no federal law specifically regulating AI, though victims can still file lawsuits for invasion of privacy. We are clearly not ready for unrestricted access to artificial intelligence; without regulations set by the government, people will keep using it to invade others' privacy.

By Lucy Silberman