In January 2021, artificial intelligence company OpenAI released DALL-E, a generative AI model that creates images from text prompts. In November 2022, the same company released ChatGPT. Those programs and many others have sparked wider conversations about the ethics and future of AI.
In those conversations, however, a key point has been lost. AI is not a new concept. In 1950, mathematician Alan Turing, famous for helping break Germany's Enigma code during World War II, asked the question, “Can machines think?” From that question, he developed the Turing Test, which measures whether a computer can convince human evaluators that they are communicating with another human.
From there, the popularization of AI in science fiction, with characters such as HAL 9000 from “2001: A Space Odyssey,” brought the idea that machines can think into the cultural eye.
The growth of AI software in the early 2020s contributed to concerns about the potential harm of the technology. However, because AI has been around for decades, many have forgotten how pervasive it already is, particularly in the digital age.
The key distinction between the AI of the past and the present is that the older type predicts while the newer type creates. Humans are a pattern-driven species, which makes much of our behavior predictable. Predictive AI takes advantage of this, analyzing what people have already done to anticipate what comes next.
For example, if someone were to play a game of chess against a computer, the software would analyze the moves made by the player, along with data from countless games played before, and choose its own move based on an algorithm.
Technology used daily, such as Apple’s Siri or Google’s search autocomplete, is an example of predictive AI. It has been part of the technological consciousness for decades, so to call it something new is objectively false.
On the other hand, generative AI, which has gotten much of the media’s attention in recent years, examines published works to create completely “new” works. New is in quotes because what the program creates is not truly new, but a repurposing of works humans have already made.
An example of this type of AI is Google’s Gemini, released in December 2023. The program scans results from websites on a particular topic and places a generated answer at the top of the search page.
The issue with most generative AI is the spread of false information. Ask Google a question, and the AI overview may or may not give a correct answer, which creates reliability problems. The same goes for models such as ChatGPT, which is why using it as a search engine can yield incorrect results.
In the case of image and audio generation, the question of copyright infringement comes into play. Because the AI scans existing images to create something new, all while claiming the result is original, the argument can be made that these programs are stealing the work of their creators.
If someone were to ask an image-generating program to make a photograph of the Ohio University campus, it would draw on data from images by real photographers to create something resembling the campus, likely with inaccuracies, all without crediting those artists.
Another issue with generative AI is that once it publishes work, other programs will train on that data. If enough programs put out artificially generated images, newly generated ones will become wildly distorted from their prompts in a game of visual telephone. The same process applies to text.
While AI is nothing new, the world is in a new age of it. Concerns about the ethics of generative AI are absolutely justified, with the spread of false information and intellectual property infringement on the minds of artists, writers and anyone else trying to convey information. As for Turing’s question of whether computers are able to think: maybe. These programs may not be sentient machines like HAL 9000, but they are far more advanced than he could have ever predicted.
Ethan Herx is a sophomore studying photojournalism at Ohio University. Please note that the views and opinions of the columnists do not reflect those of The Post. Want to share your thoughts? Let Ethan know by emailing or tweeting them at eh481422@ohio.edu or @ethanherx.