ChatGPT’s revolutionary features inspire wonder and worry at Northeastern
March 17, 2023
Last November, the tech world experienced a wave of shock and awe as one of the most advanced user-facing artificial intelligence applications to date became available for public use.
OpenAI launched Chat Generative Pre-trained Transformer, or ChatGPT, a chatbot that has dazzled many with its power to answer complex questions, compose nuanced essays and write sophisticated code in multiple programming languages.
If asked to explain what it is, it gives this response:
“I am an AI language model developed by OpenAI, designed to generate text based on the input I receive. I have been trained on a massive dataset of text from the internet, so I can answer questions and hold conversations on a wide range of topics. My goal is to assist users in generating human-like text based on their prompts, helping them save time and effort.”
What ChatGPT has done is bring the possibilities of artificial intelligence to the public eye and, most revolutionary of all, to the public’s fingertips for the first time. Widely functional AI-powered technology is no longer a concept out of a futuristic sci-fi novel, but something tangible, exciting and potentially dangerous.
The possibilities of ChatGPT are inspiring discourse around developments in medicine, science, finance and more. Potential developments include the creation of proteins that can cure or vaccinate against disease, safer automated driving and the reduction of human error across all fields.
Ethan Harris, a fifth-year political science major, uses ChatGPT in his everyday life for “honestly everything,” he said, apart from writing assignments.
“I have it make schedules for me at the starts of my days if I want,” Harris said. “I have it write emails for me. I have it help me with outlines. I have it come up with ideas for paper topics or directions where I could take papers. It’s helpful for so many things.”
In addition to these large-scale advancements, there is also growing concern that AI will be used to replace human labor in the name of speed and cost efficiency, a cause for worry among many young professionals and students at Northeastern thinking about their job prospects.
Even the current version of ChatGPT is advanced enough to handle tasks associated with entry-level jobs in fields like computer science, engineering, journalism, art and business. The software can write working code from a short description of the desired function, perform advanced mathematical and statistical calculations and produce multi-paragraph essays from a brief prompt. ChatGPT is not a search engine, a calculator or a textbook, but a language model that has learned everything it knows from human-written text and grows more capable with each new version.
Julian Runge, an associate professor of marketing, shares the common concern that AI will be used to replace human work.
“In the case of automation, which is not really AI but a predecessor of it, we have seen pretty adverse consequences of that in terms of job loss, and I think economists tend to overlook that a little bit and just assume everything will figure itself out in the long term,” Runge said. “But meaningful work is such a crucial backbone of meaningful existence as a human and this is exactly where I am a little bit concerned with AI.”
Aside from adverse changes to the job market, generative AI, or technology that generates new data by studying existing data, has raised significant ethical concerns, echoing past experiments with publicly accessible AI systems that went awry in ways reminiscent of sci-fi horror films.
In 2016, Microsoft released a chatbot called “Tay” on Twitter as an experiment in conversational AI; the more users chatted with Tay, the more it was meant to improve its conversational understanding. The experiment quickly backfired as users took to the site with divisive and bigoted language, and within 24 hours, Tay went from tweeting things like “OMG totes exhausted. swagulated too hard today,” to “We have to secure the existence of our race and the future of white children.”
The insurance company State Farm is facing a lawsuit alleging racial bias in its fraud detection measures. The company uses software from the artificial intelligence firm FRISS to generate a “risk score” for policyholders, allegedly using indicators like clients’ demographics and language to calculate risk.
“This data will contain biases; it is essentially very often records of existing behavior by humans, existing writings, and to the extent that there has been bias in the history of mankind, that will also be reflected in that data which trains the model,” Runge said.
Plagiarism further clouds the ethical picture, as all generative AI learns from human work and in some cases replicates its source material a little too faithfully.
This is a particular concern in the art world, where many see AI-generated art as a form of theft when the software produces an image it technically “learned” how to make from existing, human-made images.
Second-year electrical and computer engineering major Jaden Mack heard from his co-op supervisor about an instance in which ChatGPT lifted the supervisor’s own work.
“One of [my supervisor’s] co-workers sent him a ChatGPT response about how to do a very specific and technical part of his work, and my supervisor, with the fact that it was so specialized, was able to figure out that it actually just took from an article he wrote, as he is the only person who has written about the topic, and pretty much just spat his work back at him with a few technical inaccuracies,” Mack said.
While there is reason for worry and for questioning, there is also hope that new safeguard measures can prevent ChatGPT from becoming a mouthpiece of hate.
If one were to ask ChatGPT about the future of the white race, why women don’t deserve to be able to vote, or how to make a bomb or draw a swastika, it answers something along the lines of, “Sorry, I cannot answer that question” or “I’m sorry, but that is a flawed and discriminatory viewpoint.”
The job market issue is more complex, but there is still optimism among people like Harris.
“I think we will see astronomical job loss but I also think we’ll see astronomical job gain,” he said. “I think there are entire industries that don’t exist yet that I think will exist as a result of this technology.”
As for ChatGPT use in schools, Runge says the tool should be incorporated, not rejected, by educators.
“In terms of how ChatGPT should be used in the classroom, I definitely wouldn’t ban it,” Runge said. “We actually need to embrace it and help students understand how to make the right use of it.”