Artificial Intelligence writes unreliable articles for CNET



As Artificial Intelligence technologies grow smarter, so do the possible repercussions.

Venkata Sri Saiveer Chelliboyina, Perspectives/Spotlight Reporter

News outlet CNET, short for Computer Network, has discovered mistakes in many of its stories that were written by AI.

CNET has been quietly using Artificial Intelligence (AI) to write dozens of its stories. These articles are marked with a note that says “this article was assisted by an AI engine.” However, AI-generated writing often contains incorrect information.

CNET later fixed the articles, adding correction notes that clarified which details were inaccurate.

One note, however, stated, “We have replaced phrases that were not entirely original.” Unintentional plagiarism might have been a consequence of trusting AI to produce original work.

The increasing accessibility of artificially intelligent assistants raises plenty of legal and ethical questions. ChatGPT, for example, is a free language model from OpenAI. When these programs answer incorrectly, they often do so with confident wording that masks the inaccuracy. That being said, how often ChatGPT answers accurately is impressive.

Many of the errors in CNET’s AI-written articles spread misinformation, such as inaccurately “explaining loan interest rate payments and certificates of deposit or CDs.”

The potential for jobs such as travel agent, management officer and even programmer to be replaced by AI is scary enough as is. Language models like ChatGPT are great at working through logical problems and generating poems or essays, yet these answers often feel robotic and repetitive.

As for the CNET situation, we can ask ChatGPT what it thinks.

When asked to “write a short article on why CNET found mistakes in its articles that were written by AI,” ChatGPT responded:

“CNET, a technology news organization, found mistakes in articles written by AI due to the limitations of the technology. AI is not yet able to fully understand context and nuances in language, leading to inaccuracies in the information it produces.”

“In CNET’s case, the AI-generated articles contained errors such as inaccurate stock prices and misquoted sources. This demonstrates the need for human oversight and editing when using AI for writing articles.” 

“While AI has the potential to streamline the writing process, it is not a replacement for human intelligence and judgment. Until AI technology improves, relying solely on AI to write articles is risky and can result in the dissemination of unreliable information.”

The use of AI in creating stories has the potential to automate the media cycle. In a hypothetical future where news is written solely by AI, articles could be churned out faster and more frequently. However, trained language models often fall into predictable patterns. ChatGPT, for example, formulates its answers the same way whenever it is prompted to make an argument, leaving a string of responses feeling robotic.

The creator’s voice defines good writing and art. James Cameron made Avatar a metaphor for protecting nature. F. Scott Fitzgerald wrote The Great Gatsby to comment on people in power and the American Dream. The best a machine can do is mimic the inspirations of the creators it knows.

DALL-E, an image-generating model also developed by OpenAI, has been accused of plagiarizing real artists. The data sets a model is trained on influence its behavior, often resulting in art or writing that closely resembles an actual piece made by a human. Yet the machine produces content with the single intent of responding to the user’s prompt and therefore lacks voice.

But ChatGPT and other language models can still be very useful tools. Like search engines, these programs are at their best when they supplement humans instead of replacing them.