A Potential Cultural Crisis? AI Content Is Hard to Identify, and Even Linguists Can't Tell the Difference

2023 has been called the year of artificial intelligence (AI), with applications such as the chatbot ChatGPT and AI-written songs submitted to the Grammys becoming ubiquitous. Recently, a study at the University of South Florida revealed that AI-generated text may no longer be distinguishable from text written by humans. The researchers invited several linguistics experts to participate in the project, but even these professionals found it difficult to recognize content written by AI: only 39% of cases were correctly distinguished.

In this study, Matthew Kessler, a scholar in the Department of World Languages at the University of South Florida, in collaboration with J. Elliott Cassar, an assistant professor of applied linguistics at the University of Memphis, invited 72 linguistics experts to review a series of research abstracts and determine which were written by humans and which were generated by AI. None of the experts correctly identified all four samples, and 13% answered all of them incorrectly.

Based on these results, the researchers concluded that most modern professors are unable to distinguish between content written by students themselves and content generated by AI. The researchers speculate that software may need to be developed in the near future to help professors recognize AI-written content.

Although the linguistics experts applied a range of rationales to judge the writing samples, such as looking for particular linguistic and stylistic features, these methods largely failed, yielding an overall correct recognition rate of only 38.9%.

Overall, chatbots like ChatGPT can indeed write short articles, in some cases better than most humans. When it comes to long-form writing, however, humans still have the edge: the study's authors note that in long texts, AIs have been shown to produce hallucinations and fabricated content, making AI-generated text easier to recognize.

The study was published in the journal Research Methods in Applied Linguistics. Matthew Kessler hopes this work will raise awareness and lead to the establishment of clear ethical guidelines for the use of AI in research and education.
