Study finds misinformation in ChatGPT's cancer treatment protocols

OpenAI's chatbot ChatGPT has taken the world by storm, but a new study shows it is not yet able to replace human experts in specialized fields. Researchers at Brigham and Women's Hospital, a teaching affiliate of Harvard Medical School in the United States, recently found that the cancer treatment protocols generated by ChatGPT are rife with inaccurate information.

In the study, published in JAMA Oncology, the researchers presented ChatGPT with a range of cancer cases and found that one-third of its responses contained misinformation. Notably, ChatGPT often mixed correct and incorrect information, making the genuinely accurate parts difficult to identify.


Dr. Danielle Bitterman, a co-author of the study, said the researchers were "struck by the extent to which correct and incorrect information was intertwined, making it difficult even for professionals to identify errors." She added that "large language models, while they can provide compelling answers, are not designed to provide accurate medical advice. Error rates and erratic answers are becoming a critical safety issue that needs to be addressed urgently in the medical field."

ChatGPT took the market by storm when it launched in November 2022, attracting 100 million users in just two months. Despite this huge success, however, generative artificial intelligence (AI) models remain prone to "hallucinations," confidently presenting poorly grounded or outright incorrect information.

Important strides have recently been made in applying artificial intelligence to healthcare, mainly aimed at streamlining administrative tasks. Earlier this month, a major study found that using AI to screen for breast cancer was safe and could potentially cut radiologists' workloads in half. A Harvard computer scientist recently found that the latest version of the model, GPT-4, performed well on the U.S. medical licensing exam, even suggesting that it may outperform some physicians in clinical judgment.

Nonetheless, given the accuracy problems of generative models such as ChatGPT, they are unlikely to replace physicians any time soon. The JAMA Oncology study noted that 12.5% of ChatGPT's responses were "hallucinated," and that the chatbot was most likely to provide incorrect information when asked about localized treatments or immunotherapy for advanced disease.

OpenAI has acknowledged ChatGPT's unreliability: its terms of use explicitly warn that its models are not intended to provide medical information and should not be used "to provide diagnostic or treatment services for serious medical conditions."
