AI risk
-
Google CEO calls biased content generated by Gemini AI "unacceptable"
Google CEO Sundar Pichai recently told employees in an internal memo that the inaccurate and biased image and text content generated by the company's Gemini AI model was "unacceptable." The content misrepresented historical events, for example by depicting racially diverse Nazi-era German soldiers or non-white...
-
Amazon researchers warn of data pitfalls in training large language models
Researchers at Amazon warn that the training of large language models must guard against data pitfalls, TechRadar reports. They note that a large amount of content on the web today is generated by machine translation, and that this low-quality content can undermine the training process. The researchers found that a large number of web...
-
OpenAI Gives Its Board the Power to Block the CEO from Releasing New Models, Strengthening AI Risk Management
OpenAI recently published a set of guidelines on preventing AI risks, which explicitly state that its board of directors may choose to delay the release of an AI model even if the company's leadership believes the model is safe. The initiative aims to strengthen AI risk management and ensure the safe and sustainable development of AI technology. ...
-
Study Finds ChatGPT Unsuitable for Obtaining Drug Information; Answers May Mislead Users
A recent study from Long Island University has raised concerns about using ChatGPT to obtain drug-related medical information, finding that the chatbot is not suitable for this purpose because its answers may mislead users. The researchers conducted the study using a free version of...