OpenAI recently published a series of guidelines on preventing AI risk, which explicitly state that even if the company's leadership believes an AI model is safe to release, the board of directors retains the option to delay its release. The initiative is intended to strengthen AI risk management and ensure the safety and sustainability of AI technology.
On October 27 of this year, OpenAI announced the formation of a safety team called Preparedness, whose primary mission is to minimize the risks posed by AI. The newly announced guidelines extend the Preparedness team's work and are intended to further strengthen AI risk management by making it possible for the board of directors to prevent the CEO from releasing new models.
Aleksander Madry, the Preparedness team's leader, said his team will send monthly reports to a new internal safety advisory group, Bloomberg reported. The group will then analyze the reports and make recommendations to Altman and the company's board of directors. Altman and the company's executives can decide whether or not to release a new AI system based on those reports, but the board has the power to reverse that decision.
This provision reflects how seriously OpenAI takes AI risk management. Against the backdrop of rapid development in AI technology, ensuring its safety and sustainable development has become a pressing issue. By strengthening its internal safety consultation and risk management, OpenAI ensures that adequate assessment and review take place before a new AI system is released, in order to avoid potential risks and negative impacts.
At the same time, the provision reflects OpenAI's respect for and recognition of the board's powers. In a company's governance structure, the board of directors, as the highest decision-making body, has the right to supervise and intervene in important corporate decisions. In the development of AI technology, the board needs to exercise this supervisory role fully to ensure that the company's decisions align with its long-term interests and social responsibilities.
OpenAI's stipulation that the board of directors can prevent the CEO from releasing new models is an important step in strengthening AI risk management. The provision reflects a responsible attitude toward AI technology and offers a lesson and reference for others. Going forward, we expect more technology companies to take similar measures to strengthen AI risk management and ensure the safe and sustainable development of AI technology.
This article comes from user or anonymous contributions and does not represent the position of Mass Intelligence; all content in this article (including images, videos, etc.) is copyrighted by the original author. For related issues, please refer to this site's disclaimer; for any infringement of rights, please contact the operator of this website (Contact Us) and we will handle it as stated. Link to this article: https://dzzn.com/en/2023/2202.html