The initiative, called the “Frontier Model Forum,” aims to promote the “benevolent evolution” of the most advanced AI systems and to “reduce potential dangers,” according to a statement.
Members commit to sharing best practices with lawmakers, researchers, and civil society organizations to mitigate the risks posed by these new tools.
The rapid spread of generative AI through platforms such as ChatGPT (OpenAI), Bing (Microsoft), and Bard (Google) has raised widespread concern among regulators and the public.
The European Union (EU) is finalizing a proposed AI regulation that would impose obligations on companies in the sector, particularly regarding transparency with consumers and human oversight of the technology.
In the United States, partisan disputes in Congress have blocked any progress on such legislation. The White House is therefore urging the companies involved to guarantee the safety of their products themselves, invoking their “moral responsibility,” in the words used by Vice President Kamala Harris in May.
Last week, the Biden administration secured “commitments” from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to adhere to “three principles” in the development of AI: safety, security, and trust.
Among other things, these companies are expected to test their systems before release, guard against cyberattacks and fraud, and develop a way to label AI-generated content so that it can be clearly identified as such.
The leaders of these companies do not deny the threats, quite the contrary. In June, Sam Altman, CEO of OpenAI, and Demis Hassabis, head of DeepMind (Google), among others, called for action against the “dangers of extinction” facing humanity “associated with AI.”
During a congressional hearing, Sam Altman endorsed the increasingly popular idea of establishing an international body dedicated to the governance of artificial intelligence, as already exists in other sectors.
In the meantime, OpenAI is pursuing “general” AI, with cognitive abilities comparable to those of human beings.
In a July 6 article, the Californian company described AI “frontier models” as highly sophisticated foundation programs “that could possess threatening capabilities significant enough to pose serious risks to public safety.”
“Dangerous capabilities can emerge unexpectedly,” OpenAI further warns, and “it is difficult to truly prevent a deployed model from being used maliciously.”