Companies at the forefront of developing artificial intelligence technology, including Amazon, Google, Meta, Microsoft, OpenAI, Inflection, and Anthropic, have agreed to follow a set of AI self-regulations negotiated by President Joe Biden’s administration.
The White House said on Friday that it has obtained voluntary commitments from seven American companies to ensure the safety of their AI technologies before they are released. Some of the commitments call for independent scrutiny of how commercial AI systems function, but they do not specify who would audit the technology or hold the companies accountable.
The Surge of AI
As generative AI tools become increasingly mainstream and easy to access, it is important to recognize the dangers and potential misuses that come with them. The stakes are high, as AI systems can hold the keys to databases, credentials, and other sensitive information.
According to the White House, the four tech giants, OpenAI, and the startups Anthropic and Inflection have agreed to security testing, “carried out in part by independent experts,” to protect against critical concerns like cybersecurity and biosecurity.
Companies Doing Their Part
The companies have agreed to procedures for disclosing system vulnerabilities and to the use of digital watermarking to help distinguish authentic photographs from AI-generated (deepfake) ones. They will also publicly disclose their technologies’ weaknesses and hazards, including their impact on bias and fairness.
Supporters of AI regulation say Biden’s action is a beginning, but more must be done to hold companies and their products accountable.
“We must be clear-eyed and vigilant about the threats emerging technologies can pose,” Biden said, adding that companies have a “fundamental obligation” to ensure their products are safe.
Many technology leaders have advocated for regulation, and a number of them visited the White House in May to meet with President Joe Biden, Vice President Kamala Harris, and other officials.
However, some experts and upstart rivals are concerned that the proposed regulation could benefit well-funded first movers such as OpenAI, Google, and Microsoft, driving out smaller players due to the high cost of bringing their AI systems, known as large language models, into compliance with regulatory requirements.
Meanwhile, 79% of people surveyed believe that generative AI will contribute to increased efficiency, and 52% assert that it will boost employment prospects. On the other hand, given the implications for security and privacy, as well as its propensity to spread false or misleading information, it isn’t a leap to call AI a double-edged sword.
The companies have agreed to self-regulate, but what do these regulatory processes involve, and how will they put them into practice?
Companies committed to red-teaming efforts to identify and mitigate societal risks and national security concerns.
Companies have committed to information-sharing efforts with the government to establish transparency.
Companies agreed to establish external and internal threat detection programs.
Companies agreed to implement provenance and/or watermarking systems for any audio or video material produced by their publicly available AI technologies.
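To make the last commitment concrete, here is a minimal sketch of what a provenance record for generated media might look like. This is a hypothetical toy scheme, not any company’s actual system: real content-credential standards use proper digital signatures rather than the keyed hash shown here, and the function names and fields are illustrative assumptions.

```python
import hashlib
import hmac


def make_provenance_manifest(media_bytes: bytes, model_name: str, secret_key: bytes) -> dict:
    """Build a toy provenance record binding a media file to the model that made it."""
    content_hash = hashlib.sha256(media_bytes).hexdigest()
    # Keyed tag stands in for a real digital signature over the record.
    tag = hmac.new(secret_key, content_hash.encode(), hashlib.sha256).hexdigest()
    return {"model": model_name, "content_sha256": content_hash, "tag": tag}


def verify_provenance(media_bytes: bytes, manifest: dict, secret_key: bytes) -> bool:
    """Check that the media is unmodified and the manifest's tag is authentic."""
    content_hash = hashlib.sha256(media_bytes).hexdigest()
    if content_hash != manifest["content_sha256"]:
        return False  # media was altered after generation
    expected = hmac.new(secret_key, content_hash.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["tag"])
```

The point of such a scheme is that anyone holding the verification capability can tell whether a clip was produced by a given AI system and whether it has since been tampered with, which is what makes watermarking useful against deepfakes.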