The UK and US have moved to encourage the development of safer artificial intelligence technology. The UK's competition watchdog has launched a review of the sector, while the White House has advised technology companies that they have a fundamental responsibility to develop safe products.
Regulators are under growing pressure to act over the rise of AI-powered language generators such as ChatGPT, amid concerns about the potential spread of misinformation, higher levels of fraud, and the impact on the jobs market. Nearly 30,000 people, including Elon Musk, have signed a letter calling for a pause in the development of the most powerful AI systems.
On Thursday, the UK Competition and Markets Authority (CMA) said it would review the underlying systems – or foundation models – behind AI tools. The initial review, described by one legal expert as an “early warning” to the sector, will publish its findings in September.
The US government announced measures to address the risks of AI development on the same day that Vice-President Kamala Harris met the chief executives at the forefront of the industry's rapid advances. The White House said companies developing the technology have a fundamental responsibility to make sure their products are safe before they are deployed or made public.
The meeting capped a week in which a number of scientists and business leaders warned about the speed at which the technology could disrupt established industries. Geoffrey Hinton, considered the “godfather of AI,” quit Google in order to speak more freely about the technology's dangers, while the UK government's outgoing chief scientific adviser, Sir Patrick Vallance, urged ministers to get ahead of the profound social and economic changes that AI could trigger, saying the impact on jobs could be as significant as that of the Industrial Revolution.
Sarah Cardell, the CMA's chief executive, said AI had the potential to transform the way businesses compete, but that consumers must be protected.
She said: “AI has burst into the public consciousness over the past few months, but it has been on our radar for some time. It is crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers, while people remain protected from issues such as false or misleading information.”
There have been concerns that ChatGPT and Google's rival service, Bard, can produce inaccurate information in response to user queries, as well as reports of scams that use AI-generated voices. The anti-misinformation organization NewsGuard said this week that chatbots posing as journalists were running almost 50 AI-generated “content farms”. Last month, a song featuring fake AI-generated vocals purporting to be Drake and The Weeknd was pulled from streaming services.
The CMA review will examine how the markets for foundation models could evolve, identify the opportunities and risks for consumers and competition, and formulate “guiding principles” to support competition and protect consumers.
Leading players in AI include Microsoft, ChatGPT developer OpenAI – in which Microsoft is an investor – and Alphabet, the Google parent that owns the UK-based DeepMind. Prominent AI startups include Anthropic and Stability AI, the UK company behind Stable Diffusion.
According to Alex Haffner, a competition partner at the UK law firm Fladgate, the CMA's focus and the current direction of regulatory travel suggest its announcement should be read as an early warning to the sector that AI programs cannot continue to develop at pace without proper scrutiny.
At the White House, Harris met the chief executives of OpenAI, Alphabet, and Microsoft to discuss the dangers of unregulated AI development. In a statement issued after the meeting, Harris said she had told the executives they have an ethical and legal responsibility to ensure the safety and security of their products.
The administration also announced $140 million (£111 million) of funding to establish seven new national AI research institutes, which aim to develop ethical, trustworthy, and responsible AI technologies that serve the public good. AI development has so far been dominated by the private sector, with the tech industry producing 32 significant machine-learning models last year, compared with just three from academia.
Leading AI developers have also agreed to have their systems publicly evaluated at this year's Defcon 31 cybersecurity conference. The companies that have agreed to take part include OpenAI, Google, Microsoft and Stability AI.
The White House said this independent exercise would provide critical information to researchers and the public about the impact of these models.
Robert Weissman, president of the consumer rights nonprofit Public Citizen, welcomed the White House announcement as a “helpful step” but said more aggressive action is needed. Weissman recommended a moratorium on the release of new generative AI technologies such as ChatGPT and Stable Diffusion.
“At this point, big tech companies need to be saved from themselves,” he said. “The companies and their top AI developers are well aware of the risks posed by generative AI. But they are in a competitive arms race, and none of them wants to slow down.”
Separately, the EU was told on Thursday that it must protect grassroots AI research or risk handing control of the technology's development to US firms.
In an open letter to the European Parliament, the German research group Laion (Large-scale AI Open Network) warned against one-size-fits-all rules that risked stifling research and development.
“Rules that require researchers or developers to monitor or control downstream use could make it impossible to release open-source AI in Europe,” the letter said, which would “entrench large firms” and “hamper efforts to improve transparency, reduce competition, limit academic freedom, and drive investment in AI overseas”.
It added: “Europe cannot afford to lose AI sovereignty. Eliminating open-source research and development will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure.”
The largest AI efforts, by companies such as OpenAI and Google, are heavily controlled by their creators. It is impossible to download the model behind ChatGPT, for instance, and the paid-for access that OpenAI provides to customers comes with a number of legal and technical restrictions on how it can be used. By contrast, open-source efforts involve creating a model and releasing it for anyone to use, improve, or adapt as they see fit.
“We’re working on open-source AI because we think that kind of AI will be safer, more accessible and more democratic,” said Christoph Schuhmann, organizational lead at Laion.