Leading AI firms pledge ‘responsible’ tech development


More than a dozen of the world’s leading artificial intelligence firms pledged at a global summit on Wednesday to develop and use their technology safely, as concern rises over the lack of safeguards for ChatGPT-style AI systems.

Fourteen companies, including South Korea’s Samsung Electronics, sprawling tech giant Naver and America’s Google and IBM, agreed on the final day of the Seoul summit to “minimize risks” as they push the cutting-edge field forward.


“We commit to continuing to advance research endeavors to promote responsible development of AI models,” they said in the Seoul AI Business Pledge.

The companies also promised to “minimize risks, and enable robust evaluations of capabilities and safety.”

The two-day summit, co-hosted by South Korea and Britain, gathered top officials from global AI companies such as OpenAI and Google DeepMind to find ways to ensure the safe use of the technology.

Their commitment builds on the consensus reached at the inaugural global AI safety summit at Bletchley Park in Britain last year.

Under their new pledge, the companies also agreed to help socially vulnerable people through AI technologies, although the pledge gave no details on how this would be achieved.

Sixteen tech firms, including ChatGPT-maker OpenAI, Google DeepMind and Anthropic, also pledged on Tuesday to make fresh safety commitments that included sharing how they assess the risks of their technology.

That includes what risks are “deemed intolerable” and what the firms will do to ensure that such thresholds are not crossed.

The stratospheric success of ChatGPT soon after its 2022 release sparked a gold rush in generative AI, with tech firms around the world pouring billions of dollars into developing their own models.

Such AI models can generate text, photos, audio and even video from simple prompts, and their proponents have heralded them as breakthroughs that will improve lives and businesses around the world.

However, critics, rights activists and governments have warned that they can be misused in a wide variety of ways, including the manipulation of voters through fake news stories or “deepfake” pictures and videos of politicians.
