EU wants to ban use of AI for surveillance

The European Commission wants to ban the use of artificial intelligence to track people and rank their behavior, with fines up to 4 percent of a company’s turnover for violations as part of draft AI rules to be announced next week.

The proposed rules also include safeguards and prior authorization for the use of AI in applications considered to be high-risk that could affect people’s health, safety, fundamental rights and freedom, according to a Commission document seen by Reuters.

The move would put the EU at the forefront of regulating a technology that has triggered concerns about its harmful social effects and use as a tool for social control by authoritarian governments, while others see it as an engine of economic growth.

The Commission’s proposal, which will need to be thrashed out with EU countries and the bloc’s lawmakers before it becomes law, warns against manipulative and indiscriminate surveillance practices which contravene human rights and dignity, democracy and freedom.

“The use of artificial intelligence for the purposes of indiscriminate surveillance of natural persons should be prohibited when applied in a generalized manner to all persons without differentiation,” the document said.

It said exceptions are allowed for public security reasons.

The EU also wants to ban social scoring amid criticism that some companies misuse the technology for hiring, giving out loans and other major decisions which can favor privileged groups.

The document says AI applications used in remote biometric identification systems, job recruitment, access to educational institutions, assessing creditworthiness and asylum and visa applications are considered high risk and that data used in the systems should be free of bias.

Such systems should be overseen by humans, with national bodies set up to assess certain high-risk systems and issue certificates valid for up to five years. Other high-risk systems would be allowed to carry out a self-assessment.

Fines could be levied for developing and selling banned AI applications, providing false information to the authorities or failing to cooperate with them. AI systems used for the operation of weapons or other military purposes are excluded from the rules.

The European Commission’s tech chief Margrethe Vestager will present the rules on April 21. The proposal, which can still be amended, was first reported by Politico.
