EU introduces new Artificial Intelligence Act – UK to be impacted?

European Parliament Approves New Rules on Artificial Intelligence

Members of the European Parliament have voted overwhelmingly to approve a new world-leading set of rules on artificial intelligence. These rules are designed to ensure that humans remain in control of the technology and that it benefits the human race.

Changes in the Rules

The rules are risk-based, meaning that the riskier the effects of an artificial intelligence (AI) system, the more scrutiny it will face. For example, a system that makes recommendations to users will be considered low-risk, while an AI-powered medical device will be classified as high-risk.

The EU anticipates that most AI applications will fall into the low-risk category, and applications are grouped by type of activity rather than by specific technology so that the laws remain relevant as the field evolves. Technologies intending to use AI for policing, for instance, will face greater scrutiny.

Companies will need to disclose clearly when AI technology has been used, and providers of higher-risk systems must give users transparent information and maintain high-quality data on their products. The Artificial Intelligence Act prohibits applications deemed "too risky", including police use of AI to identify individuals in public, with narrow exceptions for the most serious cases.

Additionally, the Act bans certain forms of predictive policing and emotion-tracking systems in schools and workplaces, while deepfakes must be clearly labeled to prevent the spread of disinformation.

Impact on the UK

The Artificial Intelligence Act is groundbreaking, and governments worldwide are examining it closely for inspiration. The UK, for example, has AI guidelines, but they are not legally binding. At the UK-hosted global AI Safety Summit, AI developers agreed to collaborate with governments to test new models before their release in order to mitigate risks.

Prime Minister Rishi Sunak also announced the establishment of the world’s first AI safety institute in Britain.

Industry Response

The tech industry has actively engaged in lobbying efforts to ensure that the new rules are favorable to them. Meta, the parent company of Facebook, Instagram, and WhatsApp, requires AI-modified images to be labeled. Google has restricted its Gemini chatbot from discussing elections in countries holding elections this year to reduce the dissemination of disinformation.

While the industry generally supports better regulation of AI, concerns have been raised: OpenAI's CEO Sam Altman hinted last year that the company might pull out of Europe if it could not comply with the AI Act, though he later clarified that there were no plans to do so.

Upcoming Implementation

The rules are scheduled to begin taking effect in May 2025.
