Artificial Intelligence (AI) is transforming the way we live, work, and communicate. With the increasing adoption of AI technology, there has been growing concern about its impact on society. This transformation, however, has been neither sudden nor unexpected. The European Union (EU), for example, has for some time been developing a comprehensive AI strategy and has proposed regulations that aim to ensure the ethical and safe use of AI technology.
What is the European AI Strategy?
In April 2018, the European Commission published its Communication "Artificial Intelligence for Europe", which outlined the EU's vision and approach to AI. The European AI Strategy aims to position Europe as a leader in developing and deploying AI while addressing the potential risks and ethical concerns associated with this technology.
The European AI Strategy focuses on three key pillars:
- boosting investment in AI research and innovation,
- preparing for socioeconomic changes caused by AI, and
- ensuring an appropriate ethical and legal framework for AI.
Proposed Regulations on AI
To ensure the ethical and safe use of AI technology, the European Commission proposed a set of regulations on AI in April 2021. The proposed regulations are aimed at ensuring that AI is developed and used in a way that respects fundamental rights, protects public safety and privacy, and contributes to a sustainable and inclusive society.
The proposed regulations are divided into two main parts:
- the first part focuses on the requirements for AI systems that are considered high-risk,
- the second part covers voluntary codes of conduct and regulatory sandboxes for low-risk AI systems.
Requirements for High-Risk AI Systems
High-risk AI systems are defined as those that have a significant impact on the health and safety of individuals or that have a high potential for causing harm to society. The proposed regulations require that high-risk AI systems comply with specific requirements, such as:
- Being subject to strict transparency and traceability requirements
- Having appropriate human oversight
- Being tested and certified before deployment
- Ensuring that data used to train the AI systems is of high quality and unbiased
- Having robust cybersecurity measures in place
These requirements are intended to ensure that AI systems are developed and used in a way that respects fundamental rights and protects public safety and privacy.
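To make the transparency, traceability, and human-oversight requirements more concrete, the sketch below shows one way a provider might log every decision of a high-risk AI system and escalate low-confidence cases to a human reviewer. It is purely illustrative: the function name, the confidence threshold, and the credit-scoring scenario are assumptions made for this example, not anything prescribed by the proposed regulations.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical sketch: per-decision audit logging (traceability) plus
# a simple escalation rule (human oversight). Not an official scheme.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

CONFIDENCE_THRESHOLD = 0.8  # assumed threshold for human escalation


def log_decision(model_id: str, model_version: str,
                 inputs: dict, prediction: str, confidence: float) -> dict:
    """Record a timestamped trace of one AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "prediction": prediction,
        "confidence": confidence,
        "human_review_required": confidence < CONFIDENCE_THRESHOLD,
    }
    # In production this would go to an append-only audit store.
    logger.info(json.dumps(record))
    return record


# Example: a low-confidence credit-scoring decision is flagged
# for human review before it takes effect.
record = log_decision(
    model_id="credit-scoring",
    model_version="1.4.2",
    inputs={"income": 42000, "employment_years": 3},
    prediction="deny",
    confidence=0.61,
)
if record["human_review_required"]:
    print("Decision escalated to a human reviewer before taking effect.")
```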
Voluntary Codes of Conduct and Regulatory Sandboxes for Low-Risk AI Systems
The proposed regulations also recognize that not all AI systems pose a high risk to individuals or society. As such, the regulations provide for voluntary codes of conduct and regulatory sandboxes for low-risk AI systems.
Voluntary codes of conduct are aimed at encouraging organizations to develop and adopt ethical standards for the development and use of AI systems. Regulatory sandboxes, on the other hand, are designed to provide a safe and controlled environment for the testing and development of new AI technologies.
Impact of the Proposed Regulations
The proposed regulations on AI have been welcomed by many stakeholders as a positive step towards ensuring the ethical and safe use of AI technology. However, there are concerns that the regulations may stifle innovation and limit the development of AI technology in Europe.
There are also concerns that the proposed regulations may not go far enough in addressing the potential risks and ethical concerns associated with AI technology. Some experts argue, for example, that the regulations should be expanded to cover all AI systems, not just those considered high-risk.