Under the AI Act, the European Parliament's priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
The European AI Act classifies AI according to its risk:
1. Unacceptable-risk AI is prohibited. The following types of systems must therefore not be used:
a. Subliminal, manipulative, or deceptive AI systems.
b. Systems that exploit vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
c. Biometric categorisation systems that infer sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except for the labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorises biometric data.
d. Social scoring systems.
e. Systems assessing the risk of an individual committing criminal offences based solely on profiling or personality traits.
f. Untargeted scraping of facial images from the internet or CCTV footage to compile facial recognition databases.
g. Inferring emotions in workplaces or educational institutions.
h. ‘Real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes, except in a limited number of narrowly defined situations.
2. High-risk AI systems are regulated; the AI Act focuses mostly on these.
3. Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must make sure that end-users are aware they are interacting with AI (for example, making it clear that an AI model sits behind a chatbot or that content is a deepfake).
4. Minimal-risk AI systems are unregulated. These include most AI applications that were already available on the EU single market when the AI Act was proposed in 2021, such as AI-enabled video games and spam filters. (A simple, illustrative sketch of this risk tiering follows this list.)
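As a rough illustration of how an organisation might map its own AI use cases onto these tiers in an internal inventory, here is a minimal sketch in Python. The tier names, example use cases, and the defaulting rule are assumptions made for illustration; this is not an official classification tool or legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"           # e.g. social scoring, emotion inference at work
    HIGH = "regulated"                    # subject to the bulk of the AI Act's obligations
    LIMITED = "transparency obligations"  # e.g. chatbots, deepfakes: disclose AI involvement
    MINIMAL = "unregulated"               # e.g. spam filters, AI-enabled video games

# Illustrative mapping of internal use cases to tiers; the keys and
# assignments below are assumptions for this sketch.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "workplace_emotion_inference": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the risk tier for a use case, defaulting to HIGH so that
    unknown cases trigger review rather than slipping through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("social_scoring", "customer_support_chatbot", "new_generative_feature"):
        print(f"{case}: {classify(case).value}")
```

Defaulting unknown use cases to the high-risk tier reflects a cautious stance: anything not explicitly assessed should be reviewed rather than assumed to be minimal risk.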
This picture is clearly changing with generative AI, which raises the risk level of many AI applications and pushes more of them into the high-risk category.
Even as AI regulation advances, we believe AI systems should be overseen by people rather than by automation alone, to prevent harmful outcomes. This includes making sure we can build trustworthy AI systems that are fair by design, explainable, and clear to decision makers.
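As a hedged sketch of what "overseen by people" can mean in practice, the example below shows a simple human-in-the-loop gate: low-confidence outputs are escalated, and confident outputs are still surfaced, with their rationale, to a reviewer who keeps the final say. The threshold, names, and review mechanism are assumptions for illustration, not a prescribed implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    prediction: str
    confidence: float
    explanation: str  # short rationale surfaced to the human reviewer

# Assumed cutoff; in practice this would be set per use case and risk tier.
CONFIDENCE_THRESHOLD = 0.9

def apply_with_oversight(decision: Decision,
                         reviewer_approves: Callable[[Decision], bool]) -> str:
    """Route decisions through a person before they take effect.

    `reviewer_approves` stands in for whatever review workflow the
    organisation already has: it shows the prediction and its explanation
    to a person and returns True or False.
    """
    if decision.confidence < CONFIDENCE_THRESHOLD:
        return "escalated_to_human"
    return decision.prediction if reviewer_approves(decision) else "rejected_by_reviewer"

# Example usage with a trivial reviewer that always approves.
if __name__ == "__main__":
    d = Decision(prediction="approve_loan", confidence=0.95,
                 explanation="Income and repayment history above thresholds.")
    print(apply_with_oversight(d, reviewer_approves=lambda dec: True))
```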
Legislation alone will clearly not fast-track the adoption of responsible AI; organisations are the ones who need to share experiences and solutions to show what "good practice" looks like.
Boards need to embrace Corporate Digital Responsibility and assess the digital impact of their products and services on all stakeholders, examining societal, economic, technological, and environmental effects.
We are therefore supporting companies in their role of ensuring that the technology is not deployed in "negative use cases" that could harm society, and of building AI models that are transparent and accountable across industries.
→ Check our Trustworthy AI solution