Risky Business: The EU's Multifaceted Approach to AI Governance
By Wyatt Smith ’26
The EU AI Act
The early 2020s will be remembered as the years when AI took the market by storm. It remains to be seen what the outcome of the AI revolution will be, and whether it is truly a “revolution” at all. What is clear is that reactions to the technology have varied widely around the world. In the US, a small but growing number of states have adopted sweeping laws governing artificial intelligence (AI). These laws exist in the absence of federal legislation; no serious federal proposal has materialized. Those familiar with the advent of data privacy law in the US and abroad will find this pattern familiar. As in data privacy law, one legislative body has emerged at the forefront of the push to regulate a new technology: the European Union.
The EU AI Act cleared its final legislative hurdle on May 21, 2024, and entered into force on August 1, 2024. Entry into force did not mean immediate application, however. The law’s provisions have been phasing in since August 2024, and most will have taken full effect by August 2026.
The law applies to all member states of the European Union and seeks to create a unified framework for AI governance and compliance by sorting AI implementations into separate “risk categories,” plus a distinct regime for “general-purpose AI models” such as ChatGPT, Claude, or Gemini. The law applies to AI as defined in the statute:
“‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;”
The aim of this legislation is to create a uniform market environment for AI and AI compliance, to prevent AI from being used for discriminatory purposes, and to ensure that products developed within EU borders or intended for EU markets treat the individual rights of their users as a design constraint rather than an afterthought bolted on after development.
Risk-Based AI Governance
The EU AI Act rightly does not treat artificial intelligence as a monolith; AI models vary greatly in capability and use case. Instead, the act sorts AI into four categories based on risk and sets specific guidelines and regulatory requirements for each. The categories are:
Unacceptable risk;
High risk;
Limited risk;
And minimal risk.
Categories of AI defined as carrying an unacceptable risk are designated by Chapter II, Article 5 of the act. Eight specific applications of artificial intelligence are named:
Harmful AI-based manipulation and deception;
Harmful AI-based exploitation of vulnerabilities;
Social scoring;
Individual criminal offence risk assessment or prediction;
Untargeted scraping of the internet or CCTV material to create or expand facial recognition databases;
Emotion recognition in workplaces and education institutions;
Biometric categorisation to deduce certain protected characteristics;
And real-time remote biometric identification for law enforcement purposes in publicly accessible spaces.
The first two categories are aimed at curbing the use of AI for manipulation and deception by corporate or organizational actors. Even AI-generated videos can now be nearly indistinguishable from reality to the untrained eye. The recent conflict between Israel and Iran has highlighted this; both sides employed AI, particularly AI image and video generators, to create misleading media. The prohibitions on social scoring and criminal offence risk prediction are meant to prevent citizens of the EU from facing discrimination on the basis of characteristics beyond their control or of a protected legal status such as ethnicity or religion. The last four categories likewise center on preventing unfair discrimination.
These deployments of artificial intelligence are wholly prohibited.
The next category down is the high risk category. These deployments are not fully prohibited, but are subject to stringent quality standards. High risk deployments include AI deployed for the purposes of:
Biometrics;
Running critical infrastructure;
Education & vocational training;
Employment management systems;
Providing access to public services and benefits;
Law enforcement;
Migration, asylum, and border control;
And the administration of justice and democratic processes.
Any AI model developed or deployed for these purposes must adhere to a strict set of standards that are reminiscent of quality control standards in manufacturing. On the design side, these standards include a lifecycle risk management system, data governance, technical documentation, automatic logging and record-keeping, transparency instructions for deployers, human oversight design, and cybersecurity standards.
The obligations don’t end there. After meeting those design requirements, companies must maintain a quality management system, retain documentation for no less than ten years after the system is placed on the market, undergo a conformity assessment, and, upon passing, issue an EU declaration of conformity. Deployers of such systems also have a duty to report serious incidents and failures encountered during deployment, and must ensure that no AI system operates without human oversight.
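To make the record-keeping duty concrete, below is a minimal sketch of what automatic decision logging might look like for a hypothetical high-risk system, such as an employment screening tool. The field names, the decide() stub, and the log file name are illustrative assumptions; the act specifies outcomes like traceability, not any particular API or schema.

import json
import logging
from datetime import datetime, timezone

# A minimal sketch of automatic decision logging for a high-risk AI system.
# The fields and the decide() stub are illustrative assumptions; the act
# requires traceability but does not prescribe an API or schema.

logging.basicConfig(filename="ai_decision_log.jsonl",
                    level=logging.INFO, format="%(message)s")

def decide(applicant: dict) -> str:
    """Stand-in for a high-risk model, e.g. an employment screening tool."""
    return "accept" if applicant.get("score", 0.0) >= 0.5 else "reject"

def decide_with_audit_log(applicant: dict, model_version: str, reviewer: str) -> str:
    decision = decide(applicant)
    # Record enough context that the decision can be traced and reviewed later.
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "human_reviewer": reviewer,  # supports the human-oversight duty
        "input": applicant,
        "decision": decision,
    }))
    return decision

decide_with_audit_log({"score": 0.7}, model_version="1.0.3", reviewer="hr-12")

Appending one structured record per decision is what would let a conformity assessor reconstruct, years later, what the system did and who was overseeing it.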
Article 50 of the AI Act lays out the guidelines for limited risk systems. These guidelines are the ones most likely to apply to consumer AI, such as large language models (LLMs), agentic AI, and media generation models like Sora. The rules for limited risk systems are deliberately far more permissive than those for high risk systems, highlighting the stark divide between the categories. They are also reminiscent of the General Data Protection Regulation (GDPR), Europe’s comprehensive data privacy legislation.
Most regulations for this category center on transparency. When someone interacts with an AI system, that fact must be made explicitly clear so that they do not mistake their conversation partner for a human. For example, if a company were to deploy a chatbot with a human profile picture for customer service, the bot would have to disclose that it is an AI. AI-generated media must also be tagged or marked so that it can be swiftly identified as AI-generated. Some have criticized this requirement as too lax, since watermarks can be easily removed. If emotion recognition or biometric categorisation is employed, the user must be informed (this rule also applies up the chain, to high risk systems). Deepfakes, too, must be disclosed under penalty of law.
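As a toy illustration of what machine-readable tagging can look like, the sketch below embeds an “AI generated” label in a PNG image’s metadata using the Pillow library. The key names are assumptions made for illustration; real deployments would more likely rely on a provenance standard such as C2PA, and, as the critics note, a tag like this disappears the moment the image is re-encoded without it.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Toy example: write a machine-readable "AI generated" disclosure into
# PNG metadata. The key names are illustrative assumptions; the act
# requires machine-readable marking but does not prescribe a scheme.

def tag_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", generator)
    image.save(out_path, pnginfo=metadata)

def is_tagged_ai_generated(path: str) -> bool:
    # PNG text chunks are exposed on the .text attribute of a loaded image.
    return Image.open(path).text.get("ai_generated") == "true"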
Minimal risk systems include applications of artificial intelligence that carry little to no potential for abuse, such as spam filters, video game AI, or autocorrect. No provisions of the EU AI Act apply to these systems, as there is ostensibly no need to regulate them.
There is a fifth category that exists outside the hierarchy enumerated above: general-purpose AI, or GPAI. This category was added later in the drafting of the EU AI Act, because when work on the act began in 2021, the technology was not yet on the market. GPAI includes the vast majority of LLMs. The category is further divided into models that do and do not carry systemic risk: a model is presumed to carry systemic risk when its training compute exceeds 10²⁵ FLOPs (floating-point operations). In layman’s terms, the very large models you would access via the cloud would be considered “systemic risk,” whereas smaller models that can run on a PC using software like Ollama would likely not.
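To put the threshold in perspective, a common rule of thumb from the scaling-laws literature estimates training compute as roughly 6 FLOPs per parameter per training token. The sketch below applies that heuristic to the 10²⁵ presumption; the model sizes are made-up illustrations, not figures disclosed by any provider.

# Back-of-the-envelope check against the 10^25 FLOPs presumption.
# Training compute ~ 6 * parameters * tokens is a heuristic, and the
# example model sizes below are assumptions, not real disclosures.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimate_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# An 8-billion-parameter model trained on 15 trillion tokens:
print(presumed_systemic_risk(8e9, 15e12))    # False (~7.2e23 FLOPs)
# A hypothetical 500-billion-parameter model trained on 40 trillion tokens:
print(presumed_systemic_risk(500e9, 40e12))  # True  (~1.2e26 FLOPs)

By this rough measure, only the very largest frontier training runs cross the line, which is precisely the act’s intent.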
All GPAI models must keep technical documentation and maintain written copyright policies. Systemic risk models must go a step further, reporting serious incidents to the EU and undergoing periodic model evaluations. This category was created in response to the fact that some AI models, chiefly LLMs, are “foundational,” meaning they can be put to purposes spanning every level of the EU risk hierarchy.
Enforcement
The EU AI Act creates an AI Office within the European Commission, which has exclusive power to enforce the act’s rules for general-purpose AI models and is advised by an independent panel of scientific experts. For other systems, the AI Office works in tandem with national authorities and market surveillance bodies in the individual member states to detect infractions. These bodies have robust investigatory powers at their disposal, allowing them to call up technical documentation, training data, risk assessments, and test results. Fines are hefty: up to €35 million or 7% of worldwide annual turnover for prohibited practices, and up to €15 million or 3% of turnover for violating most other requirements. As of this writing, no enforcement actions have been brought under the act.
Criticisms
Several criticisms can be leveled against the EU AI Act.
For one, enforcement of the act depends heavily on the participation of the member states and their market surveillance capabilities. If Hungary is more lax in auditing AI corporations than France, enforcement will differ between the two countries. The same goes for gaps in capability: if Greece has more sophisticated tools for catching violations than Belgium, Greece will be able to enforce the act more effectively.
Some language in the act may also be considered vague. For example, Article 6(3) of the act provides an off-ramp for systems that would otherwise be considered high risk. Evaluative language such as “significant risk” is used, leaving the boundary of “high risk” open to debate. Bad actors may exploit this ambiguity.
Evaluative language appears in other standards as well, such as those governing copyright and quality control. These grey areas are likely to be tested in litigation.
Potential Impact
The EU AI Act is engineered so that it is unlikely to impose a massive burden on AI companies. It will create some friction in the development, deployment, and ongoing compliance of AI models, but nothing insurmountable. One of the act’s foundational aims is a uniform regulatory environment: companies operating in the EU will now have a single, streamlined compliance process, which may offset some of the friction created by the new rules.
For the vast majority of EU citizens, little will change in how AI affects them day to day. The act is meant as a future-proofing measure rather than a response to an immediate crisis, and it may protect them down the line from tyrannical or unethical uses of AI in the prohibited and high-risk categories.
Another major consideration is the Brussels Effect, a term coined by Columbia Law professor Anu Bradford. The Brussels Effect describes the tendency of countries around the world, especially those economically connected to the EU, to adopt legislation that mirrors the EU’s. After the GDPR was passed, for example, Brazil, Japan, South Korea, India, and others enacted similar legislation heavily modeled on it. The EU AI Act may have a similar effect. The United States tends to be an exception to the Brussels Effect.
While the EU AI Act is not a sweeping, drastic piece of legislation, it is an important law that will serve as a model for future AI governance. Historically, policy has always lagged behind technological innovation, but the government that legislates first cultivates a potential cultural, legal, and economic advantage for itself. Time will tell whether the EU AI Act becomes a model for AI governance or a footnote in the history of technology law.
Wyatt Smith is a senior majoring in Operations and Supply Chain Management.
Sources
European Parliament & Council of the European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
Siegmann, C., & Anderljung, M. (2022). The Brussels Effect and Artificial Intelligence: How EU Regulation Will Impact the Global AI Market. Governance.ai. https://www.governance.ai/research-paper/brussels-effect-ai
Johnson, D. B. (October 3, 2025). Researchers say Israeli government likely behind AI-generated disinfo campaign in Iran. CyberScoop. https://cyberscoop.com/citizen-lab-disinformation-campaign-israel-iran-evin-prison/
Goldin, M. (March 7, 2026). State actors are behind much of the visual misinformation about the Iran war. ABC News. https://abcnews.com/US/wireStory/state-actors-visual-misinformation-iran-war-130849781