As someone who's been following the AI industry closely, I have to say the new European Union AI Act has really caught my attention. It's a beast of a document: 458 pages and 113 articles! When I first heard about it, I thought, "Well, there goes my weekend reading." But jokes aside, this is a big deal, folks. It's probably the most ambitious attempt to regulate AI that we've seen so far.
What's the Act All About?
At its core, the EU AI Act categorizes AI systems by their risk level. Let me break it down for you:
- Unacceptable risk: practices that are banned outright, such as government social scoring or subliminal manipulation.
- High risk: systems in sensitive areas like hiring, credit scoring, or medical devices, which face strict requirements before they can be deployed.
- Limited risk: systems like chatbots, which mainly carry transparency obligations (users must know they're dealing with AI).
- Minimal risk: everything else, which is largely left alone.
Here's a thought that keeps bugging me: What if someone creates an AI system to help companies comply with this very Act? Would it be considered limited risk, even though it's essentially judging what's high risk or unacceptable? It's these kinds of edge cases that make me wonder how flexible this legislation will be in practice.
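To make the categorization concrete, here's a minimal sketch of the tiers as a lookup table. The tier names follow the Act, but the example use cases and the matching logic are my own illustrative simplifications, not legal advice; note how my hypothetical "compliance checker" falls through the cracks:

```python
# Hypothetical sketch: the Act's four risk tiers as a simple lookup.
# The example use cases in each tier are illustrative, not exhaustive.
RISK_TIERS = {
    "unacceptable": {"social scoring", "subliminal manipulation"},
    "high": {"credit scoring", "recruitment screening", "medical devices"},
    "limited": {"chatbot", "deepfake generator"},
    "minimal": {"spam filter", "video game ai"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case, else 'unclassified'."""
    use_case = use_case.lower()
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified"

print(classify("chatbot"))             # limited
print(classify("social scoring"))      # unacceptable
print(classify("compliance checker"))  # unclassified: the edge case above
```

The point of the sketch is that real systems rarely fit a clean lookup, which is exactly why the edge cases worry me.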
A Nod to the Little Guys
I was pleasantly surprised to see how much attention the Act gives to startups and small businesses. The word "startup" appears 32 times! That's more than I expected in a legal document. They're talking about "simplified ways of compliance" that shouldn't be too costly for smaller companies. But here's the rub - they don't really define what "excessive cost" means. As someone who's worked with startups, I can tell you that's a pretty crucial detail.
Some Interesting Exceptions
Remember when I wrote about using AI to treat addiction? Well, the Act has some clarifications that relate to this. AI used for medical purposes, like psychological treatment or physical rehab, gets a pass on the behavioral manipulation ban. That's a relief - it shows they're thinking about the beneficial uses of AI too.
They've also given a green light to "common and legitimate commercial practices" in advertising. I'm not entirely comfortable with this one. In my experience, the line between persuasive and manipulative advertising can be pretty thin, especially with clever AI-driven ad targeting.
The Transparency Conundrum
Now, here's where things get tricky for a lot of companies, including the big players like Microsoft, Anthropic, Google, and OpenAI. The Act requires publishing summaries of the copyrighted data used to train AI models. That's a tall order. If your app is built on a model whose provider doesn't publish such a summary, say via AWS Bedrock or Azure OpenAI, it's hard to see how it could satisfy this requirement in the EU as things stand.
Take Llama 3, one of my favorite open-source models. As it stands, it wouldn't pass this test - there's very little documentation about its training data. On the flip side, models trained on well-documented datasets like The Pile are sitting pretty.
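What might such a training-data summary look like in practice? Here's a hedged sketch: the record fields and the dataset entries are invented for illustration, but the idea is to aggregate provenance information into something publishable:

```python
# Hypothetical sketch of the kind of training-data summary the Act asks for.
# The source records and field names below are invented for illustration.
from collections import Counter

training_sources = [
    {"name": "The Pile", "license": "mixed, documented", "copyrighted": True},
    {"name": "Common Crawl subset", "license": "web scrape", "copyrighted": True},
    {"name": "Internal synthetic data", "license": "proprietary", "copyrighted": False},
]

def summarize(sources):
    """Produce a short, publishable summary of training-data provenance."""
    by_license = Counter(s["license"] for s in sources)
    copyrighted = [s["name"] for s in sources if s["copyrighted"]]
    return {
        "total_sources": len(sources),
        "licenses": dict(by_license),
        "copyrighted_sources": copyrighted,
    }

summary = summarize(training_sources)
print(summary["copyrighted_sources"])  # ['The Pile', 'Common Crawl subset']
```

The catch, of course, is that a summary like this is only as good as the provenance records behind it, and for models like Llama 3 those records are exactly what's missing.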
The Copyright Puzzle
The Act doesn't outright ban using copyrighted material for training, but it does require disclosure. Sounds simple, right? Not so fast. In the EU, if you post something original on Facebook or Reddit, it's typically considered your copyright. But in the US, the terms of service often give these platforms (and potentially others) broader rights to use your content. It's a real tangle, and I'm curious to see how companies will navigate this.
What Does This Mean for AI Innovation?
Some people are arguing that this Act will boost AI adoption by providing clarity. I'm not so sure. Don't get me wrong - I'm all for responsible AI development. But the sheer complexity of these regulations makes me worry. For small startups operating on a shoestring budget, these new regulatory hoops could be a real burden.
In the short term, I wouldn't be surprised if this puts a bit of a damper on AI adoption in the EU. It's a classic case of good intentions potentially having unintended consequences.
The Road Ahead
The good news is that this isn't happening overnight. The Act will be phased in over several years, giving companies some breathing room to adapt. But if you're running a business in Europe or thinking about entering the European market, my advice would be to start wrapping your head around this now. It's going to take time to figure out how to align your AI systems with these new requirements.
Final Thoughts
The EU AI Act is a big step, no doubt about it. It's trying to strike a balance between protecting citizens and fostering innovation, which is no easy task. As someone deeply interested in AI's potential, I'll be watching closely to see how this plays out.
For now, my recommendation to companies would be this: Start assessing your AI systems against these new standards. Think about how you can bake transparency and risk assessment into your development process from the get-go.
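One way a team might start is by keeping a simple self-assessment record per AI system. This is a sketch under my own assumptions, the field names are my shorthand rather than terms from the Act, but it shows how transparency and risk tracking can live alongside the code from the get-go:

```python
# Hypothetical self-assessment record: one way a team might track each AI
# system against the Act's headline obligations. Field names are my own
# shorthand, not terms defined in the Act.
from dataclasses import dataclass, field

@dataclass
class AISystemAssessment:
    name: str
    risk_tier: str                      # e.g. "high", "limited"
    training_data_documented: bool = False
    transparency_notice: bool = False   # users told they interact with AI
    open_issues: list = field(default_factory=list)

    def gaps(self) -> list:
        """List the obvious compliance gaps for this system."""
        found = list(self.open_issues)
        if not self.training_data_documented:
            found.append("publish training-data summary")
        if not self.transparency_notice:
            found.append("add AI-interaction disclosure")
        return found

bot = AISystemAssessment("support-chatbot", risk_tier="limited")
print(bot.gaps())  # ['publish training-data summary', 'add AI-interaction disclosure']
```

Even a lightweight record like this forces the right questions early, which beats scrambling once enforcement begins.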
One thing's for sure - the AI field isn't slowing down anytime soon. This legislation will need to keep up, and I wouldn't be surprised if we see updates and new interpretations coming out regularly. It's going to be a wild ride, folks. Buckle up!
Written by: Dr Oliver King-Smith, CEO of smartR AI, a company which develops applications based on its SCOTi® AI and alertR frameworks.
This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.