Europe Moves Forward with AI Regulation
European lawmakers today voted overwhelmingly in favor of the landmark AI regulation known as the EU AI Act. While the act does not yet have the force of law, the lopsided vote indicates it soon will in the European Union. Companies would still be free to use AI in the United States, which so far lacks consensus on whether AI represents a risk or opportunity.
A draft of the AI Act passed by a large margin today, with 499 members of European Parliament voting in favor, 28 against and 93 abstentions. A final vote could be taken later this year after negotiations by members of Parliament, the EU Commission, and the EU Council.
First proposed in April 2021, the EU AI Act would restrict how companies can use AI in their products; require AI to be implemented in a safe, legal, ethical, and transparent manner; force companies to get prior approval for certain AI use cases; and require companies to monitor their AI products.
The AI law would rank different AI uses by the risk they pose and require that companies meet safety standards before the AI could be exposed to customers. AI with minimal risk, such as spam filters or video games, could continue to be used as it has been historically and would be exempt from transparency requirements.
The screws begin to tighten with AI said to have “limited risk,” a category that includes chatbots such as OpenAI’s ChatGPT or Google’s Bard. Under the proposed law, users must be informed when they are interacting with a chatbot.
Organizations would need to conduct impact assessments and audits on so-called high-risk AI systems, a category that includes self-driving cars as well as decision-support systems in education, immigration, and employment. The EU would track high-risk AI use cases in a central database.
AI deemed to carry an “unacceptable” risk would never be allowed in the EU, even with audits and regulation. Examples of this type of forbidden AI include real-time biometric monitoring and social scoring systems. Failing to comply with the regulation could bring fines of up to 6% or 7% of a company’s revenue.
Today’s vote bolsters the notion that AI is out of control and needs to be reined in. A number of prominent AI researchers have recently called for a pause on AI development, including Geoffrey Hinton and Yoshua Bengio, who helped popularize modern neural networks and who signed a statement from the Center for AI Safety calling for AI to be treated as a global risk.
Hinton, who left his job at Google this spring so he could speak more freely about the threat of AI, compared AI to nuclear weapons. “I’m just a scientist who suddenly realized that these things are getting smarter than us,” Hinton told CNN’s Jake Tapper May 3. “…[W]e should worry seriously about how we stop these things getting control over us.”
However, not all AI researchers or computer scientists share that point of view. Yann LeCun, who heads AI research at Facebook parent Meta, and who shared the 2018 Turing Award with Hinton and Bengio for their collective work on neural networks, has been outspoken in his belief that this is not the right time to regulate AI.
LeCun said today on Twitter that he believes “premature regulation would stifle innovation,” specifically in reference to the new EU AI Act.
“At a general level, AI is intrinsically good because the effect of AI is to make people smarter,” LeCun said this week at the VivaTech conference in Paris, France. “You can think of AI as an amplifier of human intelligence. When people are smarter, better things happen. People are more productive, happier.”
“Now there’s no question that bad actors can use it for bad things,” LeCun continued. “And then it’s a question of whether there are more good actors than bad actors.”
Just as the EU’s General Data Protection Regulation (GDPR) formed the basis for many data privacy laws in other countries and American states, such as California, the proposed EU AI Act would set the path forward for AI regulation around the world, says business transformation expert Kamales Lardi.
“EU’s Act could become a global standard, with influence on how AI impacts our lives and how it could be regulated globally,” she says. “However, there are limitations in the Act…Regulation should focus on striking an intelligent balance between innovation and wrongful application of technology. The act is also inflexible and doesn’t take into account the exponential rate of AI development, which in a year or two could look very different from today.”
Ulrik Stig Hansen, co-founder and president of the London-based AI firm Encord, says now is not the right time to regulate AI.
“We’ve heard of too big to regulate, but what about too early?” he tells Datanami. “In classic EU fashion, they’re seeking to regulate a new technology that few businesses or consumers have adopted, and few people are, in the grand scheme of things, even developing at this point.”
Since we don’t yet have a firm grasp of the risks inherent in AI systems, it’s premature to write laws regulating AI, he says.
“A more sensible approach could be for relevant industry bodies to regulate AI like they would other technology,” he says. “AI as a medical device is an excellent example of that where it is subject to FDA approval or CE marking. This is in line with what we’re seeing in the UK, which has adopted a more pragmatic pro-innovation approach and passed responsibility to existing regulators in the sectors where AI is applied.”
While the US does not have an AI regulation in the works at the moment, the federal government is taking steps to guide organizations toward the ethical use of AI. In January, the National Institute of Standards and Technology (NIST) published the AI Risk Management Framework, which guides organizations through the process of mapping, measuring, managing, and governing AI systems.
The RMF has several things going for it, AI legal expert and BNH.ai co-founder Andrew Burt told Datanami earlier this year, including the potential to become a legal standard recognized by multiple parties. More importantly, it retains the flexibility to adapt to fast-changing AI technology, something the EU AI Act lacks, he said.