Hi, this website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent.
Navigating the AI Regulatory Landscape: An Overview, Highlights, and Key Considerations for Businesses
The recent weeks have highlighted the increasing concerns over AI safety and security and showcased a collaborative effort among global entities in the EU, US, and the UK aiming to mitigate these risks. Here's a brief overview of the most recent key regulatory developments and their potential implications for businesses.
As users increasingly rely on Large Language Models (LLMs) to accomplish their daily tasks, their concerns about the potential leakage of private data by these models have surged.
[Provide the input text here]
[Provide the input text here]
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse varius enim in eros elementum tristique. Duis cursus, mi quis viverra ornare, eros dolor interdum nulla, ut commodo diam libero vitae erat. Aenean faucibus nibh et justo cursus id rutrum lorem imperdiet. Nunc ut sem vitae risus tristique posuere.
Lorem ipsum dolor sit amet, Q: I had 10 cookies. I ate 2 of them, and then I gave 5 of them to my friend. My grandma gave me another 2boxes of cookies, with 2 cookies inside each box. How many cookies do I have now? Title italic
A: At the beginning there was 10 cookies, then 2 of them were eaten, so 8 cookies were left. Then 5 cookieswere given toa friend, so 3 cookies were left. 3 cookies + 2 boxes of 2 cookies (4 cookies) = 7 cookies. Youhave 7 cookies.
English to French Translation:
Q: A bartender had 20 pints. One customer has broken one pint, another has broken 5 pints. A bartender boughtthree boxes, 4 pints in each. How many pints does bartender have now?
Lorem ipsum dolor sit amet, line first line second line third
Lorem ipsum dolor sit amet, Q: I had 10 cookies. I ate 2 of them, and then I gave 5 of them to my friend. My grandma gave me another 2boxes of cookies, with 2 cookies inside each box. How many cookies do I have now? Title italic Title italicTitle italicTitle italicTitle italicTitle italicTitle italic
A: At the beginning there was 10 cookies, then 2 of them were eaten, so 8 cookies were left. Then 5 cookieswere given toa friend, so 3 cookies were left. 3 cookies + 2 boxes of 2 cookies (4 cookies) = 7 cookies. Youhave 7 cookies.
English to French Translation:
Q: A bartender had 20 pints. One customer has broken one pint, another has broken 5 pints. A bartender boughtthree boxes, 4 pints in each. How many pints does bartender have now?
In recent weeks, AI safety and security have taken center stage in conversations about the rapid evolution and widespread adoption of Generative AI.
With AI's significant influence on industries and societies at large, and the increasing multitude of risks with potentially profound, long-term implications for businesses and individual users alike, the urgency to establish regulations ensuring safe and secure deployment is gaining momentum.
In response to these growing concerns, regulatory bodies are taking action: the EU is half negotiating and half bringing their AI Act to the finish line, the U.S. has issued a pivotal Executive Order on Safe, Secure, and Trustworthy AI, and the UK has spearheaded key conversations at the AI Safety Summit.
For businesses, these developments spell out a clear message: the time to prepare is now.
At Lakera, our engagement with the evolving AI regulatory landscape has been both proactive and influential:
Our founding team contributed to laying the foundations for the EU AI Act over the last decade. More recently, our CEO, David Haber, was invited to share his technical insights at an EU Parliament meeting in October 2023, discussing the potential implications of Article 28b for enterprises and startups.
In the US, our partnership with White House-supported initiatives—like the Generative Red Teaming Challenge at DEFCON 31—has sparked conversations regarding regulatory changes and how to adapt to them.
Engaging in dialogues with policy experts like Kai Zenner and partners like Credo AI and DEKRA has allowed us to explore the impact of AI regulations on the corporate world and advocate for responsible innovation.
In this article, we’d like to give you a brief overview of the most recent key regulatory developments and their potential implications for businesses.
1. EU AI Act, Article 28b
The EU AI Act, proposed in April 2021, is a comprehensive regulatory framework designed to govern the deployment of AI systems within the EU. Article 28b specifically addresses the need for enterprises to responsibly manage risks associated with AI foundation models. This includes ensuring that the AI they use does not compromise safety or ethical standards.
The European Parliament approved this AI Act on June 14, 2023, and the final version of the AI Act is expected to be published by the end of 2023.
On October 9, 2023, EU policymakers, AI business leaders, top foundation model providers, and researchers gathered for a roundtable discussion at the European Parliament to focus on the Governance of General-Purpose AI.
2. The US Executive Order on Safe, Secure, and Trustworthy AI
A few weeks later, on October 30th, the United States responded with its own set of directives to shape the AI landscape, with President Biden issuing the Executive Order on Safe, Secure, and Trustworthy AI.
The Executive Order provides guidelines for AI governance, research and development, and encourages collaboration between the government and the private sector to advance AI technologies that are secure and beneficial for the public.
The EU & US AI Regulatory landscape: Key considerations for businesses
The European Union's AI Act and the United States' Executive Order on AI represent two significant regulatory approaches to artificial intelligence by two of the world's leading economies.
Here’s a brief rundown of the key considerations for enterprises:
Safety and Security: Both the EU AI Act and the U.S. Executive Order place a strong emphasis on the safety and security of AI systems. For instance, Article 28b of the EU AI Act mandates enterprises to responsibly manage risks associated with AI, while the U.S. order mandates developers to share safety test results with the government.
Risk Management: The EU's approach to risk assessment and mitigation reflects in the U.S. strategy. On both sides of the Atlantic,extensive red-team testing is mandated to ensure AI systems are secure before their public release.
Transparency and Ethical Use: Both regulatory frameworks promote transparency in AI applications and ethical deployment, with the U.S. focusing on the detection of AI-generated content and the EU specifying clear responsibilities for businesses in their use of AI.
**🛡️ Discover how Lakera’s Red Teaming solutions can safeguard your AI applications with automated security assessments, as well as identifying and addressing vulnerabilities effectively.**
3. The AI Safety Summit
On November 1st and 2nd, the UK held the AI Safety Summit at Bletchley Park. The event brought together international governments, leading AI companies, civil society groups, and experts in research.
The core focus revolved around misuse risks and the potential loss of human control over both narrow and frontier AI technologies—those that possess dangerous capabilities or exhibit advanced, multifaceted performance that could match or outstrip today's leading models.
A shared understanding of the risks posed by frontier AI and the need for action
A forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
Appropriate measures which individual organizations should take to increase frontier AI safety
Areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
Showcase how ensuring the safe development of AI will enable AI to be used for good globally
The outcomes of the summit are anticipated to be instrumental in shaping international regulatory frameworks, establishing a roadmap for the secure and responsible integration of AI into societal norms.
AI Regulatory Landscape: Highlights & Best Practices
Here are our key highlights and essential best practices for preparing for AI regulatory changes, a topic we've elaborated on in an article published on Kainos's blog. The post includes insights from Lakera CEO David Haber, John Sotiropoulos, core contributor to the OWASP Top 10 for LLM, and Dr. Suzanne Iris Brink, Data Ethics Manager at Kainos.
Here’s a brief overview.
1. Increase testing and assurance: Foundation models providers must rigorously test and apply red teaming to both open-source and proprietary models, responding to the uncertain regulatory environment. Enhanced transparency and diverse development teams are critical to minimize risks.
2. Adopt actionable open standards: Developers should embrace standards like the OWASP Top 10 for LLM to secure AI integrations and address novel risks such as prompt injections. These standards aid in fortifying AI applications alongside established security protocols.
3. Accelerate standards alignment: Amidst emerging AI threats, there is a need for consensus and cooperation among the organizations to harmonize AI security measures.This will help prevent contradictions and foster effective defenses against threats like privacy inference attacks on LLMs.
4. Invest in automated defenses: New AI security tools, like Lakera Guard, are automating the protection of AI systems. This helps companies quickly identify and mitigate risks from data poisoning to toxic language outputs.
5. Integrate security with ethics: Security in AI should extend beyond traditional measures to include ethical implications, ensuring that AI systems do not perpetuate bias or discrimination. Integrating data ethics frameworks is essential for comprehensive risk management.
6. Promote secure-by-design and ethics-by-design AI delivery: Effective AI security must be woven into the very fabric of project delivery, beginning with thorough threat models and risk assessments. It's crucial to integrate ethical considerations from the start, utilizing secure-by-design practices to address safety challenges proactively.
Navigating the AI Regulatory landscape: Summary
In the evolving world of AI, regulatory landscapes are shifting as dynamically as the technologies they seek to govern. The recent weeks have not only highlighted the increasing concerns over AI safety and security but have also showcased a collaborative spirit among global entities aiming to mitigate these risks. As illustrated by pivotal movements in the EU, U.S., and the UK's AI Safety Summit, the impetus to create a framework for safe, secure, and ethical AI is stronger than ever.
Adapting to these changes requires not only a compliance mindset but also a dedication to continuous learning, ethical consideration, and international collaboration. The road ahead is one of partnership—across industries, borders, and cultures—forging a path that ensures AI technologies enhance our global society responsibly.
Lakera works with the Fortune 500 companies, startups, and organizations to mitigate compliance risks. Get in touch with us at contact@lakera.ai or sign up for free for Lakera Guard.
Learn how to protect against the most common LLM vulnerabilities
Download this guide to delve into the most common LLM security risks and ways to mitigate them.
Explore the essentials of Responsible AI, focusing on ethical and safe AI use in technology. Learn about accountability, privacy, and industry standards from companies like Microsoft and Google. This guide covers how Responsible AI is implemented in AI's lifecycle, ensuring transparency and aligning with society's values.
Learn what AI alignment is and how it can help align AI outcomes with human values and goals. Discover different types and techniques along with the challenges it faces.