We are pleased to share takeaways from our AI House Davos panel titled "AI Safety Unplugged: Navigating the Risks Without the Hype" during the World Economic Forum 2024.
On stage, panelists Yann LeCun, Chief AI Scientist at Meta, David Haber, CEO of Lakera, Seraphina Goldfarb-Tarrant, Head of Safety at Cohere, and Max Tegmark, Professor at MIT, delved into the challenges, benefits, risks, and future predictions of AI development and deployment. They debated different perspectives on AGI, which fears are real and which aren't, and laid out what each of them envisions for the future of AI.
Max Tegmark highlighted the risks of large-scale AI deployment, with misinformation and deep fakes chief among his concerns: "This is going to be the year of fakes. More than four billion people are going to the elections. Brace yourself for some really hardcore deep fakes."
Seraphina Goldfarb-Tarrant addressed the challenges in enterprise AI deployments, including the lack of effective evaluation methods at an extreme scale, leading to risks such as the propagation of rare events and biases. "For enterprise deployments, the biggest issue we're running into is a combination of two things: the absence of a good evaluation method and dealing with extreme scale."
David Haber expanded on this and discussed companies deploying AI technologies at scale, introducing new accessible interfaces and capabilities that propagate risks to hundreds of millions of users. He also shared his prediction for the upcoming "Internet of Agents" (IoA) era, which will likely amplify cyber risks: "We're preparing for the 'Internet of Agents,' where a network of AI agents is capable of interacting with one another to complete transactions previously executed by humans. It's the interconnectedness of AI systems that will quickly amplify many of the risks we see today."
Yann LeCun was outspoken about the status of AGI, stating, "We're still missing some very, very basic things [...] We're nowhere near human-level intelligence, despite what you might hear from the most optimistic people who tell you AGI is just around the corner [...] I'd be happy if by the end of my career we can get something as smart as a cat." Yann also emphasized the limitations of autoregressive LLMs, expressing, "My prediction is that autoregressive LLMs are intrinsically unsafe. They cannot be fine-tuned to death to be safe. It's not possible. You can always jailbreak them."
The challenge of evaluating AI was a significant discussion point: intelligence comprises a diverse set of skills and abilities, and no single test can capture its full complexity. As LeCun put it, "There is no single test that really measures intelligence [...] depending on which system you build for what skills, you're going to have certain skills and not others. And so you cannot have a single test."
In the context of AI development, panelists believe the debate over open source should focus on finding a balanced approach. As Tegmark explained, "It's not a binary debate where you open source everything or nothing." Open-source platforms are essential for creating AI systems that work in all languages and cultures, enabling decentralized development and fine-tuning to cater to diverse values and interests. The challenge lies in avoiding undue power concentration and promoting nuanced discussions about the role of open source in AI development.
Finally, in a time when technology adoption has never moved faster, the panelists emphasized the need to focus on the immediate and certain risks associated with the rapid deployment of AI technologies. As Haber explained, "Rather than having the public discourse be dominated by what may happen in the future, I would love to see a lot more of our discourse happening on the 100% certain risks." The key takeaway is the importance of clarity and understanding of AI capabilities and control for policymakers, businesses, researchers, and society at large, with a focus on AI tools that empower humans.
Interested in watching the full session? Check out the recording on YouTube.