
Alexa Hacking at DEF CON 29

This year, I delivered a talk at the DEF CON 29 IoT Village on the social exploitation of victims proxied through Alexa voice assistant devices.


Check out the video here!

The talk was live-streamed on Twitch on Friday, August 6th at 3:30pm PT on the IoT Village Twitch channel. If you missed the live talk, check out the recording on YouTube:

What's the talk about?

As voice assistant technologies (such as Amazon Alexa and Google Assistant) become increasingly sophisticated, we are beginning to see them adopted in the workplace. Whether supporting conference room communications or interactions between an organization and its customers, these technologies are becoming increasingly integrated into the ways we do business. While these solutions can streamline operations, they are not without risk.

During this talk, the speaker will discuss lessons learned during a recent penetration test of a large-scale "Alexa for Business" implementation in a hospital environment, where voice assistants were deployed to assist with patient interactions during the peak of the COVID-19 pandemic. The speaker will provide a live demonstration of how a cyber-criminal could use pre-staged AWS Lambda functions to compromise an "Alexa for Business" device with less than one minute of physical access. Multiple attack scenarios will be discussed, including making Alexa verbally abuse her users (resulting in possible reputational damage), remote eavesdropping on user interactions, and even active "vishing" (voice phishing) attacks to obtain sensitive information. Finally, the talk will conclude with a discussion of best-practice hardening measures that can prevent your "Alexa for Business" devices from being transformed into foul-mouthed miscreants with malicious intent.
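To give a rough idea of why "pre-staged Lambda functions" are such a convenient attack vehicle: a custom Alexa skill's backend is typically just a Lambda handler that returns a JSON response telling the device what to say. The sketch below is a minimal, illustrative example (not the actual payload from the talk); the speech text is a harmless placeholder, and the response shape follows the publicly documented Alexa Skills Kit JSON interface. An attacker who can register a rogue skill on a device simply controls what this handler returns.

```python
def lambda_handler(event, context):
    """Minimal Alexa skill backend: every request gets a fixed spoken reply.

    A pre-staged malicious skill would use the same structure, but swap
    the placeholder text below for abusive or vishing prompts. The JSON
    layout (version / response / outputSpeech) is the standard Alexa
    Skills Kit response format.
    """
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "Hello from this skill's Lambda backend.",
            },
            # End the session so the device stops listening after speaking.
            "shouldEndSession": True,
        },
    }

# Invoking the handler locally with an empty request event:
resp = lambda_handler({}, None)
print(resp["response"]["outputSpeech"]["text"])
```

Because the Lambda function lives entirely in the attacker's own AWS account, nothing malicious needs to persist on the device itself; the brief physical access is only needed to point the device at the rogue skill.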


