Talking ChatGPT, AI, and our future robot overlords at RSAC 2023!!!
I just received the fantastic news that my presentation (on leveraging Large Language Models like ChatGPT for social engineering) was accepted for RSAC 2023!!!
I started my research into using AI systems for social engineering exploitation about a decade ago, and it has been crazy to see the evolution of this technology over the years, and how innovations in the last few years have completely changed the game. I've had the amazing opportunity to share this story with audiences at ToorCon, DEFCON (AI Village), HOU.SEC.CON, and Texas Cyber Summit. And now, I'll have the opportunity to share it at RSAC 2023! It's remarkable how much this talk evolves in just the few months between presentations. But with ChatGPT, Bing, Bard, and other emerging LLMs, things are changing SO FAST now! There is so much new and awesome material that will be added to the RSA presentation. Looking forward to seeing everyone in San Francisco.
What is the talk about?
The talk has the same title as the recent talks I've given on this topic:
Cat-phish Automation: The Emerging Use of Artificial Intelligence in Social Engineering
As a teaser, I've linked a recording of a talk I gave on the same topic in October 2022 at HOU.SEC.CON. I say it's a teaser because SO MUCH has changed in the industry within just a few months:
A pre-release version of ChatGPT was made available to the public.
By January, ChatGPT already had over 100 million active users, outpacing the early growth of platform giants like TikTok and Instagram.
Microsoft leveraged its business partnership with OpenAI to announce integration of the technology into Bing and Edge.
Google declared an internal "Code Red."
Google announced "Bard" as the successor to LaMDA (yes, the same system that AI ethicist and whistleblower Blake Lemoine declared to the world was sentient) and as a competitor to ChatGPT/Bing, with plans to integrate it into search as well.
With so many developments in just a few months, I've had to go back to the drawing board (but it has been a blast doing it). To be honest, the talks I've given on this topic previously only set the stage for the new content that I'll be presenting at RSA. And who knows what could happen between now and April 24th (when RSA starts)! Looking forward to seeing everyone there.
For more on the same topic...
Check out the latest blog post that I wrote in partnership with Set Solutions about a potential risk of ChatGPT (and other LLMs) in the workplace.