Sociosploit at HouSecCon

I recently had the privilege of presenting some of our latest projects at HouSecCon. In the presentation, I cover the following:

  • The use of software robotics in social media hacking
  • LinkedIn Scraper - generates lists of target email addresses for phishing and spam campaigns (see the sketch after this list)
  • Twittersploit - a Remote Access Trojan (RAT) with command and control (C2) over Twitter
  • OkStupid and Plenty of Phish - dating bots for social engineering
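
The full tooling is covered in the talk, but as a rough illustration of the LinkedIn Scraper idea, here is a minimal Python sketch that turns scraped first/last names into candidate email addresses for a known corporate domain. The names, format patterns, and domain below are illustrative assumptions, not the actual tool's logic.

```python
# Minimal sketch: generate candidate email addresses from scraped names.
# The format patterns and sample data are assumptions for illustration,
# not output from the actual LinkedIn Scraper tool.

COMMON_FORMATS = [
    "{first}.{last}",   # jane.doe
    "{first}{last}",    # janedoe
    "{f}{last}",        # jdoe
    "{first}_{last}",   # jane_doe
]

def candidate_emails(first, last, domain):
    """Return likely email permutations for one scraped name."""
    first, last = first.lower(), last.lower()
    prefixes = {
        fmt.format(first=first, last=last, f=first[0])
        for fmt in COMMON_FORMATS
    }
    return sorted(f"{prefix}@{domain}" for prefix in prefixes)

if __name__ == "__main__":
    # Hypothetical scraped profiles and target domain.
    scraped = [("Jane", "Doe"), ("John", "Smith")]
    for first, last in scraped:
        for email in candidate_emails(first, last, "example.com"):
            print(email)
```

The sketch only shows the permutation step; in practice, a list like this would still need to be validated against the target's mail infrastructure before use.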
Check out the video here:
