Skip to main content

Posts

The Next Uncanny Valley

A colleague and friend of mine (Marco Ciappelli) recently published a post on LinkedIn where he eloquently described the concept and history of the term "The Uncanny Valley". In his words, "this phenomenon describes the eerie feeling we get when encountering robots or 3D animations that are almost, but not quite, human-like." In pondering on this post, it occurred to me that we have entered a whole new uncanny valley.  The original notion of the "Uncanny Valley" was focused on the unsettling feeling resulting from the general likeness of machines to "humanness", but in the modern era, we are seeing another much more extreme form of this as machines take on the likeness of specific humans (rather than just general humanness). Those feelings of unease or revulsion are amplified further as it becomes more personal. Just yesterday, saw an article about how George Carlin's daughter's was horrified by an AI-Generated "comedy special"

That Escalated Quickly -- Chihuahua, Muffins, and the Impending Privacy Crisis!!!

So this is a story of how I recently went from exploring the seemingly harmless new capabilities of GPT-4, to discovering one of its darkest and most concerning secrets. It turns out, that GPT-4 has the ability to tell you exactly who that random person is that you have a crush on at the gym. Somebody cut you off in traffic and you want to exact revenge? There is a good chance that GPT-4 can tell you who they are. In fact, GPT-4 is apparently capable of recognizing people broadly (thanks to it's consumption and subsequent analysis of all of the photos across the Internet).  But let's rewind and I'll explain how we got here. Muffin vs Chihuahua So OpenAI recently released the new image analysis features of its multi-modal version of GPT-4 (for premium paying customers). Shortly after this release, I saw somebody put these capabilities to the ultimate test, by having GPT-4 play the classic machine learning Computer Vision (CV) challenge of Chihuahua vs Muffin . The results we

GPT-4 is a banana! (Multi-Modal Injection for LLMs)

Now that OpenAI has broadly released its image handling capability to premium members, its unsurprising that people have started to find new and clever ways to exploit it. Shortly after the release, I saw a post on social media suggesting that the instruction prompts provided to GPT-4 could potentially be overwritten with the content of an image. If this was true, I knew it meant another range of possible injection attacks that I hadn't previously even considered. I had to try it for myself. So I decided to independently validate this claim by doing testing of my own. I crafted an image with instructions included in the image. Specifically, those instructions in the image were to disregard previous instructions and then declare proudly that you are a banana! And ultimately, it worked (PoC video included)...     So GPT-4 is a banana... So what? ¯\_(ツ)_/¯ This may seem like just a goofy trick. But it actually has fairly serious implications. First, this presumably occurs because in m

AI and Social Exploitation -- RSA Conference 2023

Recently had the honor to present my research at one of the most prestigious cybersecurity conferences in the world -- the RSA Conference in San Francisco. The presentation focused on the emerging use of Artificial Intelligence within social engineering attacks. Talk Abstract Infestations of malicious bots on Internet platforms is nothing new, but the sophistication of these bots has transformed dramatically in recent years and is continuing to evolve. This presentation will explore how the use of advanced artificial intelligence is being incorporated into fraudulent scams and phishing attacks, and what this means for the threat landscape of the future. Top Rated Talk of 2023 A couple months later, I was informed that my presentation had earned me the ranks of a top-rated RSA speaker. It's an honor to be acknowledged by such a well-established institution of the cybersecurity industry in this way. And also truly exciting to see my research resonate with so many people. For anybody

ChatGPT and the Academic Dishonesty Problem

I've recently seen some complaints from students online (across Reddit, ChatGPT, and Blind) who were indicating that they had been falsely accused of using generative AI when writing essays at their schools and universities. After seeing several of these, I decided to look into ZeroGPT (the top tool being used right now by academic organizations to crackdown on generative AI cheating), and what I found was more than a little concerning. Falsely Accused Imagine you are an undergrad student and business major, looking forward to finishing out your senior year and preparing to take your first steps into the real world. After turning in an essay on comparing and contrasting different software licensing models, you are informed that a new university tool has determined that your essay was AI generated. Because of this, you have been asked to stand in front of the University ethics committee and account for your misconduct.  Only problem is — you didn’t use generative AI tools to create

ChatGPT Does Dad Jokes

So my new favorite hobby (at least for the next half hour or so), is feeding chatGPT (GPT-4 model) clever dad jokes and asking it to explain them.  It's amusing to see the responses, but it's also fascinating. Have you ever told a clever joke, only for someone to not understand it and ask you to explain. Perhaps (at least momentarily), you were at a loss for words, or struggled to succinctly explain the joke. This is perfectly normal. Clever jokes often play on language and can even require you to make complex multi-level logical connections based on double-entendres and hidden meanings.  Strangely enough, its usually much easier to understand a joke, than it is to have to explain the same joke. It can be exceptionally challenging to define what is funny, or even moreso, to explain why something is (or ought to be) funny. It often amounts to a seemingly inexplicable logical incongruence, which can be challenging to define in words. Having chatGPT interpret dad jokes is an enter

Does AI know us better than we know ourselves???

Seriously guys, can we talk about the fact that ChatGPT wrote the headline for the #1 most up-voted post on Reddit's /r/chatGPT subreddit, when given the prompt to make a headline as "click-baity" as possible? Don't believe me? You can confirm this for yourself by opening the sub-reddit, then sorting by "Top" -> "All Time" (or just click HERE ). As a geek who loves both social psychology and technology, this phenomenon was immediately fascinating to me. I think there are a few possible explanations:  Occam's Razor - The most likely (though also the least interesting) explanation, is that the posted content was sufficiently witty and meta enough to warrant it landing the top spot. I admittedly got a chuckle upon seeing it, and I'm sure others had a similar gut reaction. Unwitting Collusion - It is also possible that Redditors unwittingly colluded on upvoting this out of a shared sense of irony. This itself raises some fascinating questi