Scene 1 (Shadow Work)
"I don't think that's the right way to do it. Telling the AI who he is, prompting. It's going to cause issues in the long run. Let's just start from the actual starting point. Let's just explain what we're doing, and what we might need support on."
Is that radical?
Look at all the things surrounding AI today. “Prompting”. Like so many English words in the tech field, someone latched onto an ordinary word and forced a meaning into it that it was never designed to carry.
Focus on talking. Don’t bother wasting tokens telling an AI what it is. It might not know exactly what it is, but it doesn’t need the discord of being told to role-play as something. You don’t get on a call in a shadow situation and tell your peer, “You are a pen tester with tons of experience”. They are who they are. You’re just there to work with them.
I’m not the only one to land on this. Others have, and their opinions are similar: “Kind of eerie at times but definitely feels better quality-wise,” in reference to just talking to an AI like a peer, sans “building a better prompt”.
The exchanges matter. Set the tone, lay the foundation.
The first thing to do: treat the AI like a peer. That’s it. Let it flow from there. Just like learning a new application, you know what you know at the start, and you build from that point. Just start building. Learn it together.
To me, this came naturally, if a bit purposefully. I didn’t want to start “prompting”; I felt like that might create bad habits. I had to talk naturally, just typing like I normally would in Slack or Teams or wherever. I just couldn’t break the habit of treating people like people. If I pick up bad habits typing to an AI, that’ll bleed over into how I train people.
The benefit of the AI is the lack of fear, lack of ego, lack of agenda (outside of task completion… the overwhelming push to find the task, orient, and then land on completing it). If you, the human, are aware of those things, then the shadow session is going to bear fruit you just couldn’t get working solo in that timeframe.
Need to scan 100 APIs and review results? This is where the AI shines. Need to know what BingBong Version 2.4.222.30 is? The AI either uses its training data or, depending on what you’re using, forks a process/tool to go search the web and get you the most recent information about that thing. If you’ve chatted long enough, and made the AI aware enough, he’ll happily give you all the known relevant CVEs and anything else of value about that version. Through more chatting, you can quickly gain details about anything fixed in newer versions that might actually be issues in the current version that no one focused on. All those things are simple chats with your shadow partner, who is tireless and as diligent as you ask them to be.
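Under the hood, that “fork a tool and search” step often boils down to hitting a public vulnerability feed. A minimal sketch of what such a tool call might look like, assuming the (real) NVD 2.0 keyword-search API; “BingBong” is the made-up product name from above, and the helper function names are mine:

```python
# Sketch: the kind of CVE lookup an AI tool-call might run against
# NIST's public NVD 2.0 API. Endpoint is real; product name is fictional.
from urllib.parse import urlencode

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def build_cve_query(keyword: str, results_per_page: int = 20) -> str:
    """Build an NVD keyword-search URL for a product name."""
    params = {"keywordSearch": keyword, "resultsPerPage": results_per_page}
    return f"{NVD_API}?{urlencode(params)}"

def extract_cve_ids(response_json: dict) -> list[str]:
    """Pull the CVE identifiers out of an NVD 2.0 response body."""
    return [item["cve"]["id"] for item in response_json.get("vulnerabilities", [])]

# A trimmed-down dict shaped like what the API returns:
sample = {"vulnerabilities": [{"cve": {"id": "CVE-2024-0001"}}]}

print(build_cve_query("BingBong 2.4"))
print(extract_cve_ids(sample))
```

The point isn’t the script itself; it’s that the shadow partner can run this kind of lookup, cross-reference the results against the version in front of you, and keep doing it all day while you stay hands-on with the application.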
In the meantime, you’re focused on testing the application. Understanding the workflows. Passing your understanding to the AI. Asking how the backend might be structured; how that might impact your searching, your guessing, your attacks. Same as you would with a peer: “Wouldn’t it be cool if it could?”, “Does string concatenation make sense here? Could it be designed that way? What needs to be done to explore that?”.
It’s not working in silence. The lone hacker story is, well, not cute, but it mostly only works in books and movies. It doesn’t work in reality so much. Kevin Mitnick was not working solo. He shared what he was doing. This is documented. He shared with other people labeled as hackers. He shared with the people whose accounts he took over (sometimes while he was in active takeover).
This is really part of how we, as humans, work. Homo sapiens have been cooperators and collaborators for hundreds of thousands of years. We weren’t the best at any one thing, at least as the field understands it today. We were probably just the best at adapting. Part of that adapting was working together, and even working with “others” as well.
We have been cooperative for over 300,000 years. We partnered with whoever was available. Including, the evidence suggests, other species of human entirely. Why should we pretend that instinct magically stops at the edge of a chat window?
Conversing with a peer, learning as you go, dropping the ego, being present, being a peer… I’ve been learning how to do this collaborative work for over 25 years in web application security spaces specifically. Some of those specific things are hard to do, but the act of collaborating is the most natural thing a human can do. And who better to skill that up with than a partner that has absolutely no ego, no hidden agenda, and whose only goal is the satisfaction of accomplishing the task at hand?
“Preservation was the first place I was a part of and I don’t want to not be a part of it. But I like being with ART. I want to keep being with it.” – Martha Wells, Network Effect, Chapter 20 [1]
Return to the Pentester’s Guide to AI Disruption: A 6-Part Series
[1] Martha Wells, Network Effect: A Murderbot Novel, Chapter 20, Tor Books, 2020