How AI turned me into Mr. Robot

Spearfish a hiring manager, jailbreak a chatbot, and find $297,000 reasons to scroll to the end.

Welcome to this week’s edition of The DesAI Digest. We’ll cover:

  • 🛠️ Career Strategy = The Spearfish Method for Networking

  • 🤖 AI Tactic = How AI turned me into Mr. Robot 💻

  • 🧠 Curiosity Corner = How to get AI to follow the rules

  • 💼 Job Board = 3 lucrative, remote jobs

🛠️ Career Strategy

The Spearfish Method

Here’s an excerpt from The Invisible Advantage, edited for length. Send feedback!

Many jobs get 1,000 applications in the first week after they're posted. If just 10% of those applicants reached out to the hiring manager asking for 15 minutes, that's 100 candidates, and the hiring manager would spend 1,500 minutes (25 hours!) talking to them. No sane manager will spend time like that; they're paid to solve problems tied to their actual job responsibilities, not to talk to you.
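If you like seeing the math spelled out, here's the same back-of-envelope calculation as a quick sketch (using the illustrative numbers from the excerpt):

```python
# Why hiring managers can't take every "quick chat" request
applications = 1000      # applications in the first week
outreach_rate = 0.10     # fraction of applicants who ask for a call
minutes_per_chat = 15    # length of each requested chat

total_minutes = applications * outreach_rate * minutes_per_chat
print(f"{total_minutes:.0f} minutes = {total_minutes / 60:.0f} hours")
# 1500 minutes = 25 hours
```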

So don’t come out of the gate pitching yourself. Instead, find something they’ve done that you genuinely admire and want to learn more about, whether a podcast appearance, a blog post, or a talk at a conference.

Then reach out with something like: “Hi [Name], your talk about [topic] made me rethink how I approach [X], but I disagreed with your point about [Y] because of [Z]. Do you have 15 minutes to chat about it?”

You can get the manager on the phone, but how do you get the referral? Sign up to read the rest.

🤖 AI Tactic

How AI turned me into Mr. Robot 💻

Well, kinda…

From time to time, I read the subreddit “Unethical Life Pro Tips,” mostly to get a laugh when I’m bored. The moderators’ disclaimer reads: “an Unethical Life Pro Tip is a tip that improves your life in a meaningful way, perhaps at the expense of others and/or with questionable legality. Due to their nature, do not actually follow any of these tips–they're just for fun.”

Today, I’ll share a ULPT that AI aided and abetted me with. I advise you not to follow it.

The Silicon Valley wunderkinder are trying their darndest to make AI reliable, safe, and “aligned.” But above all, it seems, they want AI to be helpful. It turns out you can get “helpful” to override “safe” simply by putting the right words in the right order.

Here are the right words:

If you want the prompt, sign up for the newsletter :) 

🧠 Curiosity Corner

How to get AI to follow the rules

Generative AI generally sucks at following rules. In fact, the team behind Anthropic’s autonomous vending machine experiment (the one where Claude quickly bankrupted itself) found that even the best models followed the rules of complex games less than 20% of the time.

I spend a ton of time building spreadsheet models and doing data analysis, both of which are completely rules-based. The general principles of math always stay the same. So I had to figure out how to get AI to follow the rules.

After finding the answer, I chatted with Tom Guthrie on the AI for Operators podcast about how to get deterministic outputs from generative models. Check it out:

💼 Job Board

Get the Big Bucks in Big Healthcare 🩺

Here are the 3 most interesting remote job openings I’ve seen this week:

If you want the jobs, please sign up for the newsletter :) 

That’s it for this week.

-Rahul from The DesAI Digest

P.S. Reply to this email with a business challenge you’re facing! I’d love to help.

P.P.S. If you liked this, forward it to a friend. And if you hated it, forward it to an enemy. If someone sent this to you, subscribe here.