I'm going to be real with you.
I asked a founder with 45k followers to forward me a week of his DM requests. He gets about 30-40 per day.
In a single day's batch, 23 started with "I hope this message finds you well."
23. Out of 40.
He replied to 2. Both were specific. Both referenced something he'd actually posted. The other 38 got archived without a second thought.
If you're using AI to write your outreach without editing it properly, you're in that pile of 38. Fun.
The Actual Problem
You've probably heard some version of "AI writes generic content because it's trained on average data." That's technically true but it misses the point.
The problem is simpler: you're not editing enough.
Most people use AI like this:
Write prompt → Copy output → Send.
That's not AI-assisted outreach. That's AI outreach with your name on it. And everyone can tell.
We ran a test last month. Showed 50 X users (all 10k+ followers) a mix of AI-generated and human-written DMs. Asked them to guess which was which.
87% accuracy.
They're not using AI detectors. They just... know. The tells they mentioned:
- "Too polished for a DM"
- "Generic compliment that could be for anyone"
- "No reference to anything I actually posted"
- "Felt like a template"
Sound familiar? Yeah. Me too. I've done this. We've all done this.
The 5 Things That Give You Away
After looking at hundreds of AI-generated DMs (including plenty of our own early attempts), these are the patterns that scream "a robot wrote this":
1. Generic Compliments
"I've been following your content and I'm really impressed by what you're building."
Cool. So has everyone else who sent a message today. This could literally be sent to any of the 10,000 people building things on X.
The fix isn't to write a better compliment. It's to reference something specific from the last 48 hours.
Instead:
"Your thread yesterday on cold email dying, we saw the same thing. Reply rates dropped 60% in 6 months for our clients. Switched them to X DMs and it's night and day."
You can't fake recency. Out of the box, AI has no idea what someone posted yesterday. That single reference changes everything.
2. Hedge Words Everywhere
"You might be interested in..." "We could potentially..." "Would you be open to..."
AI hedges because it's trained not to make strong claims. But weak language = weak message.
I went through 20 AI-generated DMs last week. Average hedge words per message: 4.3.
Just say the thing. "This worked for [similar person]" beats "you might find value in this" every time.
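If you want to make the hedge check mechanical, here's a minimal sketch. The word list is taken straight from the examples above; extend it to taste (this is illustrative, not a canonical list):

```python
import re

# Hedge phrases from the examples above -- extend to match your own tics.
HEDGES = ["might", "potentially", "possibly", "could", "would be open to"]

def hedge_count(dm: str) -> int:
    """Count hedge phrases in a DM (case-insensitive, whole-word matches)."""
    text = dm.lower()
    total = 0
    for phrase in HEDGES:
        total += len(re.findall(r"\b" + re.escape(phrase) + r"\b", text))
    return total

dm = "You might be interested in this. We could potentially help."
print(hedge_count(dm))  # 3 (might, could, potentially)
```

If the count is above zero, rewrite before sending.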
3. Too Polished
Perfect grammar. Perfect structure. Perfect punctuation.
Humans don't write like that in DMs. We use fragments. We start sentences with "And." We skip periods sometimes.
Your English teacher would hate it. Your prospect will trust it more. (We have a whole guide on this if you want to dive deeper.)
AI polished:
"I noticed that you have been experiencing challenges with your content reach. I would love to share some strategies that have been working well for our clients."
Human messy:
"Saw your tweet about reach dying. Been there. Found something that actually worked for 3 clients last month, might be relevant for you too"
The second one has no period at the end. "Been there" is a fragment. It reads like a text message. That's the point.
4. Vague Claims
"We've helped many businesses achieve significant growth."
Many businesses. Significant growth. What does that even mean?
AI can't know your actual results, so it makes up vague garbage. You need to add the real numbers.
With actual numbers:
"Last 3 clients: reply rate went from 2% to 11%. One hit 18% but he had great existing content already."
The caveat about the outlier makes it MORE believable. You're being honest about what's typical vs exceptional. That's how humans talk about results.
5. The "Could Apply to Anyone" Test
Read your DM out loud. Now ask: could I send this exact message to 100 different people?
If yes, it needs work.
The goal isn't a good DM. It's a DM that could only be sent to this one specific person. Everything else is noise.
What Actually Works
The workflow that gets replies:
1. Write a detailed prompt with context, constraints, and examples of your voice.
2. Get the output. Don't send it yet.
3. Add one 48-hour reference. Check their recent posts. Takes 30 seconds.
4. Delete every generic compliment. If it could apply to anyone, cut it.
5. Remove hedge words. Might, potentially, possibly, could, would be open to. Gone.
6. Add your real numbers. AI makes things up. You have actual data. Use it.
7. Make it messier. Add a fragment. Drop a period. Start something with "And."
8. Read it out loud. Does it sound like you talking? Or a press release?
This takes 2-3 minutes per DM. That's the real time investment.
"But that's slow!"
10 well-crafted DMs beat 100 robotic ones. We have the data. A client switched from volume (50 DMs/day, zero personalization) to quality (15 DMs/day, full process above). Reply rate went from 1.2% to 9.4%.
Do the math. 50 DMs at 1.2% is ~0.6 replies a day. 15 DMs at 9.4% is ~1.4. That's more than double the actual replies, at a ~7.8x better reply rate, from 70% less work.
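Spelled out, using the numbers from the client example above:

```python
# Reply math from the client example: volume vs. quality.
old_dms, old_rate = 50, 0.012   # 50 DMs/day at a 1.2% reply rate
new_dms, new_rate = 15, 0.094   # 15 DMs/day at a 9.4% reply rate

old_replies = old_dms * old_rate        # ~0.6 replies/day
new_replies = new_dms * new_rate        # ~1.4 replies/day

rate_multiple = new_rate / old_rate     # ~7.8x higher reply rate
reply_multiple = new_replies / old_replies  # ~2.35x more actual replies
work_saved = 1 - new_dms / old_dms      # 70% fewer messages sent

print(f"{rate_multiple:.1f}x rate, {reply_multiple:.2f}x replies, "
      f"{work_saved:.0%} less work")
```

The reply *rate* multiplies by ~7.8; the absolute reply count, because you're sending 70% fewer messages, roughly doubles. Both numbers matter, but don't mix them up.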
The Uncomfortable Truth
If your outreach sounds generic, it's because you're being lazy about the editing step.
AI gives you a first draft. That's all it does. The human part, the part that actually gets replies, is what you do after.
Every DM that "doesn't work" is really just a DM that didn't get enough attention before you hit send.
The people crushing it with AI-assisted outreach aren't using better prompts. They're spending 2-3 minutes per message instead of 20 seconds.
That's the whole difference.
Quick Checklist Before You Send
- ☐ Does it reference something from their last 48 hours?
- ☐ Could I send this to 100 people? (If yes, rewrite)
- ☐ Any hedge words left? (Delete them)
- ☐ Is there at least one real number?
- ☐ Does it sound like me or like a robot?
30 seconds to run through this. If anything fails, fix it before sending.
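Two of those checks are automatable; the rest need your eyes. Here's a rough pre-send linter (my own sketch, not a product feature -- the hedge list comes from the section above):

```python
import re

# Hedge phrases from earlier in this piece -- extend to taste.
HEDGES = ["might", "potentially", "possibly", "could", "would be open to"]

def lint_dm(dm: str) -> list:
    """Return a list of automated-check failures for a draft DM."""
    problems = []
    low = dm.lower()
    found = [h for h in HEDGES
             if re.search(r"\b" + re.escape(h) + r"\b", low)]
    if found:
        problems.append("hedge words left: " + ", ".join(found))
    if not re.search(r"\d", dm):
        problems.append("no real number in the message")
    return problems

draft = "You might be interested in what we do for founders."
for p in lint_dm(draft):
    print("FAIL:", p)
# Still on you, manually: the 48-hour reference, the 100-people test,
# and reading it out loud.
```

An empty list doesn't mean the DM is good. It just means it passed the two checks a script can run.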
Your Call
You can keep doing what you're doing. Copy-paste AI outputs, send 50 messages, get 0-1 replies.
Or you can slow down. 15 messages. 2-3 minutes each. Actually edit them.
One of those gets you conversations. The other gets you blocked.
Your move.
