You've been lied to about AI.

If you believe in "AI safety," this email is going to make you lose all trust in AI companies.

I’ll explain exactly why in a second, but…

have you ever wondered why literally EVERY person who barely knows about AI thinks it’s going to end the world?

Just ask your friends…

Some of them will really tell you with a straight face that it’s a crisis.

Or a random dude at the gym who can’t even explain what GPT stands for will have a fully formed opinion that AI might end the human race.

How?

How does everyone just think this?

One minute nobody knows what AI is.

The next it’s:

“STOP. THIS COULD KILL EVERYONE.”

I don’t buy it.

I think it’s a scam.

I think “AI safety” is one of the greatest scams in the history of technology.

The same companies telling you AI is too dangerous for normal people are also trying to build it as fast as possible.

They're lying straight to your face.

AI isn’t going to end the world.

What, you think it’s going to autocomplete its way into nuking the moon so we all get crushed under the ocean?

Then what?

has anyone even thought about what happens if the AI destroys the world?

IT GETS DESTROYED TOO.

So AI is apparently smarter than Albert Einstein, but it’s not smart enough to realise that if it kills us…

it dies too?

No way.

It’s all made up.

The companies say AI is super dangerous, then they ask for $100 billion in funding.

Very normal.

Very “we care about humanity.”

It’s like if a guy ran into your house screaming, “There’s a bomb in here!”

and then asked you for more bomb parts because he’s the only one who can handle it safely.

But hey, I guess we’re not supposed to question the "experts," right?

Here’s why I told you all of this:

You need to stop reading AI news like a tourist.

You need to read between the lines and see what’s actually going on.

Like the SpaceX and Claude partnership deal this week. Most people saw that and thought, “wow… Claude and Elon team up!” or “cool… now OpenAI’s mad.”

It genuinely hurts seeing that type of slop on the internet.

Because that is the surface-level idiot version.

The bigger picture is this:

The “safety” company just went and got one of the biggest piles of compute on earth.

That’s not a random partnership.

That’s not “Claude and Elon team up.”

That is a move in the AGI throne war.

And once you understand that, the whole AI industry starts looking different.

Because “AI safety” does not mean:

“let’s stop AI from becoming too powerful.”

It means:

“let’s decide who is allowed to have the powerful AI.”

And I get it, you might not have realised this yet. That’s fine. But from now on, every time someone says “AI safety,” ask:

Who gets more power if I believe this?

What does this fear make me stop questioning?

What would I build, use, or support if I wasn’t scared?

I’m telling you, once you see it, every AI safety headline starts sounding like a guy asking for more bomb parts.

Don’t be scared of AI.

Be suspicious of the people telling you to be scared.

MSA

P.S. If you’re wondering why this wasn’t a normal roundup, it’s because I think this was more important.

I thought it’d be better to let you know about this instead of blabbering on about boring talking points from CNN articles. If you liked this type of post, reply and let me know. I really appreciate feedback!

And if you hated it, I’d like to hear your feedback even more.

Either way, I hope I’ll be hearing from you.

Keep Reading