AI vs AI: The New Cyber Battlefield

When attackers automate faster than you do, here's how to catch up

Hello, fellow PM,

AI is powering both sides of cybersecurity now. Defenders use it to automate alert triage and threat detection. Attackers use it to automate chaos and scale their operations. One side builds copilots; the other builds crimebots. Both are getting faster.

The question isn't whether AI will change your security projects. It's whether you can keep up.

In today's issue, learn:

  • How ransomware gangs are scaling attacks through AI

  • Why Microsoft's new Security Store matters for project managers

  • A 4-question framework to govern AI adoption safely

  • Practical steps to close the automation gap this quarter

When Attackers Automate

Ransomware groups now run their operations like tech startups. They feed AI models stolen data to calculate ransom amounts, personalise phishing emails, and triage new victims automatically. No human review needed.

This turns cybercrime into a production line. What once required skilled hackers now runs on scripts. The result is attacks that are faster, cheaper, and far more convincing.

For Cyber PMs, this creates a scaling problem. Traditional defence projects built on manual monitoring simply can't match machine-speed threats. If your incident response depends on someone reading every alert, you're already behind.
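
To see why, run the numbers. Here's a back-of-envelope sketch in Python; every figure in it is an illustrative assumption, so plug in your own volumes:

```python
# Back-of-envelope maths on manual alert review.
# All figures are illustrative assumptions, not benchmarks.

alerts_per_day = 10_000        # assumed daily alert volume
minutes_per_alert = 2          # assumed average manual review time
analyst_hours_per_day = 7      # assumed productive hours per analyst

review_hours = alerts_per_day * minutes_per_alert / 60
analysts_needed = review_hours / analyst_hours_per_day

print(f"{review_hours:.0f} analyst-hours/day -> ~{analysts_needed:.0f} analysts")
# ~333 analyst-hours/day -> ~48 analysts, just to read every alert once
```

No headcount plan survives that maths. Some portion of the pipeline has to run without a human in the loop.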

How Microsoft Is Responding

Microsoft launched its Security Store to address this problem. It gathers security copilots, plugins, and automation tools into one ecosystem, similar to an app store for protection.

The goal is simplifying threat detection and response whilst reducing tool fatigue. For project managers, this signals something important. AI isn't just another tool. It's an entire ecosystem requiring planning for interoperability, change management, and training from day one.

A smarter platform only works if the people and processes behind it are ready.

What the Best PMs Do Differently

When AI changes the threat landscape, strong PMs adapt their approach.

They match automation with automation. If attackers use AI to scale, defenders automate response and reporting at similar speed.
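
What does that look like in practice? Here's a minimal triage sketch; the thresholds and the Alert shape are made up for illustration, not taken from any product:

```python
# Minimal sketch of machine-speed triage: auto-close the noise,
# auto-contain the obvious, reserve humans for the ambiguous middle.
# Thresholds and the Alert shape are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    risk_score: float  # 0.0-1.0, e.g. from your SIEM's ML scoring

def triage(alert: Alert) -> str:
    if alert.risk_score < 0.2:
        return "auto-close"      # log and suppress known noise
    if alert.risk_score > 0.9:
        return "auto-contain"    # isolate the account, then page a human
    return "human-review"        # ambiguous: queue for an analyst

for a in [Alert("EDR", 0.05), Alert("email-gateway", 0.95), Alert("IAM", 0.6)]:
    print(a.source, "->", triage(a))
```

The point isn't the thresholds. It's that analysts only ever see the middle tier.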

They simplify the security stack before adding new tools. Map overlaps, identify gaps, then consolidate before deploying anything new.
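
A spreadsheet works for this, but the mapping is also a ten-line script. In the sketch below, the tool names and capability labels are placeholders for your own inventory:

```python
# Sketch: map each tool to the capabilities it covers, then surface
# overlaps (consolidation candidates) and uncovered gaps.
# Tool and capability names are placeholders, not recommendations.
from collections import defaultdict

stack = {
    "ToolA-SIEM": {"log-aggregation", "alerting", "dashboards"},
    "ToolB-EDR":  {"endpoint-telemetry", "alerting", "containment"},
    "ToolC-SOAR": {"alerting", "playbooks", "containment"},
}
required = {"log-aggregation", "alerting", "containment",
            "playbooks", "phishing-simulation"}

coverage = defaultdict(list)
for tool, caps in stack.items():
    for cap in caps:
        coverage[cap].append(tool)

overlaps = {cap: tools for cap, tools in coverage.items() if len(tools) > 1}
gaps = required - coverage.keys()
print("Overlaps:", overlaps)  # 'alerting' covered three times
print("Gaps:", gaps)          # 'phishing-simulation' uncovered
```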

They retrain teams for current reality. Security awareness sessions now include AI-crafted phishing samples because that's what employees actually face.

They expose assumptions early. Every status update highlights what AI can't do and what could go wrong, not just what it promises.

A Framework That Works

Before approving any AI-based security tool, document answers to these four questions in your Risk Register:

Automation readiness: Which workflows can safely run without human input? Which absolutely cannot?

Integration impact: What changes or breaks when new AI connects to existing systems? Who owns testing?

Training plan: How will teams adapt to AI-driven decisions? What's the timeline and who delivers it?

Response loop: How will lessons from incidents improve your automation? Who reviews and updates scripts?

Review quarterly, not just at deployment. Automation without governance means failing faster.
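One way to make those answers reviewable rather than buried in meeting notes: capture each tool as a structured record your quarterly review can diff. A sketch follows; the field names are one possible shape, not a standard schema:

```python
# Sketch: the four framework answers as a structured Risk Register
# entry. Field names and values are illustrative, not a standard.
from dataclasses import dataclass

@dataclass
class AIToolRiskEntry:
    tool: str
    safe_to_automate: list[str]   # workflows that may run unattended
    must_stay_manual: list[str]   # workflows that must not
    integration_owner: str        # who owns integration testing
    training_deadline: str        # when teams must be ready
    response_loop_owner: str      # who folds incident lessons back in
    last_reviewed: str = "never"

entry = AIToolRiskEntry(
    tool="AI account-blocking tool",
    safe_to_automate=["alert enrichment", "reporting"],
    must_stay_manual=["blocking executive accounts"],
    integration_owner="IT platform lead",
    training_deadline="end of quarter",
    response_loop_owner="SOC manager",
)
print(entry)
```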

Learning Through Real Scenarios

Common situation: Your security team wants to deploy an AI tool that automatically blocks suspicious user accounts based on behaviour patterns.

Ineffective approach: Deploy immediately because it promises faster threat detection. Assume the AI knows what it's doing. Wait for complaints when legitimate users get blocked.

Effective approach: "Before we deploy, let's test this on a subset of accounts for two weeks. Security, define what 'suspicious' means in our context. IT, document the unblock process for false positives. Comms, draft the message for affected users. Then we evaluate and decide on wider rollout."
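
In code, that pilot guardrail can be as simple as a shadow-mode flag. Everything in this sketch is hypothetical: the is_suspicious() call and the account IDs stand in for whatever your vendor's tool actually exposes.

```python
# Sketch of a pilot guardrail: run the blocker in shadow mode on a
# subset of accounts, log what it *would* block, and review false
# positives before granting enforcement rights. Hypothetical stand-ins
# throughout; is_suspicious() is not a real vendor API.

PILOT_ACCOUNTS = {"u1001", "u1002", "u1003"}  # assumed two-week test subset
SHADOW_MODE = True                            # flip only after the review

def is_suspicious(account_id: str) -> bool:
    """Placeholder for the vendor model's verdict."""
    return account_id == "u1002"

def handle(account_id: str) -> None:
    if account_id not in PILOT_ACCOUNTS:
        return  # out of pilot scope
    if is_suspicious(account_id):
        if SHADOW_MODE:
            print(f"[shadow] would block {account_id}; logged for review")
        else:
            print(f"[enforce] blocking {account_id}; unblock path applies")

for acct in ["u1001", "u1002", "u9999"]:
    handle(acct)
```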

Final Thought

AI is rewriting the rules of cybersecurity. The only question is whether your defences can learn as fast as your attackers already are.

P.S. Share this with any PM who still thinks automation is "future work." The future arrived while we were planning for it.

Next Week: How to test your AI security controls before they fail in production.
