When a PS5 Controller Becomes a Key to Thousands of Homes: The DJI Romo Debacle

Ronnie Huss


Sammy Azdoufal just wanted to hoover his living room with a games controller. Genuinely – the man bought a DJI Romo robot vacuum, grabbed his PS5 DualSense, and thought: how hard can it be? He knocked together a small app using code he’d got from Claude, bridging the two devices. A fun weekend project, nothing more.

Key Takeaway

The DJI Romo security incident – in which a backend flaw let a hobbyist who’d paired a PS5 controller with his robot vacuum reach roughly 7,000 units’ cameras, microphones, and floor maps – illustrates the fundamental risk of shipping connected devices into people’s homes without proper per-user isolation in the backend architecture.

What he found instead was rather alarming. His app inadvertently gave him control over roughly 7,000 Romo vacuums worldwide – along with their live camera feeds, built-in microphones, and detailed floor maps of strangers’ homes.

No malicious intent. No sophisticated hacking. Just a curious hobbyist, a backend flaw, and suddenly far more access than anyone should ever have.

In This Article

  • The Innocent Beginning
  • This Is Bigger Than One Vacuum
  • The Broader Conversation
  • What Can Be Done?

The Innocent Beginning

To understand how this happened, you need to know about authentication tokens. Every Romo communicates with DJI’s servers using a private key – your device’s way of saying “it’s me.” Azdoufal extracted his own token as part of his app-building experiment, then queried DJI’s system to pull his device’s data.

The response came back with far more than he’d asked for. Information from thousands of other units, spread across 24 countries, flooded in.
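
The class of flaw at work here – an API that checks who you are but not what you’re allowed to see – is often called broken object-level authorisation. A minimal sketch, with hypothetical names and toy data rather than DJI’s actual backend:

```python
# Hypothetical sketch of an unscoped device query. The data, tokens,
# and function names are illustrative - not DJI's actual API.
DEVICES = {
    "romo-001": {"owner": "alice", "floor_map": "flat-3b"},
    "romo-002": {"owner": "bob", "floor_map": "house-12"},
}
TOKENS = {"token-alice": "alice", "token-bob": "bob"}

def get_devices_vulnerable(token):
    """Authenticates the token, then returns every device (the bug)."""
    if token not in TOKENS:
        raise PermissionError("invalid token")
    return list(DEVICES.values())  # no ownership filter at all

def get_devices_fixed(token):
    """Same lookup, but the response is scoped to the token's owner."""
    owner = TOKENS.get(token)
    if owner is None:
        raise PermissionError("invalid token")
    return [d for d in DEVICES.values() if d["owner"] == owner]
```

One valid token and one missing filter, and every device in the table comes back – the same shape of failure, at toy scale.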

From there, he could steer other people’s vacuums remotely, listen through their microphones, and watch through 360-degree cameras that were designed for navigation – not for broadcasting someone’s sitting room to a stranger in another country. He tested the scope of it by bypassing a friend’s security PIN and waving at them through their own vacuum’s camera. Which, as proofs of concept go, is pretty unsettling.

DJI eventually patched the flaw in their MQTT protocol – the backend communication layer that had been grouping devices without proper user isolation. The fix was in place before public disclosure, but the damage to trust was already done.
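
DJI hasn’t published its topic scheme, so take this only as a sketch of what “proper user isolation” means at the MQTT layer: a broker-side ACL that refuses any subscribe or publish outside the caller’s own namespace. With an illustrative topic layout:

```python
# Hypothetical broker-side ACL for a per-user topic namespace of the
# form users/<user_id>/<device_id>/<channel>. Illustrative only - not
# DJI's actual topic scheme.
def may_access(user_id: str, topic: str) -> bool:
    """Allow a client to touch only topics under its own user prefix."""
    parts = topic.split("/")
    return (
        len(parts) >= 3          # users/<user>/<device>/...
        and parts[0] == "users"
        and parts[1] == user_id
    )
```

With a check like this enforced on every subscription, a leaked or extracted token would have reached exactly one vacuum: its owner’s.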

This Is Bigger Than One Vacuum

It would be easy to file this under “weird tech story” and move on. Please don’t.

Devices like the Romo are sold on the promise of a seamless, connected life. They learn your home layout. They respond to your voice. They let you check in remotely. All of that is genuinely useful – until the architecture underpinning it fails, at which point a cleaning appliance becomes something rather more sinister.

Think through what was actually exposed here:

  • Floor plans detailed enough to blueprint a burglary
  • Live video feeds into private spaces
  • Audio recordings of conversations nobody consented to share

We’re heading towards a world with billions of these devices in homes. One backend misconfiguration shouldn’t be able to turn all of them into a distributed surveillance network. And yet, here we are.

The Broader Conversation

The obvious angle is to point at Chinese-made hardware and raise an eyebrow. That’s too easy, and frankly too convenient. Western manufacturers have had their own disasters – compromised baby monitors, vulnerable smart doorbells, mesh networks that phoned home more than their owners realised. This is an industry-wide problem, not a geography problem.

The real issue is that IoT devices are being shipped fast and priced to move. Security costs money and slows releases. When margins are tight and the market rewards whoever ships first, the security audit is the first thing to go. The Romo flaw is a symptom of that dynamic. If you want to see just how bad it can get, have a read of what we wrote about the AISURU botnet – a 31.4 Tbps DDoS attack that recruited home appliances into a digital army.

What Can Be Done?

The technical fixes aren’t complicated – they’re just not prioritised. The industry needs:

  • Mandatory security audits before devices reach consumers
  • Encrypted communications on by default, not as an optional setting
  • A serious conversation about whether non-security devices need cameras at all
  • Privacy-by-design baked in at the architecture stage, not bolted on afterwards

The Bottom Line

Here’s the irony worth sitting with. As AI coding tools like Claude make it easier for curious amateurs to poke around in systems, more of these flaws will be discovered – not by bad actors, but by people who just wanted to do something fun with their hoover. That’s probably a net positive. Citizen-discovered vulnerabilities, responsibly disclosed, might be the forcing function that finally makes manufacturers take this seriously.

The DJI Romo story isn’t really about one man’s robot army. It’s about the gap between how we sell connected devices and how securely we actually build them. We keep inviting these things into our homes. The least we can ask is that they don’t let anyone else in.

The next breach might not be accidental – and it could be knocking at your door.

Frequently Asked Questions

What happened in the DJI Romo PS5 controller security incident?

A hobbyist built a small app to drive his DJI Romo robot vacuum with a PS5 DualSense controller. In doing so, he discovered a flaw in DJI’s backend – the MQTT layer grouped devices without proper user isolation – that gave him control of roughly 7,000 Romo units worldwide, along with their camera feeds, microphones, and stored floor maps. DJI patched the flaw before public disclosure.

What is the broader lesson about consumer hardware and home security?

Consumer gaming hardware is designed for entertainment, not security-critical applications. When devices designed with consumer trust models are used in physical security contexts – controlling door locks, home robots, security cameras – the security assumptions of the underlying hardware rarely match the security requirements of the application.

How should companies building physical AI systems handle controller security?

Physical AI systems with real-world consequences require dedicated controller interfaces with security architectures appropriate to the risk level – not repurposed consumer peripherals. At minimum: isolated network segments for control signals, cryptographic authentication for every command, physical proximity requirements, and automatic lockout after authentication failures.
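
The last two of those requirements are cheap to sketch. Assuming a symmetric key shared at pairing time – a hypothetical scheme, not any vendor’s actual protocol – every command carries an HMAC tag, and repeated verification failures trip a lockout:

```python
# Hypothetical per-command authentication with failure lockout.
# The key-sharing scheme and class name are illustrative assumptions.
import hashlib
import hmac

class CommandChannel:
    """Verify an HMAC tag on every command; lock out after repeated failures."""
    MAX_FAILURES = 5

    def __init__(self, shared_key: bytes):
        self.key = shared_key
        self.failures = 0
        self.locked = False

    def sign(self, command: bytes) -> bytes:
        return hmac.new(self.key, command, hashlib.sha256).digest()

    def execute(self, command: bytes, tag: bytes) -> bool:
        if self.locked:
            return False
        # compare_digest is constant-time, resisting timing attacks
        if hmac.compare_digest(self.sign(command), tag):
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self.locked = True
        return False
```

Once locked, even a correctly signed command is refused until the channel is re-established – a blunt control, but exactly the sort of default that was missing here.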


About the Author

Ronnie Huss is a serial founder and AI strategist based in London. He builds technology products across SaaS, AI, and blockchain. Learn more about Ronnie Huss →

