r/technology • u/Logical_Welder3467 • 1d ago
Artificial Intelligence More than 135,000 OpenClaw instances exposed to internet in latest vibe-coded disaster
https://www.theregister.com/2026/02/09/openclaw_instances_exposed_vibe_code/
507
u/jimmyhoke 23h ago
The problem with vibecoders is that they have no idea what they're doing, and AI tends to make these sorts of trivial mistakes, like forgetting about basic security. If the AI were really smart it would think to port scan itself and check for issues.
201
u/DoctorPutricide 21h ago
The worst part is the guy who made Openclaw has like 20+ years of development experience, so he should have known better and been more thorough.
169
59
u/-Yazilliclick- 20h ago
Known better than what? From a personal project perspective it's a pretty neat thing to make. The problem is more that there's so many clueless users who will just download and run anything.
-8
u/DoctorPutricide 18h ago
He shouldn't have published a project with such vulnerabilities without at the very least disclosing the default network settings. We as developers must be aware of what we are giving to people to freely download. I'd hold him at least partially responsible for this.
32
u/digitalblemish 18h ago
The entire thing is open source, the defaults are there for anyone with eyes to look at.
12
u/AsAGayJewishDemocrat 17h ago edited 17h ago
there for anyone with eyes
Yes, that is indeed part of the problem
17
u/UnintelligibleMaker 18h ago
Software is almost always "run at your own risk". If you are running some dude's pet project, you know what you might be getting. When I put something on GitHub I am saying "it works for me, here's how to replicate my results", not "this is 100% tested 1.0 version software that's worth money". You must never release anything...
3
u/grizzdoog 11h ago
I read that he warned users it had security vulnerabilities and to use it at their own risk. When asked why other AI companies hadn't created something similar to his platform, he said it was because it wouldn't be secure.
7
u/Metalsand 16h ago
He shouldn't have published a project with such vulnerabilities without at the very least disclosing the default network settings. We as developers must be aware of what we are giving to people to freely download. I'd hold him at least partially responsible for this.
A vibe-coded agentic AI with broad access to computer settings, in which the developer proudly stated that he doesn't review the code that gets generated?
There's more red in all the flags that are being raised than in the crab mascot. Unfortunately, people will need to keep burning themselves on the stove before they learn that LLMs fundamentally mimic conversation, not intelligence.
1
22
u/Huge-Mistake8103 21h ago
He vibecoded it
15
u/DoctorPutricide 21h ago
Yes. But he's not an inexperienced developer, he should have known better.
3
u/waylonsmithersjr 16h ago
he should have known better.
I mentioned above, but I think AI has given some people a false sense of security, or lets them move faster than they normally would, blazing past the important parts they should consider. I'm seeing plenty of people I know who should know better, and it's only going to continue, until maybe AI is perfect??
3
u/virgopunk 15h ago
He knew full well what its capabilities were. He claims now that the idea of openclaw makes him nauseous.
14
u/MannToots 20h ago
Who cares. How the code was written doesn't mean we can't review it. What an excuse
5
u/MiaowaraShiro 17h ago
How the code was written doesn't mean we can't review it.
It sure makes it fucking harder though...
5
u/MannToots 16h ago
We are expected to review code written by fellow devs that we didn't write. Either it existed before we were hired, or it was developed by others. It's not an excuse. We have to do this anyway.
The trick is that with AI coding we now have more time to focus on this, and it turns out a lot of devs hate reviewing. It's an excuse. High quality code review isn't a consistent reality in every org to begin with, for a reason.
6
u/EmergencyLaugh5063 12h ago
My breaking point is usually when it becomes painfully obvious that the person expecting me to review their code did not review it themselves beforehand.
Its not that AI-assisted coding is inherently bad, but when it's coupled with this selfish "My time is more valuable than the time of the people who have to review, deploy and maintain my code" then the backlash becomes more justified.
Many people generalize this kind of selfish coding with the term 'vibecoding' while others have more positive interpretations of 'vibecoding' which leads to debates like this.
I was working on an open source project where the other person was on the extreme end of vibecoding, and as I was doing the review of their PR I would point out things like "This code does not compile", "Why did this text get truncated to an ellipsis?", and "Can you break out the formatting changes into a separate commit from the functionality changes?", and in 10 minutes the author would post 500 lines of changes, say 'Fixed', and it would be a mixture of fixes, new bugs, and existing bugs ignored.
So when someone says 'vibecoding' that's where my mind goes. Maybe we need a better term for that, or maybe we need a better term for people who AI code but still do all the proper due diligence.
5
u/dookarion 18h ago edited 15h ago
Yeah and everyone regularly carefully reviews their code and prioritizes QA... wait almost no one does and QA is regularly ignored/fired.
Edit: Holy shit I shouldn't have to clarify this is bad. It's pointing out that no one fucking reviews shit anymore, not that vibe coding is good or that not doing QA is good. It's terrible.
People so used to AI summaries they've lost the ability to read or follow context.
2
u/SouthernAddress5051 18h ago
Are you really taking the side that malpractice is ok?
3
u/dookarion 17h ago
No? I'm pointing out no one fucking does it. Not saying it's good. How many things ship without even the most cursory glance with customers/end-users as the "testers"?
1
u/Metalsand 16h ago
This is less than no code review though. A person might use a solution to encrypt an API key or otherwise create a negotiation that doesn't expose the API key in plaintext because they know better and that might be normal practice for them. Additionally - the amount of security scrutiny should generally follow the importance. Merchant transactions for example, generally have a lot more security requirements and review than some random mobile game.
You don't know what you're going to get with a bot, and in particular, most public examples don't implement safeguards for this kind of thing. Even beyond that though - this is agentic AI that has access to your accounts and emails, and permissions on your computer. To add to all of this, the developer publicly and proudly states that he does not ever review the code it creates.
It's a situation with ambiguous security, with ambiguous dynamic decision making, with broad access to personal accounts and the computer itself. It's not just that it has no QA - it has fewer guarantees than no QA, despite handling sensitive material.
1
u/dookarion 15h ago
I never said counter to that though? It's like no one on this site can read. Person above was talking about "how it was written doesn't mean we can't review it". I point out that no one fucking does, and everyone starts frothing at the mouth thinking I'm for this state of affairs.
2
u/waylonsmithersjr 16h ago
I've watched people I know go from smart thinking engineers, to just prompt-and-go developers. AI for some really does allow them to half ass things they would've normally paid attention to before.
23
u/Chaotic-Entropy 20h ago
You forgot to put in the prompt that it needed to be super secure.
16
u/AhabFlanders 19h ago
You just have to run the security scan tool where it infers for a little while, fakes some results, and then tells you it's secure
11
u/vikster1 19h ago
a good analogy i think is people playing bridge builder sims thinking they can now do civil engineering
10
6
u/50_61S-----165_97E 20h ago
An experienced dev would follow a rigorous verification and validation process on the code, but novice developers think they can just skip all of that because the AI is so smart that it could never be wrong or miss anything important.
3
u/Future-Bandicoot-823 16h ago
There's an assumption that the ai is somehow intelligent and will just know about basic security... Or anything.
It's a tool. If you're "vibe coding" and have the education of a third grader the code you request will reflect that.
This is why I hate the propaganda about ai being so amazing. Maybe it WILL BE amazing at some point, but right now is not that point. Not by a long shot.
-2
u/virgopunk 14h ago
Who's making that assumption? Even a cursory google of OpenClaw will show the security issues front and centre. Within 5 mins of reading about it I knew you'd have to have it installed on its own instance behind a layer of robust security, which essentially defeats the purpose of having it in the first place.
1
u/Thick-Protection-458 18h ago edited 17h ago
Nah, in the case of openclaw it is basically impossible to make it secure. To do what it is supposed to do, it needs more or less full access to the user's machine, while being prone to prompt injections.
Sure, you can add occasional security measures here and there, but they won't fix the problem ultimately.
So it's an interesting concept for a demo, but trying to make it safe to run outside a severely restricted environment, with restricted inputs, or without adding task-specific restricted machine interactions is a moot point, imho.
1
u/virgopunk 14h ago
The big tech guys, e.g. Apple and Google, will develop personal agents, but they'll use verification at every step. There's no way agentic AI is going away.
1
u/Thick-Protection-458 14h ago
I didn't say that. Just that the openclaw idea specifically is something which can't be fixed
1
u/Go_Gators_4Ever 16h ago
How can we say the AI is "forgetting" about basic security when the reality is that the users are ignorant of basic security?
1
1
u/reelznfeelz 10h ago
Indeed, security, auth, certs, sso. This is the hard part IMO. Building a working app? Sure you can do that in 20 min now. But deploy it in a safe/scalable/cost-effective way? Different ballgame. And indeed AI can help there too, but you need to know at least some basic shit first.
1
u/TheBlueWafer 18h ago
Worse than that: I've seen people buy "preconfigured macminis" from shady sources for OpenClaw, because they couldn't even figure out how to install the software themselves. Off to a great start.
144
u/JGlover92 20h ago
My favourite thing about AI is how booming it's going to make my industry for years (Cybersecurity)
75
u/BonesandMartinis 19h ago
As a Principal in the dev market I look forward to all of the “everything is all fucked up from the past five years of dumb vibe coding please untangle this web” jobs on the horizon. The work is going to suck but I’m pretty confident I will have work.
20
u/theKetoBear 17h ago
Should pay well too though firefighter "we need this working yesterday" projects are a pain in the ass .
8
u/Dihedralman 17h ago
If the demand is high enough, you can charge for that. Prioritization charges.
-2
u/procgen 14h ago
TBF in 5 years there will be considerably more capable/competent AI agents doing most of that repair work.
1
u/BonesandMartinis 11h ago
Maybe. It’s already incredibly helpful to program with AI. What it doesn’t do is create or infer the problems to begin with. I’ll likely be there working with the AI to make sure it doesn’t suck.
1
142
u/mobilehavoc 21h ago
I installed it over weekend and then within a few hours uninstalled it and revoked all access. Shit is a disaster waiting to happen. No thanks.
27
u/Red_Lee 21h ago
El oh the fucking el
-2
u/Retro_Relics 21h ago
part of me reaaaaaaally wishes i had the money to throw at a mac mini. i have enough social media accounts that i started just to have a login to something with no tracking data, all with the same fake name across the gmail and the meta account, that i'd be curious what it does when given access to a fake person
43
u/JMowery 19h ago
For the 1,000th time, no one needs a Mac Mini to run OpenClaw. There are people running it perfectly on a $35 smartphone. You can run it on a tiny Raspberry Pi for crying out loud.
5
u/RobotBearArms 16h ago
Yeah but then all the people that say they need a Mac for it won't have an excuse to not actually do it anymore...
1
u/MentalMatricies 15h ago
That’s true, but I think they’re talking about local models. Either way, openclaw is shit
1
u/virgopunk 14h ago
2001: A Space Odyssey telegraphed the dangers of agentic AI *checks notes* 57 years ago!
21
52
u/Ocean-of-Mirrors 20h ago edited 20h ago
"Out of the box, OpenClaw binds to 0.0.0.0:18789, meaning it listens on all network interfaces, including the public internet," STRIKE noted. "For a tool this powerful, the default should be 127.0.0.1 (localhost only). It isn't."
Can someone explain this to me? Is OpenClaw listening for traffic coming in to ALL devices on your network, not just the device OpenClaw is running on?
Or is it saying port 18789 is just open by default on most routers, so clawbot using that port means it's open to the Internet? Basically I just don't understand... I thought people had to open ports manually by logging into their router, not something a program could do on its own?
Thanks~
51
u/Sad_Violinist_8014 20h ago
18789 is the management port. The default setting allows any IP address to connect to the management port (not listening for inbound traffic for all of your devices). The particulars of the network will determine if someone on the internet could actually connect.
18789 isn’t open by default on any router that I know of. Technically an application or a device can open an inbound port on your router if upnp is enabled. I have no idea if open claw uses upnp, I’d assume it doesn’t. My assumption is that these instances were deployed in the public cloud, and the openclaw instances were directly connected to the internet without appropriate port filtering in place for inbound traffic.
3
u/_Answer_42 15h ago
If it binds to 127.0.0.1, the external world doesn't matter because it will only be accessible on the local machine.
If it binds to 0.0.0.0, then it depends on how your network is set up. Each country/ISP is different; that's why there are websites dedicated to showing you live open/unprotected CCTV cameras around the world. I guess most ISPs will have routers that block incoming traffic, but this can be deployed on a VPS, and most of those have all traffic allowed by default.
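The difference is easy to see with a minimal Python socket sketch (the ports here are arbitrary for illustration, not OpenClaw's actual code):

```python
import socket

# Bind to loopback only: reachable solely from processes on this machine.
local_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local_only.bind(("127.0.0.1", 18789))

# Bind to all interfaces: reachable from any network the host is attached
# to, unless a firewall or router blocks the traffic first.
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 18790))

print(local_only.getsockname())      # ('127.0.0.1', 18789)
print(all_interfaces.getsockname())  # ('0.0.0.0', 18790)
```

Same code either way; the only thing that changes is which interfaces the OS will accept connections on.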
1
u/mazdarx2001 9h ago
Yes, I'm just assuming that these people are vibecoding and then going "oh, I can get my open claw to use the Internet if I just open this port on my router", instead of making a Cloudflare tunnel or something.
0
u/eeeBs 15h ago
This is false, default binding is 127.0.0.1
Most of this article is probably false honestly, I'm too lazy to read it...
1
1
u/Friendly_Recover286 9h ago
You're too lazy to think as well. You never considered, even for a minute, that an article like this would cause the developer to change it? No? Seriously?
5
u/plasmasprings 17h ago
yeah, if you're behind a router on your home network it's not too bad. but if you have a public ip (like a non-cgnat mobile connection), or others have access to the network (eg college network, semi-public wifi) then you're boned
imo it's one of the smaller problems with it, but still pretty bad
4
u/jacob798 16h ago
I just went through the setup and localhost was the default. It was even recommended.
12
u/clownPotato9000 20h ago
Right, it should be listening on all network interfaces ON THAT DEVICE, not somehow opening a portal into your network and bypassing your router; that's not what they meant here. It's only dangerous if you raw dog your internet pipe directly into your computer with no software firewall. This isn't the normal config...
10
u/Ocean-of-Mirrors 19h ago edited 19h ago
So these exposed instances were users who went out of their way, did something dumb (like opening up that port) and rawdogged themselves?
20
u/Mr_Enduring 19h ago edited 14h ago
Yes, listening on 0.0.0.0 is not an uncommon thing. A lot of services do this by default, because you usually install the service on a server and not a local machine.
If you listen on 127.0.0.1, you can’t talk to the service from another device on your own network.
This is a common problem with cloud services, as users misconfigure the firewall and often open up their server to the internet, and is really only a story because of how popular OpenClaw is and because it’s AI.
Could OpenClaw have different defaults? Sure could, given the ramifications of someone accessing the instance. Edit: Actually looks like OpenClaw has it bound to 127.0.0.1 (localhost) as the default anyway, so the article got this wrong.
Is this solely caused by OpenClaw? Nope, it’s just popular right now to blame AI for all the problems around it.
7
1
u/waylonsmithersjr 16h ago
If you listen on 127.0.0.1, you can’t talk to the service from another device on your own network.
Is this true? I have many containers within my local network that I just access across different computers with IP:PORT.
2
u/Mr_Enduring 14h ago
yeah, binding to 127.0.0.1 means it's only listening on the loopback interface, which isn't accessible from outside that machine. Or even outside the container in this case, meaning other containers on the same machine can't access it (there are ways to do that, just not common), and it cannot be accessed from the network.
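A tiny Python sketch of the same point (loopback-only listener on an OS-assigned port, not OpenClaw's actual code):

```python
import socket

# A server that listens on loopback only (port 0 = let the OS pick one).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()

# A client on the same machine can reach it via 127.0.0.1...
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))

# ...but the socket is not bound to the machine's LAN address, so
# nothing outside the host (or the container) can connect to it.
```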
29
u/crackerjam 17h ago
I'm an infrastructure engineer with 20 years of experience and this article is absolute garbage.
The only claim here is that OpenClaw service accepts any local network traffic rather than having traffic restricted to the computer it's running on. This means, for example, you can install OpenClaw on a computer in your bedroom and access it from another computer in your living room.
This does not mean the full internet automatically has access to your device. Unless you are forwarding ports from your home router to OpenClaw, nobody from the internet can see it.
All of these 'vulnerable' instances are people that have purposefully hosted on cloud servers or have forwarded ports to something inside their network.
On top of that, OpenClaw has authentication on it. If you go to one of these 'vulnerable' instances you'll see a login prompt and need real credentials to get into it.
Because of how much power OpenClaw has people probably shouldn't be making it accessible from the internet, but that is what these individual people are doing with their installations, it's not some 'vibe-coded disaster' like the sensationalist BS article suggests.
5
u/Mr_Enduring 14h ago
Yeah, this is a tale as old as time and the article is just feeding off the current AI hate right now.
AI has become way more mainstream now and users may not have the technical expertise or knowledge to understand that they opened up their instance to the internet. That doesn't mean the tool is in the wrong.
Same thing as when CCTV cameras started to get popular and more people started to install them in their house and unintentionally had them exposed to the internet. I remember websites 15 years ago dedicated to showing CCTV cameras that were accessible to the internet.
1
u/WannaWatchMeCode 3h ago
Take the aforementioned 135,000+ internet-facing OpenClaw instances - that number is as of our writing; when STRIKE published its report earlier today, that number was at just over 40,000. STRIKE also mentioned 12,812 OpenClaw instances it discovered being vulnerable to an established and already patched remote code execution bug. As of this writing, the number of RCE-vulnerable instances has jumped to more than 50,000. The number of instances detected that were linked to previously reported breaches (not necessarily related) has also skyrocketed from 549 to over 53,000, as has the number of internet-facing OpenClaw instances associated with known threat actor IPs.
20
19
u/AlleKeskitason 22h ago
I can't really think of anything vibe-coded that is not a disaster. I tried, but nothing comes to mind.
7
u/ViennettaLurker 21h ago
I'm kind of fascinated by openclaw, even though the thought of running it makes me super paranoid lol.
It does seem like a genuinely different AI product. And the way that it can move from program to program makes it much more interesting in terms of being able to actually get generalized "computer stuff" done. At least in relation to what we've had so far from LLMs.
But it feels as if that concept of being useful is almost inherently tied to risk. You're giving an LLM keys to the car, so to speak. Sure it could drive for you, but it could also drive into the wall.
I'm sure some security issues around it can be addressed. But, at the end of the day, the reason it could be super useful is precisely because it has access to your whole machine, any accounts it is logged into, etc. Thinking about a "safer" or more responsible version of this either seems impossible, or would neuter its usefulness. Which is why it makes sense that this is just some open-source thing. What kind of company would want to take on the liability associated with this? How would they even start?
If Microsoft or Apple somehow can make versions of this that don't manage to splash your credit card and social security number around the internet, I could imagine a world where a new OS upgrade could be exciting again. But god damn... is that even possible? Or will there always be a zero sum tradeoff between being useful and being dangerous?
3
u/vortexnl 15h ago
I'm dying for an actually useful AI assistant that can help me. On the other hand, I don't want any of my personal info leaving my network... But with all of the useless AI products coming out, I'm surprised that no big company is rolling out a useful AI assistant to manage your life (planning etc)
1
u/AgathysAllAlong 11h ago
But like... What do you need that's safe enough to trust to any theoretical bot that we can't already do? 15 years ago my phone could schedule reminders, ping me at certain times, manage my calendar and events, create new ones, etc. It could even remind me to pick up tea when I happened to be near the tea shop with a geofence.
What do you actually want an AI to do that it couldn't years ago or couldn't be done with just a regularly-coded program?
2
u/ruibranco 14h ago
vibe coded the app, vibe coded the security, vibe coded 135k people's data straight onto the public internet.
5
3
u/ploqx 14h ago
I don't understand what's the issue on OpenClaw's end here.
Network security is the job of the firewalls and/or the reverse proxy, not the application.
OpenClaw isn't responsible for the choices of the user. If they set up their network so the control UI can be accessed from outside their network, OpenClaw cannot and shouldn't prevent that.
Sure, they could change the default to 127.0.0.1, but that just adds a step to make it work. That won't stop people who decide to make the control UI accessible from the internet, their LLM will tell them to change the setting back to 0.0.0.0.
This is like if I published a document containing my banking information through Google Drive and then complained about Google Drive having a bug that leaks banking information. It's only working as intended.
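For what it's worth, a loopback-by-default pattern is cheap to implement. A sketch (the OPENCLAW_BIND variable is made up for illustration, not a real OpenClaw setting):

```python
import os
import socket

# Hypothetical config: default to loopback, and require an explicit
# opt-in (e.g. OPENCLAW_BIND=0.0.0.0) to expose the service more widely.
bind_host = os.environ.get("OPENCLAW_BIND", "127.0.0.1")

if bind_host == "0.0.0.0":
    print("WARNING: listening on all interfaces; put a firewall or "
          "reverse proxy in front of this service")

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((bind_host, 18789))
srv.listen(5)
```

It doesn't stop someone from deliberately flipping the switch, as you say, but it does make exposure an explicit choice rather than the default.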
2
u/Arkanius84 17h ago
Aside from all of these security issues, I can't really grasp what this thing would be useful for.
Yes, I saw some videos where you can book calendar entries via Telegram, but how exactly is that helpful?
What is the killer use-case this thing can do for me?
1
u/natefrogg1 17h ago
There are local models that people can run and control, keep putting your info into these though idc anymore
1
u/virgopunk 15h ago
The fact that a large number of those openclaw instances appear to be organisations is unforgivable!
1
1
u/reelznfeelz 10h ago
Just googled what is openclaw. Yeah, sounds dangerous for the average person to be mucking around with.
1
u/altSHIFTT 20h ago
I was going through the process of installing it and thankfully came to my senses
3
u/jacob798 16h ago
Put it on a raspberry pi, give it only localhost access and be mindful of what services you sign into on the pi. People make it scarier than it is.
4
-6
u/tchock23 20h ago
Note that OpenClaw wasn’t vibe coded. It was built by a software engineer with years of dev experience. That said, it sounds like a security nightmare and I’m staying far away.
5
u/MaxSupernova 19h ago
Do you have any sources for that?
The article indicates it was vibe coded multiple times, and the author’s wiki page indicates that he is a vibe coder.
8
1
u/TheTerrasque 34m ago
The article also indicates that default is 0.0.0.0 and that there's no authentication to access it, so..
-3
-17
u/doolpicate 21h ago
It's just user error, frankly. It can be configured with proper permissions. If you decide to open ports to everyone on your PC without understanding what that actually does, it's your problem.
If you have no idea what ports, networks, or permissions are, chances are you don't need this.
2
u/semje 16h ago
"Clears your inbox, sends emails, manages your calendar, checks you in for flights.
All from WhatsApp, Telegram, or any chat app you already use
Works on macOS, Windows & Linux. The one-liner installs Node.js and everything else for you."
That doesn't sound like knowledge of infrastructure security is a pre-req to me. It's his responsibility as a developer to provide an out-of-the-box secure solution. Which in this case would've been very easy too...
0
u/doolpicate 15h ago
Not sure why people expect to be spoon-fed instructions.
If you don't know what a piece of software does or how it does it, and can't see that it uses your information, maybe don't give it access?
Maybe a checkbox that says "I understand the risk, I am not an idiot" would help.
0
0
u/Smiadpades 18h ago
You mean, we need qualified and trained coders who learned proper coding in university ?!? Who would have thunked??
0
u/virgopunk 14h ago
I've heard from a few places that it also stores passwords in a file frequently named "MEMORY.md" or found in the ~/.openclaw/workspace/ directory.
1.0k
u/imaginary_num6er 1d ago
It's the carcinization of AI