
My boss installed 2 camera DVRs a year or so ago. All I did was provide 2 external IPs for the DVRs and didn't worry much about it. The password was the default "123456" that comes with these things, but we don't care much about who sees the footage. At worst, someone would change the password and we'd need to factory reset (never happened).

Last week the internet for the whole office kept going down. Weirdly, I could remote in, but DNS was not working. I first thought our DNS server had crapped out, but it was fine. After some investigation, it turned out the firewall was not responding. After a reboot, the firewall would work for a while, then go down again shortly after.

Long story short: the DVRs my boss got (unbranded) come with telnet access on some nonstandard port. A botnet got access to them and was making thousands of DNS and telnet queries, overloading the firewall.
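
For anyone debugging something similar: a quick resolution probe (a minimal Python sketch; the hostname is just a placeholder) helps tell a dead resolver apart from a dying firewall:

    import socket, time

    # Time a name lookup through the system resolver. If remote access
    # works but this hangs or fails, the problem is on the DNS path
    # (here: a firewall drowning in the botnet's queries), not the link.
    start = time.time()
    try:
        socket.getaddrinfo("example.com", 443)
        print(f"DNS OK in {time.time() - start:.2f}s")
    except socket.gaierror as exc:
        print(f"DNS failed after {time.time() - start:.2f}s: {exc}")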



1. Just because you don't mind if people have RO access doesn't mean you should use default passwords. Privilege escalation is a thing, and is often far easier than getting a foothold. The number of developers who "don't bother with" disciplined input validation in areas that are supposed to be accessible only by trusted users is staggering.

2. Don't expose entire hosts to the internet. Punch only necessary holes in the firewall. That way the device at least needs to phone home in order to cause a problem like this.

3. Do you have a similar policy with other hosts on your network? I.e., do you figure "well, that's inside the firewall, so we don't need to worry about encryption/timely application of security updates/resetting default passwords/etc."? If you're not 100% sure (or if the answer is "yes"), you now have a lot of cleanup work to do.


Not the OP, but I was just in a similar conversation: "Oh, it's behind a firewall, so let's just disable the extra security (passwords, HTTPS, etc.)." The idea of defense in depth is very important, but sadly many people seem to think one security layer is enough.


The network is always compromised.


Ever since the smart phone, this is the only acceptable perspective. Assume bad people are already on your network.

Remember, smart phones are literal bridges from one network to another.


Given the amount of effort it took me to "literally" bridge a smartphone to my network to give myself internet access when my fibre connection went down, do you think this is slightly incorrect?

I'm assuming what you mean is that smartphones may be connecting to your internal network and bringing malware with them.

That said, the corporate networks I've seen have a separate network for phones/laptops and you need to VPN in if you want other access.


You are correct that activating both networks at the same time is hard, but what you have is a device that travels between untrusted and trusted networks. Assuming a compromised device, anything is possible.


Sysadmins at places I've worked have used "defense in depth" as an excuse to create layer upon layer of frustrating hoops to jump through in order to get any work done. I'm pretty sick of it. One perfect layer is vastly preferable.


There are sysadmins who use complexity to maintain draconian control, hide laziness, or mask a lack of knowledge, but don't throw the baby out with the bathwater. No matter what, these people will find some way to obstruct you or maintain control. Even if their hearts are in the right place and they are following security best practices, it sounds like they weren't doing a good job of automating the complexity and processes. Complex doesn't have to mean complicated.

A security design that takes advantage of multiple layers and compartmentalization is your ally against attackers. They love networks with hard shells and squishy insides. Once they are in via a service, no matter how innocuous, they can move laterally to the real targets with impunity.

But ultimately this kind of stuff is a culture issue. Culture issues are hard to fix, but they're usually the root cause of bad blood between operations and development. It generally needs to be addressed on both sides, though. It's easy to think it's just a bunch of grumpy and possessive ops people, but those behaviors are often rooted in how the dev teams interact with them. Things like punting releases over a wall and calling it a day, not participating in on-call duties despite causing many outages, and a disparity between how credit (for releases) and blame (for outages) are assigned are often cited as issues that create what devs think are irrational BOFHs.


"One perfect layer" does not exist. Doing defence in depth is of course not a good thing, and making people do a lot of hoop-jumping isn't helpful either. But say, using a smartcard and a OTP isn't all that hard, and vastly more secure than just a username and a password, to name a random option someone might implement.


There's always a balance, but I'll echo the other comments: One layer is not enough. Do you actually think that, if a DVR is behind a firewall, it shouldn't need a password for admin access?

Strong passwords, two-factor auth for privileged services, access control policies (ACLs/firewalls), access logging, etc. are all requirements of any secure network. And that was just "off the top of my head on a Friday" kind of stuff.


>One perfect layer is vastly preferable.

Ah, hello, every manager that has ever made a decision causing the problems people further up are grousing about.

The point of defense in depth is there IS no "one perfect layer."


There's no such thing as a perfect layer.


Do you mind telling me what this one perfect layer is? If you'd like to turn it into a business, i'll fund your seed round.


Do you have to open a firewall rule request for every src:dst host/port/protocol pair? Even for 3rd-party applications you don't think you should have to understand, the ones that should "just work"? Do you have the least privilege necessary at any given point in time?

If not, you have relatively little to complain about.

And, I'll add: if you're a developer, we'd all prefer you just crank out perfect code. That way we'd never have deployment issues, never get paged for outages, and never have to work around poor architecture or assumptions that don't scale or aren't load tested. Thanks!


>One perfect layer

That's the problem.


Well, you need at least one layer to protect from the outside world, and another for insider attacks. Often those layers can be invisible to the user. For example, many places have a policy that all internal services must be internet-hardened, as though they were exposed on the broader 'net (even though they're behind a firewall).


In addition to openftp4[1] (where some folks already discovered their own server[2]), I recently published c4[3], one of the more exhaustive lists of public-facing IP cams that use the default password. Many of them can even be controlled remotely. The majority of them are vulnerable to simple Perl exploits.

[1] - http://git.io/ftp

[2] - https://www.reddit.com/r/sysadmin/comments/53cor1/someone_ju...

[3] - https://github.com/turbo/c4


Thanks for destroying the web, btw!

Open FTP servers are an asset, not a risk, as they're ideal for distributing downloads, updates, packages, etc.

Thanks to your efforts, many groups which used to provide all downloads via HTTP and FTP have since stopped FTP access, and don't provide wget-able HTTP URLs either.

Congratulations, now I have to run a full browser on my servers to be able to download many packages.


> Thanks for destroying the web, btw!

Works for me.

> Open FTP servers are an asset, not a risk, as they're ideal for distributing downloads, updates, packages, etc.

I agree, I use many of them frequently.

> Thanks to your efforts, many groups which used to provide all downloads via HTTP and FTP have since stopped FTP access, and don't provide wget-able HTTP URLs either.

I can't confirm that; quite the opposite. The number of functional servers actually increases between openftp4 scans, and the current scan shows the same upward trend.

Public FTPs have always been public. That's their purpose, as you said. Their addresses are listed in many public lists. I don't get how an FTP server that is publicly listed and serves the public would suddenly cease operation because its URL is in yet another public list.


> because its URL is in yet another public list.

Not because the URL is in the list directly – but the way the list is created seems like public shaming, and I’ve seen unknowing managers go "I’ve read somewhere on the internet that there’s a list of open servers, and I found ours on it, everyone can hack us, remove it now!".

So, that’s the problematic effect: the way the list is presented, especially the way it seems to publicly shame operators (and the "fix it to get off the list" statement, too), can be counterproductive.


> and the "fix it to get off the list" statement, too

Is addressed as "(This doesn't concern FTP servers that are public by design.)"

> seen unknowing managers go

These are not the people managing the FOSS mirrors that are public by design.

In fact, that's one of the reasons openftp4 now stores the complete banner. It makes it easy to identify FOSS mirrors by just grepping for "mirror" et al., and maybe even to find new mirrors that you didn't know about, or older software archives with some awesome abandonware.
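
Something like this works, assuming a local copy of the list where each line holds a host followed by its banner text (check the repo for the exact file layout):

    # Hypothetical filter over a downloaded openftp4 dump.
    KEYWORDS = ("mirror", "archive")

    with open("openftp4.txt", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            host, _, banner = line.strip().partition(" ")
            # Keep hosts whose banner suggests a public FOSS mirror.
            if any(word in banner.lower() for word in KEYWORDS):
                print(host, banner)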


Well, I’m not just talking about FOSS mirrors (they wouldn’t require a browser to download stuff in the first place).

I’m concerned about companies that used to host drivers, software, etc. on FTP, publicly available, but have now moved it behind a clickthrough wall, impossible to wget.


> At worse people will change the password and we need to factory reset

Turns out this was a false assumption. We're all learning this the hard way.

All internet-enabled devices need to ship with a unique, resettable default password. Many ISP-provided modems/routers do this now, and it's great.
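
One possible scheme (purely illustrative, not any vendor's actual method): derive each unit's default password from its serial number and a per-model factory secret, so no two devices ship with the same credential and the sticker can be printed at manufacturing time:

    import hashlib, hmac

    def default_password(serial: str, factory_secret: bytes, length: int = 10) -> str:
        # HMAC keeps the serial -> password mapping non-invertible
        # without the factory secret; truncate the digest for the label.
        digest = hmac.new(factory_secret, serial.encode(), hashlib.sha256)
        return digest.hexdigest()[:length]

    print(default_password("DVR-00012345", b"per-model secret"))  # made-up values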


> We're all learning this the hard way.

I'd be very disappointed if most HN readers are still learning this.


You mapped all ports from the external IPs to the DVR?


The firewall has a 1-to-1 NAT feature: it maps an external IP to an internal one. I had no firewall policies set. Rookie mistake. I've since blocked all outgoing access and only allow port 80 and the video port incoming.
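
A quick way to verify that cleanup from an outside host (a sketch with placeholder IP and ports; adjust to your actual video port) is to confirm nothing unexpected, like that telnet service, still answers:

    import socket

    EXTERNAL_IP = "203.0.113.10"   # placeholder (TEST-NET) address
    EXPECTED = {80, 554}           # web UI + assumed RTSP video port

    for port in (21, 22, 23, 80, 554, 8000, 9527):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            reachable = s.connect_ex((EXTERNAL_IP, port)) == 0
        if reachable and port not in EXPECTED:
            print(f"unexpected open port: {port}")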


I wouldn't even allow port 80 incoming from the internet. Even if the web interface appears to have authentication on it, there may be vulnerable CGIs. They left a telnet interface wide open--do you trust them to write a secure website?


I'd suggest blocking all incoming traffic except a VPN. It's not that hard to set up a VPN.

Even better if the high-risk devices like cameras are on a separate VLAN from critical stuff.


Blocking outgoing traffic is generally a mistake. It prevents the device from retrieving updates, assuming there are any. And if there are no updates then there had better also be no undiscovered or unpatched vulnerabilities, which isn't likely.


I read it as he gave the DVRs external IP addresses without NATing them to internal addresses.


They could also use the device to pivot into your network. Once you've compromised the DVR, they can go wherever they want.


This sounds like a horror story from r/sysadmin.


Great example.

In a perfect world, the ISP would drop you, in addition to charging a penalty for breaking its ToS. This would result in someone from IT being fired (maybe you) and the company starting to actually care about security from now on.


But that's mean...



