Hacker News

>So? Isn't that the point though? Having regular audits should have caught this issue? I thought this being 'open source' would make this even easier.

You have it the wrong way around. Testing, audits, and open source are all best practices. They should be done. None of them is a guarantee of security.

Open source is not a guarantee that all bugs will be found; it's a necessity so that anyone can look for bugs (and backdoors).

Audits cannot be passed. They can only be failed. Kind of like how RNG tests cannot be passed, only failed. Example: use SHAKE256 to generate a keystream from the fixed initial value 0x00. It will not be secure, but it will pass any statistical test.
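A minimal sketch of that example in Python, using the standard library's hashlib (the seed and output length here are illustrative): the output looks perfectly random to a statistical test, yet it's useless as a key because the seed is fixed and public.

```python
import hashlib

def keystream(seed: bytes, n: int) -> bytes:
    """Derive n bytes of keystream from a seed with SHAKE256 (an XOF)."""
    return hashlib.shake_256(seed).digest(n)

# A stream derived from the fixed seed 0x00 would pass statistical
# randomness tests, but anyone who knows the seed can reproduce it
# byte for byte.
stream_a = keystream(b"\x00", 64)
stream_b = keystream(b"\x00", 64)
assert stream_a == stream_b   # fully deterministic: zero security
assert len(stream_a) == 64
```

The point being that statistical tests measure the distribution of the output, not the secrecy of its origin.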

>this issue does not give me any more confidence in Signal being secure.

No application can actively prevent a bug like this. As the author of a high-assurance comms system, here is what I wrote under threat model:

"If hardware such as computers/optocouplers the user has bought is pre-compromised to the point it actively undermines the security of the user, TFC (or any other piece of software for that matter) is unable to provide security on that hardware."

This also applies to software issues that actively undermine the security of the user. So the thing is, a software bug that outputs sensitive data to the wrong contact cannot be absolutely prevented. You would need a friendly man-in-the-middle guard node running a Google-grade image recognition algorithm that detects you're trying to send a legal document to the wrong client, or a nude to not-your-SO.

Again, bugs are unavoidable. What matters is the incident response, and whether Signal is actively trying to protect you from everyone, including themselves.

Another PoV: if you punitively fire people who get caught in social engineering pentests, you're replacing a person who now has real-life experience with social engineers with someone who may or may not have such experience.

Sure, if the person fails multiple times, it's time to let them go, but Signal's reaction is an indication of a good employee who takes personal responsibility for making sure it won't happen again.

I'm extremely careful about what I recommend, and I have serious trouble agreeing with your assessment that a rare bug being open for six months is cause for serious concern. It wasn't being sat on for six months, but you seem keen to give that impression. Would you care to elaborate?



Completely irrelevant.

Nobody mentioned anything about 'guarantees'; this is a matter of urgency and priorities.

I don't care if this was a 'rare' issue. Signal knew it was open for half a year, and what were they doing? Testing cryptocurrency payments.

If security was really that important to Signal, where was the urgency there?

If this was any other app that did this (especially Facebook), you'd come down on them like a ton of bricks.


>If security was really that important to Signal, where was the urgency there?

If the cause is a random database key collision, you obviously can't reproduce it on demand. You have no idea what is causing it, so you'd have to add logging and wait.

>and what were they doing? Testing cryptocurrency payments.

Yeah, I'm sure they just decided to abandon their core value because they wanted to rush out a feature they had advertised to no one, and were thus in no hurry to deploy.

If this was an actual issue of neglect, I wouldn't care if it was my own app; I would pour a truckload of bricks on top of it.



