[dupe] Scientist banned from revealing codes used to start luxury cars (theguardian.com)
109 points by qwertzlcoatl on Aug 2, 2013 | hide | past | favorite | 35 comments


He should put it on a t-shirt and call it free speech.

http://www.cypherspace.org/adam/shirt/uk-shirt.html

Or turn some portion of it into a flag?

http://en.wikipedia.org/wiki/File:Free-speech-flag.svg

Maybe he can get Bob Dylan to write a song with the codes and perform it live to a group of hackers. The possibilities are endless when knowledge is arbitrarily outlawed due to an inconvenience for the privileged. Then again, this is the UK, where the First Amendment doesn't apply.


This is in the UK. They don't have free speech like in America.


Civilized countries have freedom of research written into their constitutions. On the other hand, not every civilized country's constitution guarantees free speech. Freedom of political expression is usually what's protected; there's a difference.


When will people, especially the tech-illiterate, ever learn that security via obscurity doesn't really work when the marginal cost of duplication is near zero? [Of course, it's a bad idea even for tangibles, but in the electronic world it's a totally broken concept.]


Actually it is even worse: it damages the brand much more. From what is known about the hack, it requires very direct access to the car.

So we transform "researchers are able to unlock a Ford, given a few hours to brute-force, a laptop, and a secluded place" into "research so dangerous the court outlawed it".


This theme has been around for decades and only underlines the inability of decision makers to make rational decisions on things they lack a fundamental understanding of.

I suppose the lesson here is: honor and honesty get you in hot water when you deal with people who have lots of money. Better to make the devices in your garage and sell them to criminals. I'm sure this paper isn't the thing holding criminals back from making them anyway... anyone who's looked at their key fobs knows they aren't exactly high-security, RSA-encrypted signals. ( http://hackaday.com/2010/07/13/key-fob-programming/ )


Which is why I added the "marginal cost of duplication" part. I understand that security via obscurity has been practically the only thing happening. Hell, simply talking to our own parents (mine are 50+, somewhat tech-inclined, but nothing deeper than basic word processing and web browsing) about this reveals that they think "you shouldn't tell your secret to the whole internet", little realizing that publishing it will actually help you plug the holes.

The problem is, as long as it was a physical object it was all fine, sort of, at least. You could add anti-tamper mechanisms (like safes with relockers) that destroy the core technology/secret if someone tries to pry it apart. But with computer code and mathematics, the attacker is guaranteed to be able to make millions of duplicates at zero extra cost. So if approach #564 doesn't work, all he has to do is cp ../downloads/file ../project/reverseengineer/ and boom, he has another copy to work with.

The only thing holding an intruder out of a system is the solidity of the mathematical concept that system is based on. And now that the intruder knows that there's a break, and that he potentially will gain a lot by discovering it, he can continue unhindered.

Obviously I'm preaching to the choir, but if you encounter such people, give them this analogy. In the real world, the guy needs to buy your widget each time he fails to reverse-engineer the anti-intrusion mechanism. But if he could make a 100% replica of the widget and work on the copy, he could try everything: each time he fails, he just makes another copy and tries a new approach. He knows it's broken, and there's no extra cost to duplicating the widget and trying again, so there's literally no stopping him until he gives up.

Another thing that crossed my mind: the manufacturers would want to keep this knowledge (i.e. that the exploit exists) secret, but ~~if~~ when a thief discovers the backdoor, do you want to be held responsible for your car being stolen, instead of the manufacturer for the manufacturer's mistake? Wouldn't you prefer that the manufacturer call you, recall your car to the garage, and replace the flawed part with a better one? It's a shame, because Bentleys, Porsches, and Audis are NOT cheap. You're paying for the name, and at times like this, when the name comes under fire, they should stand by their customers instead of against them.


Previous discussion: https://news.ycombinator.com/item?id=6110575 (88 comments)


Any chance this is already being used nefariously? It would explain stories like this: http://www.today.com/news/police-admit-theyre-stumped-myster...

(also discussed on HN a month back: https://news.ycombinator.com/item?id=5826486)


Of course it's already being used. When Average Joe thinks about a criminal, he thinks of some thug with a brick.

Professional criminals affiliated with syndicates and mafias have money and resources. They can hire the talent able to figure out decoding these trivial radio signals.


> They can hire the talent able to figure out decoding these trivial radio signals.

I don't expect even older radio access systems to be that simple to abuse. For example this application note from Atmel[0] describes such a system. It uses an AES-based MAC and a rolling window counter to prevent message spoofing and replay. I wouldn't bet this implementation is actually secure, but it's not so trivial to attack.

Please note I'm not saying that criminals haven't abused vulnerabilities in these systems, just that it's not a simple matter of 'decoding these trivial radio signals'.

[0] http://www.atmel.com/images/atmel-2600-avr411-secure-rolling...
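A minimal sketch of the kind of scheme that app note describes: the fob sends a monotonically increasing counter plus a MAC over it, and the receiver accepts only fresh counters inside a forward window. This is not the Atmel implementation; it uses HMAC-SHA256 from the Python stdlib in place of the AES-based MAC, and the window size and key are made up for illustration.

```python
import hashlib
import hmac

WINDOW = 16  # receiver accepts counters at most this far ahead (illustrative)

def fob_press(key: bytes, counter: int) -> tuple[int, bytes]:
    """Fob transmits its current counter plus a MAC binding it to the key."""
    mac = hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
    return counter, mac

def receiver_check(key: bytes, last_seen: int, counter: int, mac: bytes) -> bool:
    """Accept only a fresh counter inside the window with a valid MAC."""
    if not (last_seen < counter <= last_seen + WINDOW):
        return False  # replay, or too far ahead of the receiver's state
    expected = hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

key = b"shared-fob-secret"          # hypothetical per-fob secret
ctr, mac = fob_press(key, 101)
print(receiver_check(key, 100, ctr, mac))  # fresh press: True
print(receiver_check(key, 101, ctr, mac))  # replayed capture: False
```

Even this toy version defeats naive record-and-replay; the attacks that work against real systems tend to target key extraction or protocol flaws, which is exactly why they're not "trivial radio signals".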


Well, I guess this decision could not be a better advertisement for the hacker, in case he happens to be a little short on cash:

Lots of eyeballs on this and the fact that the exploit stays "monopolized" is poised to drive its price on the market up.


So how did examining the hardware allow them to unlock any car? Ideally, shouldn't each car have its own secret key, and no amount of examination of one car or its ignition key would reveal the secret key of another one?

The way I interpret this, the manufacturer has thrown a backdoor into the system, allowing access to anyone who knows the backdoor key - and the researchers have managed to extract the backdoor key.


Fully baking an individualized key into each car is certainly possible, but would require exquisite record keeping and distribution to allow dealers to create new keys to replace lost keys. So instead the individual code is generated based on some information about the car.


Why not both fully unique and based on information of the car? I could easily imagine a system where [one-way algorithm X] is used to transform the car's VIN into a good cryptographic key, and the dealers then just use some piece of software to program a blank fob with the appropriate key. Done correctly, it would be extraordinarily difficult to reverse engineer.
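One hedged sketch of what the parent describes, with HMAC-SHA256 standing in for "[one-way algorithm X]" and a hypothetical manufacturer-wide master secret (nothing here reflects how any real manufacturer derives fob keys):

```python
import hashlib
import hmac

def derive_fob_key(master_secret: bytes, vin: str) -> bytes:
    """Derive a per-car key from the VIN with a keyed one-way function.
    Without master_secret, one car's VIN and key reveal nothing about
    another car's key."""
    return hmac.new(master_secret, vin.encode(), hashlib.sha256).digest()

master = b"manufacturer-master-secret"  # hypothetical; the whole scheme rests on this
key_a = derive_fob_key(master, "WVWZZZ1JZXW000001")  # made-up VINs
key_b = derive_fob_key(master, "WVWZZZ1JZXW000002")
print(key_a != key_b)  # True: each VIN gets a distinct key
```

The dealer's programming tool would recompute the same derivation to blank a replacement fob. The obvious catch, raised downthread, is that everything hinges on the master secret never leaving the dealers' tools.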


That's what it sounds like they do, just with an added secret: they have some function which maps VINs to key codes. But just doing that doesn't prevent someone from decapping whatever processor performs the mapping, and then everything is broken. So it's still relying on obscurity of the mapping function, which isn't any better. Plain hashes aren't meant to be cryptographically secure; they're meant to protect against corruption.


I don't think such record keeping qualifies as "exquisite" these days, more like trivial. If Apple can store all relevant info for checking the warranty and unlock status of every iPhone they sell, a car manufacturer can pull it off without too much trouble.


These sorts of actions are why I firmly believe that anonymous full disclosure is the best way to go for disclosing vulnerabilities.


I don't think this is wrong. Now that everyone knows there is a method to break it, why reveal the specific details to the public, where they can only be used to help steal cars?

I think dangerous information in general should be censored, though that is a very dangerous road to go down. But if it was possible to do so without corruption or having good things censored too, then I think it should be done.


I see nothing wrong with preventing the publication of exploits UNTIL they are resolved. If the company responsible for the security system does not want to resolve the vulnerability, then the exploits should be published.

Even though this scientist discovered the vulnerability first, that doesn't mean someone else won't do so in the near future.


Quite a few companies are based on finding those exploits and reverse-engineering them in order to allow third-party remote starters to function correctly.

It's nothing new and has been around for a while. I worked for three years at one; very exciting job!


Indeed, once the existence of a flaw is known, but not the specifics, it's often easy to find the flaw since you know where to look.


> especially a sophisticated criminal gang

Yeah right, like the theorized "sophisticated gang" can't break in and steal the paper/research. Or, more easily, kidnap/extort/blackmail/bribe the scientists to give them the info.

Criminalizing information means only the criminals will have access to it.


How is this not illegal prior restraint?


There is no first amendment in the UK.


I'm not familiar with British law, but it might be legal prior restraint. Once they publish, they cannot unpublish; and the argument for censorship is strong enough that the court decided they should wait until a decision is reached, otherwise the case would be pointless.


Because it was approved by a judge, and is therefore legal prior restraint.


Is there such a concept in the UK?


The distinction between prior restraints (injunction preventing publication) and subsequent punishment (being prosecuted after publication) actually originates in English common law, as does the view that prior restraints are worse and should generally be disfavored.

Here's what Blackstone's commentaries had to say about it in the mid-18th century:

The liberty of the press is indeed essential to the nature of a free state; but this consists in laying no previous restraints upon publications, and not in freedom from censure for criminal matter when published. Every free man has an undoubted right to lay what sentiments he pleases before the public; to forbid this, is to destroy the freedom of the press; but if he publishes what is improper, mischievous or illegal, he must take the consequences of his own temerity.

It's true that the U.S. version of the doctrine has developed in a much stronger form, however.


Great security by obscurity. Didn't we try that before and fail miserably?

Then again, if they start enforcing this like piracy, with ridiculous fines and jail time, the best researchers will be criminals.


So a British judge has placed an injunction against publication in the USA. How does such a conference fall within UK jurisdiction?


"The scientists said it had probably used a technique called "chip slicing" which involves analysing a chip under a microscope and taking it to pieces and inferring the algorithm from the arrangement of the microscopic transistors on the chip itself – a process that costs around £50,000."

£50,000?! Good Lord, that's a lot of money! All one needs is a microscope and a razor.


And an extensive knowledge of integrated circuits at the semiconductor level...


Why not publish it with some kind of irrevocable public license and open-source the project?


leak it.



