Exploitable vulnerabilities found in Kaspersky Anti-Virus (googleprojectzero.blogspot.com)
223 points by EthanHeilman on Sept 22, 2015 | 109 comments


So it's 2015 and they ship software with a trivial stack buffer overflow. They don't even bother to turn on the mitigation options in the compiler (not that such mitigations are anything more than a terrible set of hacks). Sounds fairly incompetent. It also further validates the wisdom of creating things like Rust.
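For reference, the "buffer overflow 101" pattern at issue is copying attacker-controlled input into a fixed-size stack buffer. Here is a minimal sketch (names illustrative) of the length check that the vulnerable C code omits, written in a bounds-checked language where the overflow is rejected instead of smashing the stack:

```rust
// Copying untrusted input into a fixed-size buffer. In C, a memcpy here with
// no length check silently overwrites the stack; with an explicit check the
// oversized input is rejected instead.
fn copy_into(buf: &mut [u8; 8], input: &[u8]) -> Result<(), &'static str> {
    if input.len() > buf.len() {
        return Err("input too large for buffer"); // the check the C code forgot
    }
    buf[..input.len()].copy_from_slice(input);
    Ok(())
}

fn main() {
    let mut buf = [0u8; 8];
    assert!(copy_into(&mut buf, b"ok").is_ok());
    // 16 bytes into an 8-byte buffer: refused, not silently written past the end.
    assert!(copy_into(&mut buf, b"AAAAAAAAAAAAAAAA").is_err());
}
```

(In Rust, even forgetting the explicit check would turn the bad copy into a panic rather than memory corruption, since slice operations are bounds-checked.)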


> Also further validates the intelligence in creating things like Rust.

Yeah, but many of the Rust features are present in the branch of Algol derived languages, namely Mesa, Modula-2, Modula-3, Oberon derivatives, Ada, Pascal dialects....

Yet it took decades of security exploits for developers to start grasping what a major flaw adopting C was.


> Yet it took decades of security exploits for developers to start grasping what a major flaw adopting C was.

Can we please stop with the C bashing already? Each time there's a story about a buffer overflow, the top comment says how that would be impossible in Go, Rust, JavaScript or the next shiny thing.

First of all, hindsight is always perfect. I'm sure adopting C made sense at the time. We didn't use to stroll down the street with a supercomputer in our pockets you know.

And second, the constant stream of news about trivial SQL injections, cross-site scripting vulnerabilities and other such things should make it plenty obvious that you can write insecure code in any language.


> I'm sure adopting C made sense at the time. We didn't use to stroll down the street with a supercomputer in our pockets you know.

Except Algol, PL/I, PL/M and many others already existed and pre-dated the computers used to develop C on.

"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."

- C.A.R. Hoare, in his 1980 ACM Turing Award Lecture

Algol was created in 1960 and ran on computers much humbler than the PDP-11.
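Hoare's point is easy to demonstrate today: with run-time bounds checking, an out-of-range subscript is detected instead of silently executed. A small illustrative sketch (the helper name is mine):

```rust
use std::panic;

// A checked subscript: the index is tested against the declared bounds on
// every access, exactly the discipline Hoare describes.
fn checked_get(v: &[i32], i: usize) -> Option<i32> {
    v.get(i).copied()
}

fn main() {
    let v = vec![10, 20, 30];
    // An out-of-range subscript is detected (here, a panic), not silently run:
    assert!(panic::catch_unwind(|| v[5]).is_err());
    // Or the check can be surfaced as a recoverable Option:
    assert_eq!(checked_get(&v, 5), None);
    assert_eq!(checked_get(&v, 1), Some(20)); // in-bounds access unaffected
}
```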

> And second, the constant stream of news about trivial SQL injections, cross-site scripting vulnerabilities and other such things should make it plenty obvious that you can write insecure code in any language.

Those don't suffer from memory corruption issues. So there is a whole class of security exploits one doesn't need to worry about.


> Algol, PL/I, PL/M and many others already existed and pre-dated the computers used to develop C on.

True... and I bet the performance was significantly slower. I also think portability in those languages was limited.


> True... and I bet the performance was significantly slower.

Obviously, since those systems pre-dated PDP-11 hardware.

> I also think portability in those languages was limited.

Have you ever coded C before ANSI C was approved?

It was only portable to UNIX itself, just like those other OSes and their systems programming languages.

Each C compiler outside UNIX implemented its own subset and semantics, which is why ANSI C had to adopt so much undefined behavior to avoid breaking too much code.


> Obviously, since those systems pre-dated PDP-11 hardware.

Well, I was actually referring to the equivalents running on the PDP-11 (like Algol 68).

> Have you ever coded C before ANSI C was approved?

>It was only portable to UNIX itself, just like those other OSes and their systems programming languages.

I meant machine portability. Before C, to get close-to-the-metal performance you needed machine-specific assembly.


C compilers generated such bad code that home-computer games only started using C instead of assembly in the mid-90s.

On workstations and mainframes there was hardly any difference from other higher-level systems programming languages. In fact, C prevented many compiler optimizations due to its aliasing rules and the decay of arrays into pointers.


Amazing how few people remember that Fortran compilers always outperformed C compilers in the "bad old days".


I think it's late enough that it's no longer hindsight. (And note they didn't even enable stack cookies for the extra protection that'd have given them.)

C gives you all those SQL injections, XSS, Shellshock, etc. in addition to all the memory-unsafety exploits: it's a superset of vulnerabilities. (Actually, I'd not be surprised if C makes them worse, as the code is far, far more verbose and focused on minutiae, and hence higher-level logic failures are more difficult to notice.) And, going by widespread commercial software, the other issues seem to account for much less of the critical security bugs.

Even so, if they had some amazing backcompat reason to stick with C, they should have been sandboxing and otherwise limiting the impact of the apparently-inevitable, trivial memory corruption issues they're sure to have created.


> Each time there's a story about a buffer overflow, the top comment says how that would be impossible in Go, Rust, JavaScript or the next shiny thing.

The top comment made by the same person.


It's just frustrating. Same stupid shit, over and over. This seemed particularly egregious, as it was a buffer overflow 101 example; no fancy exploit needed.

Though the comment seems to get reasonably upvoted, you're right that perhaps it's not a very high signal. Sorry.


> And second, the constant stream of news about trivial SQL injections, cross-site scripting vulnerabilities and other such things should make it plenty obvious that you can write insecure code in any language.

Find me an unintended trivial SQL injection or cross-site scripting vulnerability in Rust. Or Elm or Ur/Web, if you prefer more web-oriented languages.

All you have done is demonstrate that there are languages other than C that one should not write in, either. But of course there are. You have not demonstrated that all languages have these problems to the same extent.

It is rather like arguing that since you can get concussions in football, and you can also get concussions in ice hockey, it doesn't really matter what sport you play. While it is technically true that you can get concussions from golf or cross-country running, it takes quite a bit of skill and (un)luck to make it happen.


> Find me an unintended trivial SQL injection or cross-site scripting vulnerability in Rust.

Well, if there's no web support in Rust, that's going to be kind of hard. There's no SQL injection "in C" either. For that matter, there aren't any buffer overflows in the C language.

Now, if you meant "in some rando web forum written in rust", I think you'll need to wait a bit. Let's wait until there's a forum with more than a dozen users, then I'll tell you how the developers fucked up.


Sadly Rust does little to deal with these higher-level bugs. We give you tools to deal with them (tracking taint with PhantomData), but it's not clear to me that anyone's bothering to actually do that.

e.g. crates.io does no such tracking, but it also just punts the problem completely to our postgres bindings and making sure to build queries separately from their args (and the postgres arg injection does escaping, presumably). But this is no static or novel guarantee. It's just Alex being a great developer and reviewing all the patches.

I wouldn't be floored if crates.io was exploitable.

Actually I know it is; [I've exploited it](https://github.com/rust-lang/crates.io/issues/176).

Rust solves many interesting problems, but it is no panacea for program correctness!
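For the curious, here is a minimal sketch of the PhantomData taint-tracking idea mentioned above. All names are hypothetical (this is not crates.io's actual code): the type parameter records whether a string has been escaped, so passing raw user input where escaped input is required becomes a compile-time type error.

```rust
use std::marker::PhantomData;

// Zero-sized marker types recording the taint state in the type system.
struct Tainted;
struct Escaped;

struct UserInput<State> {
    text: String,
    _state: PhantomData<State>, // costs nothing at run time
}

impl UserInput<Tainted> {
    fn new(text: &str) -> Self {
        UserInput { text: text.to_string(), _state: PhantomData }
    }
    // The only way to obtain an Escaped value is via the escaping step.
    fn escape(self) -> UserInput<Escaped> {
        UserInput { text: self.text.replace('\'', "''"), _state: PhantomData }
    }
}

// An API that only accepts escaped input; handing it a UserInput<Tainted>
// is rejected by the compiler, not caught in review.
fn render_sql(fragment: &UserInput<Escaped>) -> String {
    format!("SELECT * FROM crates WHERE name = '{}'", fragment.text)
}

fn main() {
    let raw = UserInput::new("foo' OR '1'='1");
    // render_sql(&raw) would not compile; the escape step is unskippable.
    let safe = raw.escape();
    assert!(render_sql(&safe).contains("foo'' OR ''1''=''1"));
}
```

This is exactly the "static guarantee" the comment says crates.io lacks: the review burden moves from Alex into the type checker.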


I agree. If/when anyone starts building programs people care about in rust/go/etc - there will be vulnerabilities.

Suggesting that throwing thousands of lines worth of virtual machine behind a piece of code will solve security problems is shortsighted.


> Yeah, but many of the Rust features are present in the branch of Algol derived languages, namely Mesa, Modula-2, Modula-3, Oberon derivatives, Ada, Pascal dialects....

None of these have memory safety without GC (while supporting dynamic allocation and deallocation).


You are doing a great job with Rust, but I think you always jump too quickly to criticize Wirth languages and push for Rust.

Mesa => RC with local GC for collecting cycles.

Modula-2 => True, it uses manual memory management, but it avoids all the other C memory corruption errors.

Modula-3 => Uses GC only for types allocated via NEW. Library contains generic RC pointers. Untraced pointers can only be used in unsafe code.

Oberon derivatives => Use GC only for types allocated via NEW. Active Oberon added support for untraced pointers only in unsafe code.

Ada => No GC. All memory allocation is controlled. Deallocation is only possible via regions or explicitly in unsafe code. There are generic RC libraries as well.

Pascal dialects => No GC. Suffer from manual memory management, but none of the other typical exploits regarding out of bounds or pointer misuse.

Yes, they may lack affine types, but had they succeeded in the industry instead of C, the need for something like Rust would be greatly reduced.

So now we have to place our hopes in something like Rust, because the new generations have never heard of those languages and think C was the very first systems programming language.

However, every time I look at Rust sample code, I would like to see _unsafe_ used as sparingly as in those languages.

Instead, many Rust users seem to sprinkle it everywhere.


> You are doing a great job with Rust, but I think you always jump too quickly to criticize Wirth languages and push for Rust.

I'm not criticizing those languages. I think they're valid points in the design space. I'm simply saying that they didn't do what Rust does: memory safety with no GC (and dynamic allocation/deallocation).


Rust's approach of building on what Cyclone and ATS have done is great work.

My point is that we would be less in need of Rust if those systems programming languages had taken C's place.

So given that they failed to gain mindshare, except for Ada in a niche domain, we now need new alternatives.

Still, be prepared for an uphill battle with C++17, as the language is only part of the equation, as I learned from being on the Wirth-languages side for a few years.


> new generations never heard of those languages and think C was the very first systems programming language.

That's just it. Rust looks enough like C++ to catch on. This was not an accident though I can't say I'm jazzed about it.

Rust brings a lot to the table with its type system too. I would rather be using Rust tomorrow than having had the benefits of say Pascal derivatives for the past 45 years. I'm not saying this just for effect: after the obvious ML influence, the parts of Rust I like most are the nuances.

> Instead, many Rust users seem to sprinkle it everywhere.

Is that true? You will necessarily see it in bindings, or to implement certain essential features that can't be implemented otherwise. Beyond that, there is the motivation to use unsafe code for speed; hopefully this is mostly restricted to modules.

Are you complaining that people are using it when they could reasonably avoid it, or that it is too often necessary?


> Are you complaining that people are using it when they could reasonably avoid it, or that it is too often necessary?

They are abusing it, especially as a workaround for constraints that cannot be expressed in the type system.

While in the languages I mentioned unsafe constructs relate to operations that might lead to memory corruption, some Rust devs are using it for anything they assume isn't logically safe.

So you then get talks like the Session Types one at ICFP, which uses unsafe to control functions being called outside the protocol traits being defined.

Which lead to discussions like this one:

https://github.com/rust-lang/rfcs/pull/1236#issuecomment-136...


> They are abusing it, especially as a workaround for constraints that cannot be expressed in the type system.

That doesn't sound like abuse to me, as long as the exposed interface remains safe.

> some Rust devs are using it for anything they assume isn't logical safe.

That sounds like an objection to the scope of what Rust defines as safe. Unless you mean to imply there are reasonable, Rust-safe ways to rewrite the transgressions. In the former case I have to disagree. It would be nice if we could prove more. But you can't prove everything.

I haven't seen the ICFP talk but I might watch it later if you can link to a video. I've skimmed that discussion before but you will need to point out your objections—IMO it is great that Rust devs are having that discussion.


My objection is that I see unsafe as being just for memory integrity.

For everything else something like contracts should be used, from my point of view.

As for voicing my objections to the Rust community, being a language geek is one of my hobbies, but it doesn't mean I will ever use those languages at work.

So I would only contribute noise to the discussions, which is why I eventually moved away from the Go and D forum discussions as well.

My time is already quite filled following the JVM, .NET and C++ ecosystems, the ones I get paid to be an expert on.


> My objection is that I see unsafe as being just for memory integrity.

Either you think this way because of a sense of orthodoxy or because you think memory safety is important in a way the other kinds of safety aren't. I disagree, especially since "memory safety" may or may not imply protection against aliasing of mutable data. Those nuances I like about Rust probably wouldn't exist without the broader scope of safety concerns.

> As for voicing my objections to the Rust community,

About the discussion you linked? I meant to me, in the service of explaining your objections.

> My time is already quite filled following the JVM, .NET and C++ ecosystems, the ones I get paid to be an expert on.

Well, that was important information. Thanks for making the time to share!


> Either you think this way because of a sense of orthodoxy or because you think memory safety is important in a way the other kinds of safety aren't. I disagree, especially since "memory safety" may or may not imply protection against aliasing of mutable data. Those nuances I like about Rust probably wouldn't exist without the broader scope of safety concerns.

Memory safety is more fundamental than most other sorts of correctness: it is a necessary (but not sufficient) condition for them. If you can't assume memory safety, you have no assurances about any other invariants, because any piece of data can be modified/outdated/invalidated at any time in the program's execution. (This is especially bad if the compiler itself assumes that memory unsafety can never happen, and uses this undefined behaviour to optimise the program in unintended ways.)

I'm fairly sure that some definitions of memory safety that work for Rust are "there are no use-after-frees" or "the language is type-safe"; all 'other' problems with memory safety (iterator invalidation, data races, etc.) can be used to violate those and hence need to be outlawed. I'm not sure there's that much wiggle room for what memory safety means, at least in broad terms. In any case, aliasing of mutable data can definitely lead to memory unsafety, and is hence covered/controlled by Rust's guarantees. In some ways, it is really the fundamental cause of memory unsafety: remove/control either "aliasing" (e.g. Rust, many GC'd languages) or "mutable" (e.g. Haskell) and you've got memory safety.

Also, Rust definitely doesn't only try to help the programmer with memory safety, it's just memory safety (and hence type safety) is the form that has hard guarantees. Other forms are generally handled by "lints"/warnings or by building libraries on top of the type system.
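As a small illustration of the "aliasing + mutability" point: the borrow checker rejects aliased mutation at compile time (which a running example can't show), and RefCell enforces the same rule dynamically, so the sketch below exercises it at run time:

```rust
use std::cell::RefCell;

fn main() {
    // "Aliasing XOR mutability", checked dynamically by RefCell: while a
    // shared (aliasing) borrow is live, a mutable borrow of the same data
    // is refused instead of allowing the reader's view to be invalidated.
    let data = RefCell::new(vec![1, 2, 3]);

    let reader = data.borrow();              // shared borrow: aliasing exists
    assert!(data.try_borrow_mut().is_err()); // mutation refused while aliased

    drop(reader);                            // aliasing ends...
    assert!(data.try_borrow_mut().is_ok());  // ...and mutation is allowed again
}
```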


To be clear, we on the Rust team see 'unsafe' as being strictly about memory safety as well, and have actively fought against using it as a general "pay attention here" kind of annotation.

Eventually, effects would be nice to mark other kinds of "here be dragons" as well, but for now, unsafe == possible memory unsafety.
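For readers unfamiliar with the idiom being described: unsafe is meant to be confined to a small block whose preconditions are locally checked, behind a safe public function. A minimal sketch (the function is illustrative, not from any real library):

```rust
// The exposed function is safe; the unsafe block is small, and its
// precondition (index 0 in bounds) is established immediately above it.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: we just checked that the slice is non-empty,
        // so index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```

Callers never see the unsafe; auditing memory safety reduces to auditing these small annotated blocks.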


If you read the comment you'll see that I'm talking about different interpretations of memory safety. You can't just decide what it means in Rust applies to all languages, or that everyone knows what you mean.

I was trying hard to have a conversation while avoiding arguing about semantics, so thanks for that.


So your objection is to how certain users (notably not the core dev team) utilize "unsafe", not to the language itself. It seems like the easiest way to mitigate this is with better documentation; surely, those other languages' equivalents of "unsafe" could also have been abused similarly, no?


No, because in those languages unsafe is only related to memory integrity.


What were the reasons that C was adopted over these other options? (Honestly don't know, I wasn't around in those days...)


AT&T made UNIX source code available to a few American universities like Stanford, Berkeley, Carnegie...

Some of those students tried their way into the then-new workstation market using what they knew from their universities. Sun was one such case.

Back in those days we paid for development tools, so everyone used to stick with what the OS vendors provided; why shell out money for another language that implied the extra effort of writing bindings?

So like the mobile and browser space nowadays, C as the UNIX language was the way to go.

Meanwhile, those university students and professional coders with access to UNIX at the university and work, wanted to be able to take work home.

So C dialects that were able to run on the tiny home computers started to be developed and distributed via code listings in magazines, books and BBSes.

When UNIX eventually took over the mainframes market, C got into the same spot like JavaScript in the browser.

The other languages, just like any systems programming language, had their OSes, but those failed against UNIX. And as I mentioned before, back then you needed a very good reason to convince someone to pay extra for compilers that weren't part of the OS SDK.


Unix was insanely successful in its day: an indication is that when Richard M. Stallman decided to create a Free Software operating system in 1984, he chose to call it "GNU" for "GNU's Not Unix".


I have a thought and a question. First, we can only write secure parsers for single file types through enormous effort. My guess is writing secure parsers for dozens of file types is functionally impossible in C.

What more secure languages besides C are easy to call from C/C++ codebases? AFAIK OCaml isn't, at least in the sense that I don't think you can produce a C lib without sucking in the OCaml allocator and other stuff [1].

[1] http://www.mega-nerd.com/erikd/Blog/CodeHacking/Ocaml/callin...


I have to disagree; in the case of parsers, the enormous effort comes from programming them in languages that make it difficult to parse safely. Safe parsing in this case is pretty much just being buffer/array-safe, nothing really fancy. Pretty much the only extant language in which the simple act of parsing files is dangerous is C, or non-idiomatic C++ being used as "super C".

If you're writing security code, put down the damn C.

(I mean, my personal opinion is just flat-out put down the damn C anyhow; even non-"security code" has a way of turning into security-sensitive code when you're not looking. But certainly if you're writing security code.)
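To make "safe parsing is just buffer/array safety" concrete, here is a minimal sketch (illustrative, not from any real codebase) of parsing a length-prefixed record where every access is bounds-checked, so a lying or truncated header yields an error rather than an out-of-bounds read:

```rust
// Input format: a one-byte length prefix followed by that many payload bytes.
// All accesses go through Option-returning slice methods, so malformed input
// is a recoverable error, never a buffer overread.
fn parse_record(input: &[u8]) -> Result<&[u8], &'static str> {
    let len = *input.first().ok_or("empty input")? as usize;
    input.get(1..1 + len).ok_or("declared length exceeds input")
}

fn main() {
    assert_eq!(parse_record(&[3, b'a', b'b', b'c']), Ok(&b"abc"[..]));
    // Header claims 200 bytes but only 2 follow: rejected, not overread.
    assert!(parse_record(&[200, 1, 2]).is_err());
    assert!(parse_record(&[]).is_err());
}
```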


I think x0x0 actually agrees with you here..


This is one of the big reasons Rust is so exciting. ABI compatibility with C and no real runtime (no GC etc.).

Though Kaspersky would have been better off even using C++/CLI and "It Just Works" to hook into code written in, say, C#. The IJW stuff is cute, as it seamlessly brings the CLR in-process and lets you transition from native to managed without noticing. (And MSVC's C++-to-CLI targeting is good enough to compile MS Office, except producing MSIL instead of native code, FWIW.)


So it's easy enough and fast to have C++ calling Rust, so you can spot-replace pieces of an extant codebase?


Yes. And that’s what mozilla has started to do recently.
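For a flavor of what the Rust side of this looks like (names illustrative; a real build would also set the crate type to cdylib or staticlib): an extern "C" function with an unmangled symbol that C++ can declare as `extern "C" uint32_t add_checked(uint32_t, uint32_t);` and link against:

```rust
// Exposed with the C ABI and an unmangled symbol so a C or C++ caller can
// link against it directly; no runtime or GC needs to come along.
#[no_mangle]
pub extern "C" fn add_checked(a: u32, b: u32) -> u32 {
    // Saturate on overflow instead of wrapping silently.
    a.checked_add(b).unwrap_or(u32::MAX)
}

fn main() {
    // Callable from Rust too; the ABI only changes the calling convention.
    assert_eq!(add_checked(2, 3), 5);
    assert_eq!(add_checked(u32::MAX, 1), u32::MAX);
}
```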


Any idea where I can find such commits?


They aren't checked in yet, but https://bugzilla.mozilla.org/show_bug.cgi?id=1161350 has some patches (marked obsolete) and https://bugzilla.mozilla.org/show_bug.cgi?id=1151899 has a work-in-progress patch.


Nifty. Thanks for the links!


Yes, precisely.


Apart from obviously Rust, there's a subset of C++. One can, of course, do all of the insecure things in C++ that one does in C -- and more, in fact. This is not required, though: you can have bounds-checked "arrays" (vectors accessed with .at(), never operator[]), various types of managed pointer (from boost, or with C++11 the stdlib), and require variable initialization.

One of the big problems with C++ is that few people actually do this. It basically requires putting a linter into your build process, to check for C-isms and unsafe C++ behavior. Otherwise, people will use these things because they are easier.

My favorite thing about Rust is that it's carefully designed to make the safe things easier. It's much better if you need to go out of your way to do something dangerous than the other way around.


> One of the big problems with C++ is that few people actually do this. It basically requires putting a linter into your build process, to check for C-isms and unsafe C++ behavior. Otherwise, people will use these things because they are easier.

Which is one reason why, although I like C++, I am happy to only use it in side projects, and mostly settle for JVM/.NET stuff at work nowadays.

Most of the enterprise code bases I have seen in my career are mostly C-with-Classes style, and most companies don't use analyzers. I was quite happy when the introduction of Clang started to change the mentality regarding analyzers.


It also validates the opinion that anti-virus software is increasing the attack surface, possibly outweighing the benefits.


Wow look at the smart guy here everybody.

Java had a basic arithmetic bug until what, 2005? Bash had a serious bug until last year?

The people developing this software are all 10x smarter than you.


Well, yes. I'm not anywhere near as smart as the people writing Java or bash. That is why I want a computer to be smart for me. If they can't get things right in an unsafe language, what hope do I have?

We have powerful computers these days. Software is eating the world. Let it eat the process of writing correct software, too.


No one's saying this will eliminate all problems. It's just that the vast majority of remote execution bugs in widely-used codebases seem to be related to memory corruption. I examined all of MS's security notices for a couple of years, and the number of critical notices that would have been eliminated by memory safety was well over 90%. Mozilla says the same is true of Firefox.


How many other AV programs are extracting/manipulating files without a sandbox? I bet most of them; you can probably directly port this method and get the same results.


The blog post links to previous analysis of vulnerabilities in Sophos and ESET products.

"Many of the vulnerabilities described in this paper could have been severely limited by correct security design, employing modern isolation and exploit mitigation techniques. However, Sophos either disables or opts-out of most major mitigation technologies, even disabling them for other software on the host system."

"Unfortunately, analysis of ESET emulation reveals that is not the case and it can be trivially compromised. This report discusses the development of a remote root exploit for an ESET vulnerability and demonstrates how attackers could compromise ESET users."

So, yes, definitely.


What I'm not hearing is what this really means. It's not just the buffer overflow thing. It seems that most of these tools (he has exploited 3 so far) disable protection features in the OS and are just blind to the risk parsing generally presents, especially for a tool that runs with such privilege.

I wonder if he has failed to exploit any of the antivirus products he tried, or if it was 3 for 3. What does it really mean: does using any antivirus at this point put you at even more risk of a targeted attack, especially if you are a sensitive user? And what does it mean for corporate rollouts of antivirus, which I'm guessing would be the most likely candidates for such targeted attacks?


I wonder if this is at all related to the targeted attack Kaspersky suffered a few months back?

http://www.kaspersky.com/about/news/virus/2015/Duqu-is-back

Maybe someone was exploring the possibility that these vulnerabilities are features.


The issues observed are the logical consequence of becoming a big company with talent of varying qualification levels: tons of code, junior engineers, lack of control. Google writes reports on that, but plenty of Google's own code is far from ideal, despite its strict talent sorting and interviews.


So the Russian regime "infiltrated" (ordered) one of the most well known Russian software companies, a famous security company no less, to craft a few back doors into one of the world's most trusted security products.

Just another conspiracy theory, of course...


99% of conspiracy theories are false, unrealistic. Reality is worse than the average Joe's wildest dreams. The backdoors are probably there; they're just not the obvious exploits that exist out of incompetence and neglect and are being exposed first.


and 70% of statistics are made up...


What's the incentive for Google in doing this?


http://googleprojectzero.blogspot.com/2014/07/announcing-pro...

Google's goals are so grand they see things like securing software and getting faster broadband everywhere as things worth doing. I've been reading their blog since launch and the team is very skilled, professional and interested in proving theory with actual exploits.


They fight back against exploit peddlers like Zerodium and criminals who sell their findings to the highest bidder. You know, PR. They're basically doing a huge service to their users and software companies by allowing the best exploit finders to focus on some software and then responsibly disclose their findings to the vendors, free of charge.

http://googleonlinesecurity.blogspot.de/2014/07/announcing-p...

The headline is kind of silly in the sense that if Project Zero focuses on any software in the world, they'll most likely find exploitable vulnerabilities. With Kaspersky the ridiculous thing is that they have stuff compiled without /GS, which since VS2005 you have had to actively disable, as it's on by default.


It's slightly more than PR... Google sees the trustworthiness of people's interactions online as an existential necessity for the company. Things like tradeable zero-day exploits are a risk to the entire business model of having people search for things and conduct business on the Internet.


Maybe Google could only hire people of this caliber if they let them spend some of their time doing work they will be able to talk about, besides the work on securing Google's internals which must remain secret.


It seems like a pretty straightforward grey hat operation to me. Google pays crackers to find competitors' vulns, publishes them to give said competitors bad PR, and maybe even gets a bit of good PR for itself for "internet security" mumbo-jumbo.


Kaspersky is aligned with Putin's FSB:

http://www.bloomberg.com/news/articles/2015-03-19/cybersecur...

http://www.wired.com/2012/07/ff_kaspersky/

You'd be a little crazy to run this outside of Russia. I suspect there are more backdoors disguised as "vulnerabilities we knew nothing about, totally, it's news to us!" This reads like a trivial overflow which would have been found with some basic testing. This is a common ploy with untrustworthy software.

Maybe Kaspersky was safe to run once, but with Russia's new brazen anti-West attitude, it's a liability now.


Well, obviously anti-Russia arguments make sense, as it is a Russian product. But most of these popular anti-virus programs are riddled with issues. This is not just a Kaspersky thing: Norton has its own set of issues, and so does McAfee. And I'm sure the others are not bulletproof.


Indeed, most software is riddled with issues.


Yes, but we should hold programs purporting to increase security to a higher standard.


Not most. All. All programs have bugs, and most people can't write even a simple "hello world" program that doesn't contain bugs (in the program itself, the runtime, the underlying OS, or even the hardware firmware below it). Fact is: execute a program and there will be an exploitable bug somewhere. Fact of life. The system is fucked - royally!


I guess it would be more correct to speak of hidden or embedded biases or conflicts of interest.


I don't use their products and the place I used to work was a little surprised when I pointed out his multiple connections to the current regime.

I also thought it was odd that he always seemed to have no problem catching US nation-state malware, but all of those Russian state cyber weapons other companies keep finding seem to elude him and his team.

Coincidence?


> but all of those Russian state cyber weapons other companies keep finding seem to elude him and his team

Kaspersky blog has written up analyses of malware campaigns from many different countries. Here are some Russian-origin APT campaigns.

1. https://blog.kaspersky.com/turla-apt-exploiting-satellites/9...

2. https://securelist.com/blog/research/69731/the-cozyduke-apt/

3. http://www.kaspersky.com/about/news/virus/2014/crouching-yet...


I was reading about how Estonia and Ukraine were victims of all these client-side hacks from the Russian government which Kaspersky magically couldn't detect. I think it's pretty obvious they're in cahoots. I imagine these governments are no longer running Kaspersky.

Brian Krebs also writes about popular malware that first checks whether it's being run on a computer in Russia. If so, it stops running. There's a lot of government-sponsored malware coming from Russia. There's a public-private partnership to put profitable malware on non-Russian computers, and Russian officials turn a blind eye due to corruption, bribes, etc. It's all fairly ugly.


Every security vendor is aligned with its "host" nation; that is how business works. Not to mention that it's one of the only few sources these types of enterprises can recruit from in the first place. Most of the security software coming out of Israel, like Check Point, goes even beyond that: it's actual code that was written in the IDF and released for commercial use. The NSA also has a technology transfer program that enables commercialization of many technologies which were invented or developed by the NSA; they also release quite a bit of their TTP software as open source.


I think you're certainly overstating the case here. We don't see US derived malware being ignored by the USG. In fact, almost all high profile hacker arrests stem from US investigations. US researchers are the ones who take out Russian and Chinese botnet C&C servers. We see almost no action on the nation states that profit from malware, namely Russia and China.

On a cyber-weapons level, who knows, but citing things like the TTP, which releases to FOSS, or Israel's defense industry as a sign of corruption is asinine and not remotely comparable to the status quo in Russia. Cyber weapons will always be here and, when used correctly, can't be detected by signature-based AV, because it has no idea what to look for and the exploits they use are typically zero-days. Stuxnet used, I believe, 3 or 4 different zero-day attacks.

Nor did you bother to read the Kaspersky articles, where the proof is laid out in a pretty obvious way. I think it's foolish to knee-jerk to "Oh, Russia does this, so must everyone else." Certainly there are degrees of corruption, and Russia is on the extreme end of that scale. Brian Krebs and Tavis Ormandy aren't on some NSA payroll to make Russia look bad; Russians do that for free. Let's stop playing the "every government is the same" card. It's been historically untrue.

>Every security vendor is aligned with its "host" nation

Also, I really doubt the guys writing rules for Snort, ClamAV, mod_security, or OSSEC are aligned with anyone. Your view is incredibly cynical and very much an example of the disingenuous "whataboutism" tactic Russians use to defend their wrongdoings. Those rules are public; pray tell, which ones are NSA backdoors? I suggest you come up with some proof if you're making such accusations. The articles I linked to about Kaspersky are significant and well-researched.

edit: I can't reply below, so I'll type it here. Clinton-era crypto limitations are a non-issue. Clinton lost the crypto wars after the Clipper chip was never passed or funded and after Phil Zimmermann wrote PGP and helped end crypto restrictions. I'm talking about things happening right now; whining about something from 20+ years ago is neither helpful nor relevant. Use whatever crypto you like.


Ok, is "every commercial security vendor of significance" better? RSA and other security vendors had to introduce work reducers for the NSA in the 90's, because that's what was mandated for them to be able to export their software. Being aligned with national interests doesn't mean introducing backdoors; it means cooperating with them, and considering the breadth of knowledge a national organization has and its resources, it's a mutually beneficial relationship in most cases. It's not a conspiracy, it's a simple fact, like universities doing research for national defense: you don't expect Saint Petersburg Polytechnic to perform research for DARPA, just like you don't expect Caltech to do research for the Russians. Nowhere did I hint that it's a sign of corruption or that I even see this as a negative thing, as I don't on both counts. This is the simple reality of how businesses and academia work for national interests, and I'm not sure what's surprising about this.


We do see US government malware being ignored by US security firms. Why do you think it's always Kaspersky and other non-US companies that report stuff like Duqu?


I wouldn't go as far as claiming that US agencies are putting gag orders on such investigations.

Symantec did a very extensive study on Stuxnet; they were the ones who confirmed that it was intended to damage the centrifuges by fooling the industrial motor controllers.

What is more likely is that a national intelligence organization will use the local security vendors for counter intelligence purposes i.e. tipping them to suspected cyber intelligence operations that they've identified through other means.

This is a much more likely scenario than simply telling them not to talk about certain malware, it's easier to enforce and it provides them with both deniability and a more favorable outcome.

Geopolitics also plays an important role here: different vendors have different market share in different regions. Kaspersky, for example, is more common in lower-income countries, as well as countries that are under direct US sanctions like Syria or Iran, or countries to which US companies will have a hard time exporting or developing their market due to past relations.

So when a virus infects many machines in Iran or Syria, if any of those computers are running fully licensed and supported commercial anti-virus software at all, it won't be Symantec or McAfee but rather Kaspersky or some other non-US/Western product.


I feel pretty much this is the case as well, hence my relative trust in Kaspersky for my own use. The US/Israel never detected Stuxnet either :). It's a big geopolitical chessboard, and anti-virus companies are knights that fight for their own color; they're not neutral.


Maybe he doesn't like polonium.


Sounds pretty circumstantial. Adobe for example has had many security vulnerabilities in flash over the years. I doubt that they were intentional back doors.


This is getting into conspiracy territory, but we are talking about a government that intercepts networking equipment while it's being shipped, disassembles it, installs hardware back-doors, and then delivers it. Strong arming any US software company that has a near universal install base isn't really a stretch of imagination.


I believe most AV vendors, if/when persuaded by powerful agencies, won't need to introduce a specially crafted backdoor.

The average antivirus product already has everything necessary: personal licenses as a way to identify the specific machine or person, automatic streaming software updates as a way to deliver the payload, and almost unrestricted privileges on the target system, enough to infiltrate and stay concealed.


I agree with you on the Adobe software issues, particularly PDF reader. But you raise a question: how can we tell if a security vulnerability is an intentional back door or a goof? It's pretty easy to say "intentional" in some cases, and "goof" in others, but what about the vast majority that will inevitably lie between the easy-to-tell ends of the spectrum?

As an example, there's still room for argument about the Dual_EC_DRBG algorithm, and RSA making that the default PRNG for some or all of their products. RSA denies taking money for it. Nobody can make an airtight case for the NSA deliberately weakening it. Yet we still all kind of view Dual_EC_DRBG with suspicion.
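The structural worry about Dual_EC_DRBG is easy to illustrate with a toy analogue. The sketch below replaces elliptic-curve point multiplication with plain exponentiation modulo a prime, and all the constants are invented for illustration; it is not the real algorithm, only a demonstration of why a secret relation between the two public generators lets whoever chose them recover the generator's internal state from its output:

```python
# Toy analogue of the Dual_EC_DRBG concern: exponentiation mod a prime
# stands in for elliptic-curve point multiplication (illustration only).
P_MOD = 2**31 - 1           # a Mersenne prime; our toy group is Z_p^*
g = 7                       # public generator, analogue of point P
d = 123457                  # secret relation, known only to the designer
h = pow(g, d, P_MOD)        # second public generator, analogue of Q = d*P

# The designer precomputes e = d^-1 mod (p - 1), so that h^e == g mod p.
e = pow(d, -1, P_MOD - 1)

def dual_ec_toy(state):
    """One generator step: emit an output, then advance the state."""
    r = pow(h, state, P_MOD)           # attacker-visible output
    next_state = pow(g, state, P_MOD)  # secret internal next state
    return r, next_state

s0 = 987654321
r, s1 = dual_ec_toy(s0)

# Knowing e, the designer recovers the next internal state from the
# public output alone -- and can then predict all future outputs:
recovered = pow(r, e, P_MOD)
print(recovered == s1)   # True
```

The real suspicion is exactly this shape: nobody outside the NSA can prove whether a `d` relating the standardized P and Q was ever generated, which is why the default constants were never trusted.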


The RSA deal was $10M, and this was after its acquisition by EMC, so $10M out of $25 billion in revenue makes it a bit of an odd sum to accept for introducing a backdoor.

Not that RSA hasn't done so in the past, but it was public: back when encryption software could not be exported, RSA came to an agreement with the US government to export its 64-bit encryption. It would use a 40-bit private key and append to the message an additional 24 bits, transmitted in clear text, which completed the private key to its 64-bit size.

This was a government-mandated "work reducer": if need be, the NSA could decrypt the message, as they had the ability to break 40-bit encryption and the remaining 24 bits of the 64-bit key were known for each message. This wasn't hidden; it was even announced at a conference, with great pride, that RSA could now export its mail encryption suite to Europe. Germany made a fuss about this 5 years after the fact, but everyone pointed out that, well, they announced it at a conference.
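The arithmetic of that "work reducer" is simple to sketch. The snippet below is a toy model (repeating-key XOR stands in for whatever cipher was actually used, and all names are invented): disclosing 24 of the 64 key bits shrinks an eavesdropper's brute-force space from 2^64 to 2^40 keys.

```python
import os

def toy_cipher(data: bytes, key: bytes) -> bytes:
    # Stand-in for the real cipher: repeating-key XOR. NOT secure --
    # it's here purely to make the key-space arithmetic concrete.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def export_grade_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Toy model of the scheme: generate a 64-bit session key, but ship
    24 bits of it alongside the ciphertext in clear text."""
    key = os.urandom(8)          # 64-bit session key
    escrow = key[:3]             # 24 bits transmitted in the clear
    return escrow, toy_cipher(plaintext, key)

# Effective search space for an eavesdropper who reads the escrow bits:
secret_bits = 64 - 24
print(2 ** secret_bits)   # 1099511627776, i.e. 2**40 keys to try
```

Brute-forcing 2^40 keys was within the NSA's reach in the 90's, while legitimate recipients (who got the full key through the normal key exchange) saw no difference at all.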


People are trivially easy to bribe. The KGB bought an FBI agent for 22 years for a total of only $1.4M.[1] And an army intelligence officer for only $250K over 25 years.[2]

It's strange to think that corporate employees would be that much harder to corrupt, especially for their own country.

1: https://en.wikipedia.org/wiki/Robert_Hanssen

2: https://en.wikipedia.org/wiki/George_Trofimoff


Corrupting an RSA employee, sure; making a deal with a corporation for a measly $10M, nope.

Human assets are a different story. While not enormous, $1.4M and even $250K are quite large amounts of money. Those assets are usually developed by other means; in most cases the money is largely irrelevant. Even if the asset refuses to take money, tradecraft mandates that they be forced to take it, just to leave a money trail they can later be threatened with if they no longer wish to comply. Additionally, being paid makes the asset more invested in their duty, because it creates a link like with a would-be employer and allows them to quantify their assignment with a positive reinforcement, no matter how big or small it is. So money paid to long-term assets isn't really a bribe; an initial sum might be used to turn the asset in the first place, but that usually requires them to be in a position to need it, e.g. gambling debts, medical bills, etc. Generally, assets who can be easily bribed will not be farmed in the manner of the cases you've mentioned: people who can be easily bribed cannot be trusted, which isn't a trait you want in an asset.


Eh, are we sure the $10M was the only thing being paid? Perhaps the NSA sweetened the deal for key decision makers.

Even without that, it's free money for a benign reason, while doing the security services a favour. "Hey, we've got this new crypto thing that's amazing but people don't believe us. Add it to your product and we'll give you token compensation. We'll also make a note of what great guys you are."


Way to risk too much for a measly 250k.


Yet he got away with it! They only popped him after he got greedy in retirement and they set up a sting!

I know of employees busted for internal schemes they cooked up (it was pretty cool working with BigCorp to set up an international sting to get them). It simply cannot be that hard to find people who need or want money and get them to compromise things for relatively small amounts of money.


And Dropbox is aligned with the US government as you surely know. Does that matter though?


Well that is a reason that people choose not to use it. Any non-US state would be silly to store their documents there.


They've been gunning for Kaspersky ever since Kaspersky released information about state-sponsored malware.


Who is "they"? Tavis Ormandy is a respected security researcher who often makes the news. Hell, not too long ago he found exploits in Sophos and Symantec products. He likes to target AV. Sophos, a UK product, was embarrassed internationally by the exploits he found. He is not playing favorites here. We need more people like him. AV has gotten a free pass for far too long.

If you're attacking Ormandy's character, I'd appreciate some proof over the usual conspiracy-theory stuff that often gets upvoted uncritically on sites like Reddit and HN. As far as I can tell, he is certainly one of the good guys, and we are lucky to have him in such a high-profile position at Google.

>Kaspersky released information about state-sponsored malware.

Kaspersky actually is the dirtiest of the bunch, with ties to the Russian KGB/FSB. I suggest you rethink who your heroes are.

http://www.bloomberg.com/news/articles/2015-03-19/cybersecur...

http://www.wired.com/2012/07/ff_kaspersky/


Kaspersky is smart. He enjoys breathing and wants to avoid radioactive tea.


I'm guessing most people will miss the reference to https://en.wikipedia.org/wiki/Poisoning_of_Alexander_Litvine... so I'll just add a link.


This is my concern with any software from an obviously corrupt and autocratic state that attacks its citizens with impunity. How can I trust anyone there when they certainly, and rightfully, value their lives over my rights? No Russian is going to say no to the FSB torture machine. No one is going to become any sort of whistleblower. I would see any Russian software as being dangerous to run at this point and things only getting worse considering Putin's brazen anti-West attitude.

Maybe Kaspersky was safe to run once, but that's just not true anymore.


Plus, if I'm not confusing him, Tavis used to make some kick ass FVWM configurations. The kind of window manager stuff that you see in SF movie computers.

A really cool dude.


You're insinuating that Google is gunning for Kaspersky at the bidding of their NSA masters (or something)?

How deep does the rabbit hole go?


It's rabbits all the way down!


I'm suggesting a higher-up or someone else suggested Tavis target Kaspersky. I'm not suggesting Google execs all sat together and decided to destroy Kaspersky.

Is it that hard to believe that there are certain people who would like to dissuade Kaspersky from revealing further information about state-sponsored malware? I'm not even sure where this incredulity is coming from. Let's see you target an intelligence agency and see what happens.


Are you trying to imply Kaspersky are somehow a victim...? Installing their security product actively decreases your security in this case. That destroys their entire reason for even existing.

Kaspersky is no victim here. They've been so negligent in their own product's security that it is actively harmful to have it installed. That's literally the end of their product being useful for me and it applies to any other security company as well. I hope the rest of them get 'suggested' too, and soon! I'll be clear - this vulnerability is the one thing Kaspersky should have been spending their engineering resources on and they have failed utterly. Pack it up and close shop.


I would say Kaspersky should thank their lucky stars Google has sponsored such research. This is priceless info.


Clearly you could not be less familiar with Tavis :)


Making their software safer, that'll learn 'em.


Bear in mind that they conclude the article with a compliment on how quickly the issues were resolved, and note that the next installment is due soon.


That doesn't make sense, given that they have already broken two other antivirus products in a similar fashion and commented:

  Thanks to Kaspersky for record breaking response times when handling this report, they’ve set a high bar to beat for other vendors! 
I wonder how fast they were given the comment about 'high bar'.


Suggesting that Google's allies want to keep the word on state-sponsored malware at a minimum?


How exactly are they able to introduce exploits into his code?


Into whose code? He found several bugs and is now writing about it. I'm not sure if I understand your question?

