We updated our RSA SSH host key (github.blog)
1265 points by todsacerdoti on March 24, 2023 | 488 comments



The fact that this key was apparently not stored in an HSM, and that GH employees had access to this private key (allowing them to accidentally push it) means that effectively all communication with GH since the founding of the company has to be considered compromised. This basically means that, depending on your level of paranoia, you will have to review all code that has ever interacted with Github repositories, and any code pushed or pulled from private repositories can no longer be considered private.

Github's customers trust GH/MS with their code, which, for most businesses, is a high-value asset. It wouldn't surprise me if this all results in a massive lawsuit, not just against GH as a company, but also against those involved (like the CISO). Also, how on earth was it never discovered during an audit that the SSH private key was stored in plain text and accessible? How was GH ever able to become ISO certified last year [0] when they didn't even place their keys in an HSM?

Obviously, as a paying customer I am quite angry with GH right now. So I might be overreacting when I write this, but IMO the responsible persons (not talking about the poor dev that pushed the key, but the managers, CISO, auditors, etc.) should be fined, and/or lose their license.

[0] https://github.blog/2022-05-16-github-achieves-iso-iec-27001...


SSH uses ephemeral keys. It's not enough to have the private key and listen to the bytes on the wire, you have to actively MITM the connection. A github employee who has access to the private key and enough network admin privileges to MITM your connection already has access to the disk platters your data is saved on.

Regarding the secrecy of the data you host at github, you should operate under the assumption that a sizeable number of github employees will have access to it. You should assume that it's all sitting unencrypted on several different disk platters replicated at several different geographically separated locations. Because it is.

One of the things that you give up when you host your private data on the cloud is controlling who and under what circumstances people can view or modify your private data. If the content of the data is enough to sink you/your company without remedy you should not store it on the cloud.


Agreed; GitHub documentation refers to repo “visibility,” not “security,” and that is an intentional distinction.

When we signed on with GH as a paying customer over a decade ago, they were quite clear that private repos should not be considered secure storage for secrets. It’s not encrypted at rest, and GitHub staff have access to it. It takes only a few clicks to go from private to public.



Not criticizing you, your technical correction is valid, but the discussion is beside the point. "Encryption at rest" is basically meaningless for something like GitHub. Not being able to pull out a hard drive in a data center and read it at home has been table stakes for some time. But how many people are even in a position to do that anyway? A blog post like the above is just necessary to tick some boxes to comply with this or that regulation.

The real question is how many services are able to access the data live and how many support and debug interfaces (indirectly) allow you to read it. Measure GitHub's success in securing the secrecy of private repos in how few employees can breach it without causing alarms. Even without cynicism I would be surprised if it was their main concern. Data integrity is far more important for code. (There are notable exceptions, of course. If applicable, don't put it in the cloud!)


Exactly. Host keys are about authentication, not connection security. Presumably the upthread comment is trying to say that "ssh communication with github could have been subject to an undetectable MitM attack by an attacker with access to this key"[1], which isn't remotely the same thing as "all communication with GH since the founding of the company has to be considered compromised".

[1] Which is sort of tautological and silly, because that's true of all sites and all host keys. What the comment was trying to imply was that the choice of storage of this key invalidates any trust we might have in GitHub/Microsoft regarding key management, and that therefore we shouldn't trust them. Which is also tautological and silly. Either you trust them or you don't, that's not a technological argument.


You shouldn't commit unencrypted secrets to git anyway, public or private, on-site or in cloud.

There are plenty of tools to either keep them encrypted (we just use simple GPG, but our team is small) or auto-generate them and never show them to a user in the first place (various key vaults that can be used from automation, like HashiCorp's Vault).
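A minimal sketch of the simple-GPG approach (recipient addresses and file names here are made up):

  # encrypt the secret for the team's public keys before it ever touches git
  gpg --encrypt --recipient alice@example.com --recipient bob@example.com secrets.env
  echo 'secrets.env' >> .gitignore     # keep the plaintext out of the repo
  git add secrets.env.gpg .gitignore   # only the ciphertext gets committed

  # a team member holding a matching private key decrypts locally
  gpg --decrypt --output secrets.env secrets.env.gpg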


The person I'm replying to is arguing that their source code itself is the secrets they're trying to protect.


I would also add that an attacker's ability to pretend to be the client to the server is also limited, if ssh key-based client authentication is used. This means that even if the host key is leaked, an attacker will not be able to push in the name of the attacked client. The attacker will be able to pretend to be the server to the client, and thus get the pushed code from the client (even if the client just added one patch, the attacker can pretend to be an empty repo server side and receive the entire repo).

If ssh token based auth is used, it's different of course, because then the server gets access to the token. Ideally Github would invalidate all their auth tokens as well.

The fun fact is that a token compromise (or any other attack) can still happen at any point in the future against devices that still have the outdated ssh host key pinned. That's a bit unfortunate, as no revocation mechanism exists for ssh keys... ideally clients would blacklist the old ssh key, given the huge importance of github.
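For what it's worth, anyone can at least check whether a given machine still has the retired key pinned (a sketch; compare the printed fingerprint against the one GitHub retired):

  # list the github.com entries currently in known_hosts, with their fingerprints
  ssh-keygen -l -F github.com

  # if the old RSA fingerprint is still in there, drop it
  ssh-keygen -R github.com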


WKD also lacks key revocation and Certificate Transparency (CT).

E.g. Keybase could do X.509-style Certificate Transparency:

  $ keybase pgp -h
  NAME:
   keybase pgp - Manage keybase PGP keys

  USAGE:
   keybase pgp <command> [arguments...]

  COMMANDS:
   gen          Generate a new PGP key and write to local secret keychain
   pull         Download the latest PGP keys for people you follow.
   update       Update your public PGP keys on keybase with those exported from the local GPG keyring
   select       Select a key from GnuPG as your own and register the public half with Keybase
   sign         PGP sign a document.
   encrypt      PGP encrypt messages or files for keybase users
   decrypt      PGP decrypt messages or files for keybase users
   verify       PGP verify message or file signatures for keybase users
   export       Export a PGP key from keybase
   import       Import a PGP key into keybase
   drop         Drop Keybase's use of a PGP key
   list         List the active PGP keys in your account.
   purge        Purge all PGP keys from Keybase keyring
   push-private Export PGP keys from GnuPG keychain, and write them to KBFS.
   pull-private Export PGP from KBFS and write them to the GnuPG keychain
   help, h      Shows a list of commands or help for one command

  $ keybase pgp drop -h
  NAME:
   keybase pgp drop - Drop Keybase's use of a PGP key

  USAGE:
   keybase pgp drop <key-id>

  DESCRIPTION:
   "keybase pgp drop" signs a statement saying the given PGP
   key should no longer be associated with this account. It will **not** sign a PGP-style
   revocation cert for this key; you'll have to do that on your own.
Searching for "PGP-style revocation cert" (https://www.google.com/search?q=PGP-style+revocation+cert) turns up:

- "Revoked a PGP key, is there any way to get a revocation certificate now?" https://github.com/keybase/keybase-issues/issues/2963

- "Overview of Certification Systems: X.509, CA, PGP and SKIP"

...

- k8s docker vault secrets [owasp, inurl:awesome] https://www.google.com/search?q=k8s+docker+vault+secrets+owa... https://github.com/gites/awesome-vault-tools

- Why secrets shouldn't be passed in $ENVIRONMENT variables; though e.g. the "12 Factor App" pattern advises parametrizing applications mostly with environment variables, which show up in /proc/pid/environ but not /proc/pid/cmdline

W3C DID supports GPG proofs and revocation IIRC:

"9.6 Key and Signature Expiration" https://www.w3.org/TR/did-core/#key-and-signature-expiration

"9.8 Verification Method Revocation" https://www.w3.org/TR/did-core/#verification-method-revocati...

Blockcerts is built upon W3C DID, W3C Verifiable Credentials, W3C Linked Data Signatures, and Merkle trees (and JSON-LD). From the Blockcerts FAQ https://www.blockcerts.org/guide/faq.html :

> How are certificates revoked?

> Even though certificates can be issued to a cohort of people, the issuer can still revoke from a single recipient. The Blockcerts standard supports a range of revocation techniques. Currently, the primary factor influencing the choice of revocation technique is the particular schema used.

> The Open Badges specification allows a HTTP URI revocation list. Each id field in the revokedAssertions array should match the assertion.id field in the certificate to revoke.

Re: CT and W3C VC Verifiable Credentials (and DNS record types for cert/pubkey hashes that must also be revoked; DoH/DoT + DNSSEC; EDNS): https://news.ycombinator.com/item?id=32753994 https://westurner.github.io/hnlog/#comment-32753994

"Verifiable Credential Data Integrity 1.0: Securing the Integrity of Verifiable Credential Data" (Working Draft March 2023) > Security Considerations https://www.w3.org/TR/vc-data-integrity/#security-considerat...

If a system does not have key revocation it cannot be sufficiently secured.


It's worth saying that GitHub also has GitHub AE, which has some requirements (e.g. 500+ seats) but is a lot better for paranoid administrators, offering stricter audit results (FedRAMP High is no joke), data residency, stricter auth requirements, etc. I'd _imagine_ that such an environment is deployed as an isolated managed GitHub Enterprise instance, and at the very least in that environment I'd expect all data to be encrypted at rest.


> you should not store it on the cloud.

Well, at least not without encryption that is under your control.


> How has GH ever been able to become ISO certified last year?

ISO/IEC 27001:2013 doesn’t say you have to store private keys in HSMs? It just requires you to have a standards compliant ISMS that implements all Annex A controls and all of the clauses. Annex A and the clauses don’t specifically mandate this.

If you can convince an auditor that you have controls in-place that meet the standard for protecting cryptographic media you basically meet the standard. The controls can be a wide variety of options and don’t specifically mandate technical implementation details for many, many things.

You shouldn’t rely on ISO/IEC 27001:2013 as attestation for technical implementation details here. Just because your auditor would red flag you doesn’t mean all auditors would. The standard is only as effective as the weakest, cheapest auditor, and there are perverse incentives that make auditors financially incentivized to certify companies due to recurring revenue.


> You shouldn’t rely on ISO/IEC 27001:2013 as attestation for technical implementation details here.

Thanks for the insight, good advice.

But also from the same GH article [0]:

> The ISO 27001 certification is the latest addition to GitHub’s compliance portfolio, preceded by SOC and ISAE reports, FedRAMP Tailored LiSaaS ATO, and the Cloud Security Alliance CAIQ.

Do you have any knowledge of one of these certifications (for example FedRAMP) that puts any restrictions on handling key material?

[0] https://github.blog/2022-05-16-github-achieves-iso-iec-27001...


I think you're looking for something like PCI DSS compliance, which requires you to store keys in a HSM, and is much more prescriptive with key management.


PCI-DSS is incredibly aggressive with key management considerations. They get down to radio frequency concerns in pin pad hardware, etc.

It took us ~2 years of back & forth with various parties & auditors to get per-client exclusions for accepting end-customer debit PIN codes in-branch on an iPad screen. These banks do not have fully-compliant solutions and must have exceptions on file.


You may be confusing PCI-DSS versus PCI-PIN, which are a little different. You’re right about the requirements around acquiring pins though.


Wasn't SPoC supposed to help with that?


It showed up too late. We did our integration 2016-2017. We were held to far more unrealistic standards at the time.


SOC isn't very strict either, it's more opinionated and allows more leeway with auditor standards. The CSA Star CAIQ is open and free, but it doesn't mandate HSMs (https://cloudsecurityalliance.org/artifacts/cloud-controls-m...). ISAE is a precursor to SOC 2 (ish). The only one of these that I'm not familiar with is "FedRAMP Tailored LiSaaS ATO".

As the adjacent commenter 1970-01-01 states, PCI DSS is actually pretty strict and requires use of HSMs. However, that level of PCI compliance is only required for institutions that actually handle the full credit card number. GitHub uses Stripe as a payment gateway, so they don't need to meet it.

FIPS 140-3 levels 3 and 4 often are met by using HSMs, but technically speaking there aren't any standards that I know of that exist outside of the payments industry that have hard requirements for HSM use for all cryptographic key media.

I think the unfortunate reality is that many organizations struggle to deploy HSMs widely for all cryptographic media because they aren't very scalable and deployment can lead to other operational constraints that many companies can't or won't deal with. It's much easier for low-frequency high-importance tasks like signing new releases of OS images or packages, rather than I/O heavy high-frequency operations for a site like GitHub.

So, in summary, look for PCI DSS or FIPS 140-3 Levels 3 and 4, but be prepared to discover that "creative" solutions may let a company meet even the highest levels of FIPS compliance without HSMs for all cases.

I know it's sucky advice, but for ISO in particular, I suggest just acquiring and reading 27001 if you want to use it as a basis for decision making. It does offer a lot, and I think it's a very well-written standard, but all of the implementation advice is in ISO 27002, and it's not required. The advice in 27002, when used to meet 27001, leads to a very compelling program. But a 27001-compliant org can completely forego 27002 and DIY it as long as it meets the test criteria set out by the auditor.

Edit: Also, every auditor will include standard disclaimers in their report that they perform sampling-based testing, and that the testing is not a complete guarantee of the state of the company. ISO in particular is performed by getting an initial audit, followed by two years of surveillance audits that test the entire suite of controls. But due to the sampling-based methodology, something can be overlooked, or evidence can be provided that doesn't holistically reflect the org in all cases. If it's any consolation, this particular issue will certainly be in their audit next year as a security incident and will probably lead to an opportunity for improvement from the auditor.


This is an important point. Most certification standards are not tech literate, only process literate.

So you may be able to get certified for policy and process, but not necessarily for the technical implementation that allows something in the first place.

Hopefully, now that this is known, it will improve under Microsoft.

For those who value this, self-hosting or on-prem instances of git might shoot up in importance.


> [...] depending on your level of paranoia, you will have to review all code that has ever interacted with Github repositories [...]

Not to diminish the problems of having a large entity like Github handle a private key like that, but if that was your level of paranoia, you probably should have used commit signatures all along and not relied on Github to do that job for you.


As usual on HN, I find the pragmatic response about 3 pages down in the replies to an extremely hyperbolic top-level comment.

I also don't want to diminish the concerns around Github or similar orgs losing control of a private key, but the far more realistic concern for the vast majority of threat models is often put to the wayside in favor of what amounts to a scary story. Rather than the straightforward key removal and replacement that this should be, I (and surely many others) have spent all morning combatting this specific FUD that cropped up on HN with leadership and many engineers. It's actually quite detrimental to quickly remediating the actual concerns introduced by this leak.

I understand that security inspires people to be as pedantic as possible - that's where some big exploits come from on occasion - but I really hope the average HN narrative changes toward "what is your actual, real-world threat model" vs. "here is a highly theoretical edge-case scenario, applicable to very few, that I'll state as a general fact so everyone will now wonder if they should spend months auditing their codebase and secrets". Put simply: this is why people just start ignoring security measures in the real world. Surely someone has already coined the term "security fatigue".

It's all just a bit unbalanced, and definitely becomes frustrating when those suggesting these "world is burning" scenarios didn't even take the available precautions that apparently would satisfy their threat model (i.e. commit sigs, as you suggested)

Ok, end rant :)


Very cogent explanation, and an important point that you highlight - factual real-world risk/threat model is far more important than hypothetical "the-world-is-burning" scenarios.

Having a correct threat model is the first step towards building reasonable security controls. But far too many are willing to pander to the "It rather involved being on the other side of this airtight hatchway" [0] scenarios.

[0] https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31...


Git provides the ability for authors to sign their commits with their own private key. To ensure the integrity of code in a repository, this method should be relied on rather than whatever hosting provider(s) have a copy of the repository.

Requiring all commits to be signed by trusted keys avoids the risks associated with someone tampering with a repository hosted on GitHub if they are able to get access to it, although it doesn't protect against code being leaked.

See here for details: https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work
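For anyone who hasn't set it up, the gist of it looks like this (the key ID is a placeholder):

  # tell git which key to sign with, and sign every commit by default
  git config --global user.signingkey <YOUR-KEY-ID>
  git config --global commit.gpgsign true

  # or sign a single commit explicitly, then verify signatures later
  git commit -S -m "some change"
  git log --show-signature -1
  git verify-commit HEAD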


Parent comment is concerned with privacy, not authenticity. They're not worried that someone modified their code, they're worried that someone saw it.


They specifically called out the need to review all code that ever interacted with github. The implication is that you can't trust it hasn't been tampered with.


The parent was assuming full compromise.

The risk of disclosure is pretty obvious with GitHub, and I’d assume anyone with low risk tolerance here is using something else, including the on-prem GitHub. I can think of a dozen higher risks.


About signed commits, but off topic for the article:

Many signed commits ("Verified" in green) on GitHub are signed with GitHub's own private key¹, not the committer's key. There's a good technical reason, but many committers don't realise their own signing key isn't the one used on their signed commits to the main branch.

That GitHub private key would be a fun one to have leaked!

It would invalidate most of the "Verified" flags on commits on most repo main branch histories.

(¹ GitHub uses its own commit-signing key when you use the GitHub GUI to merge or squash-merge commits, even if you already signed them, and the resulting commits show as "Verified" the same as commits signed by your own key. So you do have to sign with your own key, but it's not the key used on the main-branch commits everyone sees when they visit and download. Many controlled workflows require approved PRs to be merged to main/master through GitHub's tools, and many users default to using the GUI for PR merges by choice.)


> The fact that this key was apparently not stored in an HSM, and that GH employees had access to this private key (allowing them to accidentally push it) means that effectively all communication with GH since the founding of the company has to be considered compromised.

For a host key? Like, I get that being able to impersonate Github isn't great as far as state level actors having the ability to do this, but you do know the actual transport layer keys are ephemeral and aren't derived at all from the host key, right?


> state level actors having the ability to do this

Not just nation state actors, but basically anyone in a position to MITM.

Also, you don't have to be a nation state actor to extort a GH employee. Any bad guy can do a "give me this key or I'll hurt your kid". People are being extorted for a lot less.

There are billions of dollars of assets flowing through GH's infrastructure, for the sake of safety (!= security) of Github's employees, nobody should ever have access to key material.


Spot on. Most people will remain absolutely rational when faced with irrational threats. The only protection against that is ensuring that condition cannot be encountered.


"Obey so they don't carry out their threat." may be prescribed by classical decision theory, but I wouldn't call it rational when it's bad for you to be known to do it. I just asked classical decision theory what decision theory to pick and I think it said "take action x such that, if you do x, and everyone had known since 15:53 UTC Mar 24, 2023 that you'd do x, you'd have done as well as possible.". So what deserves to be called "rational" may be to do what the person you wish you'd always been would do.


it doesn't take a state actor to MITM this. It takes a Wifi Pineapple advertising a fake AP and tired devs in Blue Bottle smashing `ssh-keygen -R github.com` without verifying the fingerprint. Very simple. Even easier than trying to MITM a site accessed via browser, which will probably have at least HSTS to help you out.


Excellent reference to Blue Bottle. I enjoyed visualizing this scenario.


>There are billions of dollars of assets flowing through GH's infrastructure

Do you mean source code here? I have a hard time believing source code holds that much value.


> As of January 2023, GitHub reported having over 100 million developers and more than 372 million repositories, including at least 28 million public repositories

If there are ~350 million private repos then they'd only need to be worth an average of $30 each to be worth a billion dollars in total. Which doesn't seem farfetched.


Considering the looooong tail of these repos are forks with no changes, sample code, toy projects abandoned after a single commit, etc. etc., I'd say it's pretty far fetched.

For proof, try searching for a mundane string in GH Code search. The vast majority of repos you see will be basically garbage.


I think that is incredibly farfetched. If you got access to 1,000 random private Github repos, I don't think you could sell them, or otherwise utilize them for anywhere near that value, if anything.


A better way of quantifying this would be to look at the impact of real life source code leaks. I'm not aware of any significant monetization of the windows source leak, for example.


You've also got to factor in all the software that relies on projects developed primarily on GitHub.


Yes for a host key. It’s like accidentally publishing the tls key for https://accounts.google.com

The host key is the only thing ensuring you’re actually talking to GitHub.com when you push code.

To add to sibling comments, it should not have been possible to make this mistake. That it was possible is concerning.


> [...] the actual transport layer keys are ephemeral and aren't derived at all from the host key, right?

Great! Then I can communicate confidentially with whomever is MITM'ing me.

/s


A bit of an overreaction, right?

Number of people just not blindly using TOFU with github over ssh must be quite low.

Who here went here https://docs.github.com/en/authentication/keeping-your-accou... and added the keys manually to known_hosts before using github for the first time?


I would have guessed that the significant majority would be using TOFU, but in any case the actual key was leaked so it wouldn't matter which method was used.


The point is about how much people care, not about whether verification works or not when the key is leaked.


> Number of people just not blindly using TOFU with github over ssh must be quite low.

Oh, I missed the "not" the first time I read it, which changed your overall meaning entirely. My bad.


In this case "not just" would have been more legible than "just not".


> any code pushed or pulled from private repositories can no longer be considered private.

Do you realize that the code just sits on GitHub servers even if it's private?

If you have any degree of paranoia, why do you put your code into GitHub?!?!

Like, if you work on code which


Ah, I see that the men in black got there just in time ! XD


> How has GH ever been able to become ISO certified last year [0], when they didn't even place their keys in a HSM?

ISO 27001 certification does not require you to put keys into an HSM. The standard requires you to have controls in place, be aware of your risks and to maintain a risk register. But in no way does the standard require HSM's.

The standard would even be OK with storing this on a floppy drive if the risks surrounding that were identified and mitigated (or accepted).


I have never known a single person to put an ssh host key into an HSM.

In fact, this is not a supported option in openssh.


> I have never known a single person to put an ssh host key into an HSM.

You probably also never met a single person whose SSH interface sees millions of sessions a day, with valuable assets (code) being transported over said sessions.

> In fact, this is not a supported option in openssh.

This definitely is supported. Though documentation for this is often HSM vendor specific, which is heavily NDA'd. So that's why you probably haven't found much information about it.


What I expect has happened here is that you've remembered that your HSM comes with instructions for how to use PKCS11 to make user authentication rely on the HSM and you've just assumed that's relevant here. While I'm sure the vendors make it seem like this is all very secret, it's just a pretty boring C library and it's probably half-arsed in real world implementations.

AIUI OpenSSH does not provide any way to use PKCS11 to protect host keys, which are the concern here.

You can use PKCS11 to sign OpenSSH certificates, so if GitHub had elected to use certificates here, it could have protected the CA keys for those certificates in an HSM, but it did not.


Correction: It was pointed out elsewhere that you can just tell sshd to use PKCS11 keys via the SSH agent mechanism, and so yes that would allow use of an HSM for host keys


> This definitely is supported.

Agreed. I have seen some crazy stuff in the payment card industry. I can't recall what I can and can't talk about so I'll just say "Atalla".


Yes, but that would either be a fork of OpenSSH, private or open source (both are possible since it's BSD-licensed), or a different SSH server (which Github is of course free to use, since the protocol is standardized and their scale absolutely justifies any efforts in protecting their SSH host key). But GPs comment was about OpenSSH.

Edit: Apparently OpenSSH's sshd also supports the SSH agent protocol for host keys, and ssh-agent does support PKCS#11 – so I stand corrected!


> This definitely is supported. Though documentation for this is often HSM vendor specific [...]

How can openssh documentation be vendor-specific?

Or are you saying that vendors commonly provide an openssh fork/patchset/plugin allowing for HSM-resident host keys?


Why is everyone just authoritatively dismissing this, when this has been supported for >7 years and is easily found with a google search?

There is the HostKeyAgent configuration directive, which communicates over a unix domain socket to make signing requests.

https://framkant.org/2017/10/strong-authentication-openssh-h...

https://github.com/openssh/openssh-portable/blob/12492c0abf1...


> How can openssh documentation be vendor-specific?

It isn't, because the cryptography is (in the case of an HSM) not handled by OpenSSH itself. So OpenSSH's configuration has nothing to do with the HSM.

Usually, the actual cryptographic functions are not performed in user space but are handled by the kernel, which in turn can offload this to dedicated hardware. Basically, if you compile OpenSSH to use kernel-level cryptographic functions, then OpenSSH can work with an HSM without even knowing it.

Disclaimer: this is simplified explanation, there is a lot more to this, and I am by no means an expert on this matter.

Edit: meant to say kernel level cryptographic functions, not TLS.


So you‘re saying that OpenSSH has an interface for that on the host key side?

I‘m aware of the PKCS#11 integration in the OpenSSH client and have dabbled a bit with it but was not aware of any server side equivalent.

And how does TLS fit in here? SSH is a very different protocol from that, no?

Update: I can't find any OpenSSH documentation of either (server-side) PKCS#11/HSM support or kernel-mode cryptography (which also in the case of Linux only addresses symmetric encryption to my knowledge, at least in the mainline kernel version I know of).

Maybe you're thinking of some other SSH implementation? The protocol definitely allows for server-side HSM usage, and Github at their scale is not bound to OpenSSH in any way.


> I can't find any OpenSSH documentation of either (server-side) PKCS#11/HSM support

OpenSSHd talks to an ssh-agent that then talks to the HSM:

> Identifies the UNIX-domain socket used to communicate with an agent that has access to the private host keys. If the string "SSH_AUTH_SOCK" is specified, the location of the socket will be read from the SSH_AUTH_SOCK environment variable.

* https://man.openbsd.org/sshd_config#HostKeyAgent
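So a rough sketch on the sshd side looks like this (the socket path is hypothetical; as I understand the man page, HostKey can point at the public key file and private-key operations are delegated to the agent, which could in turn front a PKCS#11 token):

  # /etc/ssh/sshd_config
  HostKey /etc/ssh/ssh_host_rsa_key.pub
  HostKeyAgent /var/run/hostkey-agent.sock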


Interesting, I didn't know that OpenSSHd supported the agent protocol. Thank you!


It's just the agent protocol, used by sshd instead of ssh, to make signing requests with a host key (instead of a user's identity key).


That's cool, I wasn't aware that the server supports the agent protocol as well. Thank you for the pointer!

It makes a lot of sense, since it avoids having to link the HSM/PKCS#11 stuff against sshd.


For what it's worth, Github uses libssh (https://www.libssh.org/) for their ssh servers.

It looks like they currently use the `ssh_bind_options_set` function with `SSH_BIND_OPTIONS_HOSTKEY` to set the host keys which means they exist on disk at some point. HSM aside, I believe it would be possible to use the `ssh_bind_set_key` and deserialize them from a secret vault so they only exist in the memory of the ssh daemon.

Obviously they also just straight up have enough resources to fork the code and modify it to use an HSM.

Source: looking at their ssh server portion of `babeld` in ghidra right now as part of hunting for bug bounties.


It would work with OpenSSH's HostKeyAgent option.


There is the HostKeyAgent configuration directive, which communicates over a unix domain socket to make signing requests.

https://framkant.org/2017/10/strong-authentication-openssh-h...


It's easy to say "should have used an HSM" (or, in truth, many HSMs), but I can appreciate the technical challenges of actually doing that at their scale. It would not be a trivial project. There are a ton of operational concerns here, including figuring out how you would go about rotating the key on all those HSMs in an emergency.


These would also need to be very distributed and high-throughput HSMs: You'd need to talk to one for every single SSH login! This is in contrast to e.g. having a CA signing key in a HSM, but distributing keys signed with it more widely.

I suppose (Open?)SSH's PKI mode could support a model like that, but as others have noted here, this requires much more manual work on the user's side than comparing a TOFU key hash.

Maybe that model could be extended to allow TOFU for CAs, though? But I think PKI/CA mode is an OpenSSH extension to the SSH protocol as it is, and that would be a further extension to that extension...


SSH CAs would make the challenge a lot easier. It sounds like they are using RSA keys here for the widest possible compatibility, and while OpenSSH's certificate support is not at all new, it still may be too new for this application.


Using SSH certificates would tie every Github user to OpenSSH extensions though. I'm not sure if many git clients use something else, but it's at least worth a consideration.


There's a lot of daylight between "use a HSM" specifically and "use a system that prevents junior developers from accessing the key and checking it into public repos."

Storing the key in some kind of credential vault that can only be accessed from the hosts that need it at startup would usually be enough to prevent this particular kind of error (unless you're giving root on those boxes to people without enough sense to avoid checking private keys into git, in which case you've probably got worse problems).
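As a sketch of that pattern with HashiCorp Vault's CLI (the secret path and field name are hypothetical):

  # at service startup, pull the host key into a root-only runtime path
  vault kv get -field=private_key secret/ssh/host-key > /run/sshd/ssh_host_rsa_key
  chmod 600 /run/sshd/ssh_host_rsa_key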


I'm far from junior, and that's far enough to know that this kind of error is very much not limited to junior developers.


> the responsible persons (not talking about the poor dev that pushed the key, but the managers, CISO, auditors, etc.) should be fined, and/or lose their license.

By no means do I want to see the dev get fined or blackballed from the industry.

But if there is any 1 person responsible, it’s the person who did it. The reason why the dev shouldn’t be fined/blackballed is because it’s not just 1 person’s fault. I mean, fining or booting his manager out of the industry? Really?


There's a few reasons I wouldn't worry too much:

1) Nation state level actors can probably insert or compromise high level staff, or multiple high level staff, at any given company, and perform MITM attacks fairly easily. And some could compel turning over your code or secrets more directly anyway. Not worth worrying about this scenario: nobody working on anything truly sensitive should be using any externally hosted (or even externally networked) repositories.

2) It is much more difficult for other actors to do a MITM attack, and if they did, they'd probably have access to your code more directly.

3) Your code actually isn't worth much to anybody else. Imagine someone launching a complete clone of HN or any other site. Who cares? Nobody. What makes your code valuable is that you have it, and that you have a network and relationship with your customers. If somebody stole my company's codebases, I'd feel sorry for them, that they are going to waste any time wading through these useless piles of shit. The only potential problems are if secrets or major vulnerabilities are exposed and provide a path for exploit (like ability to access servers, services, exposing potential ransomware attacks).


Information has different levels of value depending on what the user needs to do with it. It's kind of like how two individual pieces of "unclassified" info are...well, Unclassified but putting the two together as a cohesive whole that provides further context turns it into "classified" info. All it takes is a little bit of time for actors working with the funding and compute capacity of a major nation to scrape the entirety of Github, dump it in a data processing tool none of us know about, and make the correlations you and I cannot.

This leak opened a time window big enough for that to happen. We may or may not know if it did. I doubt this info would be offered to the public because it would sink the business.


You're 100% right to hold critical infrastructure to higher standards. Putting Solarwinds aside, how many companies would grind to a halt via this 3rd party?


> The fact that this key was apparently not stored in an HSM, and that GH employees had access to this private key (allowing them to accidentally push it) means that effectively all communication with GH since the founding of the company has to be considered compromised.

I think this suggests we need more information from github. For instance, GH employees may not always have had live access to this key; this could have happened as part of an operation that gave temporary access to an employee only recently. Or it could have been stored in plaintext on multiple employees' home computers since creation.

When was the leaked key created anyway?


Looking from the outside, many of these companies (GitHub, OpenAI, Cloudflare, Facebook, and so on) seem to torture their hires with ridiculously hard code challenges, spend a lot of time on elegant engineering blogs, write about how many PhDs work at their locations, and can't stop talking about how they select only the top 0.01% of developers... But then, internally, everything seems more or less held together with shoestrings and Rube Goldberg machines.


"Temporary" solutions and countless "TODO: Fix this" simply are endemic to development, even at the top companies. They just like to pretend they're better than everyone else.


It is so incredibly rare for public-facing service keys to be stored on an HSM that I don't think anyone could reasonably have expected this to be the case?


Makes self-hosting git look more preferable.

The cloud is always the convenience of someone else’s computer over some amount of security.


I don't think there will be any lawsuits. The user agreement precludes that. I don't even know how anybody could be angry about this - your code's on somebody else's computer and if you didn't know that that's a huge risk, you do now.


I wonder if they found it by turning on their own secret detection system?


What license?


Auditors require a license/accreditation to do certain certifications.


Please, before replacing your local fingerprint with the new one, double-check that it is the expected value. This is an opportune time for man-in-the-middle attackers to strike, knowing everyone has to replace their stored signatures, and that some will be lazy about it with a blind "ssh-keygen -R github.com" command.
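Something like this keeps you honest (compare the printed fingerprints against the ones GitHub publishes over HTTPS before trusting anything):

  # drop the old entry, then fetch the new keys and inspect their fingerprints
  ssh-keygen -R github.com
  ssh-keyscan -t rsa,ecdsa,ed25519 github.com > /tmp/github-keys
  ssh-keygen -lf /tmp/github-keys              # compare against the published fingerprints
  cat /tmp/github-keys >> ~/.ssh/known_hosts   # only once they match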


It never fails to amaze me how most incident mitigations seem completely oblivious to such security side effects.

"We have no reason to believe that the exposed key was abused, but out of an abundance of caution, we are going to expose 50 million users to a potential MITM attack unless they are extremely careful."

Not a single word in the post about whether this impact was even considered when making the decision to update the key. Just swap the key, and fuck the consequences. Same with the mass password resets after a compromise that some services have done in the past years. Each of those is any phishing operation's dream come true.


Don't trust corporate PR. They're obviously lying when they say "out of an abundance of caution". The private key was exposed in a public GitHub repo, it could literally be anywhere.

So MITM for some of 50m users is strictly better than MITM for all of 50m users.


After reading the first paragraph, I was sure they didn't have any specific reason to replace it.

> leaked in public repo

Me: Yeah, that's why they are doing it.


It could be, but also GH might be logging inbound requests long enough to see whether the file was requested.


> The private key was exposed in a public GitHub repo.

How do you know this?

Github runs a scanner for private keys in public and private repos and notifies the owner (I did it once, so I know... ;)). So some Github engineer likely would have received such an email if what you say is true. Hilarious.


It says exactly that in the article:

> This week, we discovered that GitHub.com’s RSA SSH private key was briefly exposed in a public GitHub repository

Hilarious.


I didn't read it fully before commenting. I'm sorry.


If everyone read the entire article of each HN submission the comments section would be wildly different :-)


I can't see how that could ever happen. Is the key just floating around on their employees' computers, and someone accidentally committed it?


Maybe because the article says so?

> This week, we discovered that GitHub.com’s RSA SSH private key was briefly exposed in a public GitHub repository


From the OP:

> This week, we discovered that GitHub.com’s RSA SSH private key was briefly exposed in a public GitHub repository.


I'm always amazed at this kind of post. Did these 50 million users (surely none of them use git+https!) check the host key the first time they connected to github? Did you?


The point being made here is that because there are millions of users forced to change keys now all at around the same time, and because they are doing so due to seeing an error in Git, this creates a good opportunity to strike. Normally, most users would have connected to GitHub before and a MITM attack has a high chance of failure.


It doesn't matter because it didn't change! That's the beauty of TOFU.


The point is, what if it you were MITMed from the beginning?


Sure, but the difference is that it's now both a plausible moment to go MITM (because they got that key), and furthermore the hypothetical attacker now has good reason to believe users won't be scared by a host-key-change warning, and the hypothetical attacker would know this opportunity exists for a large set of users simultaneously. If some malicious network operator were to try and exploit users, now would be a good moment - they'd likely catch many more people in the time it takes to be discovered than on an average day.

The MITM-at-the-start risk is of course real, but I think this new everyone-restarts-simultaneously risk is qualitatively different enough to be worth at least considering.


Much more concerningly, there is an activated-by-default OpenSSH extension (`UpdateHostKeys`) that allows the server to install new host keys into `.ssh/known_hosts` after every successful server authentication.
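If that makes you uneasy, it can be switched off in the client config (a sketch):

  # ~/.ssh/config
  Host *
      UpdateHostKeys no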


The bad guys would also have to have MITMed it every time I connected for the last 15 years, or I would have seen authentication failures when it connected to the real thing. MITMing someone once isn't that hard, but doing it consistently is.


> It doesn't matter because it didn't change! That's the beauty of TOFU.

One way to solve this in TOFU is to have a time window where both keys are presented.


If we're starting from the assumption that the first key was compromised, then you're still vulnerable to MITM. The only solution is communicating the key through a different, trusted way. Which is exactly what github did - inasmuch as you can trust that github.com is github.


> Did you?

I sure did. Doesn’t everyone?


> (surely none of them use git+https!)

well, yes

github doesn't accept https push any more


Not sure what you mean. You can push via HTTPS: https://docs.github.com/en/get-started/getting-started-with-...

Maybe you’re referring to how they no longer accept passwords for HTTPS auth? You have to auth for HTTPS push with a personal access token.
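i.e. something along these lines (OWNER/REPO is a placeholder):

  git remote set-url origin https://github.com/OWNER/REPO.git
  git push origin main   # when prompted, the personal access token goes in as the password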


That's funny, I do it every day. It's frankly easier to install git credential manager (even integrate into WSL) for 2FA authentication on Github (and other git hosts).

I get a bit paranoid when having to deal with Tokens on various CI/CD environments as it stands. And the things that start breaking every year when I forget to update them. Note: this is personal/hobby projects, not corporate stuff, where I'm strictly in the codebase and try to keep my fingers out of CI/CD beyond getting a Docker image built, and someone else configures the keys/auth.


How are you using git credential manager for 2fa on GitHub? They stopped supporting user/password auth for HTTPS git access a while back, and started requiring personal access tokens (which do not require a 2nd factor)


GCM will use an embedded browser so you can authenticate with the UI, including your second factor, which will then give you a credential/token that can be used in the git environment over HTTPS. It's still a (different; OAuth vs. reference generation) token, but you aren't having to go generate, configure and update it yourself.


While their reaction is more likely to cause a security breach, consider the psychology.

If the key was breached and Github just didn't know it, then a breach happened, then only Github would be to blame.

If Github rotates its key, and somebody suffers a MITM attack, the blame is more diffuse. Why didn't they verify the key out of band?


How is there an alternative here?


Jump into a time machine, go back to the creation of SSH, and adopt SSL-style trusted third-party certificate authorities. Somehow get it adopted anyway, even though loads of people use SSH on internal networks where host-based authentication is difficult; SSH is how many headless machines are bootstrapped; and that you've got to do it 19 years before Lets Encrypt.

Jump into a lesser time machine, go back to when Github were creating their SSH key, and put it into a hardware security module. Somehow share that hardware-backed security key to loads of servers over a network, without letting hackers do the same thing. Somehow get an HSM that isn't a closed-source black box from a huge defence contractor riddled with foreign spies. Somehow avoid vendor lock-in or the HSM becoming obsolete. Somehow do this when you're a scrappy startup with barely any free time.


Ssh certificate authorities are a thing that exists.

We also have a way to put SSH host key fingerprints in DNS records already.


Yeah like how HTTPS CAs exist. There are some very nice three letter ones who can issue any certificate and your browser / OS happily accepts it.


SSH doesn't have any CAs that it trusts out of the box. It's up to you to tell it which one to trust.


Yes, but the option to verify host keys using DNS ("VerifyHostKeyDNS") is not enabled by default.


Unless it has changed recently, you can't have a trust chain of OpenSSH certs, though, so it's cumbersome: your signing key is not only the root CA but also basically has to be accessible 24/7 to sign any server/client you want to bring up.


This just kicks the can down the road to DNS.

I'd guess that most systems aren't using DoH/DoT or end-to-end DNSSEC yet. Some browsers do, but that doesn't help tooling frequently used on the command line.

I suppose you could just accept X.509 certificates for some large/enterprise git domains, but that pokes up the hornet's nest that is CA auditing (the browser vendors are having a lot of fun with that, I'm happy that the OpenSSH devs don't have to, yet).

And where do you maintain the list that decides which hosts get to use TOFU and which ones are allowed to provide public keys? Another question very ill-fitted for the OpenSSH dev team.


No browser uses DNSSEC.


Thank god. Someone needs to take that protocol out back and give it the old yeller treatment.


That was in reference to the former, i.e. in-browser DoH/DoT lookups.


DNS can trivially be mitm'd. DNS-stored fingerprints are strictly less secure than TOFU.


If you use DNSSEC (cue inevitable rant from Thomas) this just works. If you have DoH (and why wouldn't you?) and your trusted resolver uses DNSSEC (which popular ones do), you get the same benefits.

https://en.wikipedia.org/wiki/SSHFP_record
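A sketch of how the pieces fit together (the host name is a placeholder):

  # on the server: print SSHFP records ready to paste into the DNS zone
  ssh-keygen -r git.example.com

  # on the client, in ~/.ssh/config: trust DNSSEC-validated fingerprints from DNS
  Host git.example.com
      VerifyHostKeyDNS yes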


> and adopt SSL-style trusted third-party certificate authorities

So that any large entity can own your servers with ease. (Well, they already can, but not through this vulnerability.)

Anyway, the only thing CAs do is to move that prompt into another, earlier time. It's the same prompt, the same possibility for MITM, and the same amount of shared trust to get wrong. You just add a 3rd party that you have to trust.

SSH does have a CA system. Anybody that isn't managing a large datacenter will avoid it, for good reason.


> So that any large entity can own your servers with ease.

Eh, let's not pretend existing SSH host key validation is anything to write home about.

Even without any ephemeral servers involved, barely anybody is validating key fingerprints on first use.

And among people using ephemeral servers, 99% of applications have either baked a host key into their image (so that any compromised host means a compromise of the critical, impossible-to-revoke-or-rotate key), or every new server gets a new key and users have either been trained to ignore host-key-change warnings, or they've disabled strict host key checking and their known hosts file.

The existing SSH host key validation options are perfect if you're a home gamer or you're running a few dozen bare metal servers with all your SSH users within yelling distance in the same office. But we all know it's a joke beyond that.


There could be an update to the protocol that enables certified keys to be used and allows them to be accepted without warning or with less of a warning.

There could be a well known URL that enables people to fetch ssh keys automatically in a safe manner.
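GitHub already has something close to that: its REST meta endpoint serves the current SSH host keys over TLS. Assuming you trust that TLS channel (and have jq handy), refreshing known_hosts could look roughly like:

  curl -s https://api.github.com/meta | jq -r '.ssh_keys[] | "github.com \(.)"' >> ~/.ssh/known_hosts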


There isn't an alternative, really. The private key has been exposed, and presumably it is unknown if or how far it has spread. The SSH keys must be changed, and the sooner the better. All that can be done is to notify people after the change has occurred.


> and presumably it is unknown if or how far it has spread

Why would that be unknown? GitHub has HTTP and SSH access logs, right?


One H4X0R gave it to four friends, who in turn gave it to between 9 and 14 friends, who in turn gave it to between one and 6 friends.

If train A leaves New York going 60 miles per hour with 167 people on board and train B leaves Chicago one hour later with 361 people on board going 85 miles per hour, how many people now have the key?

The answer is 31337.


Since the post doesn't mention anything like "we reviewed logs and confirm the key was not accessed", it is very likely that they either don't have logs that are reliable enough to rule it out (e.g. they may be sampled or otherwise incomplete), or that the key was accessed.


Keeping a complete log of all GET requests to random files in a public repository in a reliable way would be insane.


No, it wouldn’t be - assuming by “insane” you mean “silly to do”. I build systems at Google that do exactly that.

Whether it’s worth the cost is a decision each company makes. Also, you don’t need to keep the log forever. Max of a few weeks retention would be common.


What guarantees do these systems provide? Are 100% of requests where data was served guaranteed to either end up in the log or at least create a detectable "logs may have been lost here" event?

Or does it log all the requests all the time as long as everything goes well, but if the wrong server crashes at the wrong time, a couple hundred requests may get lost because in the end who cares?


Presumably, keeping 'last remotely accessed' and 'last remotely modified' for every file (or other stats that are a digest of the logs) is sane for pretty much any system too. Having a handle on how much space one is dedicating to files that are never viewed and or never updated seems like something web companies that have public file access would all want?


It's not just GET requests. Someone could have cloned/refreshed the repo using ssh. The repo might have been indexed by github's internal search daemon which might not use the public HTTP API but uses internal access ways however those might look like. You might have purged the database of that daemon but what about backups of it? What about people who have subscribed to public events happening in the github.com/github org via the API?

You'd have to have logging set up for all of these services and it would have to work over your entire CDN... and what if a CDN node crashed before it was able to submit the log entry to the log collector? You'll never know.


That's why you need certificates and not just a key pair. Certificates make key rotation easier, and you want key rotation to be easy.

I guess the proper way forward is a small utility that gets the latest signature through http+tls, and replaces the line in your known_hosts file, all in the background.

Looking long term, maybe we need to get rid of all the security stuff in ssh and just pipe the rest of its functionality inside a TLS pipe. Let the OS do its certificate management, reuse security building blocks that are way more studied, ...


Certificates just add more keys to worry about. The beauty of SSH is that it does not add hugely trusted parties in the name of convenience, while the UX of TOFU (trust on first use) is pretty decent.

The real solution to break out of these UX/security tradeoffs is to put domain names on a blockchain: then you can simply rotate the key in your DNS record, while the blockchain model is such that you need to compromise many parties, instead of "one out of many parties", as with CAs.

Tracking Bitcoin chain for DNS updates is lightweight enough that it can be built into OS alongside other modern components such as secure enclave, TCP/IP stack and WiFi/BT/5G radios.


Those keys can be worried about on a better secured computer, and don't need to be spread out on every frontend ssh server. Also it allows you to have each machine have a different host key pair, so if one leaks, only that single machine may have some trust issues, and not the whole fleet.

Also it's way better than TOFU, you can just add the CA key to known_hosts and avoid TOFU for each machine.

(Nevermind that you'll probably not accidentally commit some semi-ephemeral host key that's rotated often somewhere, because it will not be some special snowflake key you care about, but something handled by your infrastructure software automatically for each machine)


> The beauty of SSH is that it does not add hugely trusted parties in the name of convenience

Even with a certificate authority model, you don't have to trust any CAs if you don't want to. Not having the option to do so is more of a problem.


We should use a separate system that could reliably verify which certs belong to which entity.

Blockchain is a perfect solution to this. I wonder why it is not considered yet.



Thanks, I was not aware!


Are there any cert solutions that don't involve having to maintain a revocation list? I only used certs with openvpn years ago and the CRL was a potential footgun.


This is one reason people are issuing certs with 2 week expiry.


> Certificates make key rotation easier

How easy is it to rotate the keys of your CA?


Same as with any other decision: Do a cost/benefit analysis of whether the security risk created by rotating the key is actually outweighed by the security risk of doing nothing, taking into account logs that should tell you whether the exposed key was indeed accessed by unauthorized parties.

To be 100% clear: Both courses of action come with associated security risks. The problem is not choosing one course of action over the other, the problem is thinking you can just skip the cost/benefit analysis because the answer is somehow 'obvious'. It's not obvious at all.


No, you cannot keep using an exposed key. You must replace it. There is no cost/benefit analysis needed in this situation.


Wrong. A CBA is always needed. If the potential damage from MITM attacks made possible by rotating the key is greater than the potential damage from a rogue key multiplied by the likelihood that someone actually accessed the key, then it is wrong to rotate the key. It's that simple.

The only way a CBA would be unnecessary is if rotating the key didn't have any security risks. But it does.


Here I’ll do the CBA:

- if they have evidence that the key was exposed to one person, even with zero usage of the key, failing to rotate the key is tantamount to knowingly accepting widespread compromise at a potential attacker’s whim. At GitHub’s scale, that’s untenable.

- rotating the key is the only correct reaction to that

- they should have better communications in place to help users mitigate MITM

- there really isn’t an option, because they’re critical infrastructure; I’m glad they know that and acted accordingly

- on principle this speculation makes sense, but understanding the threat makes it moot

- you hopefully know that, and it’s good to insist on thoughtful security practices but it’s important to also understand the actual risk


There is a MITM risk regardless of whether they rotate the key. Except one is a one time risk and the other is a perpetual risk.

Thus rotating is the only logical course of action.


Only if you know for certain that the key has been accessed by a third party.

If you don't know for certain, you have to factor in the likelihood that it has been, and at that point, the two risks aren't equal anymore so that logic doesn't work.


Are you arguing for the sake of arguing and technical correctness or do you actually believe Github shouldn't rotate their key in this situation?


What if you don't know for certain?

You just ignore it and hope for the best?

Only if you are certain it wasn't accessed (and you'd better be really sure you haven't missed any cache/CDN, temp files, backups, etc.) can you do nothing.


It was publicly exposed, and if they are making this announcement it’s essentially guaranteed they can’t rule out it was accessed.


What? This is a terrible way to reason about risks in general. If you don't know for certain, you should assume the worst case scenario, especially since it's impossible for you to calculate the probability distribution of the likelihood of a leak.

You should only keep moving along without key rotation if you know for 100% certainty a leak didn't happen and no one accessed the key (not theoretically impossible if they had the server logs to back it up), but anything minus that and you have to assume it's stolen.


Clone repos using OAuth2 with two-factor enabled; both GitHub and GitLab support that through their CLIs.
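
For example, with the GitHub CLI (a rough sketch; glab is the rough GitLab equivalent, and the repo name is a placeholder):

    # authenticate once via browser/device flow, which honours 2FA
    gh auth login
    # clone over HTTPS using the stored OAuth token instead of an SSH host key
    gh repo clone some-org/some-repo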


The alternative would be to use certificate authorities (SSH has CA support), which effectively give you private keys at different levels: you can keep the root private key in a physical vault and use it very rarely, only to sign the keys actually deployed on servers.
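
Roughly, with stock OpenSSH tooling (a sketch; the names and validity period are made up):

    # generate the CA key pair; the private half stays offline / in the vault
    ssh-keygen -t ed25519 -f host_ca -C "host CA"
    # sign an individual server's host key, valid for a year
    ssh-keygen -s host_ca -I server01 -h -n git.example.com -V +52w /etc/ssh/ssh_host_ed25519_key.pub
    # sshd_config on that server then advertises the certificate:
    #   HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

Clients only need to trust the CA key (via a @cert-authority entry in known_hosts), so individual host keys can be rotated freely.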


This would just offload the problem to a separate entity. CAs can be (and have been) compromised.


Sure, but isn't it more likely that a key which has to be shared by who knows how many SSH load balancer machines at GitHub, and which can't be easily rotated because it's pinned by millions of users, eventually gets compromised or is thought to be at risk of compromise?

We need to compare the relative risks within the same context, namely within a company like GitHub

So it's not relevant to bring up failures of other CAs


And then don't forget to setup key revocation as well, and make sure that an attacker in a position to MITM the connection cannot cause the revocation checks to fail-open.

I hope you don't need that SSH connection to fix your broken CRL endpoint!


Well, SSH could at least allow signing a new key with the old one. Then they could announce that the new key is signed, and people would know to accept only that kind of cross-signed prompt.

There is DNS verification, but people have been trained all their lives to accept insecure DNS information (and set their systems accordingly), and I really doubt the SSH client checks the DNSSEC data.


How would one stage a MITM attack without knowing the private key corresponding to the old key?


The fact that users have to delete the old Github key from their systems and accept a new one is what could lead to a MITM attack.

If your system doesn't know the public key of an SSH server, when you connect the first time, the SSH client will display a warning and ask you if you accept the server key. An attacker could be between you and Github and if you accept without checking it's the correct key, you would be toast.


Would it be more secure to access a https secured server to get the keyfile then?


Yes, GitHub's announcement provides the correct new public RSA key, and it also provides instructions for a curl invocation which does all the work if you don't trust yourself to copy-paste text or don't understand how.


Only if the https server cert wasn't compromised at the same time as the ssh key. For all we know, this entire announcement of "we have a new key" could be staged.


By pretending to be the host that the user is trying to connect to. You can then present the client with a key you generated yourself. Of course, SSH will warn the user that the fingerprint has changed, but they'll just think "Ah yes, GitHub changed their keys so it's probably fine." This is why updating the key creates a potential MITM risk, unless people actually bother to verify that the fingerprint is correct.


What specifically can you verify that cannot also be spoofed? If I go do this now I (and I’m sure millions of others) have no idea what to look for. I’d be blindly accepting like a sheep if it weren’t for this thread.

edit: I found this helpful and honestly had no idea I should be doing this (I'm a hobbyist, not a professional): https://bitlaunch.io/blog/how-to-check-your-ssh-key-fingerpr...


The key has changed, meaning that every user has to accept a new key.

Meaning that a lot of users will blindly accept whatever new key comes along (even when it might be the one owned by an attacker doing a MITM) because GitHub, their colleague, or a random person on the internet said that's what you have to do to get rid of the error.


> Meaning that a lot of users will blindly accept whatever new key comes along (even when it might be the one owned by an attacker doing a MITM)

This is less likely because unlike for TOFU the SSH client just rejects the mismatch and insists you take manual action, and the likely manual action will be "Paste this stuff from the announcement".

So an adversary needs to either subvert whatever messaging you see (which is tricky, likely impossible for a random user visiting the github web site wondering what's wrong) or hope that you just try to muddle along and do TOFU again, putting you in the same spot as a whole bunch of users every day at GitHub.


Fortunately this will become evident once they connect from elsewhere and the key changes again


Not strongly evident. I suspect most users would assume they did something wrong, or that GitHub was still making changes.


If you run a MITM attack today, the victim gets the warning in the blog post. Their most likely action is to google the blog post, see it is expected, and so accept your fake key. Having said that, I don't see what choice GitHub had; they can't continue to use a leaked key.


Their solution is the second thing in the blog post, which adds the new key straight into your known_hosts file.

The problem here is that the first command they advise using is the one that removes the old key, and most users are just going to stop right there because it solves the immediate problem of getting the key warning.

The right solution here is to provide a command that most users are going to copy and paste that deletes the old key and adds the new key all at once.
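
Something along these lines, assuming the api.github.com/meta endpoint (which publishes GitHub's current SSH host keys) and jq being available; a sketch, not the official instructions:

    # drop the old entry and append the currently published keys in one go
    ssh-keygen -R github.com
    curl -fsSL https://api.github.com/meta | jq -r '.ssh_keys[]' | sed 's/^/github.com /' >> ~/.ssh/known_hosts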


They don’t need it. Millions of users are going to blindly trust the “new” GitHub public SSH key they see the next time they connect without checking to see if it matches the published signature.


Is there a benefit (and is it practical) for an adversarial intermediary to record encrypted traffic, waiting for keys to be exposed at some point? Like now?


No, the risk of losing an SSH host key is not so much this (because of forward secrecy) as impersonation of the server.


Here are the expected fingerprints (since they don't publish those via SSHFP RRs): https://docs.github.com/en/authentication/keeping-your-accou...

    SHA256:uNiVztksCsDhcc0u9e8BujQXVUpKZIDTMczCvj3tD2s (RSA)
    SHA256:br9IjFspm1vxR3iA35FWE+4VTyz1hYVLIE2t1/CeyWQ (DSA - deprecated)
    SHA256:p2QAMXNIC1TJYWeIOttrVc98/R1BUFWu3/LiyKgUfQM (ECDSA)
    SHA256:+DiY3wvvV6TuJJhbpZisF/zLDA0zPMSvHdkr4UvCOqU (Ed25519)


Note the MITM here :)

We humans really aren't cut out for this, are we.


Indeed, at least for verification. I didn't mean for HN users to trust those, but perhaps should have warned about it: copied the fingerprints primarily for people searching on this page, so that they can follow the link to GitHub (and rely on PKIX to build a trust chain). I did `ssh-keygen -R github.com` myself, and saw the ECDSA key's fingerprint while connecting (which wasn't mentioned in the linked post, and wasn't on this page either), so figured it would be somewhat helpful for others following the same route.


What MITM? What are you talking about?


The poster of the fingerprints is in the middle, you are not getting them from GH if you use them instead of going to the linked page.


Why downvote this person! The parent post left plenty of ambiguity in their comment. Are they saying that an actual MITM attack is happening? That the fingerprints shared are actually the wrong ones?

Generally speaking, one would not consider an internet comment directing folks to GitHub's actual SSH fingerprints a "man in the middle", as the phrase usually carries a negative implication in this context, whereas here defanor is simply mirroring the information that GitHub has officially posted, in a way that is much more helpful than yetanotherjosh's "double check it is the expected value". For most of us idiots, we don't know what the expected value is!

So thank you defanor for sharing, and thank you darthrupert for asking for clarification. Y'all contributed to educating myself and others and now we know more because of it.


Ah, okay. I thought it was obvious that the keys in the comment were just for show, and that anyone who needed the actual keys would look them up via the GH link anyway.

Good clarifications everywhere, yes.


If someone wanted to trick HN users into trusting a phoney key, one way to do that would be to post the phoney fingerprint on HN claiming it to be the real one.


I mean, yes, but you'd also have to have a way to actually MITM the person you are targeting via HN comment, before anyone pointed out it was wrong. It'd be much easier to just use the MITM you already have and not raise the suspicion of posting in a comment.


Don't overthink this.


And if someone would actually fall for this, they deserve to be fired, and/or never allowed anywhere near anything related to computer security. :)


And within a few seconds someone will have called this out in a reply


Assuming the person doesn’t have some back door access to HN as well.


Or they simply wait a while and edit it when it's not under high scrutiny.


You can only edit for a certain amount of time


This is literally a man in the middle between you and GitHub.


On the other hand, this is a nice TOFU-style double check. The first time HN user "defanor" went to that page, these were the fingerprints; if later someone somehow invades the github documentation server (or somehow MITMs your HTTPS connection to it), and changes the fingerprints there, they will no longer match the ones saved in the comment above.


Well, "defanor" says these were the fingerprints. Perhaps they are the MITM.

(Not genuinely concerned about this risk, but "Reflections on Trusting Trust" reverberates.)


They provide convenient commands to import the correct keys. It would probably be better to only include the block that contains both the -R and the update command, but at least they do provide them.


> double check it is the expected value

Not all of us are familiar enough with the SSH protocol to understand how to "double check the expected value". Where can I determine what the expected value should be?


Run "ssh -T git@github.com" command.

It should error like this:

    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that a host key has just been changed.
    The fingerprint for the RSA key sent by the remote host is
    SHA256:uNiVztksCsDhcc0u9e8BujQXVUpKZIDTMczCvj3tD2s.
    Please contact your system administrator.
Note that the SHA256 fingerprint shown there exactly matches the one GitHub sends. If you don't remember: the very first time you connected to GitHub you also had to accept the key. The warning above shows up because the server is saved under a different RSA key; to the SSH client it looks like someone set up a server claiming to be GitHub but with a different key, which could mean someone is trying to trick you into connecting to the wrong server. It could also mean that GitHub changed their RSA key, which is why they published this article.


The fingerprint is a hash of the key, so in theory -- say with a quantum computer -- I could create a key that's different and provides a hash-collision. Is that right?

It would just take many ages of the universe, at present, to calculate a collision, right?


There's a narrow window for that attack. The fingerprint is only used on the first connection, for manual verification. Any later connections would check the ~/.ssh/known_hosts which has the full public key.

If you somehow can MITM an SSH connection on the first connection, you can probably use any key. Most people don't check the fingerprint.

But you are correct: computing an SSH key whose fingerprint collides is expected to take an infeasible amount of time/energy with the current understanding of crypto and available computers.


A key part of avoiding MITM is to get the values from an authoritative origin, not comments on HN, so the link is here:

https://docs.github.com/en/authentication/keeping-your-accou...

Yes, this assumes the github-hosted docs and your SSL connection to them are not also compromised, but it's far better than not checking at all.


Look for the part of the article that says "the following message"

Or the parts below it about updating and verifying in other ways.


I've updated the key in known_hosts, then was able to connect successfully.

What do I have to do to ensure I connected to the right server?? I thought just making sure the correct RSA key was in known_hosts would be enough?


It depends on how you found out what the new key value is. By the sounds of your description, you're fine. But in principle there's more than one way people could proceed from here.

If you read the blog post on a verified domain and saw the new key and updated manually, or you deleted the known key and verified the key fingerprint when it warned you about an unknown key, you should be good to go. Here, you trust the people who issue TLS certificates and you trust github to be in control of their domain name, so you can be reasonably confident that the key they advertised on their website is the correct key. If your internet connection was compromised, you would have got an error message when you connected to https://github.blog (because they wouldn't have a certificate from a trusted certificate issuer) or when you connected to the git server (because they wouldn't have the key you just trusted).

If you saw the blog post and then removed the old key and told ssh to save the new key it's receiving, without checking that it matches the value on the webpage, you might have a problem. The connection to GitHub's SSH server could have been compromised, and if you just accepted whatever it told you, you have no trusted intermediary to verify that the key is trustworthy. All you know is that each time you connect to GitHub's server hereafter, you're either connecting to a server with the same key (no error), or you're connecting to one that doesn't have the same key (error message). But whether you can trust that key? You don't know that. You just know it's the same key.

But even if you did the latter, all is not lost. You can look at the known_hosts file (on Linux and MacOS it's ~/.ssh/known_hosts) and check the fingerprint. If it's what they advertise, then you're good to go. If it's different, you should fix it and find people who can help you deal with a security incident.

The reason people are raising a flag is that today, lots of people will be rotating their key. That means if you're looking to target someone, today is the day to do it. Even if 90% of people do it the proper way, by manually verifying the key, that still means there's going to be a lot of people who could be victimised today.
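
Concretely, checking what your client has saved should look something like this with a reasonably recent OpenSSH (output format varies a bit by version):

    # print the fingerprint(s) of whatever is saved for github.com in known_hosts
    ssh-keygen -l -F github.com
    # then compare the SHA256 values against the ones GitHub publishes over HTTPS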


That is enough, given that you've fetched or compared the key from a trusted GitHub.com server.


Double-check with what source? The one mentioned in docs.github.com?

I assume it's safe because the SSL cert for docs.github.com is probably not compromised, so it's giving us the right key, and compromising docs.github.com would be extra effort and is unlikely to happen.

However, I wonder what steps a MITM attack would have to perform. I assume one of the easiest would be compromising my local DNS server, since regular DNS does not offer a high level of security; then github.com resolves to the attacker's IP and the attack works. Do you have examples of such attacks that don't involve a virus already active on the end user's PC? Maybe if someone owns an IP previously owned by GitHub that is still somehow advertised as being GitHub by some DNS lagging behind?


This is always a concern with SSH as it uses trust on first use. The first time you connect it shows you the fingerprint for out of band verification. That manual step is on you to perform, but most people skip it. Future visits check against the saved fingerprint.

The best practice is to verify the fingerprint out of band using a secure channel. In this case, that's HTTPS and docs.github.com. If (hypothetically) docs.github.com was also compromised, then you don't have a secure channel.

https://en.m.wikipedia.org/wiki/Man-in-the-middle_attack has some MITM examples.
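
One way to do that comparison by hand, assuming a recent OpenSSH where ssh-keygen can read keys from stdin:

    # show the fingerprints of the keys the server (or a MITM) is presenting right now
    ssh-keyscan github.com 2>/dev/null | ssh-keygen -lf -
    # and check those SHA256 values against the ones published on docs.github.com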


There should be a StackOverflow Streisand effect: at first, I peeked at the end of your comment to copy-paste the ssh-keygen string as the "solution".


Or just use the posted command to fetch the fingerprint over ssh and automatically add it to your known hosts?


SSH host certs would make this a non-issue, and I've often wondered why GitHub doesn't use them.


Certificate pinning check built in when?

We should have a blockchain for certificates btw. That would be such an amazing solution to this problem. You could advertise ahead of the time that you are changing certificates and we could verify that it was in fact you.


Github had one RSA ssh host key, the most widely supported key format.

It was trusted to clone code into the infrastructure of hundreds of thousands of organizations. People pin it everywhere.

With this key someone could infect the infrastructure of fintech companies and steal billions of dollars. I know this well because I run a security consulting company focusing mostly on that industry. Mainly this is possible because almost no companies check for git commit signing, and Github does not enforce it anyway, and I digress.

This key held enough power over value that some might have used violence to acquire it.

With that context of course they chose to place this key in a hardware security module controlled with an m-of-n quorum of engineers to ensure no single human can steal it, expose it, or leak it. Right? ... right?

Nope. They admit they just stuck it in a git repo in plain text, where any engineer could have copied it to a compromised workstation, or been bribed for it, for who knows how many years. Worse, it was not even some separate, employee-only intranet git repo, but their own regular public production infra, and someone had the power to accidentally make it public.

I have no words for this level of gross negligence.

Maybe, just maybe, centralizing all trust and security for most of the world's software development in a proprietary for-profit company with an abysmal security reputation was not the best plan.

I will just leave this here: https://sfconservancy.org/blog/2022/jun/30/give-up-github-la...


Wow, your assessment of the impact here (or even possible impact) is way way way overblown.

In reality, the vast majority of users don't pay attention to SSH host keys at all.

Even if an attacker got hold of this private host key, they'd have to be able to MitM the connections of their target.

Next, they have to decide what they want to do.

If they want to send malicious code to someone doing a 'git pull', they'd have to craft the payload specifically for the repo being pulled from. Not impossible, but difficult.

If they want to "steal" source code from someone doing 'git push' (perhaps to a private repo on GitHub), that's a bit easier, as they can just tell the client "I have no objects yet", and then the client will send the entire repo.

And, again, they'd have to have the ability to MitM some git user's connections. Regardless, there is no way that they could change any code on github.com; this key would not give them any access to GH's services that they don't already have.

So I think your anger here is a bit over the top and unnecessary.

I agree that it's pretty bad that some GH employee even had the ability to get hold of this private key (sure, probably should be in an HSM, but I'm not particularly surprised it's not) in order to accidentally leak it, but... shit happens. They admitted the mistake and rotated the key, even though it's likely that there was zero impact from all this.


End users do not pay attention but their clients pin the key after first use. Also everyone is using gitops these days and almost no one is using dns-over-tls.

Imagine you control the right router on a company wifi, or any home wifi a production engineer works from, and suddenly you can cause them to clone the wrong git submodule, the wrong Go package, or the wrong Terraform config.

If you knew a CI/CD system blindly clones and deploys git repos to prod without signature checks, and that prod is a top-10 crypto exchange with $1B of liquidity in hot wallets, then suddenly a BGP attack to redirect DNS is a good investment. MyEtherWallet got taken over for 15 minutes with a BGP hijack, so this is not hypothetical.

Should that be the case? Of course not. But the reality is I find this in almost all of the security audits I do for fintech companies. Blind trust in Github host keys is industry standard all the way to prod.


Sure I can imagine that. And in doing so, I imagine this attack is pretty unlikely.

I mean, think of the confluence of things that have to line up for this to work for someone. Many stars have to align in order for someone to successfully exploit this leak. Is it impossible? No, of course not, and so GH was right to rotate the key here, even if their server request logs suggested that no one had accessed it.

If people actually have this sort of attack in their threat model, there are ways to protect against it. Signing commits and verifying them. Pinning to a particular git SHA1 hash. Etc. If people are not doing these things, then it's possible they've made the decision not to worry about this sort of attack. Sure, you can disagree with that, but I think you'd probably be in the minority. That's ok, though; you can certainly protect the things you're responsible for in stronger ways.


I have seen attacks along the lines I just outlined and well beyond in my industry many times.

A wildcard TLS cert or an SSH host key in plaintext is a loaded weapon laying around and it will be used on a high value target.

Sad fact is most of the companies that hold billions of dollars of customer assets do development exactly the same way a gaming company might. Those that even attempt security are unicorns. They bank everything on things like DNS, BGP, TLS certs, and ssh host keys working. This is like the medical industry before they learned hand washing was a thing.

I teach every one of my clients how to do things like commit signing, but for every one I help improve there are 100 more on track to be hacked any day now.

I can totally forgive a startup that cannot afford senior security engineers for a mistake like this, but Microsoft can afford that, or at least consultants, and yet they cannot even enable optional signing for NPM, properly enforce git commit signing, or even protect an SSH host key trusted by millions in a TEE or HSM.


Yes, but they will have to un-pin the (now compromised) key if they want to continue using Github.

Any compromise would have to isolate them from the "real" Github hosts from today onwards, i.e. plausibly MITM them continuously, or they would just switch to the rotated key to be able to continue working. At least in OpenSSH, this means replacing the compromised trusted key, as there can only be one per host (or even IP in the default configuration).

This is still very bad, but much less catastrophic than e.g. a world in which `.ssh/known_hosts` allows multiple entries, in which case you'd really have a sustained compromise of most clients.


Woah, I looked around some more and it seems like the opposite is true. Multiple trusted keys per domain can exist, and additionally there is this option:

> UpdateHostKeys is enabled by default if the user has not overridden the default UserKnownHostsFile setting and has not enabled VerifyHostKeyDNS, otherwise UpdateHostKeys will be set to no.

This is an OpenSSH extension that allows any host to provide additional host keys the client will then silently add to `known_hosts`. This is really bad in this context as it can allow a one-time MITM to install rogue keys for github.com that can go unnoticed; check your `known_hosts` for any unexpected entries for github.com!
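
To audit and, if you prefer, opt out of that behaviour (a sketch; adjust paths and host patterns as needed):

    # list everything your client currently trusts for github.com
    ssh-keygen -F github.com
    # per-host opt-out in ~/.ssh/config
    Host github.com
        UpdateHostKeys no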


Crypto never fails to impress me with how many problems it creates for itself.


> Even if an attacker got hold of this private host key, they'd have to be able to MitM the connections

This is not hard. If it were hard, we wouldn't need encrypted connections.

If I were a nation state, it would be trivial to position an attack at the edge of a network peering point for a given service. How do I know? Our own government did it for 10+ years.

Cyber criminals often have the same tactics and skills and can find significant monetary reasons to assume heightened risk in order to pull off a compromise.

Random black hats enjoy the challenge of compromising high value targets and can find numerous ways to creatively infiltrate networks to perform additional attacks.

Even without gaining access to a network, virtually anyone on the internet can simply push a bad BGP config and capture traffic for an arbitrary target. Weird routing routinely happens that nobody can definitively say isn't such an attack.


The key has to be in memory on all of their front end servers. Do you think a quorum of engineers should get together every time a front end server boots or reboots?

Genuinely asking because I’ve struggled with this question.


Lots of cloud instances support remote attestation these days which gives you a reasonable path to autoscaling secure enclaves.

1. You compile a deterministic unikernel appliance-style linux kernel with a bare bones init system

2. You deploy it to a system that supports remote attestation like a nitro enclave.

3. It boots and generates a random ephemeral key

4. m-of-n engineers compile the image themselves, get the same hash, and verify the remote attestation proof confirming the system is running the bit-for-bit trusted image

5. m-of-n engineers encrypt and submit Shamir secret shares of the really important private key that needs protecting (see the sketch after this list)

6. key is reconstituted in memory of enclave and can start taking requests

7. Traffic goes up and autoscaling is triggered

8. New system boots with an identical account, role, and boot image to the first manually provisioned enclave

9. First enclave (with hot key) remotely attests the new enclave and obtains its ephemeral key (with help of an internet connected coordinator)

10. First enclave encrypts hot key to new autoscaled enclave

11. rinse/repeat
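
To make step 5 concrete, here's a toy m-of-n Shamir split/reconstruct over a prime field. Purely illustrative (Python 3.8+ for the modular inverse); a real deployment would use a vetted library and wrap each share to a specific enclave or engineer:

    import secrets

    PRIME = 2**521 - 1  # field prime, larger than any secret we encode as an integer

    def split(secret: int, n: int, m: int):
        """Split `secret` into n shares; any m of them reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(m - 1)]
        shares = []
        for x in range(1, n + 1):
            y = 0
            for c in reversed(coeffs):  # Horner evaluation of the random polynomial
                y = (y * x + c) % PRIME
            shares.append((x, y))
        return shares

    def reconstruct(shares):
        """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = split(0xC0FFEE, n=5, m=3)          # e.g. 5 engineers, quorum of 3
    assert reconstruct(shares[:3]) == 0xC0FFEE  # any 3 shares suffice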


I don't understand how initializing cryptographic keys from an HSM at boot time is an untenable proposition. The quorum would be for accessing the key by human means. You can have a separate, approved path for pre-authorized machines to access cryptographic primitives across an isolated network.


The key doesn't have to be in memory on all of their front end servers. Any respectable company that cares about security wouldn't put their TLS private key on all of their front end servers anyway. You expose a special crypto oracle that your front end servers talk to; the oracle can be a specially hardened process on a dedicated server, or better yet an HSM. The point is, the private key is never in memory on any server that handles untrusted data.


> With that context of course they chose to place this key in a hardware security module controlled with an m-of-n quorum of engineers to ensure no single human can steal it, expose it, or leak it. Right? ... right?

This is unfortunately not how SSH works. It needs to be unlocked for every incoming connection.

You raise valid hypotheticals about the security of the service... but fixing it involves removing SSH host key verification from Github; better OpsSec would not fully resolve this issue.


I am well aware how ssh works. I have written ssh servers and design secure enclave key management solutions for a living.

Even if they wanted the most quick-and-dirty, lazy option with no policy management, they could stick the key in a PKCS#11-supporting enclave, which every cloud provider offers these days. OpenSSH supports them natively today.

At a minimum they could have placed their whole SSH connection termination engine in an immutable, read-only, and remotely attestable system like a Nitro enclave or another TEE. You do not need to invent anything new for this.

There are just no excuses here for a company with their size and resources, because I do this stuff all the time as just one guy.
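
A minimal sketch of what that can look like with stock OpenSSH (the socket path and PKCS#11 module are placeholders for whatever the HSM/enclave vendor ships):

    # run a dedicated agent and load the host key from the PKCS#11 token into it
    ssh-agent -a /run/ssh-host-agent.sock
    SSH_AUTH_SOCK=/run/ssh-host-agent.sock ssh-add -s /usr/lib/vendor-pkcs11.so
    # sshd_config: sshd only ever sees the public half plus the agent socket
    #   HostKey /etc/ssh/ssh_host_rsa_key.pub
    #   HostKeyAgent /run/ssh-host-agent.sock

With HostKeyAgent, the private key operations happen wherever the agent (and, behind it, the token or enclave) lives, never in sshd's own memory.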


Would these secure storage methods for the key be able to scale up to the ssh connection volume an outfit like GH sees? Genuinely asking; I don't know the answer. I just feel like having to hit a HSM or something similar every time a new ssh connection comes in wouldn't work. Or at the very least I would not see the sub-100ms connection time I can get right now.


Easily. You could have only the CA key in the enclave and have it sign throw-away session keys on stateless autoscaling machines, and/or you could have your actual SSH terminations happen in cloud enclaves like Nitro enclaves, which have the same specs and cost as a regular cloud server.


Hardware security modules can perform key operations without allowing anyone to access the key data. Key material being (accidentally or deliberately) leaked has been a solved problem for a long, long time.


Used in such a way, the entirety of Github's SSH connections would be bottlenecked by this HSM. It wouldn't scale and it would be a single point of failure. As lrvick points out, you'd have to use a certificate-based host key scheme like PKCS#11 to make this scalable. That's fine, but it is a different scheme than RSA host key identification.


I gave two options but they are separate.

PKCS#11 is a protocol for talking to hardware security modules for generic cryptographic operations to keep a key out of system memory.

You can totally take a single host key scheme and use HSMs or TEEs at scale.

Openssh supports using openssl as a backend which in turn can use the PKCS#11 protocol to delegate private key operations to remote enclaves or TEEs. Nothing stops you from loading your key into a fleet of them and scaling horizontally just like you scale anything else.


> Nothing stops you from loading your key into a fleet of them and scaling horizontally just like you scale anything else.

Isn't having one key the root of the problem? If you've got one key, it has to be transferred to each HSM module, which means it's going to be in their deploy code. My understanding is that the safe way to scale PKI is to have an HSM generate a key internally, and have that key be signed by a CA in a different HSM, so private key material never leaves any HSM.

I guess you're saying that SSH RSA host keys support this, but I'm only familiar with doing it using (looks up the correct terminology) X.509 certificates, like for HTTPS.


> If you've got one key, it has to be transferred to each HSM module, which means it's going to be in their deploy code

There are products out there that allow keys to be securely shared amongst multiple HSMs, without the key ever existing in clear text outside an HSM.


HSMs can key-wrap a key.

E.g. Foo can encrypt the key X with Bar's public key and then Bar can import key X.


Most HSMs have modes that allow for load to be distributed amongst multiple HSMs.


> It needs to be unlocked for every incoming connection.

Yep. Well, certificates exist exactly to bridge the GP's requirement with your reality.


Which HSM are you looking at that would be able to handle the required number of transactions per second?


The same ones that terminate TLS for millions. Most are garbage but at least they keep the key offline. Also you can scale horizontally or only use the HSM as a CA to sign short lived host keys.

You could also use things like Nitro enclaves which have all the same specs as a regular EC2 instance.

Tons of options. They clearly chose none of them.


> The same ones that terminate TLS for millions.

Which ones?

> Also you can scale horizontally

For SSH? Only by having the same private key in all of them, which means it's still around somewhere.

> or only use the HSM as a CA to sign short lived host keys

That would be ideal, except that the user experience around SSH host certificates is currently woeful.


(Not your parent commenter.)

> Which ones?

Take the Utimaco Security Server and you'll get 10k+ RSA signatures per second. Yes, you'll definitely need dozens of them, but you'll probably want several for high availability and low latency anyway.

> For SSH? Only by having the same private key in all of them, which means it's still around somewhere.

End-to-End encrypted key transfer between HSMs is well established. Ops for such a setup is definitely going to be a pain and lots of manual (and thus expensive) work but it is doable. The banking industry has been operating like that since forever – with symmetric cryptography only. Imagine two people being sent letters with XOR halves of a transport key and physically meeting at an HSM and entering the halves on a PIN pad (not a hexadecimal but a decimal PIN pad, mind you, where you need to press shift for A-F). From a modern perspective it's totally bonkers, but it works.

If I were tasked with building something for a large-scale company like GitHub, I would probably pass on HSMs and use commodity measured boot on a minimal Linux for my trusted key servers. Outside of these key servers the SSH key would only be stored split up with Shamir secret sharing among a few trusted employees, who would only restore the key on ephemeral offline systems. Is that overkill? Very much depends on your threat model. Investing in the security of their build chain runners and data-at-rest integrity might have a higher payoff. But then again, GitHub has become such a big player that we should also hold them to very high standards. And the setup can be reused for all their secret management, e.g. TLS certificate private keys.


Thanks! That's hugely helpful - everything I could find myself seemed to be an order of magnitude or so slower than that (and they support ECDSA, too, so I can't even object on the basis of algorithmic support). With hindsight my reply was somewhat flippant - really I just wanted to push back on the idea that this was a problem that could be solved by simply sprinkling HSMs on it, rather than something that requires a meaningful infrastructural effort. GitHub is a sufficiently core piece of infrastructure that I agree more should be expected, and I hope this does serve as encouragement for them to do that.


The HSM only needs to sign new host keys: a handful of transactions per decade at their current rate.


Thoughts with the sysadmins and devops people out there on this wonderful Friday afternoon.

These kinds of changes suuuuuuck. Messing with the known_hosts file is not always easy to do. It might require an image rebuild, if you have access at all.


Is it negligence or just incompetence? I get the sense that security is such a tough problem that all of us, even CISOs and red teamers, are incompetent.


If hospital workers spread disease because they could not be bothered to do the obvious things we -know- prevent this like basic sanitation... then yeah, I would call it negligence.

Do not put long lived cryptographic key material in the memory of an internet connected system. Ever. It is a really easy to understand rule.

