Getting 2FA Right in 2019 (trailofbits.com)
260 points by dguido on June 20, 2019 | 217 comments



The main problem I have with TOTP (what you're using when you use Google Authenticator) is that the migration path doesn't exist when you get a new phone. This article complains that users are trying to mitigate this problem, but doesn't seem to give any solutions.

It's entirely plausible you might have 20 accounts set up using TOTP on your phone.

If you now buy a new phone (which users might be doing once every 18-24 months), you need to log into each account, generate a new TOTP key, and void the old one. That's a couple of hours' work.

Now, what if you lose your phone?

You now have to recover 20 accounts, which will take several days, and it's very possible you won't be able to recover at least one.

The common response is "Oh, you should keep one-time keys somewhere". Right, 20 * 10 one-time keys in a single centralised location, and make sure to update them to keep them valid. I thought we were trying to stop people writing their passwords down and storing them next to their computer?

Edit: I'm not sure "treat your TOTP keys like passwords and store them" is setting a very good example. Why are we developing systems that use TOTP if we are encouraging users to treat them like passwords, undoing the vast majority of the security benefit?
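For context on why migration is so painful: a TOTP code is derived purely from a shared secret and the clock (RFC 6238), so moving to a new phone means re-provisioning or exporting every secret. A minimal sketch in Python (the base32 secrets below are made-up examples):

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, period=30, digits=6, at=None):
    """Derive a TOTP code from a base32 secret per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int(time.time() if at is None else at) // period
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)


# The same secret yields the same codes on any device -- which is exactly
# why a new phone needs every secret re-provisioned (or exported).
print(totp("JBSWY3DPEHPK3PXP"))
```

Nothing device-specific enters the calculation, so any copy of the secret is a full second factor.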


I use 1password for TOTP too, and this detaches my device from it. Now if I lose my phone, I just need to be able to access the app on my next phone and I'm all set.

I know that keeping the password and the TOTP at the same place is kinda not great, but I prefer this "risk" to the "hassle" you mentioned.

I'm open to better solutions!


At this point, what is TOTP actually doing for you? If you're using 1Password, it's already generating unguessable passwords for you. Your TOTP codes are literally just an extension of that password.


It mitigates phishing and password reset attacks. It also means you can give a friend or family member one-time access to an account over the phone, without giving them permanent access. Also good on unencrypted public Wi-Fi, or on a company network with a proxy that sees and logs everything.

The real world is very imperfect.


It mitigates the simplest forms of phishing. If someone is running a proxy server[0] and passing everything along to the real website, token (or SMS) based 2fa can't do anything.

[0]https://breakdev.org/evilginx-2-next-generation-of-phishing-...


Hence "mitigates" and not "cancels"


A single use password extension.

The important thing here is that you only enter the secret once. It’s different from a password which is reused by design.


Right, I know that's the obvious response, but what's it really buying you? The rationale I always see is "the server could log the request and my password would wind up in plaintext". But the problem with plaintext password logs is that in a breach, a substantial number of those passwords will be used to credential-stuff other accounts. A breach already requires you to reset all your credentials for that service, and if you're using 1Password, those credentials had no other value anyways.


I feel like I must be missing something here, especially given who I'm replying to, but doesn't it mean that, even if your password is compromised, it can't be used to log into the service which is also protected by TOTP? Regardless of impact on other services if someone reuses their password. So in theory, there's no desperate need for you to reset your password for the service (though of course you should) because it's still protected by your TOTP setup. (Well, unless your current TOTP was leaked and someone reuses your password and current TOTP before it times out, if the service allows that.) Of course, it depends on the nature of the breach. I've assumed the password leak via request logging as you mentioned (and possible leak of single TOTP), but if the seed for the TOTP is also leaked (e.g. DB breach), then naturally you're in trouble!


I'd love a dumbed-down answer to this. Here's where I think 1Password's TOTP is useful:

User creates a strong password in 1Password when signing up for a service. The password is used only for this service. The service stores all usernames and passwords in plaintext. These credentials are compromised without the service knowing. If I'm using 1Password's TOTP then, I think, an attacker is prevented from logging into the service with my credentials. If I'm not using 1Password's TOTP then the attacker can login to the service.


If the service is compromised, you can't trust your TOTP secret (the little binary string from which your TOTP codes are generated) either! The protections TOTP provide in this scenario are all based on magical thinking; that it "feels" secure. But really, with respect to a specific service, if they're compromised, your credentials are worthless and need to be reset wholesale.


It should be noted that the TOTP secret is probably kept in the same database (if not the same table) as the password hashes. I'm surprised we don't see more TOTP secrets in password dumps.


And you reset the password after you find out about the leak - but that may be a long time, or even forever, after the leak occurs, so with only a password you're more vulnerable.


You know, that’s actually right.

I’m going to reconsider why I’m using 2FA and go all U2F or remove it entirely.


You sound surprised I’m right! :)


2FA is strongly pushed in a lot of places. Creates an impression of importance.

You just saved me a ton of hassle next time I swap phones. Cheers.


Ha, I'd just not thought it through for some reason.


You're assuming that using 1password implies you're not re-using passwords, or using easily guessable ones. Is this assumption safe to make?


Not entirely. But 1Password does clearly flag weak passwords and reused passwords (i.e. used for two or more items in 1Password) with a message that's not dismissible.


In general, no, but for 1Password users sophisticated enough to store TOTP creds in it, I'm guessing: almost always yes.


Similarly, when you are using a tool to generate something unique for each logon, why does it have to be long? Four characters should be plenty if you do not reuse it.


It mitigates passive attacks: if someone spies on me entering the password (by keylogger or in person), they only get my password but cannot log in 2 minutes later (TOTP tokens should not be reusable for some time, so even this 1-minute window gets closed).

Plus, some bureaucrat can tick the 2FA box.


Right, but you're mitigating an extremely marginal attack (a purely passive interception of a login) and enabling a somewhat more realistic attack, and for what? If you're just doing it to fake out 2FA requirements, sure, do whatever. But otherwise, just use 1Password passwords and skip the TOTP theater.


> Right, but you're mitigating an extremely marginal attack (a purely passive interception of a login) and enabling a somewhat more realistic attack, and for what? If you're just doing it to fake out 2FA requirements, sure, do whatever. But otherwise, just use 1Password passwords and skip the TOTP theater.

Passive interception is not the only possibility. How many password dumps show up each year? Having TOTP enabled, regardless of where it is stored, helps mitigate that threat unless the password that is leaked can also access whatever is storing your TOTP secrets. But if you're using a password manager and reusing passwords, I don't know what to tell you.

Personally I use a TOTP implementation that lets me move the backing store as a file. It's not stored in my password manager, but it's available to me should I need to change phones. That seems like the best of all worlds.
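Portable stores like that generally come down to the de-facto "Key URI" format that provisioning QR codes encode; anything that can round-trip these `otpauth://` URIs can migrate secrets between apps. A sketch (the issuer, account, and secret are made-up examples):

```python
from urllib.parse import parse_qs, quote, unquote, urlencode, urlparse


def make_otpauth_uri(issuer, account, secret_b32):
    """Build a provisioning URI in the Key URI format most TOTP apps scan."""
    label = quote(f"{issuer}:{account}")
    query = urlencode({"secret": secret_b32, "issuer": issuer})
    return f"otpauth://totp/{label}?{query}"


def parse_otpauth_uri(uri):
    """Recover the label and secret from a provisioning URI."""
    parsed = urlparse(uri)
    params = parse_qs(parsed.query)
    return {"label": unquote(parsed.path.lstrip("/")), "secret": params["secret"][0]}


uri = make_otpauth_uri("ExampleCorp", "alice@example.com", "JBSWY3DPEHPK3PXP")
print(uri)
```

Rendering the same URI as a QR code (or just keeping the text) is exactly the "backing store as a file" approach described above.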


Password leakage is not a marginal problem. I find saved passwords in public browsers all the time. Those with TOTP are much better protected against this as well.


Yeah, but you are not finding 1Password passwords in public browsers, are you?


1Password also has a web UI, so yes it is still very possible to copy a password from 1Password and accidentally save it on a public browser.

Not to mention the simplest: type in password from 1Password on phone on public browser. Accidentally save password.


Sadly, I found ones from colleagues that use 1Password and even to me it happened that I clicked save for password that I didn’t want to save in that environment.


How does having TOTP in addition to a password, versus just the password, enable a more realistic attack?


Agreed that you don't gain anything if you're using 1Password. But users may be required to set up MFA to access something like GitHub organizations, in which case having it available in 1Password is convenient.


It's true, putting the TOTP in a synced app actually removes the "something you have" aspect of the second factor. Now you've created a way to intercept the "something you have" factor without touching either the user or the authenticating service, so it's just an extension to the "something you know".


TOTP implemented with 1Password is not 2FA.

All TOTP devices must store the symmetric key, yes. 1Password goes a step further and provides a UI to allow the user to simply copy the symmetric key out of the login record à la a password.

TOTP clients that make opinionated design decisions prohibiting a user from getting at the symmetric key are correct implementations.

That said, if one wants to mandate 2FA for one’s users, TOTP is not the right choice, given it allows users to do the wrong thing.


Reduction of chance for a successful phishing attack. Is it possible for a hacker to get both the password and the TOTP? Sure, but the timing of that is a 30-second window, in which the hacker needs to be extremely sophisticated in order to successfully compromise your account.


This is not at all true, and we’ve dealt with unsophisticated but successful ATO attacks on TOTP all year. TOTP does not defend ordinary users against phishing.


> TOTP does not defend ordinary users against phishing

I never said it ultimately defends ordinary users, just that it reduces the chances because it requires a more sophisticated attack.


You said attackers need to be "extremely sophisticated" to pull it off, and I've spent a year seeing nitwits – clumsy and trivially detected nitwits – do it without much trouble. You were wrong, and wrong in a way that's important to correct so people know that it's wrong.


And I've seen the opposite, what's your point? How do you qualify "clumsy and trivially detected nitwits"?

PS - Telling people they are "wrong" isn't convincing, and is downright condescending. Thanks for that.


Then it's not really 2FA anymore. Anyone with access to your 1Password account can login anywhere.


If anybody has access to your 1pass account, you are fucked no matter what.

The biggest threat the vast majority of people face is getting one of their accounts taken over because they re-use credentials and some site, somewhere, got their account db compromised and now those credentials are on a list. Any account that has 2FA, even ones that use "weak" 2FA like SMS, will be immune to being broken into. These drive-by people won't be breaking into your 1pass account to recover your 2FA secret either--that is too much work. They'll just move on all the accounts that don't have 2FA enabled.

(huge asterisk: what I said is only true for drive-by attacks where some bot is burning through a list of a million accounts to try. If somebody is specifically trying to attack you and your accounts... you've got bigger problems to worry about than simply having SMS-enabled 2FA or saving your 2FA keys in a password manager.)


Regarding people getting access to your 1Password account, things have gotten better recently.

1Password supports 2FA to log in to the vault itself and, most recently, WebAuthn security keys in the browser, which I immediately switched to. On mobile it's still TOTP, but better than nothing. I've got 2 physical keys and Google Authenticator as my only 2FA methods.

Once U2F is supported on mobile, I’ll drop TOTP altogether for my 1Pass login. Probably buy another Titan key and throw into the bank box.


If your 1password is broken but 2fa is elsewhere, you are much less fucked than you would be in the case of a breach of 1password + 2fa.

It would be royally annoying, but salvageable.


It's two-step with a unique key every login. It wipes out phishing, password reuse and password leaks as vectors.


It doesn't wipe out phishing. Phishing is getting more and more complex; phishing sites now ask you for your TOTP in real time to gain control of your account. And that's one of the origins of FIDO2. - https://fidoalliance.org/fido2/


Unique passwords wipe out password leaks as a vector. Anything where you type in a code is also still phishable.


People can still log into your account if your unique password leaks. It just prevents them logging into your other accounts.

Point is, TOTP isn't useless.


What's the second factor for 1password?


They added 2fa with duo security + a few other methods.


+1 to this! Moreover, if you are invested in the Apple ecosystem, it seamlessly integrates with iOS and comes with a watchOS app, so you can see your 2FA codes on the Watch. A huge convenience when doing TOTP from the phone.


While you can use the watch for this, it's kind of a waste of time with the latest version of 1Password. 1Password automatically copies the TOTP to the clipboard for 1 minute after entering a password so you can paste the TOTP in immediately after 1Password has autofilled the initial login screen.

Works this way on Windows, iOS and macOS too. If you are using 1Password for TOTP, this is synced anyway, making the watch or phone app redundant for TOTP on a Mac or iPhone. Just mash ctrl+v/cmd+v!


Use Authy. 2FA via account instead of device is hypothetically less-secure but as a practice for individual security, far better than being locked out of everything after the misfortune of losing a phone.


Note that by default Authy is vulnerable to SIM porting. You can take over an Authy account with just a phone number.


You could take over an authy account, but does that give you the ability to decrypt the encrypted blob with the TOTP secrets in it? Authy says no: https://support.authy.com/hc/en-us/articles/115001950787-Bac...


Haven’t you lost all your 2fa keys at that point though? Decryption aside, that seems incredibly inconvenient.


Well, maybe? If the SMS takeover attack results in the permanent loss of your phone number, then yes, you have lost everything. However, in most SMS takeover attacks, the attack only lasts some hours, where the attacker has control over your phone number and uses that to pivot into other accounts. With the Authy-style 2FA, they get your phone number, can then recover your Authy account, and get a copy of your encrypted blob, but they can't do anything with it. Any time they try to pivot to a different account, they don't have the 2FA and get blocked (ignoring account recovery attacks that bypass 2FA, that's out of scope). Eventually, you'd recover your SMS/phone account, and be able to download the blob, decrypt it, and have your keys. That's the model I'm seeing.

One protection that Authy should include is not letting someone who has recently performed an account recovery perform a blob deletion. That should require a delay.
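The security claim here rests on the blob key being derived from the user-chosen backup password, not from anything that phone-number control grants. Illustrative only (not Authy's actual scheme or parameters), the derivation step looks roughly like:

```python
import hashlib
import os


def derive_blob_key(password, salt, iterations=200_000):
    """Derive an encryption key from a user-chosen backup password (PBKDF2).

    An attacker who recovers the account via SMS gets the encrypted blob
    and the salt, but without the password this key stays out of reach.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)


salt = os.urandom(16)  # stored alongside the blob in the clear
key = derive_blob_key("correct horse battery staple", salt)
```

So a phone-number takeover hands over ciphertext plus salt, and the attacker is left brute-forcing the backup password offline.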


Authy is only available for Chrome, not Firefox.


Is that really a problem when you have a mobile app and a desktop app?


Nobody paying attention was trying to get regular people to stop writing their passwords down. Threat models matter: there are no regular people for whom "bad guys steal the book I wrote my passwords in" is a real threat. I bought my mother a nicer (but still inconspicuous) password book for Xmas. Do I use one myself? Of course not, I have a password manager - but I run vi and flood-wired my home with Cat5, and my mother won't be doing either of those things either.

Regular people get done by credential stuffing, by phishing and straight up guessable passwords. Two out of three of those is fixed by having lots of separate passwords written in that old diary kept in the third desk drawer.

Phishing is hard, against online phishing (as opposed to lazier attacks that collect credentials offline to use later) the only really good defence is WebAuthn/ U2F and too many sites ordinary people care about don't offer that.


I agree to some extent, however most people I have seen with password books still have very simple passwords and often reuse them or create them with simple rules (pet's name + year born + first three characters of the site I'm logging into).


Why use GAuth and not AndOTP and do backups to synced folder?

https://play.google.com/store/apps/details?id=org.shadowice....


+1 on AndOTP.

It's also available on f-droid [1] and of course open source [2].

[1]: https://f-droid.org/en/packages/org.shadowice.flocke.andotp/

[2]: https://github.com/andOTP/andOTP


I'd even recommend Aegis [1]. Also open source with encrypted backups, but has better crypto than andOTP (both devs talk a bit about it here [2]). Plus, it can do imports from other OTP apps for easy migration.

[1] https://github.com/beemdevelopment/Aegis

[2] https://old.reddit.com/r/androidapps/comments/b45zrj/dev_aeg...


Thanks for this, I really like the discourse between these two in the second link. The andOTP author is open about their crypto being sub-optimal and giving the Aegis dev a thumbs up, reason enough for me to give Aegis a shot to replace it. Perhaps they'll join forces going forward and we all win. :)


Where are you storing those backups? If it's the same place as the passwords, you're weakening the second-factor assumption.


In the same place as the passwords, but with a different password, and both use different encryption algorithms. Well, it's better than nothing.


There are services, like Authy, that let you back them up encrypted. Whether that is appropriate, when considering the security ramifications, is up to you. I, in the past, have printed out the QR codes and put them in a safe as backups.


Just a heads up, but the last time I checked, Authy did not have secure defaults for 2FA.

Authy supports two standards -- the Google Authenticator method, and their own internal standard. Any tokens that go through their internal standard can be recovered on a new phone using just SMS verification, which defeats the entire point.

Your encryption password only applies to Google Authenticator tokens.

https://twitter.com/DanielShumway/status/1092095395478556674


You are correct.

When I called them up one time, the person on the phone seemed to be able to 'see' my Cloudflare TOTP code (back when Cloudflare had beef with Google about their CEO's account getting hacked), but wasn't able to 'see' what my manually added Google Authenticator codes were.

So I'm not even sure if Authy's own stuff is secure at all, perhaps someone from there will jump in.

But used the Google Authenticator way, it's a decent option. Just be sure to treat your backup key as a critical component that needs to be stored securely.


Only if you set up SMS. Don't do that. Also, turn off the multi-device feature.

2017: https://authy.com/blog/understanding-2fa-the-authy-app-and-s...


You’re correct. I was referring to the standard TOTP here and not their weird Authy variant.


Exactly what happened to me. I accidentally left my phone in my pocket and did a load of laundry. My phone was destroyed and could not be recovered. I had iCloud backups, but that didn't do anything for Google Authenticator.

Now it's been a nightmare to recover my accounts.


Same here. What shocked me was how willingly companies would provide secondary ways to access your account. I feel like if I was trying to gain access to an account I didn't own and was blocked by 2FA, I would just say that I had lost my phone.


1Password offers a very usable solution to this problem. It can store your TOTP secret alongside your passwords and generate the necessary codes. And the database is synchronized across devices.

I imagine other password managers offer similar functionality as well.


But that also means that if your password manager is compromised, 2FA is broken as well. This is certainly more convenient, but also less secure.


> But that also means that if your password manager is compromised, 2FA is broken as well.

You either trust the implementation or you don't. If you think a breach of your password manager will result in the hackers' ability to decrypt the vault, then you need a new password manager.

At some point you have to trust the encryption.

Plus, if it's really on your mind you can store the keys in an offline vault with something like Keepass.


The point is defense in depth. If you must rely on one app, no matter how secure, it does nothing for depth. The whole point of MFA is to have multiple unrelated authentication steps so that when (not if) one is compromised, you're still protected.


It still does provide protection against the attack where someone looks over your shoulder while you type the password, since the TOTP is a rolling code


I don't disagree with that at all, and I should be clearer in what I'm saying. One has to think logically about where the dangers are and what MFA is used for.

If a cloud based password manager is breached and that leads to decryption of the password vault then that password manager is not fit for purpose under any circumstance, whether you store TOTP keys there or not.

So by default we have to approach from a position of trusting encryption, right? MFA in the realm we're talking about here plays no role in encrypting the vault (yes you can use keyfiles or hardware keys as part of the decryption/encryption process), under this context MFA is about authorising access to an account.

So you can go ahead and store your keys in a second vault, but that vault must have security as good as the password one in order to be secure - i.e. it must not be decryptable.

So if each of the vaults must not be decryptable in order to be secure, there is no reason to use two, as one will be 'good enough': not decryptable × not decryptable ≠ more not decryptable.

What I would say is that there are key accounts that need to be secured with offline physical protection. For me (and I'm guessing this is the point you're making) those would be the password manager's MFA.

If you trust your password manager, then you only need to secure access to it in order to secure any other keys/passwords stored inside. So you only need to remember a password (something you know) and use a keyfile (something you have) to get access. You can substitute an MFA TOTP key here too if you like, but you just have to secure those two things.

If you trust your password manager, you don't need more than that. You obviously have to trust your machine, that's a rather obvious truism I shouldn't have to point out.

So my point is: be realistic about where you store things and what you store, and recognise where the defence in depth is of value.


I think you are missing the parent's point. It's not just a matter of "secure/trusted" or "not secure/trusted". You could trust your password manager but that doesn't mean that compromise is impossible. One reason for having a second factor is to hedge your bets against the possibility that you mistakenly trusted a service that turned out to be insecure or to have attacks that you didn't consider or know about at the time.


But that leads to something this whole discussion has made me think about - if the service is insecure my data can be hacked anyway. Add that to the fact I should be using unique passwords for each account I have.

So if both of those things are true - what does MFA get me?


You mean if BOTH services are insecure. That's the point, with 2FA you'd have to compromise TWO services to get access to all your accounts. As opposed to storing your TOTP keys in your password manager, where only one service would have to be compromised to get access to all your accounts.


You need to think about the security of the whole system, not just the storage encryption: very few people get compromised by someone running an offline brute-force attack compared to the number who get malicious code running on their system. Password managers can make an effort to harden against that kind of attack, but ultimately that's why things like MFA exist, since there's a high likelihood that someone will be able to read anything you have open.


Again, I don't disagree per se, but it's such an obvious truism that you have to trust the device you're using it seemed a bit pointless to say it.

I'm also not sure of the point you're making?

MFA doesn't help you if, as you said, you've opened something. You're safe until you open it, at which point you've presumably used MFA to open it, but then it's open, so MFA doesn't help you any more.

Basically, malicious code on the device = game over in any circumstance.


Here's a simple example:

1. You use AWS

2. You lose your phone, `npm install` the wrong package, etc. and someone gets a copy of your password database

3. The attacker tries to login as you to fire up their bitcoin miners

In the case where you're not using MFA or are using TOTP with a shared seed, they're successful. If you use U2F, TOTP on a separate device, etc. they'll fail even though your computer still needs cleanup.

Consider also that many attacks aren't full privileged code execution — say being able to read a file or dump browser memory but not installing a keylogger or trojan which would allow them to piggyback on your future sessions. If you're not using MFA, that's all they need to be able to open their own session.


The odds of your lost phone landing in the hands of somebody who is going to spin up a bitcoin mining farm on your AWS account are minuscule. A much, much more likely risk is one of your accounts getting compromised by some dude running a botnet using a list of a million leaked credentials... if you have 2FA on the site the botnet is targeting, you are immune from compromise.

Besides, my phone has the ability to do a remote wipe. It is effectively a bricked doorstop until they can log into the phone.


> Besides, my phone has the ability to do a remote wipe. It is effectively a bricked doorstop until they can log into the phone.

If they got your TOTP seeds, you're in a race to see whether the person who {compromised,stole} your phone can disable your ability to do that first. And since a remote wipe requires network access, they can simply ignore it and reuse your credentials until you change them.

What all of these have in common is that multi-factor authentication is based on having separate factors. If you store your passwords in the same place as your TOTP seeds, you have one factor rather than two. You might decide the risk is acceptable but that should be a carefully reasoned decision, which was … not apparent … from the comment I was replying to.


So, by that definition, every password manager is broken. If your machine is compromised, it's kind of game over there.


I think you're thinking about this the wrong way. I trust 1Password. I trust it so much that I don't bother storing TOTP secrets in it, because I don't need a small extra dynamic password tacked on to the strong password it already generated for me. The threat stories where someone somehow captures my password but hasn't fully compromised either 1Password or the site I'm logging into (thus invalidating the credentials anyways) are extraordinarily narrow.

1Password TOTP is mostly just theater.


> But that also means that if your password manager is compromised, 2FA is broken as well

Sure, but if your password manager is compromised, you are fucked anyway.


Not if you use MFA as designed: they’d still need to snag your token(s) to complete a login.


Whenever I read about "across devices" or "all your devices", I think: really? My various Linux boxes, Android devices, and tablets, including Amazon tablets where I've sideloaded stuff? If I can't use it on a single device, I have to write down/remember the password as before, which means I'm unlikely to use a long, strong password. And even on devices which are supported, I've struggled to marry the now-built-into-Android support for these things (not sure if it was 1Password) with browser plugin support for sites, so I ended up fighting the various UIs to cut and paste credentials between them.


1password and "very usable" should never appear in the same sentence.


I keep an old tablet at home with a second copy of all my rotating keys.

I know all the sites say not to do this, but if I lose my phone, or it breaks, or while switching phones, I have that tablet around.

Not the ideal "cheap" solution, but I had it on hand already and if I hadn't I would have gotten something like a Yubi Key and use that as my second set of keys.


Do a lot of places really say not to do this? I think it's fine, and way better than using SMS as your second factor ...


There is probably no good answer that doesn't sacrifice security. At some point you have to be willing to say "ok, I accept this risk, this is still way better than using just passwords."

Personally, I enroll multiple devices for each account where I enable 2FA (technically not a supported operation, but nobody can really tell if you scan a QR code twice).

It's technically less secure, but I think a decent compromise.

(self promo, but related to the topic at hand: since I don't have 2 phones, I made a utility to enable using T2-equipped Macs as a 2FA client that binds the keys to the hardware. You can check it out at http://github.com/sqreen/twofa)


https://mattrubin.me/authenticator/ stores in iOS keychain and you can restore your device with TOTP codes from itunes encrypted backups.


If you use an iTunes encrypted backup to a Mac (or PC?) then restoring to a new phone works fine. I’ve restored my TOTP to a new phone this way.

I guess for those who don’t want to do old-school backups to a PC this isn’t helpful though.

I think just storing the TOTP in 1Password is probably the best approach for the average person. I print the backup codes and file them to a filing cabinet.

Edit: for certain TOTPs I’ve also put them in an app on my Mac, so that gets backed up automatically with time machine. Also makes it much more convenient to get them than having to pull out a phone.


> If you use an iTunes encrypted backup to a Mac (or PC?) then restoring to a new phone works fine. I’ve restored my TOTP to a new phone this way.

Really? That wasn't my experience when I restored from backups. Had to set them up again despite most everything else carrying over.


Google broke that for their Authenticator some time ago. This app https://mattrubin.me/authenticator/ works with encrypted backups.


Looked again and it seems like things changed sometime around 2015-2017. Sorry for the misinfo. https://dpron.com/recovering-google-authenticator-keys-from-...


This exactly happened to me (I took my iPhone swimming in the Mediterranean).

My personal solution is 2 encrypted files - one for my passwords, and one for the keys for the (sadly few) services using TOTP. So losing (i.e. stealing then decrypting) one is not losing both - i.e. the password and the TOTP remain independent factors.

I got back many of my TOTP accounts fairly easily, but boy the amount of trust placed in a SIM Swap is still scary.

Compare for example to (my idealised way it should work) of every online account I have using 2 different U2F keys (ie different hardware IDs) to control the account. One I lock away in my bedroom safe and one I carry in my wallet [#]. Lose one and I still hopefully have the other. But a) can you name any service that does that today? b) is my bedroom safe actually safe ?

My work TOTP was backed up to the cloud - and it still worked! The app had stored the key (probably securely) in an iCloud-backed-up location.

So the TOTP was no more safe than iCloud. Which is to be fair a pretty high bar. So I am fairly relaxed about it. But still, TOTP hardly counts as "something you have" these days.

But I completely agree that the lifecycle problem (revocation mostly) is a long way from being solved. I would just like to see dual U2F access controls as a default on web services today.

[#] That's another problem. I am seriously contacting wallet makers to see if I can design a USB key and U2F key friendly wallet - I hate my keyring approach.


This is why I use Authy for my 2FA tokens. They store everything encrypted in the cloud, so you can wipe your phone without losing access to your whole world.


This

Also, "users will try to provision multiple devices with 2FA" well, you know, some services need a shared login. What then? (And no, in many cases you can't expect the user to use their personal login)

Shared accounts are a thing, and if they need 2fa make it so.

And yes, phones get robbed, lost, dropped into toilets, run over; their batteries catch fire; they get irrecoverably damaged. A recovery scheme is needed for accounts.


I ran into that issue when my Nexus 5X died to the bootloop issue. It was enough of a pain that I just gave up and use Authy with the remote backup enabled.

I've dealt with phones dying/being lost on three separate occasions and recovering the accounts, as you've said, is a huge hassle, and I always end up screwing up at least once, resulting in a round of phone support calls to get access again.


Even after your edit: Treat your TOTP keys like passwords and store them.

It's like "secrets management". At some point, somehow, automation programs need secrets to access APIs. At some point, no matter what, there will be sensitive data in a file on a container/server for your program.

Likewise, "Kill the Password" initiatives are shorthand for "Yeah we know there will always be a password somewhere but let's merge it with 2FA, call it a PIN and we'll be good".

I'm ranting a bit, but what I'm saying is that people will need to store secrets for accounts for a long time. Every single Internet citizen should have Keepass or Lastpass for account management, and that database should be well secured, and should also house their TOTP break-glass codes. (If you're paranoid you can store them in a different database). Barring a Password DB compromise, 2FA still gives a good benefit.


Duo Security has a phone migration/transfer feature now. You need to enable it in your org settings and opt-in.


I just can't believe Duo doesn't have a desktop app of some sort, even if super basic. We use this where I'm at and I have to grab my phone while sitting at my desk to 2fa into things so many times a day. I use Authy for a lot of other stuff and it has a little chrome webapp/extension that'll let me not constantly grab my phone.


Random thought, they have apps that mirror your iPhone screen, typically for development or presentations. You potentially could use that as a workaround so your phone just needs to be connected or on the same network etc.

See Reflector 3, works with Android/iOS.


Authenticator Plus is a nice simple paid (one time) option for encrypted OTP syncing, and supports both Android and iOS.

Additionally, Duo Mobile supports cloud backup and restore of third-party OTPs. (Disclosure: I work for Duo.)


Sounds like the TOTP apps need an encrypted import/export capability, kind of like what HSMs have (including Yubikeys and TPMs). You have one TOTP app generate a key pair, give you the public key, which the other TOTP app uses to generate an encrypted blob out of the stored codes, which only the target TOTP app can decrypt.

Since modern phones are getting TPMs, it's even becoming possible to do this without any private keys being accessible to the apps.


HSMs generally support import, but not export. Keys generated on the device can't be transferred elsewhere, but you can generate the key separately and import it into the HSM while keeping a backup copy. TOTP apps generally work the same way, except the keys are always imported: you can save the secret used to configure the app (screenshot the QR code, or scan it with a regular QR code reader) and use that to set up the same TOTP on a different device.

> You have one TOTP app generate a key pair, give you the public key, which the other TOTP app uses to generate an encrypted blob out of the stored codes, which only the target TOTP app can decrypt.

Unless you somehow verify that the public key came from a "genuine" TOTP app, that's essentially the same as allowing the keys to be exported in plaintext. (User/attacker generates their own key and presents it to the TOTP app which duly encrypts its secrets in a way the user or attacker can easily decode.)


Exactly this. The onboarding process is simple but the total cost of ownership is expensive. I ended up keeping my older mobile phone with me like a token device and will migrate its keys when I have time to do that. I was even confused by the Samsung migration using the USB OTG connector, since it doesn't clearly state that some stuff is not migrated.

I hate when you need to put your brain in sysadmin mode instead of disconnecting it expecting a process that just works.


That's why I have an NFC hardware key. Doesn't matter if I get a new phone then. Of course if you lose the hardware key, you still have the same problem.


Something I do to mitigate this problem is take a screenshot of the barcode and store it in keepass - along with the password. So next time I change phones, I just scan the codes back and I am back on track with the TOTP.

> I thought we were trying to stop people writing their passwords down and storing them next to their computer?

Yes- but I guess it's okay to use password managers anyway. And they make it easy to store screenshots and stuff as attachments.


> Why are we developing systems that use TOTP if we are encouraging users to treat them like passwords, undoing the vast majority of the security benefit?

Better advice would be "treat them like passwords, but keep them encrypted separately from your password manager."

You shouldn't need to restore your TOTP secrets to your phone more than once every couple years, so there isn't any reason for you to have access to them 20 times a day.


If you're using Google Authenticator and rooted your phone, it's not hard to extract the database and transfer the secrets to your new phone. Or you could use an alternative like Authenticator Plus which supports export / import (and has some other nifty features). Of course, both these options come at a cost to your security posture.


>The main problem I have with TOTP (What you're using when you use Google Authenticator) is that the migration path doesn't exist when you get a new phone.

It also doesn't migrate when restoring from backup, as I found out when my iPhone had an issue and I had to get a new one - if I'd known that I'd have done it on a weekend :/


Saving TOTP keys is not that bad. Even if they are on a piece of paper next to your computer, it is still good protection against keyloggers. If they are in a safe and that safe doesn't contain the corresponding passwords, that's probably the best you can get.

Note that you can use two phones in case you want recovery.


> the migration path doesn't exist when you get a new phone

I faced this problem last month. I was using `Google Authenticator`, mainly because of its simplicity. But you cannot see the secret keys for entries. I had to transfer the sqlite database out of the phone and extract the private keys manually.


> If you now buy a new phone (which users might be doing once every 18-24 months), you need to log into each account and generate a new TOTP key and void the old. That's a couple of hours work.

No you don't. You can simply save the original key and enter it again 'manually'. I say 'manually' because it's really copy and paste, so it doesn't take very long to do at all. A few minutes at most.

Every single time I've set up TOTP for any account I can simply select to enter the key manually into Authy or Google Authenticator etc rather than scan a QR code, so you always get access to the key. Just save it.

You can be paranoid about this and store them in a separate password database if you like. Or you can store it alongside the passwords in a password manager.

Then just secure your password manager with TOTP and all you have to 'recover' in a disaster recovery situation is the password manager password (should be memorised) and the TOTP key which you can store under physical offline security if you like.


That behaviour is literally what this article is trying to stop:

- Documentation: Warn users not to save their QR codes

- Documentation: Tell users to only provision one device

- Documentation: Suggest TOTP applications that don’t support unencrypted export

Why are we telling people to use 2FA if we then immediately remove the security benefits by telling them to treat it like a password?


The "Warn users not to save their QR codes" was to address the issue of users who might "Screenshot their TOTP QR codes and leave them lying in their Downloads folder". I don't see how that is necessarily applicable if, say, I print the QR code and save it in a safe deposit box, where I also have my FileVault recovery key and my 1Password Emergency Kit.

I also don't really understand the "Tell users to only provision one device" point. If the device is one like the gemalto thingy that we use at work to login to AWS, then sure, I can see why having more than one is bad for a given login. That shows the code to anyone who presses the button. If you had two, you'd need to keep both of them under your control at all times, and then there would be a decent chance then that if you lost one or destroyed one, you would also lose or destroy the other at the same time, so having two might not even gain you much in reliability.

But what if the devices are my iPhone and my iPad and my Apple Watch? They have pretty strong protections to prevent a third party from using them if I lose them. The consensus seems to be that unless I'm targeted by a government, a lost modern Apple mobile device with a long passcode is not going to cough up its secrets.

(Well...at least an iPhone or iPad. I think Apple Watch defaults to automatically unlocking if you are wearing it and unlock your iPhone. That might be exploitable if the person in possession of your watch can put it on and arrange to be close to you when you unlock your phone. I wonder what the range is for that? Would it work through a typical office wall?).


None of those points above address what I said, nor should they because TOTP should allow for disaster recovery.

To allow for disaster recovery, the keys used to generate the TOTP codes must be storable somewhere.

The article is creating a strawman by suggesting to screenshot QR codes and leave them in the downloads folder. It's perfectly reasonable to save keys in a secure manner.

It's also giving borderline bad advice of trying to engineer in an unrecoverable state should a single device fail. That's a poor suggestion to give under any circumstance.

Saving TOTP keys into a separate dedicated encrypted vault under physical security is absolutely a valid method of allowing recovery from a device failure.


That's what recovery codes are for.

If you're ok storing the TOTP key, you could just store a recovery code instead. Recovering an account is generally audited, so this is more secure than just provisioning another device.

My point is just that it's still a lot of effort to recover, and we're basically encouraging people to undo the benefit of MFA by storing the TOTP key/recovery code right next to the password they used to get through the first factor...


There's no functional difference between the two codes in terms of account access. And I dispute the claims of 'auditing'. No service cares. As long as you give a correct code then in you go.

So why use recovery codes and then have to keep them secure, with the drawback of re-setting up every lost account, when you can put the exact same amount of effort into storing and securing the original key and re-set up all accounts in a few minutes?

I speak from personal experience on this topic as to which is easier and how the effort to store and secure recovery/original keys is exactly the same.


>There's no functional difference between the two codes in terms of account access

Are you sure?

The last time I used a recovery code I got an email and my 2FA immediately stopped working.

If I have your TOTP key I can use your 2FA without you even knowing, even while you use it. It effectively gives me an unlimited backdoor into that account.


> Are you sure?

No, because I haven't tried every service. What I do know is that key services I have notify me of every single login that is made, so I'd know anyway.

Plus, one has to examine at which point back up the chain the problem might occur or be spotted.

In this instance you've managed to access my TOTP keys, which means you've hacked and broken the encryption on the password manager or you've got malicious code running on my device. Or you have physical access to a running and unlocked machine.

In either of those cases I'm already truly fucked.

I would imagine that any scenario where I managed to get hold of your recovery keys would involve the same things, so you'd be truly fucked.

So in that sense there's no functional difference in the way I have things setup for me.


On iOS use the "Authenticator" app by Matt Rubin. It'll save codes with iTunes encrypted backups but not to iCloud. Then restore to the new iPhone. Not sure how that'll work out when they retire iTunes.


Yes! Plus you might not have an expensive data plan when you travel.

Better solution - central trust via Apple face recognition (federated, private), but usable by others as a login.


AndOTP has an import/export feature. Bitwarden has automatic sync. tetripin has just a clear text file you can export. There are technical solutions.


Screencaps of the QR Code inside a good password manager with a different password/key.

Yes, the weakest point is that password/password-manager now.


Is Authy not TOTP? I migrated to it from GA specifically because it divorced me from my single device.


Recovery codes should not be mandatory. Recovery codes are not second factors — they circumvent the 2FA scheme. You should, of course, allow users to configure recovery codes. However, the default behavior for a two-factor scheme should always fail closed: if a client cannot produce a valid second factor and hasn’t voluntarily weakened the 2FA scheme by adding an escape hatch, they should not be allowed to continue.

This is unrealistic. Users lose 2FA credentials regularly. Think of recovery codes not as a defense for the user, but for the service --- they keep some number of customers out of your terrifying, manual account recovery flow.


Author here: that's fair; I admit that it's sort of an extreme position to hold (and the framing is intentionally polemic).

Perhaps more fairly: services should only provide recovery codes by default once they (1) fully understand the role of recovery codes within the 2FA scheme, and (2) make efforts to relate that role to users and encourage best practices beyond "don't save this text file to your desktop!".


You're already asking a lot of users. Let them save the recovery codes wherever they want! The real point of the codes is to prevent people abandoning 2FA in frustration, even if it is at the expense of some security.


First of all: thank you for your work on securing congressional campaigns! I can count on my hands the number of people I admire in the intersection between technology and politics, and you're one of them.

I agree it's a lot to ask. But I think it's better to demand extreme security considerations from users and fall slightly short in practice than to endorse a weaker practice just because it improves account security beyond a single password (e.g., SMS codes).

One frustrates users and makes them abandon 2FA, the other (IMO) encourages some complacency and makes it harder to justify changes to users (especially ones that are really great from both UX and security perspectives, like WebAuthn).


Thanks for the kind words!

Do I understand you right as arguing that having some people not use 2FA because the requirements are too harsh is better than bringing people onto "2FA lite"? Or are you saying something different?


Well, I guess that's the hazard: it's definitely better to have "2FA lite" than to have no 2FA at all, but I don't think our mentality when it comes to 2FA advocacy should be resignation towards the mistakes that users make. But that's a really hard line to walk.


What mistake is a user making? Sophisticated users lose access to their 2FA credentials all the time. The conventional advice is "enroll multiple TOTP devices". Most users don't have multiple mobile devices to enroll. Now you're asking them to enroll something on the desktop, which effectively undoes the "something you have" benefit of TOTP (see: the 1Password thread above). At least a recovery code _can_ be printed out.

I don't think the logic you're employing here is coherent. Consider it a bit longer! This post could be a good longstanding reference, but this is a big flaw.


Yeah, we discussed this a bit internally and concluded that opt-in recovery is an unreasonable standard to encourage. I'll update the post shortly.

Edit: Updated.


I like this post a lot!


Tom, just to be clear, your counterpoint to the original wording is "Recovery codes _should_ be mandatory", correct? Would you consider "Recovery codes should only be mandatory if a user has only configured a single second factor" to be a reasonable alternative?

I have multiple U2F keys configured on all of my important accounts. I'm comfortable enough in my belief that I won't lose _all_ of my keys to not want recovery codes to exist so I don't need to worry about storing them. This places me in the extreme minority of users to be certain, but I still don't want my security weakened by recovery codes that I won't ever use.


I'm ambivalent about the multiple factors case (unless one of the factors is SMS). My feeling (can't back up with evidence) is that most people who do that are savvy, but remember that part of the point of recovery codes (which are in fact a second factor) is to protect you from the service provider's account recovery flow. The more routine account recovery has to be, the less secure the service is likely to be.


Let's be clear though on who is at fault: this is way too hard to use correctly even at expert level. Users make mistakes because we are putting them at the controls of the 747 when they just wanted to send a spreadsheet to their colleague.


100% agreed. I've updated the post to soften the argument around recovery codes.


What? A constructive outcome? But my nerdfight!


It's unclear from the post that it's a polemic; it's structured more like a standalone reference. I think it's pretty good for that, except that the recommendation to avoid recovery codes is going to get users hurt.


> they keep some number of customers out of your terrifying, manual account recovery flow.

What is the ideal, secure manual account recovery flow for users? Having worked at a SaaS startup previously (where application login 2FA was mandatory for staff initially, but providing it to end users was discussed extensively internally before eventual roll out), it doesn't seem like there is an easy way to have a random user prove who they are if their 2FA auth goes south (no recovery codes, access to TOTP lost).


I like the approach some MMOs use:

- Provide the printed CD-Key from the game box in full

- Provide details about the transaction used to purchase the game (address, date/time, name)

- Provide names of characters on the accounts

They're not impossible to get if you've compromised someone, but they're far from trivial.

In fact, it's actually very similar to the algorithm-based verification my bank uses...


Curiously, Blizzard had hardware tokens (and later phone authenticator apps) 10+ years ago, before most banks and I kind of want to say 'before most consumer online services in general'. And they had the exact same account recovery hellflow problems everyone else has, just earlier.


Namecheap just asked me a bunch of questions that anyone with access to my mail could answer (all in invoices)


First step is to understand your data and why or why not it might be important to hackers. Then base your implementation of 2FA around that.


Thanks for pointing that out! I think we made the wrong call too. Will and I updated that bullet to strongly recommend recovery codes, and note they provide an acceptable usability/security tradeoff.


I think this article conflates two very different motivations for using 2FA:

1) An organization has valuable resources it wishes to protect and secure from understandable but avoidable user mistakes, like phishing. For example: an employer. Notably, in most of these cases there’s an authority to which the user can appeal to recover account access if 2FA access is lost. It makes sense to be more strict.

2) A user wants to protect something they value, but the provider loses nothing if the user’s account is compromised through user error. For example: personal email. In this case the onus is on the user to ensure they don’t lose access, and the provider may be unwilling or unable to accept the liability of enabling account recovery.

The threat models for the two are very different, and in the second case, it’s in the user’s best interest to favor availability of access over all else.

I secure my email with TOTP. I have the key stored in 1Password. In any case where 1Password (native app, not web version) is compromised, I've lost the battle already. However, I'm not worth burning a zero day on or otherwise targeting specifically. I also have backup codes saved in my personal disk backups. Anyone willing to break into my house to get them could just threaten me more easily.

Be very aware why you’re offering 2FA to your users. Are you 1, or 2?


This article may be right for a corporate environment where you can go to the IT department and prove physically that you are who you are, and get your account recovered should anything go wrong with 2FA.

However, if you (like me) use Google 2FA for your personal accounts, you must (if you are sane) keep printed / screenshot copies of the QR codes, backup codes, etc. to be able to recover your account.

With Google or any number of services who don't feel they need to get involved in human-being operations, you have nowhere to go for help if you turn 2FA on and then for any reason lose all access to your codes. What if your only phone dies, gets stolen, lost, etc?

That is the tradeoff -- security at the expense of having absolutely no way to circumvent it. So the only alternative to not lose your entire online life is to keep several backups and not implement the rules that this guy lays out (which would be appropriate elsewhere).


I'm a little confused on this risk factor:

  Use the same QR to provision multiple TOTP applications  

  Poor understanding of what/where their second factor is.

  Documentation: Tell users to only provision one device

What is the risk here? If I have two phones and only one of them on me at a time, why is it dangerous to configure my authentication app on both phones so it's available regardless of which one is in my pocket?


It's better to add two different devices with different secrets to the account than to have both share the same secret. This reduces the chances that something goes wrong when a device is lost or removed. The user can then remove the other device with much less risk of locking themselves out of the account. It also might provide some info in cases where a secret is leaked, as it can be tracked to the specific device.

There is no real security risk of the devices having the same secret, or in adding a backup device. It is really a usability thing, the biggest pain point a user will experience with TOTP is when they are switching phones, or when they lose one. Decreasing the friction here can greatly improve the security of the system.


If the service supports that then it's fine. A lot of services only support a single TOTP key per account though.

There's a small risk in provisioning multiple TOTP keys on one account too. Each additional key increases the number of codes accepted at any moment, shrinking the search space for a brute-force attack. Add in allowances for drifting clocks and a lack of account lockout for missed attempts and you might open up the window just enough for a brute-force attack to be successful.
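Rough back-of-the-envelope for that brute-force point (illustrative numbers only, not any particular service's actual policy):

```python
def guess_odds(seeds, window_steps, digits=6):
    """Upper bound on the chance a single random guess matches some valid code,
    given `seeds` provisioned TOTP secrets and `window_steps` accepted time
    steps per seed (to tolerate clock drift)."""
    return seeds * window_steps / 10 ** digits

# One seed, exact time step: one-in-a-million per guess.
print(guess_odds(1, 1))   # 1e-06
# Four seeds, +/-1 step accepted: a dozen codes valid at once.
print(guess_odds(4, 3))   # 1.2e-05
```

Still tiny per guess, which is why rate limiting (account lockout on failed attempts) matters far more than the number of provisioned keys.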


I always use the same QR code to provision more than one OTP app on my (1) phone, since I've had some of them fail on me after updates or OS upgrades.


Because if you do not know where your other phone is then you do not know who has access.

The same issue exists with a single device as well, but to a lesser degree, because it's less likely to be out of your control.


It's in a locked box.

And I'm not too concerned about someone breaking my phone security, and then the TOTP app security to get at my second factor. If I'm being targeted that hard they're probably just planning to kidnap me and hit me over the head till I login to my accounts anyway.


I wish all 2FA worked like when logging in with your Apple ID.

- 2FA by default

- Push notifications for the token to all your devices instantly

- Not a text message


>Push notifications for the token to all your devices instantly

... including non-Apple devices.

I've been faffing with this for the past few days - I had to reformat and reinstall OSX on my rather old MBA (2013), and I didn't notice at first, but it only restored Mavericks (was previously Mojave).

As my only Apple device, I was SOL when it asked me enter my verification code for me to log into the App Store to upgrade the OS (as Mavericks is pre-2FA).

There were no other options for verification and the only other device I own is an Android phone (not entirely unreasonable).

I can't see a way round this other than getting ahold of another Apple device to get the code. Am I missing something obvious?


I did this recently and basically there's a way of requesting adding another mobile number to the account as a recovery number.

You put in the application, wait about 4 days and then you'll get auto approved (I can't believe any human looked at this process) and you can then set that number as the recovery number.

It seemed to circumvent the whole MFA thing pretty easily but the penalty was time.

No idea what checks were performed in the background by Apple, I suspect none. It seems like the 4-day wait was just to make me feel the system would be secure if someone tried it on me.


You can use Command + Option + R to boot internet recovery instead of the on-disk recovery. That'll download and install the latest version of the operating system associated with your Mac.

On the 2fa front: if you only have one Apple device, you really can't leverage the Apple 2fa system, I think. It always requires a past Apple device of some kind to get the code.


ah .. I didn't know about internet recovery. Hopefully, I'll remember your tip if this happens to me again (I borrowed an iPad, in the end).

Unfortunately, it doesn't look like you can turn off 2FA once you've had it on for some time, so it feels like I'm being pushed into buying a second Apple device?


I'm trying to figure out what exactly Google 2-Step Verification is, and whether to trust it or not. It doesn't appear to be a text, and provides a push notification to your device - it's super convenient, I just don't know if it's particularly strong.


Google 2-Step Verification is vulnerable to phishing just like TOTP is. You can go to a phishing site without realizing it's not Gmail, you enter your username and password, the phishing site gives those to Gmail on your behalf, the phishing site causes 2-Step Verification to happen, and Google sends a push notification to your phone for you to let the attacker into your account. (I believe Apple's default 2FA mentioned by GP works the same way.)

Security keys (and the newer project from Google to let your phone act as one over bluetooth) don't have this vulnerability because they connect right to your computer and talk to your browser (and not the attacker's) to verify the domain you're accessing.


But it's still better than SMS, right?

How does the security key/browser pair communicate without involving the site? Does it involve more of Google's interference then? While I know you're not saying "yes" to the site, isn't the key doing roughly the same thing?


>But it's still better than SMS, right?

Right, with Google 2-step verification you don't have to worry about number porting attacks. It's just vulnerable in the sense that a phishing site you've entered your username and password into can still trigger the prompt.

>How does the security key/browser pair communicate without involving the site? Does it involve more of Google's interference then? While I know you're not saying "yes" to the site, isn't the key doing roughly the same thing?

When you use a hardware security key with a browser, your browser tells the security key the page's domain, a user id, and a random challenge token if I remember right. The security key signs a message containing all of these things and gives that back to the browser. If you're on a phishing site, the page will have a different domain than the true site, the message signed by the security key will have the phishing site's domain instead of the true site's domain, and the signed response generated by the security key won't be valid for the attacker to use on the true site.
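A toy sketch of that origin binding (hypothetical names; real security keys use per-credential asymmetric signatures, while this uses an HMAC stand-in for brevity):

```python
import hashlib
import hmac
import os

DEVICE_SECRET = os.urandom(32)  # stand-in for the key's per-site credential

def key_sign(origin, challenge):
    # The *browser* supplies `origin`; the page itself cannot lie about it.
    msg = hashlib.sha256(origin.encode()).digest() + challenge
    return hmac.new(DEVICE_SECRET, msg, hashlib.sha256).digest()

def server_verify(expected_origin, challenge, sig):
    msg = hashlib.sha256(expected_origin.encode()).digest() + challenge
    expected = hmac.new(DEVICE_SECRET, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

challenge = os.urandom(16)
# A signature produced while on the real site verifies...
print(server_verify("https://real.example", challenge,
                    key_sign("https://real.example", challenge)))   # True
# ...but one produced while on a phishing origin is useless to relay.
print(server_verify("https://real.example", challenge,
                    key_sign("https://evil.example", challenge)))   # False
```

The phisher never gets a reusable code: the signed blob names the wrong origin, so relaying it to the real site fails.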


Ah ok - that makes sense now. Thanks for the explanation!


As we get more and more devices, pushing to all of them feels unsafe. :(

I'm happy with the security token world. But wish it was more supported for personal things. Yubikey letting me out gpg keys is nice.


> with sufficient warnings for users who prefer their accounts to fail-deadly

This is unfair. A service, in general, can only know if it's fail-open or fail-closed. Unless you're running a nuclear weapons service (where this term came from) or the like, you don't know which way is "fail-deadly". I love promoting security as much as anyone but let's not throw around scaremongering terms.

I'd like my GitHub repos, for example, to be fail-open. If I can't get in, nobody benefits from my junk there being lost forever. Certainly, nobody will die. GitHub doesn't really support that, but at least they don't require 2FA.


I don't understand the prohibition against having the same TOTP seed across multiple devices. Seems like a very useful feature (tap phones to sync TOTP seeds). This feels like a case where religious belief about 2FA has won over real-world usability, especially when people migrate phones.


I think the answer is that if you share seeds and lose a device, then you really need to invalidate and reprovision all of your remaining devices. If you use separate seeds and lose a device, you just invalidate the one device and move on.

From a user's perspective, it seems like a good feature; I'm fine with reprovisioning my 2-3 devices, no big deal, I'm in control of that. From an admin or business perspective, it's less acceptable because if I see something weird and need to invalidate a device, I'm actually preventing my user from authenticating altogether - and that could require more work to recover from depending on where my user is and what my provisioning process looks like.


Why does losing the device require invalidating seeds? If I lose my phone or it's stolen, those seeds are still behind a lock screen, and even if they get out, an attacker would need my password.


Maybe 'requires' is too strong a word. It's hardly mandatory, but I'd argue you should invalidate a lost or stolen seed for the same reason you should reset a lost or stolen password.

Of course it is not possible to access something protected by MFA if you only have one factor. But I don't think it follows that it's OK to make a factor easy for an attacker to obtain just because you have two; the whole point of MFA is that single factors are too easy to guess or steal. Solutions that encourage seed export and sharing make it easier to steal the seed, and leaving a seed active after the device it's on has been lost or stolen is like saying you don't care.


When I see 2FA/password discussions, I check to see if anyone is talking about Gibson's SQRL. They never are. I don't know if it's the best, but I have a feeling we'll see each other in the "Getting 2FA Right in 2029" comment thread too.

https://www.grc.com/sqrl/SQRL_Explained.pdf


SQRL isn't able to square the circle on Phishing. The exact formula changes (the document you linked was last updated this month though SQRL started many years ago) but this remains true.

The idea in SQRL is that surely if we show the user what they're about to do (e.g. sign into mybank.example), they'll realise they're being phished (e.g. by "notmybank.example") and abort. But the whole _point_ of phishing is that humans don't work that way.

In one of Microsoft's early experiments with this stuff they asked users to put their own _real_ bank credentials into obviously bogus sites with a variety of warning conditions to try to understand what's effective. _Nothing_ was effective, ordinary users click past the warnings of imminent doom in order to complete the task. I've written here before about "Brick Wall UX" the practice of designing security systems where there is no way to press on into danger. The reason for Brick Wall UX is that it _works_ and that's what WebAuthn/ U2F deliver for phishing.

In other respects SQRL isn't much different from other 2FA options like TOTP, although it has more moving parts. But because it can't fix phishing, it's not worth another look: if you decide phishing isn't scary enough to warrant extra work, TOTP already exists. If TOTP isn't good enough because you're (not unreasonably) scared of phishing, buy some Security Keys and do WebAuthn.


I've done research on this, and I'm working on a SQRL alternative. The solution we came up with is an optional browser addon. Websites tag their QR codes as 2FA codes via HTML, the addon captures it and validates against the expected domain (HTTPS assumed), and if it matches then the QR code is promoted as part of the browser's UI or even sent directly to the phone.

The goal is to automate away the domain verification. Since the addon doesn't have to know any secrets, it should be installed in as many devices as possible and eventually be included in the browser itself.
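The core of that check is small. A minimal sketch of it (hypothetical payload format; assumes the addon compares the domain claimed in the tagged QR code against the page's actual HTTPS origin):

```python
from urllib.parse import urlparse

def addon_accepts(page_url: str, qr_claimed_domain: str) -> bool:
    """Surface the 2FA QR code only if the page's origin matches the
    domain the QR payload claims to be for, over HTTPS."""
    page = urlparse(page_url)
    return page.scheme == "https" and page.hostname == qr_claimed_domain

assert addon_accepts("https://mybank.example/login", "mybank.example")
assert not addon_accepts("https://notmybank.example/login", "mybank.example")  # relay
assert not addon_accepts("http://mybank.example/login", "mybank.example")      # no TLS
```

Since the comparison happens in the addon rather than in the user's head, a phishing relay fails the same way it does under WebAuthn.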

Also note TOTP is not an equivalent alternative because it has terrible UX in migration/restoration/revocation scenarios.


WebAuthn is only secure because it entrusts the browser to pass verified domains to your USB key. Why can't SQRL just do that, with no other protocol modifications? Then we don't trust the user with anything, protocol-wise. Because sure, if the site can pass a QR code or URL directly every time, that's an issue, since you're still trusting the user to manually verify the domain; but if the interaction is mediated by a trusted party (i.e. the browser), then I don't see the problem.


Technically the FIDO device ("USB key") doesn't get told the domain name. The browser throws it into a compression function with the random values and a bunch of other stuff to compute a value the server also will be able to calculate for itself. Your key has no idea what facebook.com is, doesn't care.

The FIDO device is impressively dumb on purpose, makes it hard to attack. Given an input and a hardware user interaction it responds, cheap ones aren't storing anything or doing any conditional work, and the interaction means you can't do any sort of brute force attack - if you somehow RCE the browser and prompt the user "please press the button a million times" they're going to report that as a bug and close the browser.
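That statelessness is possible because cheap keys re-derive (or unwrap) each site's key from a single master secret rather than storing anything per site. A hedged, dependency-free sketch of the idea, with an HMAC-based derivation standing in for the real key-wrapping construction:

```python
import hashlib
import hmac

# One master secret baked into the device at manufacture; nothing else is stored.
MASTER_SECRET = b"device-master-secret"

def per_site_key(app_id_hash: bytes) -> bytes:
    """Deterministically re-derive the same per-site key on every request,
    so distinct sites get distinct, unlinkable keys with zero storage."""
    return hmac.new(MASTER_SECRET, app_id_hash, hashlib.sha256).digest()

site_a = per_site_key(hashlib.sha256(b"https://a.example").digest())
site_b = per_site_key(hashlib.sha256(b"https://b.example").digest())

assert site_a != site_b  # sites can't correlate you via a shared key
assert site_a == per_site_key(hashlib.sha256(b"https://a.example").digest())  # stable
```

This is why the device can be "impressively dumb": given only the hashed inputs from the browser, it recomputes the right key and signs, with no lookup table to fill or leak.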


Every time my leg accidentally hits my YubiKey, everyone on Slack thinks I'm having a stroke. I already set it to require a long press so that it takes a couple seconds, but that doesn't help if my laptop is just at the wrong angle. Anyone else come up with a good solution to this issue?


1) Use U2F, rather than OTP. 2) Do you have a Nano? I've personally only ever had this issue with YubiKey Nanos, not the normal ones - and when you're not intending to authenticate, you can remove the keychain that has your YubiKey from the computer!


I turn my OTP off, just use U2F:

> ykman mode FIDO


Oh nice, just did that, thanks! I wasn't even using the OTP to begin with, but I didn't realize I could just disable it.


I just don't have it plugged in unless I need it. Admittedly I only use it for mail and password manager.

What do you do that requires you to authenticate that frequently? Could the authentication state not be cached for a while?


Keeping it plugged in also increases the chances you'll forget it there when you go to the bathroom. The NFC version of the keys is the best solution to this, imo. You can keep the key attached to your body and not worry about ripping it out when you walk away



Already did that. It doesn't do anything if the side of the laptop is pushing into my leg or the side of the couch or something, which is where it usually happens.


You accidentally send one-time passwords to other people? Good job that's not the only factor.


> Major sites have begun to discourage the use of SMS and voice codes, which are thoroughly broken as second factors

Thank goodness. Cell phones and carriers are very soft targets.


Agreed - but for non-technical consumer 2FA, there's really no other option that users are actually going to use. And SMS 2FA is still orders of magnitude better than just passwords when your threat model is credential stuffing and weak passwords - not attacks on specific accounts.


Founder of Authy here. I've been thinking a lot about this lately and came to the conclusion that the only sensible way to do 2FA is U2F hardware keys. Here's why:

First, SMS 2FA. People think SIM porting is uncommon; it's not (I saw thousands of cases). Your cellphone number is pretty much public information, and it's not a technically difficult attack - you just need to convince a carrier to do it. Once your SIM is migrated into the attacker's possession, they will hack into all your accounts before you've even realized what happened.

Second, TOTP. I founded Authy with the idea that TOTP was strong enough, and it is, technically, but real-world deployments have lots of issues. The biggest one is that people constantly change/lose their phones, so you end up with an update problem. At Authy we solved it by encrypting the seeds and storing them in the cloud. But today most users just copy the QR code, or store their TOTP key along with their passwords in their password manager. Storing your TOTP in your password manager completely defeats the point of TOTP; it just provides you with a false sense of security. Lastly, because it generates a lot of support issues when people lose their phones, services have added ways to bypass 2FA in their account recovery flows. You'll see backup codes or simply SMS as a recovery mechanism. This means your TOTP is only as safe as SMS if your recovery flow allows it. TOTP today is so misused it just provides a false sense of security.

Third, U2F hardware tokens. It's finally possible to do U2F on the iPhone via Bluetooth, and Feitian now has a key that supports it (Google sells one for Project Titan). You can buy 2 keys for $50. It's impossible to misuse U2F tokens - you can't unsafely back them up, you can't "screenshot" them, etc.; hardware enforces their security. They are 100% un-phishable: it's impossible to trick a user into signing a login on a fake site - the key will simply not sign it, and there is no way for the user to make an "exception" (like you can if the SSL cert is invalid). Also, given the price and form factor, it's easy to buy 2 or 3 and keep a few stored as backups. In my case I have 4 keys: 2 that I use on a daily basis, and 2 stored as safe backups. If I were to lose 2, there is no way for anyone to know they belong to me or tie them back to my account, and I would just use the backup keys to log on, remove the lost keys, and buy 2 more. No unsafe recovery keys, no unsafe backups. All 4 of my keys have the exact same level of security.

Lastly, Android now allows you to use your phone as a U2F key (new Androids have a secure hardware enclave specifically for this), so essentially all a user would need to buy is 1 hardware key as a backup.

If you are a service provider, I hope you consider offering the ability to use U2F keys as a secure login mechanism, enforce that a minimum of 2 keys be registered, and then disable any other recovery mechanisms. THIS IS THE RIGHT WAY TO DO 2FA in 2019.


I came to a similar conclusion: U2F hardware is the way to go. For some people, smartphones are becoming the only device they use. However, I am not fully convinced of using the device itself as a U2F key. Then it's no longer a two-factor solution. Thus, I envision the use of U2F hardware with mobile devices as the future of authentication.

Unfortunately, it is still difficult to find the NFC "sweetspot" at the back of your phone. At Cotech, we work on a Hardware Security SDK that solves this and works independent of Google Play Services. It brings support for U2F Hardware over NFC and USB to Android phones: https://hwsecurity.dev/fido/


Thanks for your input, and for Authy. I was a long-time user until I recently switched to an open-source alternative.

> Third, U2F Hardware tokens. Its finally possible to do U2F to the iphone via Bluetooth and Feitan now has a key that supports it (Google sells one for project Titan).

Would you still recommend a Bluetooth key given the recently found vulnerability[0] in the Feitian/Titan? The initial criticism from Yubico[1] seems to suggest it's an inherent limitation of the BLE protocol.

[0]: https://security.googleblog.com/2019/05/titan-keys-update.ht...

[1]: https://www.yubico.com/2018/07/the-key-to-trust/


Laptops also often have a secure enclave now, so they could be a second device. Chrome on TouchBar MacBook Pros supports U2F this way.


> They are 100% un-phishable, its impossible to trick a user into signing a login on a fake site

Maybe 99.999% un-phishable. There have been kinks in the certificate chain in the past that have led to improperly issued certs.


This is the most insightful post in the entire thread. Thank you for that.

Most of the discussion here is about TOTP, which at this point is like arguing about the beautiful plumage of the dead parrot. TOTP for professional 2FA is a walking corpse, pushing up the daisies, wouldn't squawk if you put 10K volts through it [1]. If you're a company seeking to secure your infrastructure all your employees and contractors should be using U2F hardware keys to access your network. Period, end of discussion. Same for admin access to any external SaaS dependencies - and you should be loudly complaining if your SaaS does not yet support hardware keys.

And if you're a startup or even a solo developer, start looking at supporting WebAuthn so you're not caught with your pants down later, especially if you want to sell to other businesses.

Business-to-consumer 2FA is a more complicated issue. The future is clearly hardware keys, tied to devices like phones, but the support is not yet all there. So you're going to have to support TOTP for a while yet, since it's better than bare passwords. But you should be making plans to move to hardware U2F ASAP, and the earlier you do it the easier the transition will be later, when you'll have to mandate it for all your users for liability and CYA reasons.

The looming shadow over all this is account recovery, which is not a solved problem in the business-to-consumer space (IT/HR can sort you out to get back on your corp network if you lose your keys). There are too many implementations and all of them have flaws. There's little consensus on how to do it, and all of the recovery methods can be misused or abused. If you lose your house keys you go to a locksmith, who's usually bonded (in the US) and generally not a crook. Who do you go to if you lose all your hardware keys?

And of course there's a cost to users to having multiple hardware keys, which at $25 a pop will not fly with consumers. These things need to be basically free (your phone) or comparable to the cost of your house keys (for backups) for mass consumer uptake.

Bottom line, U2F hardware keys are the future of authentication. Learn to love and support them.

[1] https://www.youtube.com/watch?v=vZw35VUBdzo


> W3C and FIDO finalized the Level 1 WebAuthn specification back in March. Chrome and Firefox already have mature support for it, and Safari is expected to follow.

> Upcoming Android releases will allow users to use their phone as a security key, and iOS is expected to do the same.

This does not make me happy. Why does there need to be a web standard for this? I do not want Firefox to store my keys for me. Or any browser. I do not want transparent authentication, I want it to be explicit, and I want it to be offline.

I have the same issue with hands free keyless entry/ignition for cars. The feature is solely for convenience, and exposes far too much.

The way things work now is the best. When setting up 2FA, I write down my key, and import it into Google Authenticator, or oathtool for the command line. There is no integration, no automation. It works just fine.

In the future I'm imagining my 2FA secrets being stolen from my browser, or being used to track me. Google, "for my convenience", automatically logs me in so it can track me? Or perhaps my bank checks my battery level, WiFi hotspots, and phone model when it pulls the 2FA tokens, to verify my location. Also, I can only log on with their app on my phone, because the tokens are hidden, further making my desktop useless. Maybe a website figures out how to use JavaScript to generate another login's tokens: it takes an hour's worth of tokens, feeds them into hashcat on AWS, and breaks my key.

No thank you.

The rest of it though is great. SMS 2FA can die.


Hey! Author here.

> Why does there need to be a web standard for this? I do not want Firefox to store my keys for me. Or any browser. I do not want transparent authentication, I want it to be explicit, and I want it to be offline.

WebAuthn doesn't specify key storage in the browser. It specifies a common JavaScript API for registering and generating assertions via an authenticator, which in turn stores keys internally.

Wanting offline 2FA is perfectly reasonable, and I don't begrudge you that! TOTP is perfectly fine for that purpose, so long as you understand the tradeoffs you make in terms of symmetricity/phishability/replays/etc.
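For what it's worth, the symmetric tradeoff is easy to see in code: both sides run the same HMAC over the shared seed and the current time step, so anyone who obtains the seed (server breach, phishing, exported backup) can mint valid codes. A minimal RFC 6238 sketch using only the standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 TOTP: RFC 4226 HOTP applied to the current time window."""
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226, section 5.3)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 appendix B test vector: SHA-1 seed "1234567890...", T=59s
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"
```

Nothing in the computation involves the site's identity, which is exactly why TOTP stays fully offline but also why it can't resist phishing the way WebAuthn does.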


Thanks for the reply. Despite my gripe about this one aspect, your article was very complete and well written. I've bookmarked it to revisit when I need to implement 2FA in projects I'm working on.

That being said, I do not like a JavaScript API that provides access to this. JavaScript, JS developers, and the web in general have a horrific track record for security and privacy. To me, this is like the API that provides access to the battery charge state, or my accelerometer for device orientation.

Why? Entirely unnecessary! But it's a cool feature, so let's include it. Suddenly, my keyboard typing is being exfilled through the browser, and I'm being tracked by my battery charge state.

Do not want!

EDIT: I've maintained this position for "WebBluetooth" too -- a similarly terrible idea. The less my browser does, the better!


Browser support is necessary for a website to communicate with a hardware key (such as Yubikey). Yubikeys aren't automatic; you press a button to authenticate.

Maybe you don't like those, but they work well for many people. Even non-technical users are less likely to lose a key on their keychain than to lose or damage their phone. (I've lost Google Authenticator keys to phone hardware failure.) They can also be used by people who don't have a cell phone at all, or use it rarely.

They're also resistant to phishing attacks since they authenticate the site, and users are not good at checking domain names.


Most of these concerns are luckily not a problem in practice, I'll try to go through them one by one.

> In the future I'm imagining my 2FA secrets being stolen from my browser, or being used to track me.

The API does not provide access to secrets. Keys remain on the WebAuthn device, and the device only signs data and sends that back. The key is likely also stored in a way that makes extraction hard - for hardware tokens, past attacks of this nature mostly required physical access, and modern iPhones and some Android devices have high-quality key stores offering similar protection. AFAIK keys used by these devices differ for each origin/domain (IIRC through some crypto magic on hardware devices, as they don't have space for many keys), preventing cross-origin tracking.

> Google "for my convenience" automatically logs me in so it can track me?

Most (all?) implementations I'm aware of require approval on the token (physical tap, approval of a prompt). Browsers also tend to show a prompt/notification when sites use this feature.

> Or perhaps, my bank checking my battery level, WiFi hot spots, and the model of phone when it pulls the 2FA tokens to verify my location.

The API does not allow this level of access.

> Also, I can only log on with their app on my phone, because the tokens are hidden, further making my desktop useless.

There is nothing stopping you from using hardware tokens (which use the same standard) or even soft tokens running on your desktop. IIRC GitHub created a desktop implementation utilizing the Secure Enclave that modern Macs come with for this purpose.

> Maybe a website figures out how to use JavaScript to generate another logins tokens. It takes an hour of tokens, and feeds it into hashcat on AWS to break my key.

This does not make sense with the implementation in mind - the key is stored on a separate device and the browser only ever gets something that was signed using said key.


Could we not all agree that the only reason we don't like passwords is that people try to remember them, they are reusable, they are simple, and once they're stolen we have a hard time telling if the user is genuine?

If that's the case, can't we default to sending an out-of-band request (e.g. a push to the phone, or falling back to TOTP) for authorization, and if that's not available, require a long, random, website-and-account-unique password that is kept in the browser's password manager?

The last case, "user lost everything", is more sticky. Can they still log into their e-mail? If so, they can initiate a password reset (and we'll assume that one day e-mail will be secure) and store a new random password in a browser password manager, and register a new external device for push-auth. The hinge point here is the e-mail account, which may need more robust protection.

This scheme should work with existing systems without new standards, be resistant to password reuse & cracking, default to a second factor, allow somewhat reasonable recovery, and not require a hardware token. (The idea that the hardware token is needed to defeat phishing is bogus imho; the browser password manager can auto-fill the password field for the correct site, and refuse to do so for phishing sites)


I've always thought that Bitcoin multisig was the best implementation of 2FA. 2 of 3 keys, one you keep hot, one pay someone to hold and sign/log on your behalf (and let you know if something seems off) and one you file away in case the person you're paying to co-sign with you disappears...


I don't know the security ramifications to what they've done, but as a user, I find Blizzard's current implementation to be the golden standard that everyone else's implementation leaves me feeling bummed by.

I login, I get the "hey, we just sent the code VH3Z (making it up) to your phone/watch/whatever" and then on both my phone and Apple Watch, I get a prompt saying the Authenticator has that code and do I approve or reject. I tap approve on either device and then the current window I'm in within whatever Blizzard app (whether web or Battle.NET) switches to the actual content.

So long as the phone or watch are present and charged, this is such an easy experience.


Same with Monzo banking app (UK). You buy something from Amazon that goes over a certain threshold: Browser: "Waiting for third party auth". Phone: "Hey did you approve this purchase? Yes/No". Very user friendly, intuitive anti-fraud measure.


That long list of ways users subvert 2FA guarantees shows this technology isn't yet user-friendly. (In a sense, passwords aren't either, as seen by all the folks who keep theirs on a post-it note).

I'd like to see web developers implementing this also put some effort into ensuring their sites are password manager compatible.

Even more kudos if you adopt a sensibly layered approach, allowing more innocuous functions with a traditional login, then prompt for 2FA the first time more sensitive activity is requested.


I wish someone made a way to securely sync two hardware tokens - for example I agree before setting up any accounts on my tokens that two or three of them agree to sync data. I only have to make sure I do it once in a while.

Right now if I lose my TOTP or U2F device without setting both up on all my tokens, I lose access. That's super bad.


Bitwarden, 1Password, Authy etc. have encrypted sync/backup of TOTP secrets.

With hardware tokens.. unfortunately the real solution is "enroll a backup one to all services".


It's not 2FA when you have your password and the TOTP secret in the same place.


FIDO2 - https://fidoalliance.org/fido2/

That seems the state of the art at the moment. I wonder why the article does not even mention that.



Any idea of when HackerNews will support 2 factor authentication?


I actually wrote code for this ages ago when I worked on HN. It’s probably still there, if only in git. But it really only makes sense for the internal, secret parts that are YC related. For the forum just use a unique password.


The blog writer doesn't seem to be even remotely familiar with Authy. Most of the issues he discusses have been resolved with Authy.


Please provide more details about how your service resolves these issues.



