Short session expiration does not help security (sjoerdlangkemper.nl)
696 points by ColinWright 9 months ago | 419 comments



The main place I see short session expirations is on banking and financial apps, which seems defensible to me for a couple of reasons:

1) They're used by a wide variety of people, including people who may not own a computer or mobile device, or who may not have a backup device to use when their personal device breaks. This group is probably shrinking—more and more people have smartphones and the remaining people who don't have smartphones are probably also the people without bank accounts—but arguably you want to cater to this group, since they may already be a bigger target for scams.

2) They're very appealing to opportunistic attackers.

3) They're used by people in stressful or unusual circumstances, e.g., when traveling, if they need to make a large emergency payment, or if they're afraid of being scammed.

4) Most sessions aren't very long anyway: checking a balance, seeing whether your paycheck came in, making a single transfer or payment, etc. There are definitely exceptions—15-minute timeouts are very annoying when doing taxes, for example—but it seems like the annoyance in these situations is potentially worth the security benefits.

That being said, I don't know if short session expiration is the best solution in 2023. As the article points out, major corporations like Google don't use short sessions, even though their services are used for a variety of sensitive things and they're huge targets. But as briHass pointed out in another comment, they also provide tools to see which sessions are open and use a variety of techniques to detect sessions being misused. I suspect that's the actual best solution if avoiding session stealing is that big of a concern.


My main bank uses username + password + (random subset of ‘memorable word but actually unencrypted password’ driven by select boxes), then 2FA on top, and it literally feels like they just slapped a bunch of things together to add extra barriers to auth in sequence.

This is the UK. Back in Latvia I would just slide my ID card into a USB reader and cryptographically sign the session with a passcode. Same as chip & pin.


I'm Norwegian, but have lived in the UK since 2000, so I've seen the evolution step by step, and UK banks have pretty much just kept adding new steps every time fraud got too bad instead of rethinking things properly. It's infuriating.

Norway has a near-universal electronic signature solution - not quite the same as your Latvian one, but it sounds similar-ish (and there's an EU/EEA requirement for mutual recognition at some point) - and it feels much better (e.g. the Norwegian ones allow access to government websites as well, and can be used to sign contracts etc. - I'm assuming the Latvian one has similar options).


Yeah, your ID in Latvia has an electronic certificate for government services, banks, etc.


I’m so envious of the Baltics’ use of technology at the institutional level.

It seems literally infeasible for the United States to have state-issued PKI. It’s a meta-partisan issue: no side trusts the other to manage crypto or computer systems.

So for example instead of a public many-to-many digital publishing platform we are stuck with the whims of grown men who want to fist fight each other in the Colosseum and all because every X is convinced every Y is so despicable and corrupt that such a public system would be much worse than what we have now.


The USA was distrustful of national ID cards decades before the current level of hyperpartisan rancor. We just don’t trust the government enough to let them track us everywhere.

Not that the current climate helps. The REAL ID Act of 2005 has been delayed so many times, now to 2025. Maybe we should just repeal it instead.


> We just don’t trust the government enough to let them track us everywhere.

That statement is both true and emblematic of the problem. Everyone is so cynical that a clean public-key, offline, certificate-based solution, with absolutely zero visibility for the government into who is doing what, would always be assumed to be part of some nefarious Illuminati/Democrat/Republican plot. No amount of technical testing would prove otherwise. They'll always be convinced it's a plot.

What we need from the government is so minimal really, just to sign certificates if we prove our identity. But it'll never happen in the US due to the distrust situation.


Because there has never been a proposal for a "clean public-key / offline / certificate-based solution". It's always a central database that also does x and y and will be available for w and z departments to do as they will with it.


It doesn’t matter though, because the number of Americans who understand public-key crypto even vaguely enough to critically read a technical analysis and judge it to be true could be counted on one hand. Nobody has proposed it because even if politicians saw such a proposal on their desks and grasped it themselves (unlikely), they'd know most Americans would freak out on principle.


Why are you talking about hypotheticals and abstract "distrust"? We're already being harmed by existing central ID systems - Social Security numbers and driver's license numbers - with little hope of reform in sight. If there were something like the GDPR that prohibited companies from nonconsensually demanding, storing, or using these identifiers for anything but their bona fide governmental functions, then it might make sense to talk about adding a new system. But until the government is capable of reining in abuse of the existing systems, any new identification system would inevitably be a gift to the surveillance companies.


> We just don’t trust the government enough to let them track us everywhere.

Eh, that’s the pseudo-libertarian objection (which is… dumb as bricks, practically speaking) but the most common objection is one of access.

Until IDs are both free AND easy to obtain in the US (if you think they are, congrats: you’re in the lucky group) then a national id scheme will always be a non-starter.

IDs face the same problem voting has in recent years (and historically): the systems of power use limiting access as a tool to control those they don’t like.


> The REAL ID Act of 2005 has been delayed so many times, now to 2025

Can one fly without a REAL-ID-compliant license or ID?


Yes, with a passport you can

... Probably not what you were trying to ask. I thought the deadline had passed for requiring REAL ID at TSA checkpoints, but I haven't looked into it in years.


The deadline has been pushed to May 7, 2025.


> no side trusts the other to manage crypto or computer systems

Nor should they. Too much power for governments to have. Every so often some news gets posted here about some government official who just does not give a shit about people's rights, you can actually feel the contempt when you read what they say.

Government solutions are non-solutions. We should solve these problems with ubiquitous technology or not solve them at all.


Except our only solutions to this problem today ARE already government-provided (state ID, SSN, or passport).


Which is dehumanizing. We are not cattle to be identified and marked.


But you need a way to identify people. If there's no good way to identify real people (a government-issued ID number), those that need one will invent bad ones (SSN, phone number, maiden name, and more). And companies like Google have their own identifier for you already, anyway. And you can't travel anywhere without a passport, which is yet another identifier.

I'm impressed how much Americans dislike the idea of a mandatory government issued ID. I don't mean it in a bad way - this sounds like a very principled and idealistic stance. I just really like my government issued ID, and consider it a reasonable tradeoff.


I'm not american. Not only do I want the government to know nothing at all about me, I also want it to be illegal for others to develop those alternative identification methods. Those who "need" such things should have to figure out a way to avoid needing it.


Interesting. How do you identify criminals? Or for that matter, how do you identify anyone? "Just don't", or... some other solution?


I assume their version involves another individual unaffiliated with government or private industry taking their hand, gazing meaningfully into their eyes, and tearfully proclaiming “I see you” before letting them through the border or into the bank vault or whatever.


Ah so you are the man with no name, I assume?


My parents giving me a name is one thing. The state giving me a number and arresting me if I fail to produce that number to police on demand is another.


Which one of those is problematic to you? The state giving you a number or arresting you if you refuse to identify yourself? Because you've given yourself a nice straw man to fight by conflating them.


Oh he knows that, but that’s all their type has: faux-bravado against imagined threats.

Not least: they’re ignoring that simply having an identity is a traceable & trackable thing… because that would derail their paranoid fantasies where they’re a hero, fighting against a dystopian world.


It’s funny you think there’s a difference


It's funny you think there isn't a difference between the humanity of your parents and some faceless governmental bureaucracy.


It’s funny you think there is a separation between the two in any practical way.


If the government wants to do something they should publish open source code that implements a sane authentication system, and then have no part of operating it whatsoever. If it's good and free people will use it voluntarily. If it isn't then you certainly don't want the same people implementing anything mandatory.


The best part is that the US Government already did this -- it runs the second largest PKI. Second only to the Internet. It has issued more than 20 million certificates to individuals.


That's the part you don't want them to do though. Centralized PKI is bad for privacy and creates a single point of compromise. You don't want this, but for the whole population:

https://en.wikipedia.org/wiki/Office_of_Personnel_Management...

What you want is some well-reviewed code that a bank or utility company can "apt install" onto their server and get secure decentralized web authentication working in five minutes instead of leaving them to create some custom in-house contraption designed by a rotating committee of middle managers.

And they should really endeavor to break that PKI thing into smaller, independent, less centralized pieces. It's way too big as it is. There appears to be something called "Symantec" between "Federal Bridge" and "US Senate" and then another "Symantec" between "Federal Bridge" and "Naval Reactors" -- that doesn't seem great.



tbh I can't think of many cases where I'd want state-issued PKI. I already use public key crypto secured by biometrics (faceid) to make payments, login to my bank, etc. None of that requires interacting with the state. For most normal computer use I quite like how random social media sites can't demand my identity because it'd be too expensive for them to verify it using public records.


> I’m so envious of the Baltic’s use of technology at the institutional level.

You shouldn't be. I don't know if anything has changed in the last few years, but having to insert your ID into a card reader is very cumbersome from a user experience perspective. Especially since the world is moving away from having to plug stuff into computers.


> Back in Latvia I would just slide my ID card into a USB reader and cryptographically sign the session with a passcode.

I think this is a good authentication model, but it costs money. There is the upfront cost of the physical card, and then the higher cost of lost account recovery. I think that's the turn-off to most banks; they will have to staff a call center that can verify your ID, issue a new card, and then deal with your immediate concerns because you can't access the website. Passwords + security questions are often free, "oh you don't know your password? we'll just email you another one if you remember your first elementary school's name".

At the end of the day, they are allowed to cut costs on critical infrastructure (everyone's money!!) because we have a strong victim blaming culture in the US. If your password is guessed, it's because you're bad at picking passwords, not because passwords are an intrinsically flawed technology. If your money is lost, it's because "it's really cool to have transactions that can't be reversed, you should have done your due diligence". It surprises me that everyone is OK with this.

(Incidentally, the only place where I've seen the option to use cryptographic authentication is on Vanguard. They added it right about the time we were testing security keys / U2F at Google, and they were the administrator of our 401k plans. I think Google strong-armed them into implementing it! Would love it if they did this to some banks ;)


You’re taking a well-known system, criticising it, and burying the fact you’re from the US well inside of that.

We’ve already solved this problem outside of America you know.


In Norway, Latvia, Belgium and probably others, the card is issued by the government. So there is no cost to the bank to re-issue a lost card.


Ah, that's the key. We'd never get a national ID in the US, instead ironically forcing the costly KYC onto each individual bank. (And Twitter now apparently.)


There's already a Federal ID required for banking in the US: your social security number.


Except that when it was first implemented, the government explicitly stated that the Social Security Number was not a valid ID.

https://www.npr.org/2018/03/22/596180023/how-social-security...

Due to this, it is fundamentally missing multiple levels of security. You can actually guess someone's SSN to within 2 digits if you know their date of birth and their location/hospital of birth.


> random subset of ‘memorable word but actually unencrypted password’

This annoys me so much with my bank. Their app lets you enroll your account in Face ID authentication, then still asks one of the recovery questions every time.

The UX is awful. I'm so convinced that complaining anywhere won't actually help that I don't bother.

I'd love to know what the hypothetical attack scenario is that drove that decision, but I suspect there isn't one, and the app saves the username/password in encrypted device storage.


The more crap they shove in, the more some manager can boast about "improving security". And no one in the company wants to be the one saying no to "more security" so no one pushes back.


My 'UK' bank recently dropped using the password completely, likely because someone pointed out to them that since you could simply reset the password via the SMS 2FA, it was essentially pointless. So now the SMS and a 5 digit PIN are all it takes. Eventually they'll figure out that the PIN can be reset via 'just an SMS' too.

'security'


I was about to tell you to switch banks, then I realised you just described my bank.

Yeah, I don't understand why none of the mobile-friendly banks are willing to build even a basic web interface for use on the big screen.


Spain's banks (I've used two so far) simply use your ID number - which is used in a lot of places and not considered secret - and enforce a 4-digit password.

It’s an absolute joke.


I wondered about this once, but it kind of makes sense from a usability point of view.

Unlike most web services, you usually get very few login attempts before being locked out, so even with four digits the odds of a successful brute-force attack are very low.
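The back-of-the-envelope math, assuming a uniformly random PIN and a hypothetical lockout threshold:

```python
def brute_force_success_probability(digits: int, attempts: int) -> float:
    """Chance an attacker guesses a uniformly random numeric PIN
    before the account locks after `attempts` tries."""
    keyspace = 10 ** digits
    return min(attempts, keyspace) / keyspace

# e.g. 3 tries against a 4-digit PIN: 3 in 10,000
p = brute_force_success_probability(4, 3)  # 0.0003
```

So even a 4-digit code holds up fine against online guessing with lockout - it's offline attacks and shoulder-surfing where short PINs fall apart.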


I suppose so. I just find it funny that my bank has weaker password requirements than most (if not all) online services I use.


Bank of America requires you to tell them a 2FA code sent over SMS, when the SMS literally says:

   <#>BofA: DO NOT share this code. We will NEVER call you or text you for it.
No, it wasn't a scam - I've seen that process on an agent's display multiple times while physically visiting a branch.


My bank does that for in-person visits, but you key in the code on a POS-style keypad at their desk.


Most banks in Spain require physical presence in the branch to set up 2FA.


You mean to set up a second factor, they require you to go into a branch?


In Latvia what happens if you lose your ID?


Training people to always log in makes them susceptible to phishing attacks with fake logins. It becomes second nature to put in your email and password. What's even worse is that banks won't send messages over email; instead they make you sign into their "secure" message center. Somehow email is insecure, but sending the same info in a physical mailing is safer. Any time you select electronic statements instead of physical mailings, they send you an email and force you back to their site to log in.


Not only that, they put the link to the site in the email.


To be honest, I feel like biometric app unlock has largely made the short-session banking experience obsolete. I don't want to re-enter my credentials every 15 minutes; I just want my bank app to verify my biometrics on sensitive operations. The only real reason for short-lived bearer "tokens" these days is so you can deploy them in scenarios without revocation lists.


This is how the banking app my team built works -- on Android/iOS devices a hardware-backed keypair is generated and when a login is needed, the keychain is unlocked using local biometrics to perform a signing operation which authenticates the user.

There's a bit more to it than that because we support remote attestation, and you only get a read-only token until you've performed remote attestation (which generally happens quickly).

edit: The authentication results in a short-lived token (5m), a refresh token (20m), but can be re-authenticated with the keypair challenge at any time.
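A rough sketch of the challenge-response part (hypothetical names throughout; HMAC with a shared secret stands in for the hardware-backed asymmetric keypair, purely to keep the sketch self-contained - a real implementation signs with a private key that never leaves the secure element):

```python
import hashlib
import hmac
import secrets

# Enrolled device key; on a real phone this lives in hardware and is
# only usable after a local biometric unlock.
device_key = secrets.token_bytes(32)
server_copy = device_key  # with a real keypair, the server holds only the public half


def sign_challenge(challenge: bytes) -> bytes:
    # On-device: biometrics gate access to the key, then sign the challenge.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()


def verify_login(challenge: bytes, signature: bytes) -> bool:
    # Server side: a valid signature over a fresh challenge authenticates
    # the user without any password crossing the wire.
    expected = hmac.new(server_copy, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)


challenge = secrets.token_bytes(16)  # fresh per login attempt
assert verify_login(challenge, sign_challenge(challenge))
```

The fresh per-attempt challenge is what makes replayed signatures useless, which is the property a stolen password never has.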


> major corporations like Google don't use short sessions

Ask a Google employee. When I worked there sessions were limited to 20 hours. Beyond that full re-authentication with password + security key would be needed.


I’m pretty sure they mean for consumer accounts, not their own organizational logins.


Why'd you leave?


Dissatisfaction with the speed at which Google does things due to organizational reasons. Lots of smart people working very hard, only to have little to show.


There is a class of people who feel a smartphone is too much of a distraction to carry around all the time. They will carry around a flip phone that can do basically nothing beyond calls, sms, and camera. These are otherwise perfectly normal members of society with bank accounts.

I don't know how common these people are. I happen to know a handful of them but I probably don't travel in typical circles.


I live in a very Hasidic Jewish neighbourhood in North America; they (almost) uniquely use flip phones instead of smart phones.

I've even seen women with cordless landline phones mounted to their shpitzel so that they can use it "hands-free", but without breaking their rules around kosher technology.

Some context:

https://www.theguardian.com/world/2022/jul/19/kosher-phone-d...


In case it wasn't clear, I didn't mean to disparage people without phones or computers (or for that matter, people without bank accounts), just to note that they are a small group of people and shrinking.


The people I brought up used to have smartphones and made a conscious decision to go back down the tech tree. So an inflow to the no smartphone group. Certainly not outweighed by the outflow so the group is still shrinking as you state.


A smartphone sounds like the worst device you can use for banking.

If it's remotely compromised, the attacker gets everything they need: the SMS second factor, the user's password, access to the user's network, and the user's behavior profile to know when to execute the attack so that they get as much time for it as possible.


On the other hand I would assume that modern smartphones from manufacturers that care about security to some degree are much harder to compromise compared to random Windows laptops.


Pixel phones had an exploit discovered last year where you could unlock any phone by entering a PUK code: you just replaced the SIM card with your own SIM whose PUK you knew. Seems very easy to me.

https://9to5google.com/2022/11/10/pixel-lockscreen-unlock-bu...


Health services also log you out after ~15min. Kaiser, One Medical, Epic Mychart, etc. Very annoying


Some of my highest-paranoia sites and apps are things like my dental insurance. Yeah guys, hackers are out to get me and they can't wait to impersonate me and reschedule my next cleaning for an inconvenient time!


In my experience, a large percentage of compliance officers believe that this is a non-negotiable requirement for HIPAA compliant web apps. My reading of the Security Rule is much more pragmatic, so I would argue that there are other ways to meet this standard in many situations.


I think reasons 1 and 3 I listed for financial apps and websites apply for health services, but 2 and 4 don't, so I see why they do it, even if it's less clear-cut.


I see pretty short timers in enterprise SaaS as well, the reason being that a license for 400 users is much more expensive than a license for 30 users.


Pretty much all SaaS products charge per unique user, not active sessions, so I don’t really understand what point you’re trying to convey.


Nearly all of our agreements are in active sessions. I guess your business just needs better negotiation?


> but it seems like the annoyance in these situations is potentially worth the security benefits.

I would be happy if there were a way to just request a long session for that. No need to force everything into a short session just because it's a sensible default.


I'll add one more: Fintech app users constantly ask for short sessions.

As the developer, I really didn't want to add it, but who cares what I want.


> I suspect that's the actual best solution if avoiding session stealing is that big of a concern

that big of a concern for whom? Google doesn't care because Google has constructed a world where when something goes wrong, sorry, it's on you.

Your bank does care because if something goes wrong, it's frequently on them. The bank times you out to protect them more than you.


In a lot of cases, short session expiry is used as a hack around subpar authentication standards such as SAML/OIDC where there is no reliable backchannel for the identity provider to tell the service to expire sessions (following a credential change, user being deleted, etc).

The short session expiry is used as a workaround to force the third-party service to regularly check in with the identity provider, thus placing an upper bound on how long IdP-initiated changes take to reflect on all third-party services.
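The bound this creates can be sketched in a few lines (hypothetical names, in-memory stores - the point is only that the service never learns about a revocation until a refresh forces it back to the IdP):

```python
ACCESS_TOKEN_TTL = 300  # seconds; this is the "short session"

revoked_users = set()   # state held only by the identity provider
tokens = {}             # token -> (user, expiry), held by the service


def issue_token(user: str, now: float) -> str:
    # The service talks to the IdP only at issue/refresh time,
    # so this is the only moment revocation can be noticed.
    if user in revoked_users:
        raise PermissionError("user revoked")
    token = f"tok-{user}-{now}"
    tokens[token] = (user, now + ACCESS_TOKEN_TTL)
    return token


def check_token(token: str, now: float) -> bool:
    # No backchannel: the service trusts the token until it expires.
    _user, expiry = tokens[token]
    return now < expiry
```

A revocation at the IdP therefore takes effect no later than ACCESS_TOKEN_TTL seconds after it happens - the upper bound the comment describes.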


This is correct, but it's uncharitable to call it a "hack" in many contexts. In OAuth, for example, the access token / refresh token concept is literally spelled out in the spec. It's not a workaround; it's how you implement eventual consistency in a loosely coupled system where the IdP can't push updates to clients, because it doesn't know all of them by design.


It's fair to argue that point. If you need something like that, then this aspect of OIDC etc. is not a hack. But very few people look at the question of how to integrate an external identity provider and then decide that loosely coupled and eventually consistent is the right choice. Instead, developers mostly just choose whatever seems sufficiently popular and build their system around it, only looking elsewhere if the popular choice is visibly much worse at its job than the alternatives. ("Visibly" given knowledge of the topic at hand, that is. Most people I've talked to just see OIDC flows as a given fact about how authentication has to work.)

From a practical perspective, there are lots of applications out there which are perfectly reachable from the outside and which use an OAuth2/OIDC library as a standard component, where they could handle an update pushed from the identity provider with a simple library call. And think about how many edge cases in front-end applications could be eliminated if you didn't have to be ready to get a new token at any moment because the current one has just expired. [1]

In my opinion, pushing updates to clients should be the default of identity protocols which you only opt-out of, if you have special needs. And then hopefully documentation tells you very clearly to have very short token expirations.

[1] And yes, you technically still have to be prepared for that at any time, but you can push the trade-off of making that case less user friendly much further, if it occurs only seldom.


Hang on, we're talking about user sessions and you're talking about access tokens.

Short expiration of sessions is bad because of the terrible UX. Access tokens can be refreshed without user interaction, so it's not the same issue there.


"Session" here is the word used for the duration for which an access token is valid. You may be talking about UX, but the submission is talking about access tokens.


The article specifically mentions the need for users to re-enter their username and password as a downside of short-lived sessions, so I think the author's definition of "session" extends as long as the refresh token lasts.

I think that most of the non-short-session examples — Google, Microsoft, GitHub, etc — are using an access token + refresh token pattern.


That's because it's a poorly written article by someone who doesn't know the difference. It interchangeably talks about issues only with the UX and the actual technical backend pieces involved.


The lifetime of an access token reflects a delegated authorization, not an authentication session. For first-party mobile apps and the like, they might act similarly, but for other use cases they will not.

The access token may be so my account at an event coordination site has free/busy access to my Google calendar, and that authorization might last for years.


> Access tokens can be refreshed without user interaction, so it's not the same issue there.

Not on mobile, when the app is not in the foreground or gets killed by "energy saver" mechanisms - Samsung is fucking annoying in that regard: even with 4GB of RAM or more, it keeps closing Chrome with 10 tabs after a minute or two and completely loses state, as do many games - even taking a call in the foreground can be enough.


4GB is not a lot on Android so 10 tabs sounds about right. You need a lot of wiggle room for garbage collection to be efficient and you can't swap to flash without burning write cycles and power on small devices.

That aside, I don't see any technical reason why you can't renew a token that expired a week ago. Renewal just makes sure nothing changed (e.g., the user hasn't been deleted) while you were gone. It doesn't have to do any user-facing auth.


> In a lot of cases, short session expiry is used as a hack around subpar authentication standards such as SAML/OIDC where there is no reliable backchannel for the identity provider to tell the service to expire sessions

Both SAML 2 and OIDC have standard mechanisms to expire sessions.

One problem is that sessions are always a per-site, bespoke technology. Flagging a session as expired in a back-end database isn't going to help if the front-end uses cookies holding JWTs as an optimization.

So some sites prefer front-end expiry (which is also standardized by both). Some sites won't bother to support either.

Add on the inconsistent behavior of cookies across browsers these days, and it becomes very hard to support. It has been prioritized out of most things.

There is also the issue that sign-out doesn't make sense for many things. Logging out of Google in my browser shouldn't kill my Discord desktop session just because I chose the SSO option for authentication.

SLO makes sense in enterprise scenarios (where many big SaaS products tend to still not support it) and in single-party consumer scenarios - where SSO is used as integration glue to make something that "looks" like it is all one site, such as first-party Google logins.


> /OIDC where there is no reliable backchannel for the identity provider to tell the service to expire sessions

That's what the OAuth/OIDC refresh token is for: https://oauth.net/2/refresh-tokens/


There’s a solution for this! It’s called SCIM and it lets you sync user updates from the directory so you can expire sessions when users are deactivated.

(I work at WorkOS.com which helps developers with this.)


I'd rather rely on session expiration than on SCIM sync working well. I've implemented SAML and SCIM services. With SAML you implement things once, tweak it a bit, and then it works with all the IdPs, even those you've never heard of. SCIM, on the other hand, had only two client implementations that I'm aware of (at the time I worked with it, at least), and they were sufficiently different from each other that you kind of had to do two implementations. Not to mention it uses a stupid JSON Patch thing that is crap to work with unless you use MongoDB or similar, I guess. And there are stupid limitations on forcing the sync on Azure AD that I forget the details of.


How would those other technologies deal with the situation where grandma signs into her banking account via the app on her phone and then gets distracted, leaving her phone unlocked on the table for teenage Jimmy to find? My bank uses short sessions, so I get signed out within a few minutes of inactivity. Long sessions would seem to leave grandma wide open to this sort of local attack.
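For what it's worth, the "few minutes of inactivity" behavior is an idle timeout rather than a fixed session length - something like this sketch (hypothetical names, in-memory store):

```python
IDLE_TIMEOUT = 300  # seconds of inactivity before the session dies

sessions = {}  # session_id -> timestamp of last activity


def touch(session_id: str, now: float) -> None:
    # Called on every user action; actively used sessions never expire.
    sessions[session_id] = now


def is_active(session_id: str, now: float) -> bool:
    # The unattended-phone scenario: no activity, so the window closes
    # after IDLE_TIMEOUT seconds even without an absolute session cap.
    last = sessions.get(session_id)
    return last is not None and now - last < IDLE_TIMEOUT
```

That addresses the unattended-device case without punishing someone in the middle of a long task.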


By asking for credentials/auth for every meaningful interaction.


The main problem is the way these standards are implemented. OAuth 2.0 (on which OIDC is based) does define a reliable back channel for session revocation, although it is still pull-based instead of push-based. The scheme is simple:

1. Use short-lived access tokens (forcing clients to refresh often)

2. Check for revocation on every token refresh

There is even an OAuth 2.0 RFC for a token revocation API[1], and an OpenID Connect extension for backchannel logout[2]. Unfortunately, many OAuth 2 implementations (especially those where refresh tokens are JWT-based) do not support revocation of refresh tokens at all.
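The refresh-time check is the whole trick, and it's tiny - a minimal sketch (hypothetical names and in-memory stores; a real authorization server would persist these): access tokens stay stateless and short-lived, while only the refresh path consults server-side state, so a revoked session ends within one access-token lifetime.

```python
import secrets

refresh_tokens = {}   # refresh_token -> user, held by the authorization server
revoked = set()       # refresh tokens flagged via the revocation endpoint


def login(user: str) -> str:
    rt = secrets.token_hex(16)
    refresh_tokens[rt] = user
    return rt


def revoke(rt: str) -> None:
    # The RFC 7009-style /revoke endpoint: just flag the token server-side.
    revoked.add(rt)


def refresh_access_token(rt: str) -> str:
    # Called every few minutes when the short-lived access token expires;
    # this is the only place revocation needs to be checked.
    if rt in revoked or rt not in refresh_tokens:
        raise PermissionError("refresh token revoked or unknown")
    return f"access-{secrets.token_hex(8)}"
```

Note this only works if refresh tokens are looked up in a store rather than being self-contained JWTs, which is exactly where many implementations fall down.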

The other big problem is that refresh tokens are too often misunderstood. I've consulted quite a lot of development teams who implemented OAuth 2.0 (both as a client and an AS/RS), and most of the developers did not initially understand what the refresh token is meant to do. This resulted in a lot of wrong implementations.

If I blame the standards for something, I'd blame them for being too complex and flexible. This goes without saying for SAML - nobody should be using that if they have any choice. But even simple OAuth 2.0 needs care. There are many RFCs out there, and if you just read the original RFC or one of the many low-quality guides on the interwebs, you probably would miss the point about how to properly use a refresh token. RFC 6749 subtly hints at this strategy I mentioned above, but never fleshes it out as a recommendation, let alone mandates it.

OpenID Connect is even worse. It introduces a new type of token (the ID Token) with an unclear purpose and security model, encourages JWT use without setting up a standard for revocation, reinforces the insecure implicit flow, and introduces a whole new similarly insecure flow (the hybrid flow) which serves no purpose except to make sure there are more vulnerable apps out there.

Both OAuth 2.0 and OIDC can be implemented securely, but the base standards are not guiding you on how to do this, and - in the case of OIDC - contain way too many footguns. I think the OAuth 2.1[3] is a step in the right direction. GNAP[4] (a.k.a. "OAuth 3.0", "XYZ", "TxAuth") looks to me like a step in the wrong direction (even more complexity), but perhaps it's too early to tell.

[1] https://oauth.net/2/token-revocation/

[2] https://openid.net/specs/openid-connect-backchannel-1_0.html

[3] https://oauth.net/2.1/

[4] https://oauth.net/gnap/


> The main problem is the way these standards are implemented. OAuth 2.0 (on which OIDC is based) does define a reliable back channel for session revocation…

The larger issue is that even if you had a reliable revocation system - most relying parties wouldn't use it.

The typical relying party supports logins from Facebook, Google, Apple and the like because the user chose to use those as authentication systems. However, the relying party is independent. The user would not expect other sites and desktop apps to suddenly stop working because the user hadn't visited Facebook for a certain number of hours.

There were efforts in the past using transparent pixels on the identity domain to do distributed session tracking - e.g. if I'm interacting with _any site_ using the login, that will encourage the session to stay alive. Turns out that was way too much visibility for social login providers to have.


Hand-waving away the threat of application use on shared devices seems a little overconfident to me. This is probably not a threat for company devices, but it is clearly a threat in other environments, e.g., family members sharing a device. While some users might expect to be logged in all the time, others expect to be logged out after they close a web app tab. Session expiration should be application specific. Google's sessions do not expire, so that more user data can be collected. That is clearly more valuable to them, than compromised accounts due to session hijacking. Them doing it, is not a great use case for others, because their value proposition is entirely different than for most other web applications.


> Google's sessions do not expire, so that more user data can be collected. That is clearly more valuable to them, than compromised accounts due to session hijacking. Them doing it, is not a great use case for others, because their value proposition is entirely different than for most other web applications.

I think this is a pretty bad take. Google runs some very sensitive applications for paying enterprise customers, and they still tend to not expire sessions.

I also really don't like Google as an ad company, and I think my trust in their judgement has fallen precipitously over the last decade, but I find it hard to compare them to someone like Microsoft and say they're doing worse on the security front (I don't think they are).


Both Google and Microsoft have world-class security teams. But Google is run in a more effective way (so fewer instances of the right hand not knowing what the left is doing) and has less legacy to lug around; it also suffers less from the innovator's dilemma (for now, at least).

It makes a big difference if your stuff has been designed as a web based service from day #1 or if you are required to talk to anything and everything on prem and off prem as well as in the cloud. The attack surface of a typical Microsoft enterprise product is absolutely gigantic and the fact that they do as well as they do is something to be appreciative of. That said I don't want their stuff anywhere near my company.


> Google runs some very sensitive applications for paying enterprise customers, and they still tend to not expire sessions.

For Google Workspace, web applications (e.g. Gmail or Calendar) will regularly force you to re-authenticate "for your security". It's not a daily thing fortunately, but it is common enough to be frustrating.


Session expiration length is a configurable setting by the domain admin, it's not enabled by default.


Oh is that why it asks for re-auth all the time? I thought it was built in. Nice, I’m going to disable that.


> Google's sessions do not expire

They kind of do. I use a lot of machines that I might only hop on once a month or so. Chrome sync often ends up in a "paused" state where I have to re-auth. YouTube will fall back to a not-signed-in profile on me and I'll need to re-auth every now and then. Loading up Gmail will have me re-auth again pretty often. Often its not a full re-auth with my security keys but it'll at least challenge a password. I get these challenges probably every week or so across all my devices.


If you live with a family, you don't have any security margin: an unattended computer gets the next user immediately and a lot can be done in just one minute.


> Perhaps you used the shared computer in the library to access your web application, and forgot to log out.

> Is this a thing? Are shared computers without user separation a thing? If so, these shouldn’t be used to access web applications with sensitive information at all.

Yes, it is a thing.

I understand you would like it to not be a thing.


Heh, author gave a perfectly reasonable example of where use might be shared, and immediately asked "Is this a thing?".

Like, yes. It is. You literally JUST gave me an example of it.

Also this, shortly after:

> Are shared computers without user separation a thing? If so, these shouldn’t be used to access web applications with sensitive information at all

That "should" is doing a lot of heavy lifting. You don't decide which security controls to implement based on your best-behaved users.


It's not even about behaviour.

Some government services switching over to fully digital means there's a cohort of people being left behind. A decreasing number, sure, but a number nonetheless.

Effectively the author is saying poor people who need to use library PCs shouldn't get security.


The article bringing up that scenario and immediately dismissing it actually convinced me to change my opinion to the opposite of the article's thesis. I generally haven't seen the need for short session expirations in the past (when I've thought about it, which isn't often), but I hadn't thought about the shared-computer scenario before. Keeping that in mind, and knowing that it can't be handwaved away (as you point out), short sessions make more sense to me now.


It reminds me of when I didn't understand why my library account has such a short expiration time. Almost every time I open the library's website, I have to re-enter the password. Why? What's so important about a library account? Who's going to borrow a book on behalf of me?

And then I realized that logging into your library account is probably one of the most frequent things on all the shared computers in the library.


Even if it is a thing - after using a shared computer one MUST log out. If the 15 minute expiration time saved you then you're just damned lucky!


That's reasonable to do, but not necessarily something you can rely on. What if you lose your internet connection and can't log out? Or have a power cut, or have to leave in a hurry, or drop dead on the keyboard while using the computer?

Unfortunately for devs, RL is messy and even if you can convince some people to do the best thing, if you're large enough you have to go by Murphy's Law and work around the people that you know won't / can't


I have a feeling a lot of people get "lucky" a lot.


> Also, it would be better to protect against this by securing the logs or using hard drive encryption.

This one line is emblematic of the flaws in the article.

My take on the article is, “Imagine that everything else in a system is done correctly, and the system, overall, is perfectly secure. In this imaginary world, short sessions don’t help.”

One fact about security which you cannot avoid is that any one particular security feature may fail or be bypassed in some way. What are the consequences of this? Well, it means that you want multiple layers of security. Your server runs its daemons with minimal privileges so that a remote execution vulnerability needs to be combined with a privilege escalation vulnerability. You don’t think about the “right way” to secure something, but you think about multiple ways to secure something, and don’t stop securing it just because you’ve found one good option.

Short sessions are there because there are various ways that sessions could be compromised.


The author also puts lots of faith in the user not doing stupid things:

“Is this a thing? Are shared computers without user separation a thing? If so, these shouldn’t be used to access web applications with sensitive information at all, no matter how short the session expiry time is.”

Yeah, users might just leave their bank account logged in on an open library computer. That's why short sessions exist for those: to limit the window of opportunity for a bad actor to do something bad. Not perfect of course, but limit the exposure.


> Not perfect of course, but limit the exposure.

It's a shared computer (and if the session is carrying over, it's not just shared hardware it's a shared account).

In this case - you are utterly fucked if you think that machine is secure. Hell, fuck the session, I'll just run a keylogger (or if I'm not admin, install a malicious browser extension) and capture your whole login - I have considerably more access to this machine than I need to compromise you in a large number of ways.


> I'll just run a keylogger

You could. But what is relevant is whether you have. The point is not to protect against a determined attacker, but to reduce the chance of an opportunistic attack.

It really feels like most people have little experience with shared computer resources any more, because pranking people who left their computer unlocked used to be practically a sport, even when the same people would (mostly) never go out of their way to attack a locked down account.

Screen locks became a thing long before mobile devices for a reason.


Except these public or low bar access machines are there for generally vulnerable people - elderly, homeless, migrant workers, or others who generally don’t have a dedicated machine they can secure. Most people don’t “think” about security at all, it’s beyond their education and familiarity. Banks, for instance, offer access to their services to everyone - not just hacker news readers. They’ve started adding fees to in person or phone interactions directing people who can least afford the fees towards online access, which they often don’t have direct access to. ISP fees can be expensive, computers as well, or in the case of elderly they often just don’t know how to get started and value accessing via the library. If you’re homeless you don’t have a place for an ISP subscription, and many access devices at shelters. People in jail or prison also use shared devices.

Yes they’re overall very exposed to bad actors and the machines they use are really insecure. So, short lease credentials definitely reduces blast radius. If you have a key logger, but no automated account drain scripts, a 5 minute timeout will effectively prevent you unless you’re actively watching.

Finally, some of these are regulatory requirements for better or worse. That doesn’t forgive the regulation, but it does sometimes explain the policy.


Maybe the machine is secured by not allowing anything further to be installed on it. Or maybe it's not. I've heard most crimes are due to opportunity, so it's best to protect your users from accidentally leaving themselves logged in on a shared computer for the next person to find that opportunity.


And yet ATMs work every day on the principle of asking you to just re-enter your PIN for potentially harmful transactions.

Let’s also pretend like there’s a security camera looking at my desk that is usually not monitored, but you don’t know that


Let's say the bank also uses 2FA (say, physical code calculators) - your next step?


Install system malware - wait for next login (which will be soon, since the short session is forcing repeated logins) send session token to myself.

Done. Now I have an active session. Don't give a fuuuuck about that 2fa device.


This is a bit like saying, "there's no point in having a lock on my door because somebody in my house can shoot me". The fact that an outer ring of security can't protect you from people who are already in an inner ring doesn't invalidate the outer ring.

If you already own somebody enough to install whatever malware you want on their computer, then sure, session lengths aren't going to stop you, but they're also not intended to. Session lengths are intended to stop the guy at the coffee shop who grabs your computer when you go to use the bathroom.


Got it, so people who aren't privileged enough to own a device shouldn't use the internet.


> Users might just leave their bank logged in a open and logged computer library.

Fine. Then add an option where I can press a button in order to signal the bank that I'm at a secure computer, and that I'd like to increase the session timeout to 1 hour for this one time.


Another scenario is a large office. If a user leaves their desk to get a coffee, an attacker could walk up and get access. Of course, in that case they could also install key loggers, MITM software etc., so they will get access to anything they want.


It's better to let users do stupid things so they learn from their mistakes and not do them ever again. And probably tell their story to their friends and family so they, too, don't do this. Putting all these excessive guards in place kinda encourages ignorance and tech illiteracy.


You have far too much faith in stupid people.


>One fact about security which you cannot avoid is that any one particular security feature may fail or be bypassed in some way. What are the consequences of this? Well, it means that you want multiple layers of security.

I think the problem here is that no one ever attempts to define what "multiple" is in layers. Most seem to agree that one layer often isn't enough for the reason pointed out.

The issue I take is no one knows or provides any sort of guidance of how many layers are enough. People working in security take a degree of liability in their jobs, perhaps careers, for compromises. As such, all the incentives are to add as many layers as possible, everywhere. So we get N-factor Auth, continuously expiring passwords, biometrics, regularly expiring physical access credentials, sessions that are increasingly fleeting and often work requires layers of these sessions so you end up with multiple compounding needs to reauthenticate to the network, to some remote application debugger, to some application running within an application, and so on.

I often work in secure environments and it gets tiring, tiring to the point I admittedly start to take shortcuts to make my life easier, shortcuts that ultimately defeat a few layers of security in some way to keep my sanity so I can... do the actual work.

So the other extreme of 1 layer of security is N layers of security where nothing is ever usable. There need to be some reasonable protocols in place and people who write them need to be forced to actually go through the use cases with a combination of efficiency/production quotas. Only then do I believe security starts to realize an appropriate balance of layering security and creating value. In the current state of affairs, a lot of places are off the rails where they add so many barriers for normal use they create behaviors in authorized users to open very bad holes whereas if a few layers were peeled back and appropriately managed, the overall security of a given system would likely improve.


> I often work in secure environments and it gets tiring, tiring to the point I admittedly start to take shortcuts to make my life easier, shortcuts that ultimately defeat a few layers of security in some way to keep my sanity so I can... do the actual work.

Yeah, and without getting as agitated as that other guy in the comment thread, this is where security becomes its own enemy. And how achieving effective, practical security is a lot more subtle than it seems.

For example, we've been talking about putting 2FA onto the identity provider allowing access to internal high privilege administrative interfaces, for example via Duo, which requires you to authorize a login via your phone. However, an important question there was: How long would we trust a 2FA authorization?

If every single login after a session expiration requires 2FA, people would riot and/or start looking for workarounds. That's not great. But eventually, we ended up trusting a 2FA authorization for like 12 hours. That's one work day, even if it goes longer. Usually this means you have 2 Duo pings in the morning - one for the VPN and one for the internal IDP. That's entirely acceptable imo.

And in a similar sense, SSO can increase security while increasing convenience as well. For example, our ADFS allows sessions of about 4 hours, so if you access it once in the morning and once during midday, you stay logged in. And this in turn allows systems like the Keycloak acting as an IDP to work with very short session timeouts. As long as you're working, your keycloak session remains active. Once you're not active for 5-10 minutes, you're logged out from keycloak - but that's just 1-2 redirects on the next click and you're back in.

And once this is simple and convenient, people want to use this. And suddenly you got rid of a mess of local accounts in different systems and everything is based on the central directory. And that, in turn, is more valuable than validating MFA multiple times a day in the grander context, at least in my book.


> “Imagine that everything else in a system is done correctly, and the system, overall, is perfectly secure. In this imaginary world, short sessions don’t help.”

Isn't the reverse also true?

Imagine that everything else in a system is done incorrectly, and the system, overall, is completely insecure. In this imaginary world, short sessions don’t help.

At any point between "perfectly secured system" and "completely borked security in every single layer", in what way will short sessions help?

If you've got a good and secure setup, with maybe one or two holes you don't know about, how does a short session help?


Short sessions are obscurity...not security. If you use a serious website like Fidelity's, they don't let you do anything impactful without an authentication challenge. You could have logged in a few seconds ago...want to tinker with bank accounts? Challenge.


Fidelity also logs you out after a short duration of inactivity…


Short sessions are there because, for all practical purposes, SLO doesn't work and we are using short sessions to simulate high-latency SLO. In a previous life we supported SLO properly, and it had no value at all.

It's one of the many sad realities of the modern world that basic functions don't work and no one will fix them.


Also pointless: "your session will expire in X minutes; click Continue to stay logged in" alerts. That just turns a short session into a long session? And provides a handy session.renew() call for the attacker?


I tend to agree with respect to machine sessions where tokens are handed around and persisted and the cost of reauthing bothers no one. Sessions should be as short as is reasonable for performance.

For human applications you should generally not expire general access but expire leases to critical actions. Generally there are more and less sensitive realms of actions and reads that a user can perform, with a non-linear continuum of risk and user pain. Logging in and browsing your content - fine. Mutating, deleting, viewing sensitive data like credentials, creating new credentials, changing credentials, etc. - reauth either every time or within some short window like 5 minutes.
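A minimal sketch of that lease pattern - a long-lived general session with a short-lived re-auth lease gating sensitive actions - might look like this. The `Session` shape and the 5-minute window are illustrative assumptions, not any particular framework's API:

```python
import time

SENSITIVE_LEASE = 300  # 5 minutes: illustrative re-auth window for critical actions

class Session:
    """A long-lived session; only the 'sensitive' lease ever expires."""

    def __init__(self, user):
        self.user = user
        self.last_reauth = None  # set when the user re-enters credentials

    def reauth(self):
        # Called after the user successfully re-enters a password / 2FA code.
        self.last_reauth = time.time()

    def can_do_sensitive_action(self):
        # Browsing and reading never expire; critical actions require a
        # re-authentication within the lease window.
        return (self.last_reauth is not None
                and time.time() - self.last_reauth < SENSITIVE_LEASE)

def change_password(session, new_password):
    """Example of a gated critical action (hypothetical helper)."""
    if not session.can_do_sensitive_action():
        raise PermissionError("re-authentication required")
    # ... actually change the password here ...
    return True
```

The nice property is that the annoying part (re-entering credentials) only ever happens at the moment the user is doing something risky, which is also the moment they are most likely to accept it.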


Fully agreed. Just because one security measure doesn't prevent all malicious attacks doesn't mean that it "does not help security". That's just fundamentally false, because some attacks rely on long expirations; against those attacks, this method does help security. Not all malicious attempts are refined or perfectly executed, and sometimes an attacker can simply rely on a token that lasts too long.

It's a clickbait title and it worked. A title like this would be much more accurate: "Short session expirations provide less security than you might think"


> This one line is emblematic of the flaws in the article.

I don't think you can reasonably dismiss this article based on that line.

(1) It's just an aside, not part of the main argument; (2) It happens to be true.

Anyway... "Who knows, it might help" is OK (well, better than nothing), when there's not a significant cost to pay. But when there is, you need to go deeper.

There's a tradeoff being made and if you don't think of it that way, you're going to make a poor decision.


> Short sessions are there because there are various ways that sessions could be compromised.

You haven't actually addressed either of the author's points, though. Namely:

1: A short compromised session is still a compromised session. The duration usually does not prevent the attacker from achieving their goal.

2: The vast majority of ways to compromise a session already give you access far beyond that session itself (ex: you have user access on a local machine, or physical hardware access, or you're an admin who manages that user, etc/etc/etc). So an expired session is, at most, a small speed bump in those cases.

So... going back to the point: If expiring sessions is terrible UX (and it is) and it's not stopping attackers (and it's not), why are you doing it?

----

> You don’t think about the “right way” to secure something, but you think about multiple ways to secure something, and don’t stop securing it just because you’ve found one good option.

This attitude is cancer. Let me throw another quote at you:

"A ship in harbor is safe - but that is not what ships are built for."

----

Security is ALWAYS (fucking always, yes really - fucking always) a tradeoff.

The most secure application runs completely isolated, with no input or output, and is totally, utterly useless. But no worries - it's secure!


Can you please not post in the flamewar style to HN? I realize it's the tradition in some online circles, but it's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


> This attitude is cancer

> yes really - fucking always

You're really angry with that straw man you've stood up. The OP isn't saying systems have to be locked down to the extent that they're useless. Who would ever argue for such a thing?

Note that "You don't stop securing it just because you've found one good option" is NOT the same thing as saying "You don't stop securing it until you've closed every possible security gap and compromised usability".

Be charitable in your interpretation. Your OP is making a simple argument that defense in depth is a part of a security posture, precisely because they agree with your core premise, that security is always a tradeoff. Because we won't choose to implement every security measure, and because the security measures we do implement will be flawed, or compromised by the need for the application to actually be usable.

Step outside, breathe some fresh air, maybe lay off the coffee first thing. And consider not throwing 'cancer' bombs at passers-by.


Please don't cross into personal attack, no matter how bad another comment was or you feel it was. It only makes things worse.

(I appreciate that you're arguing for charitable interpretation, but the personal swipes at the beginning and end of your post wipe out the good effect and then some.)

https://news.ycombinator.com/newsguidelines.html


The point is that it's a really bad tradeoff, because the impact to the users is high, and the impact to security is low. And yet we do it, because "You don't stop securing it just because you've found one good option", and that's a really bad reason to improve security by such a small increment with such large negative consequences.

The problem with 'defense in depth' is that it comes as close as possible to locking down systems to the point of uselessness, without actually, technically, preventing work from being done.


Ding ding ding!

If you want to defend in depth - more power to you.

If the way you're "defending in depth" is mostly not adding security, and is actively making the product less useful... I'm going to call it shite.

If you blindly say "defend in depth" without actually... you know... evaluating what that defense does to the product as a whole, you're doing your job poorly.


The rules of engagement on this site call not for radical candor but for taking the most charitable interpretation of someone's words.

Maybe the OP wasn't blindly saying 'defend in depth'? Maybe they were advocating for evaluating what that defense does to the product as a whole? If they were, is their attitude worth describing as cancerous?


If that was my entire comment - I'd probably agree with you. Good thing there's about 5 other paragraphs of content in my response that provide additional context...

My take: you're stuck on the word cancer as some sort of insult, rather than an analogy. I'd argue you're being fairly uncharitable in your responses - and further... you're yet again not engaging with the actual on-topic discussion.

Have a good one.


> Ding ding ding!

> If you want to defend in depth - more power to you.

> If the way you're "defending in depth" is mostly not adding security, and is actively making the product less useful... I'm going to call it shite.

Agreed. Sure, there is defense at different depths, but there's no reason to add depth without adding defense as well.


The other thing security people fail to realize is that when you’re hostile to UX, people start coming up with all sorts of workarounds that leave you less secure than you were before. Like the corporate managed laptop is so full of spyware that users bypass it and use their own personal device for development.


We don’t fail to realize that.

Security folks are humans too.

We realize that every human loves convenience and security removes conveniences. Simple As.

No matter what we do as security folks, the users will do everything possible to return to their convenience or complain about the inconvenience until the security is removed.

I’m not saying there aren’t over zealous security folk but our goal isn’t to make humans lives harder. We want to make it harder for the bad guys to ruin humans lives.


> human loves convenience

Except that it's not a matter of 'convenience', it's a matter of being able to do their jobs. Security is a hard job, in part because you have to come up with security practices that are actually workable, and keep work impediments to a minimum. It's really easy to just add more restrictions. It's hard to add security that doesn't impede the users. When I see 'defense in depth' being invoked to justify massive work impediments for minimal security improvements, I don't see effective security practices - I see a cargo cult.


No - your objective is to make the organization lose as little money as possible, by reducing the incident rate, the recovery cost, or the impact. If you damage the org more than the risk you are protecting against, you are a liability. This isn't a good-vs-bad thing; it's deciding when the line is worth crossing, and this article says that, at least in the author's opinion, this one isn't - you still have multiple other layers.


That's a reasonable point, sure.

One which can be made reasonably, without telling anyone that their attitude is cancerous.


I don't think that appealing to courtesy is really the play here.

We are discussing ideas about security in a place and manner that allows us to have honest and frank conversations.

I think security teams optimizing only for security is actually a very apt analogy to cancer: part of the organization is acting in a manner that negatively impacts the organization at large - while positively impacting that subset of the organization.

Cancer is the act of some cells in your body prioritizing themselves at the expense of the whole.

Personally - I think you're digging to find an insult in that comment, and I take it as a way for you to disengage with the topic at large.

This is an attitude that is routinely used to shut out voices that don't match the current "dress code".

Trust me, I'm hardly going to be calling you cancer over the dinner table for not passing the salt. I'm using that word intentionally and carefully - in a frank and honest conversation. If you're feeling hurt (especially on behalf of someone else...) maybe go do something else?


> The problem with 'defense in depth'

No, that's a problem with bad engineering. That a process requires skills most people attempting it don't have isn't a problem with the process, it just means that it is hard and relatively new.

One thing I see all the time that demonstrates this incompetence is talking about something being more or less "secure" without reference to a threat model. You simply can't make reasonable tradeoffs without thinking this through, and yet nobody wants to do the exercise.

In fairness, this is not just an engineering fault. I've seen one case where a legal department freaked out when they heard about a risk analysis project in pursuit of a formal threat model - they vehemently objected to anyone producing documents about such things that could potentially surface in some discovery fight.


> One thing I see all the time that demonstrates this incompetence is talking about something being more or less "secure" without reference to a threat model. You simply can't make reasonable tradeoffs without thinking this through, and yet nobody wants to do the exercise.

Hey - I completely 100% agree. Believe it or not, I did quite a stint in software security before becoming this jaded (5+ years fulltime work at a security focused product sold primarily to large fortune 100 companies [banks - it was all banks]).

I think my problem is that for any difficult challenge... there is an answer that is simple, obvious, and incorrect.

My opinion is that the incorrect answers I see most are the two extremes: I don't care about security (BAD!). I only care about security (WORSE!!!!).

The first will eventually lead to compromised accounts/data and that can kill a company. The second will lead to products no one wants to use, which WILL kill a company.

Neither is a good spot to be. You want to find an appropriate compromise in the middle: Secure enough.

----

Side note - no one truly does threat assessments based on a threat model, because no one in industry likes the answers.

For small and inconsequential threats - you are already secure enough.

For nation states - there is likely no solution that is workable if the thing is on the internet.

It's like trying to buy a secure door for your house: For most folks walking down the street, the current door is fine. When the Gov shows up with tanks - there is no door you can buy to solve the problem.


> Note that "You don't stop securing it just because you've found one good option" is NOT the same thing as saying "You don't stop securing it until you've closed every possible security gap and compromised usability".

It seems to me that 'defense-in-depth' can very easily become a mantra that inevitably means drifting towards the latter. What are the real guidelines for telling when enough is enough? Because ime people who even can articulate anything along those lines are way, way fewer than people who make appeals to defense-in-depth.

And I think this is part of the problem: without a principled way to assess what is gratuitous, repeated appeals to defense-in-depth will lead to security practices that heavily favor having more measures in place over having a good UX. This is because the environments where information security is most valued are already organizations that frankly, do not give a shit about UX. The customers for cybersecurity products are massive bureaucracies: large enterprises, governments, and militaries. The vendors that sell those products are embedded in a broader market where no software really has to be usable, because there's a fundamental disconnect between the purchasing decision and the use of the software. For all B2B software, the user is not the customer, and it shows in a thousand ways. In infosec things are further tilted in that lots of easy routes to compliance which are terrible for UX are falsely perceived and presented as strictly required, perhaps even by law.

The idea that in a B2B market which primarily serves large organizations and governments, you will get any organic weighing and balancing of security against usability 'for free' is sheer fantasy. So where is the real counterweight to the advice that on its own recommends 'always add more, unless you have good reason not to'?


I think this is a more elegantly stated version of my argument.

It's also why I strongly link this argument to cancer. It's an idea that grows unbounded until it's harmful, and by the time the organization realizes the harm, it's often too late to change.


> I think this is a more elegantly stated version of my argument.

Yep. I think your argument pretty much conveyed the same thing, along with a lot of anger and frustration.

I also agree with that anger and frustration— I've felt the same rage before, when I've been hit with blockers or UX degradation related to nominal or actual attempts to improve security. Restrictions that are ill-motivated (or whose motivations are just not clearly or convincingly communicated) are infuriating.

> by the time the organization realizes the harm, it's often too late to change.

This worry is the twin of the rage, for me, this sense that I can't do anything about it and it's never going to get better. A dreadful, reluctant admission to myself that the only way to stop the continual degradation of my work life will be to uproot myself: give up my job and everything I do like about it, leaving behind people I enjoy working with and reducing the amount of contact I have with them.

Happily, engaging directly with my company's infosec department often gives me hope and allays these fears somewhat. But generally, online discussion with people who implement security controls tends to reinforce my worry that, to borrow your metaphor, the disease is systemic and terminal.

Most 'cybersecurity professionals' (who are visible online, at least) transparently do not give a shit about UX, display flagrantly antagonistic attitudes about users and developers, and talk often about defense-in-depth but never articulate any inherent limits for appeals to defense-in-depth beyond 'well don't bother with measures that don't increase security at all'. All of it sends strong signals that people who value UX, DX, autonomy, morale, and well-being, to the extent they are present at all, are outliers in infosec who do not belong and have no hope of being effective.

And then the response to someone openly including a dimension of emotionality in an argument about a security measure they feel is gratuitous and cumbersome is

> Did... did [a cumbersome security measure] hurt you?

Like, seriously? Yes. Indeed it did and does.

But more than the security measures themselves, the pervasive attitude conveyed by that belittling question is the even bigger problem. And it generates many of the small ones.


> You're really angry with that straw man you've stood up. The OP isn't saying systems have to be locked down to the extent that they're useless. Who would ever argue for such a thing?

You are - you're literally arguing for it right now. Short sessions just don't help all that much, and they have an outsized impact on users.

Why are you dying on this hill? Likely because your mindset is "security above all else" and that mindset is literally cancer.

Security above all else leads to a product that does fuck-all else (and it does it poorly).


Literally cancer? Really?


On the plus side, it seems HN has discovered the cure for cancer.


Yes. Literally.

I would say it makes an excellent analogy: A part of the whole (security) is prioritizing themselves and their needs in a way that makes the overall organism much less capable and effective.

Cancer.


Still metaphorical cancer.


Not according to the definitions of those words that I can see

Metaphor: a figure of speech in which a word or phrase is applied to an object or action to which it is not literally applicable.

Cancer: a practice or phenomenon perceived to be evil or destructive and hard to contain or eradicate.

My usage is literal. Organizations that act this way have cancer. They suffer from it, they can be treated, they can die.

It's as real as it gets. I am using cancer in the body as an analogy for this form of cancer - but I am not using a metaphor.


[flagged]


FWIW, it seems pretty clear to me that you are the person who is being insulting here. I mean, I can't even figure out how to reply to this comment without making it all about you and how you are involved in this thread and the words and strategy you are using, as that's what you are doing to horsawlarway... who, notably, was addressing an idea and an attitude, not a person and their behavior.

Like, if I put myself in the shoes of the people each of you are replying to, I can see how to reply to horsawlarway: if I disagree with their interpretation of what I said, I can argue back; if I realize that I said something poorly and didn't mean it, I can apologize and correct it; if I disagree with their analysis, I can push back with my own arguments...

...but your comment? You are just pointing at someone and calling them rude. You are then not only refusing to engage with their comment or their points, you are just telling them they shouldn't even be discussed or listened to because they used a word which, to be quite frank, is not insulting; and, in doing so, you have dragged this discussion from one between people who were passionate about an interesting topic that affects everyone on this website--one where I was excited to read both sides--to a bunch of people--sadly, now including me!--squabbling about how words are chosen and how arguments are formed while pointing fingers at each other about who is being insulting, which is a waste of everyone's time so bad I frankly feel bad about how I now feel like I also have to take part.

(BTW, I actually did at first choose to not send this comment: I wrote it, and then decided it was just further feeding you and further wasting other peoples' time, so I put it in a text file and moved on; but, then I noticed one of horsawlarway's responses had been flagged--even though you are clearly the instigator here--so I would need to post a defense of why I vouched for it: you are the person who not only took this thread in a personal direction, but you are the person who decided to start throwing around playground-style bullying: saying an idea is cancer is an opinion and analysis, not an insult... but all of your comments on this thread are patronizing and are the kind of taunting people use to start fist fights.)


> So... going back to the point: If expiring sessions is terrible UX (and it is) and it's not stopping attackers (and it's not), why are you doing it?

Every time I get into this kind of discussion, the answer seems to be "because it makes me feel better". Which is why it's so impossible to actually change someone's mind about it, and thus we have security "experts" (or worse, non-technical managers) making life miserable for thousands of users.


As the GP said, the attitude is cancer, and spreading is what it does.

It really doesn't help that "security expert" is a job title. That means this person is expected to deal with security, not business objectives, and will be blamed for security incidents, not business goals satisfied. If the OP's attitude is cancer, this is what causes it, and it's completely toxic.


Exactly. Anyone working in a moderately large company (500+ employees) or in a regulated industry is familiar with this issue. Just as no politician wants to be seen as soft on crime, no CISO wants to be blamed for a security breach. So more and more security gets piled on to every process and application. MFA everywhere, even though you're required to use MFA for the VPN. SSO everywhere, because why not. Session expiration after an hour. VPN limited to 8-hour sessions (great for people who work long periods or who have long-running processes and don't want to use tmux/screen etc.).

My team (20+ SREs/admins) spends roughly 40% of our time complying with either security requests, external audits, internal audits, internal queries from Risk/Security about servers etc. Figure the cost to our business of this (just from our team alone) is roughly $1M per annum. Add in the cost of all the "security" software, add another $2M per annum. Staffing of Risk/Security team is probably another $2M per annum. For all the other workers in IT (250 or so) probably another $3M. For non-IT workers, the added friction is easily 10% of their cost, so $15M. Add it up and you're around $25M.

And it's not even the out-of-pocket costs, but the opportunity cost. Thank god I work in a business that has regulatory moats that prevent real competition.

Cancer is a perfect term for this IT culture.


The reason why your business can "afford" this inefficiency is due to regulatory moats.

Big companies are like governments, and when you remove outside market pressure, they become even more so.


The point I was trying to make about that is that most businesses don't have the financial resources to absorb the cost of "security" as it's being practiced now. Yet it's continually being foisted upon them as a necessity.


Yea, we agree on this.


What do you have against SSO everywhere? I see it as actually one of the things that makes obvious sense. It makes the user's life easier and improves security. It makes it easy to give a new employee access to a bunch of systems at once using RBAC. That and putting users' SSH public keys in LDAP and using that for auth everywhere instead of passwords are two obvious pure wins to me.


SSO is great in a monoculture environment, but falls down in heterogeneous systems. We have Azure SSO, Ping Federate, and a couple of others we're getting rid of. I think it adds unnecessary complexity, and fails too often. Our internal users don't like it because the failure modes are opaque to them compared to a userid/password.


“We can’t afford to not do it”


Sometimes the answer is "so our sales team can tell people our security is better than our competitors."


Hah, I worked for a software security company. This is literally the entire industry.

Check these boxes to be compliant to make that enterprise sale.

Is it actually more secure? Who cares, insurance will cover us now and the enterprise paid us.


I still have PTSD from dealing with HIPAA and SOX stuff. Don’t run your own credit card processing if at all possible.


Same thing with selling software to the government. "Is the software secure" is far less important to those customers than "does it pass the scan".


> 1: A short compromised session is still a compromised session. The duration usually does not prevent the attacker from achieving their goal.

Need a citation on that “usually” part. A short session duration most definitely:

* Makes it less likely that a session token obtained by an attacker is unexpired.

* Gives the attacker less time to use a valid session token to move laterally into (potentially) unfamiliar infrastructure.

The trade-offs, of course, are:

* Poorer UX, and potentially driving users to attempt to bypass approved tools and/or security controls

* More interactions with the authentication system, which, depending on the auth system and the attacker's motivation/capabilities, might actually let them harvest more credentials.

But, in any case, I'm not aware of any research showing that short session lifetimes don't thwart attackers' use of stolen tokens, and the idea is, to put it mildly, counterintuitive.


Once the attacker knows they can steal the session ID, and how long it will be valid for, it's just a matter of running a script to do all that as fast as possible.


That’s one possible scenario.

The correct way to evaluate security is to consider many different scenarios, and consider how your mitigations affect the likelihoods of all of them, weighted by their impact.


> The correct way to evaluate security is to consider many different scenarios, and consider how your mitigations affect the likelihoods of all of them, weighted by their impact.

NO! That's merely the FIRST step in evaluating security. The next steps are: What sort of threat am I attempting to prevent? How do my mitigations impact the usefulness of the product on the whole? And most critically: Are my users better served by adding this security measure?

That is much more likely to be determined based on what the product/tool is trying to achieve. Which brings us right back to: Security is a tradeoff.

There are situations where I think short sessions make sense (ex: changing billing/contact info). There are also situations where short sessions are huge negatives (ex: how well is slack going to work if you get logged out every 15 minutes?)

My proposal is simple: Actually do the damn evaluation, instead of just blindly siding with "moar security = moar better!"


> My proposal is simple: Actually do the damn evaluation, instead of just blindly siding with "moar security = moar better!"

Agree 100% on this.


I will posit this proposition:

Short sessions are thinking like physical security. Someone can pick any lock, the question is will it take long enough for a human to interrupt the attack?

It doesn’t matter how long a computer has access to the key. How fast it can cause damage is limited by the speed of light, not human fingers.

If you ever leave the credentials where they are accessible, they can be used even if the session expires in three seconds. And if they’re being seen (in motion) why would the session be expiring in three seconds?

Machines hack differently than humans. Don’t think in human timeframes or orders of magnitude. That will let you make mistakes you can’t afford to make.


> Machines hack differently than humans.

An awful lot of hacking is done manually, by humans. For many scenarios, considering human timeframes is completely reasonable.


We live in a world where real time advertising auctions happen. If you think there’s something about that which can’t apply to organized crime you’re gonna be in for a rude awakening when your systems start to fail en masse.

I’ve had to replace a credit card twice for suspicious activity (once cost me my favorite domain name, which is still parked). There were no major charges in either case. One charge at a business I’ve never been to.

Some people get a card and use it. Some immediately sell it after proving it works. That means a clearing house. A food chain. That will get more automated, not less.


Sure, there are some attacker activities that are highly automated. That’s not what we’re talking about here. We’re talking about compromised temporal session tokens, which are frequently harvested by manual action, and in those scenarios, thinking about human timeframes is very useful.


> * Makes it less likely that when an attacker obtains a session token that is is unexpired.

But gives them more opportunities to acquire such token.

> * Gives the attacker less time to use a valid session token to move laterally into (potentially) unfamiliar infrastructure.

Only if the token's lifespan is counted in milliseconds. Otherwise, the attacker will refresh the session token as soon as they get it, and continue to do so. An active session token can be thought of as having an arbitrarily long lifespan.


> But gives them more opportunities to acquire such token.

Maybe, if the session tokens are being acquired by improper logging. If the tokens are acquired via the user’s cookie store, for instance, the total number of session tokens is going to be the same — the user is going to use the applications they use, and the stored session tokens will reflect that.

> Only if tokens lifespan is counted in milliseconds. Otherwise, the attacker will refresh the session token as soon as they get it, and continue to do so. An active session token can be thought as having arbitrary long lifespan.

If the session timeout is based on inactivity, not total life time.

There are instances where session timeouts/forced reauth are useful and where an attacker could not endlessly refresh the token.
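To make that distinction concrete, here's a minimal sketch (hypothetical names and thresholds, not any particular framework) of an idle timeout combined with an absolute cap that activity cannot extend:

```python
import time

IDLE_TIMEOUT = 15 * 60        # sliding window, resets on every request
ABSOLUTE_TIMEOUT = 8 * 3600   # hard cap, cannot be extended by activity

def is_session_valid(created_at, last_used_at, now=None):
    """A sliding (idle) timeout alone can be kept alive forever by an
    attacker who replays the token every few minutes; an absolute
    timeout bounds the total lifetime regardless of activity."""
    if now is None:
        now = time.time()
    if now - last_used_at > IDLE_TIMEOUT:
        return False   # idle too long
    if now - created_at > ABSOLUTE_TIMEOUT:
        return False   # lived too long; forces reauth even if "active"
    return True
```

With only the first check, an attacker refreshing every 14 minutes holds the session indefinitely; the second check is the part that actually limits them.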


How likely is it for an attacker to get access to the user's cookie jar at a single instant in time only?

> There are instances where session timeouts/forced reauth are useful and where an attacker could not endlessly refresh the token.

If the token isn't refreshable without a "real token" the "real token" will probably need to be somewhere the attacker can get it anyway.


> How likely is it for an attacker to get access to the user's cookie jar at a single instant in time only?

Depends on the dwell time the attacker has until detection and eviction, but generally speaking, in the scenario where session tokens are being harvested from a user’s workstation or something like a jump server, the attacker is going to be able to access stored session tokens from the most recent login prior to their gaining access and any that occur during access. In any case, shorter session tokens are going to result in less access for the attacker. There isn’t a scenario that results in more access, and only absurdly contrived scenarios that result in the same access.

> If the token isn't refreshable without a "real token" the "real token" will probably need to be somewhere the attacker can get it anyway.

That may be true of ticket-granting-ticket schemes, but not for single/multifactor authentication for ticket schemes. Both scenarios exist and need to be accounted for appropriately.


Why wouldn't the system require each refresh of the session token to require additional authentication? Then a stolen token can't easily be refreshed.


The UX for common applications would be poor. Would you want to have to enter your email password every hour or two, for instance?

I’d argue for truly critical infrastructure, short-ish session times can be useful, but for most applications they do more harm than good and better alternatives exist. For instance:

* Enforcing step-up authentication for access to sensitive application functions.

* Forcing re-auth based on behavioral analytics (for instance, if the user normally accesses an app 8 - 5 Monday - Friday from the United States, but presents a session token on Saturday afternoon from Russia, maybe force a reauth.)

* For enterprise apps, SSO where you may be forcing an authentication event every shift, but at least it is one, not one per app.

But, of course, there is no one right answer because there is no universally applicable or agreed upon threat model.
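As a toy illustration of the behavioral-analytics alternative (all fields and thresholds here are made up for the sketch), the decision can be as simple as:

```python
from dataclasses import dataclass

@dataclass
class SessionContext:
    country: str   # where the token is being presented from
    hour: int      # local hour of the request, 0-23
    weekday: int   # 0 = Monday ... 6 = Sunday

def needs_reauth(usual: SessionContext, current: SessionContext) -> bool:
    """Force a fresh login only when the presented session deviates from
    the user's established pattern, instead of expiring everyone's
    session on a fixed timer."""
    if current.country != usual.country:
        return True                    # geo anomaly
    if current.weekday >= 5:           # weekend use of a weekday-only app
        return True
    if not (8 <= current.hour <= 17):  # outside normal working hours
        return True
    return False
```

A real implementation would learn the baseline per user rather than hardcode it, but the point stands: the honest 8-to-5 user never sees a reauth prompt, while the Saturday-from-Russia token does.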


I think this debate is too abstract to be useful. Certainly, there are some cargo cult practices where various forms of "key rotation" has gotten out of hand for no real benefit. But there are many valid scenarios which depend on different overall system characteristics. I'll just revisit a few.

1. Actual encryption key rotation. This is one grandparent of modern session limits, built with an assumption that keys might be compromised by cryptanalysis rather than stolen by malware-infected clients etc. Rotation reduces the number of ciphertext examples made available to perform the cryptanalysis, thereby reducing the risk of successful attacks. It also may reduce the scope of compromised data, if and only if key exchange is still considered secure, such that holding one key does not let you decode traffic captures to determine earlier/later keys in a transitive attack.

2. Forcing idle logouts with less dependence on client systems. This is another grandparent of modern limits which is sometimes cargo-culted. The underlying risk is people logging in and then abandoning a terminal in a shared space, so you want to reduce the chance that someone else can walk up and start using it. You may have a UX requirement for preventing idle sessions from staying open on clients because people are in a shared space with client devices, i.e. HIPAA touches on this for medical systems. You really want the client system to participate in this, i.e. not just end a session but lock/clear the screen so previous data is not on display for unauthorized observers. But it is often seen as defense in depth to also abort the remote session credentials so that future actions are unauthorized even if the client software has failed to do its part, such as if it has crashed/hanged. This one has the weakness you mention that a malicious client could do endless refresh to prevent the detection of an idle UX by the server.

3. Forcing reauthentication periodically or for high-value actions. This is more paranoid than the prior idle logout concept, and actually demands the user reestablish their authenticity during an active session. This has been used historically as another kind of defense in depth attempt to verify user presence with less trust of the client system. But it is also used as a UX ritual to try to get the user to pay attention as well as build an audit chain for their consent to a specific action. Designers might tie this in with MFA/hardware tokens, to get different kinds of user-presence detection throughout a session.

4. Decentralized "web-scale" architecture and caching. In a traditional system, a session key is little more than an identifier and might be checked on each use, i.e. each web request, to determine actual client identity and privileges with a server-side lookup. But as people developed more distributed services, they have often shifted to embedding more of this info into the session "key" itself, as a larger signed blob. Different service components can decode and operate on this blob without having to do synchronous lookups in a centralized session database. This is where automatic refresh still serves a purpose, because it is during each refresh handshake that the centralized check can occur and potentially reject the refresh request or issue modified privileges. This rotation rate defines a system-wide guarantee for how long it takes certain changes to take effect, e.g. disabling a user account or changing the roles/privileges assigned to a user. I have seen this approach even in systems where the session key is a mere ID and cache invalidation could arguably be handled in the backend without forcing client's session keys to rotate. This seems like cargo culting, but is useful if the system operator wants to retain the option to use signed blobs in the future, and so does not want to allow client programmers to assume that client session keys are stable for indefinite time periods.
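For what it's worth, the signed-blob-plus-refresh pattern in point 4 can be sketched in a few lines. This is a simplification with assumed names: a plain HMAC and an in-memory set stand in for proper key management and the central session database:

```python
import base64, hashlib, hmac, json, time

SECRET = b"server-side signing key"   # shared by all service nodes
TOKEN_TTL = 300          # rotation period: revocations and role changes
                         # take effect within at most this many seconds
REVOKED_USERS = set()    # stand-in for the centralized session database

def issue(user, roles):
    """Embed identity and privileges in a signed blob that any node
    can verify locally, without a per-request central lookup."""
    body = base64.urlsafe_b64encode(json.dumps(
        {"user": user, "roles": roles, "exp": time.time() + TOKEN_TTL}
    ).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET, body, hashlib.sha256).digest())
    return body + b"." + sig

def verify(token):
    """Purely local check: signature and expiry only."""
    body, sig = token.split(b".")
    good = base64.urlsafe_b64encode(
        hmac.new(SECRET, body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, good):
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["exp"] > time.time() else None

def refresh(token):
    """The centralized check happens only here, once per rotation
    period, so disabling a user takes effect within TOKEN_TTL."""
    claims = verify(token)
    if claims is None or claims["user"] in REVOKED_USERS:
        return None
    return issue(claims["user"], claims["roles"])
```

Note that after a user is revoked, `verify` still succeeds locally until the blob expires; it's the refresh handshake that makes the revocation bite, which is exactly why the rotation rate defines the system-wide propagation guarantee described above.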


> A short compromised session is still a compromised session. The duration usually does not prevent the attacker from achieving their goal.

I'm a bank robber, I want to steal your money without you knowing so I'm not caught.

What would be a better way to do that? Withdraw $1000 immediately, or spread that withdrawal out over several months?

A short token forces the $1000 withdrawal immediately. And one common way these tokens are compromised is a scammer getting Grandma to open the developer console so they can "fix" things.

> The vast majority of ways to compromise a session already give you access far beyond that session itself (ex: you have user access on a local machine, or physical hardware access, or you're an admin who manages that user, etc/etc/etc). So an expired session is, at most, a small speed bump in those cases.

Or you are employing the common scam above, screen sharing under the guise of helping.

Granted, some of the calculus needs to be "what type of app is this? What does compromise mean?".

> The most secure application runs completely isolated, with no input or output, and is totally, utterly useless. But no worries - it's secure!

I didn't take this as what the op was saying.

Security works in layers and good security imagines that other layers don't exist or might have been compromised. In your ship analogy, that's adding a second hull, putting airtight sections between hull locations, and having lifeboats.

You wouldn't eject the lifeboats because "we have two hulls, what could go wrong!"

The actual cost of doing this is generally developer time.

There are certainly practicality limits, but in general a layered approach to security is how you both increase security and decrease compromise impact.


This is just wrong on so many levels.

> A short token forces the $1000 withdraw immediately.

No, a short token forces the attacker to continue making requests, but otherwise places very few limits on what they can do with it (since these tokens are almost always something like "15 minutes since the last use")

> In your ship analogy, that's adding a second hull, putting airtight sections between hull locations, and having lifeboats.

This is PERFECT! Because it highlights exactly the trade-off I'm trying to point out. Fucking no one uses double hulls except for oil tankers, and they only do it for a very specific reason: They want to stop oil from leaking out.

They pay a HUGE cost for it, but it happens to be worth it for this very specific trade. Here's the tradeoff:

---

Double-hulled tankers have a more complex design and structure than their single-hulled counterparts, which means that they require more maintenance and care in operating, which if not subject to responsible monitoring and policing, may cause problems.[2] Double hulls often result in the weight of the hull increasing by at least 20%,[3] and because the steel weight of doubled-hulled tanks should not be greater than that of single-hulled ships, the individual hull walls are typically thinner and theoretically less resistant to wear. Double hulls by no means eliminate the possibility of the hulls breaking apart. Due to the air space between the hulls, there is also a potential problem with volatile gases seeping out through worn areas of the internal hull, increasing the risk of an explosion.[8]

---

> The actual cost of doing this is generally developer time.

No - the actual costs are usually actual costs, paid by all members of the system. Developer time is just the most obvious up front cost. There is a cost every time a user has to re-authenticate. There is a cost in resources to handle the extra authentications. There is a cost in complexity to maintain and extend the system doing authentication.


>There is a cost every time a user has to re-authenticate. There is a cost in resources to handle the extra authentications. There is a cost in complexity to maintain and extend the system doing authentication.

I think this is definitely where the security trends in modern IT have gone very awry, as it _is_ extremely annoying to be an end user having to work with modern IT security practices. Off the top of my head:

- MFA everywhere means that if you have any issues with your alternative authentication devices, you are completely locked out of your work and probably your life until you get that resolved

- broad and vague geo-based block lists mean users just flat out cannot access resources depending on where they happen to be, which means service desk tickets, investigations, and ultimately people who cannot access non-sensitive data they should be able to, just because of where they are physically located

- CAPTCHAs can lock entire classes of persons out of specific services as the CAPTCHAs aren't easy for these classes to perform on demand

- SSO/SAML authentication pages take you on a whirlwind tour of dozens of randomly generated authentication pages meant to establish and pass your authentication back to a central location, and it makes it impossible to tell from the URLs themselves whether or not it's suspicious unless you know the specifics of the system in use; this is particularly bad because this is exactly what it will look like if you click on a spam site in a search result or a compromised webpage. how is a user supposed to know when they've accidentally gotten tricked into a compromised authentication page? the ubiquity of SSO for logins is nice, but it also means that as a user, I expect that I can be taken to an SSO from just about anything, so how am I supposed to know if the entry point from page X is legit compared to the entry point from page Y?

- a corollary to requiring multiple authentications even from the same device (looking at you Microsoft...) is it creates uncertainty as to when I should expect to have to sign in; if opening a link to a report requires me to authenticate or just accessing an internal web portal requires additional auth, why should I be suspicious when my colleague's account gets compromised and an attacker sends me a link saying "hey, we need to respond on this form by EOD; don't have time to explain in full, but it's pretty straight-forward. I'll follow up in a few hours when I'm done with another item"

- Edited: another corollary of SSO is that getting auth'd once means you get auth'd a lot. While you should need to configure additional security and checks on more sensitive services, since you're already auth'd through the main means of identification, it's often trivial to get the access by normal means or to social engineer access

It really sucks to be an end user in such environments, and it's just too easy for IT security to absolutely lock out legitimate users who are following the policies as best they can with earnest intent.


> It really sucks to be an end user in such environments, and it's just too easy for IT security to absolutely lock out legitimate users who are following the policies as best they can with earnest intent.

Yup. I'd add to your list: multiple corporate auth systems/domains that are supposed to be in sync, but sometimes aren't. When that breaks, you end up having to turn the Internet off to even log in to your work computer, and find yourself flying out to another country so the IT people there can fix the mess, and this is cheaper than them spending a long time trying to help you remotely, while you can't do any work.

Don't ask me how I know this.


> if you have any issues with your alternative authentication devices, you are completely locked out of your work

You have printed the rescue codes when prompted, and have put that physical piece of paper into your wallet, haven't you?


I have ~1000 accounts, ~200 of which are used for work occasionally. Their 2FA recovery methods vary, and some have no recovery method. I'd like to say my wallet is not large enough for the printed codes, but only about 5 accounts even offer backup codes, considerably fewer than the number of 2FA accounts.

Besides, my last Gmail account for work appeared to be locked to my phone and didn't accept backup codes, and was OAuth master to a number of other accounts.

(For real: I lost access to that Google account permanently when my phone screen stopped working due to an internal fault. It wasn't really a problem and I didn't pursue it fully because I left the job soon after anyway, but the fact I couldn't regain access during that time despite copying the broken phone's content to a new device which successfully transferred the 2FA codes for all other accounts, was striking. It's why I don't use Google for id when there's another option. I tend to use GitHub for id at the moment.)


That's a nightmare process for any normal user. There's no way the vast majority of people are savvy enough to do this correctly.


Which part of "click print, cut or rip out a corner, put it in your wallet" is a nightmare for a normal user? (I'm not one, can't judge.)


I used to do something like this with my passwords. A folded, printed sheet with tiny font holding my accounts and passwords that I carried in my wallet. Eventually I found there wasn't enough space even on both sides of an A4 sheet with the tiniest legible text, and a full sheet was hard to fold small enough. The text got mangled in places due to crushing.

I think normal users don't have a printer or a nearby print shop in 2023. (For those with an inkjet printer, the ink has dried and the head seized up since they printed something last year.)

Many people, who I assume to be normal, don't have a wallet separate from their phone these days. They use virtual payment cards on their phone and store paper notes and, if necessary, cash and a payment card inside their phone case. Not useful for "lost my phone" recovery codes, terrible for "my phone was stolen" as it reveals too much, but maybe good for "my phone broke".


The part where you actually have to do it. And the part where you remember where you put it. What happens when you lose your wallet? Or when the paper gets crumpled up and ruined? Or wet? Do they put it in a lock box? What good does that do them when they are in another country? What if they reset their password and have to reset their codes but forget to update the paper? Or what about when you have the old and new codes and can't remember which piece of paper you put on top? How many of them actually go back and verify that the codes work, that the process hasn't changed, and that they can successfully recover their accounts? How about the 50 other accounts they have, each forcing its own unique and totally different 2FA recovery process that isn't like any of the others?

I keep my backup codes in a GPG-encrypted document with copies in multiple places. It's a big pain in the ass, but I know I'm covered. For the vast, vast majority of people this is more theatrical bullshit they won't bother with.


Actually have to do it: I see, but really, dear real user, you are adept at printing pages, you do it quickly and masterfully, just click the button now.

Remember where you put it: the answer is trivial and always the same, "your wallet". The tech support will remind you to look in your wallet if you come to them with your problem.


Yup - let me just go get my "wallet binder" from the storage yard I have to keep it in after adding 800 pages of backup codes (which is literally not an exaggeration - I have more than 800 active accounts between personal/work/contracting).

Let me just bind this fucking book over here, after I ran out of printer ink twice while printing it, and shove that right on into my little wallet flap.

Perfect! Why didn't I think of this sooner!


Most people don't have printers on standby. Most wallets have lifetimes way shorter than those of account rescue codes. Everything else in a wallet - government-issued IDs, bank cards, etc. - has lifetimes way shorter than those of account rescue codes.


> Which part of the "click print, cut or rip out a corner, put it in your wallet" is a nightmare for a normal user? (I'm not one, can't judge.)

- click print - you lost 50%+ of your users there, as approximately nobody has a printer on stand-by at home; if they have it at all, it's a hassle to turn it on, and half the time it's probably broken (ink dried out, etc.)

- put it in your wallet - where? Also, what if you lose your wallet? At least with everything else in it, there's a reasonable process of recovery, usually involving visiting banks and government institutions in person. No such thing for webshit MFA.

This is worth repeating: literally nothing else in your life works like this. There are no other documents that you need to hold on to for a decade or more[0], that are in any way important, and loss of which can't be recovered from. It's an impossible ask for most people, because nobody has habits or even required perspective for such use case.

(What I usually hear from people is, "you should put it your safe". But I don't have one, and I've never (that I know of) met a single person who owned a safe either. It's some American thing, I believe.)

--

[0] - My Google account rescue codes are over a decade old now. I had to use them last year. It's a miracle I still had that piece of paper in my wallet - I've long forgotten about it, but it happened to be put next to a single-page reference for time travelers, so it got transferred to new wallets along with said reference.


Upvoted for the sharp sarcasm that's dripping from this comment.

It was sarcastic, right? Right?


> dozens of randomly generated authentication pages

I have never seen an authentication page be randomly generated.

Elaborate?


I'm explaining it poorly; think about the urls for common authentication redirects and how it usually looks when you go through an SSO portal.

Probably you start at a page like:

sso.company.com

When you try to access a service, you're taken to probably something like

sso.company.com/auth

If your company uses Microsoft or Google, it will very likely flash MS/Google's auth page briefly before redirecting or loading the elements for your company's SSO portal.

After login, probably it will then load something like:

saml_provider.company.com/authenticate/redirect

saml_provider.company.com/[some generated string in the url]/some_action_page

and depending on how it's configured, you might go through a few of those types of URLs with no direct connection to your company or the resource you want; it's just the authentication process passing your authentication from service to service until it finally figures out how to return you to your originally requested resource and passes back an auth token. †

The reason I think this is frustrating is that it's very fast, with no user input, but it is observable by the user: you will see the pages loading and the long URLs, and sometimes some basic info is printed to the page with simple HTML, but the user has no idea what's going on.

Combine this with the fact that this is exactly what happens when you accidentally click on a spam site from search results, and my problem is "how can a user possibly know if this redirect spiral is a legitimate authentication process or if they've accidentally clicked on something compromised?"

† sometimes these auth-spirals don't even take you to the correct item you were trying to get to in the first place; they take you to a generic landing page... Reddit is guilty of this in my experience, where logging in to a subreddit flagged NSFW redirects me to the Reddit front page instead of back to the subreddit I initiated the login from.


Some Lastpass admin page redirects me no joke like 10 times.


exactly; if you know what these systems are doing it's easier to be comfortable with it, but it's still very annoying/long for every single login.

and we've done such a good job of training users to detect suspicious behavior, yet here we are using the same suspicious behavior that spam sites use. It leaves me with a frustrated feeling.


> since these tokens are almost always something like "15 minutes since the last use"

No? They’re almost always “15 minutes since the last use, or 5 hours since it was created”, which results in a completely different security picture.


Why would I focus on your grandma when I can get everyone’s grandma and spread the theft out even more?

Sit in a coffee shop and steal credentials for a local bank all day.


I couldn’t agree more. Security is almost always user hostile (speaking from a UX perspective). I am NOT advocating that we remove security for obvious reasons (a hacked app is also user hostile). HOWEVER - if we can just acknowledge that security is antithetical to an easy to use, user friendly app then we can make appropriate decisions moving forward.

One of my favorite sayings is “if you are not careful, you are going to secure yourself right out of business”. Ease of use is a real thing, and if you don’t figure out a way to make a secure app that is also very easy to use, someone else will and your pleas of “but it is not secure” are going to fall on deaf ears.

Honestly, baseline security is part of “essential complexity” https://ferd.ca/complexity-has-to-live-somewhere.html. Essential complexity can’t be removed, but it CAN be moved around. Right now we are talking about the essential complexity of managing security through managing a user’s session length. The advocated solution is to make sessions short so that they expire quickly. This seems to remove all “accidental complexity” so that we are only dealing with the essential complexity. But this is misleading. There is still complexity in juggling those short sessions. As a designer you think this solution is simple, but what you have done is moved that essential complexity over to your users. THEY must now manage the impact of short sessions. The complexity does not go away, you just moved it to your users, hence it is user hostile.

The trick then, if making the best product is important to you, is to figure out a way of letting users have long sessions but managing it on your side. This seems to argue that you are making your system more complex by adding accidental complexity (which is generally a bad thing). But really what you are doing is moving some essential complexity away from your users onto you. You lower their burden. This is how you make competitive applications.


Is there a term for this kind of thinking, or a type of job role in security that focuses on problems like this? Are there any professional 'strategic rearranger of security complexity' or 'security UX champion' jobs out there?

This seems like it could be a really valuable and maybe also fun role, if one can find an org that has made room for it.


This isn’t a security mindset, it is a product development mindset. You run into problems creating these situations like we are discussing when roles across the company diverge and no one is responsible for the big picture. The security guy doesn’t care about product management, and the product guy usually doesn’t see the value in security. Good founders get this.


A lot of the same problems come up later in the software lifecycle, though. I wish considerations like this could be a factor in purchase and integration considerations.


> This attitude is cancer. Let me throw another quote at you

Nonsense, defence in depth is a core security principle. You should not rely on a single control to protect you.


> Nonsense, defence in depth is a core security principle. You should not rely on a single control to protect you.

And you should not prioritize security over the goal of the product.

The conversation is a discussion of relative value and tradeoffs. Does increasing security make the tool as a whole worse? Sometimes - the answer is yes.

I have a nice set of front windows, but that means a risk of someone breaking through them. I accept that risk for the windows - the extra light and visibility is well worth it, and the windows are not the only way in. Compare to short sessions.


The idea of “adaptive security” is compelling.

E.g. my bank makes me type my password and sends 2fa codes when initiating/approving wire transfers… even when I just logged in a minute ago. If I’m doing 2 wire transfers in a row, it doesn’t care, it still has me fully reauthenticate for every wire transfer.

But I’m fine with that because moving money is something that I’m willing to accept however many roadblocks are thrown at me.

But do I want that happening when I go to post a tweet? Absolutely not.

In other words, let’s adopt the concept of refreshing authentication upon particularly sensitive user actions and have lax requirements in other cases.

We don’t need to think of sessions as “logged in” or “logged out”. It’s possible to design a system where you’re always logged in forever, but you need to reauthenticate based on certain rules or actions given the context of the app and risk/threat.


Agreed - this is a much better approach. The "session" that can do the normal daily tasks for users should last as long as you can make it.

The "session" that can do things like change 2fa/billing/contact-info (decidedly not-normal things) should last for exactly as long as it takes you to complete that form, and should require your pass/2fa again to touch.

This is currently Google's approach, and I find it much more sane.


GitHub does this too.


My bank does that but also:

* logs out after 10 minutes of inactivity - so doing anything that involves switching between an accounting app and it is annoying

* doesn't allow more than one tab open at once - that's just stupid in its entirety.


> But I’m fine with that because moving money is something that I’m willing to accept however many roadblocks are thrown at me.

Really? I'd change banks over that. If I log into my e-banking website, the main activity I'm going to do is pay bills. I would absolutely not tolerate having to jump through hoops to do it.


This is our business account so we're moving anywhere from $20k to $200k at a time.

In the spirit of adaptive security, someone moving $100k will probably be fine dealing with a couple extra password / 2fa prompts... but I agree it would be annoying to deal with this for day-to-day (consumer) bill pay. A bank could throw fewer roadblocks when paying a $500 invoice vs. a $50,000 invoice.

The workflow I described also is part of a dual-approval model where a finance person sets up the transfer and it's approved by a 2nd person (who's presented with a bunch of authentication/password/2fa prompts, almost to a fault).

But again, I'm fine with it because it's large amounts of money in corporate bank accounts. But yea, agreed it would be annoying and should be toned down in a consumer use case.


> I have a nice set of front windows, but that means a risk of someone breaking through them. I accept that risk for the windows - the extra light and visibility is well worth it, and the windows are not the only way in. Compare to short sessions.

Okay. I will compare using house to short sessions.

Short sessions are like having a house where every door has a lock, and I need to use the keys to get into a different room - or if I stay in one room for too long, including the shitter. I also need the keys to open the windows and the oven. And the developer goes, "Well, you shat yourself? That's your fault, should've had the keys on you at all times."

That's what short sessions are. Delusional security clowns ignoring usability. It's less than security theatre, it's security circus.

Requiring re-auth to pay money or delete something important is a reasonable stance.

Requiring re-auth a few times a day just to browse data in the app is not.


That is a straw man argument. Nobody was saying security should be prioritised over the goal of the product.

Security is just another non functional requirement (mostly) of a product.

To obtain good enough security, defence in depth is still a good principle to follow. It means you are not putting all your eggs in one basket. It often means that each individual control does not have to be perfect or massively over engineered.


So in this case, when short sessions are a clear negative for a lot of products, and we have existing examples of HUGE enterprise companies that have agreed and adjusted those sessions to be much longer for most cases...

I would argue that you are arguing to prioritize security over the goal of the product. Right here and right now - you are literally doing it.

> To obtain good enough security, defence in depth is still a good principle to follow.

I don't disagree! I just think that each "defense" needs to actually be considered on the whole, not as just another bonus to security. Short sessions SUUUUUUUUUCK. They make your product shitty. Users hate them. They don't add a ton of security.

Are there products that should still have them? Sure. Probably lots of products in VERY specific places. Should they be the default everywhere? Sure as hell not.


Don’t prioritize security if the root cause of the breach is someone getting access to the session tokens, not that the server's session tokens are arbitrarily too long. That attack might happen once ever, and it doesn’t really matter if they have five or ten minutes - you’re still screwed, because they can just go get another session token next time and be prepared.

Optimize the application to run the best for all the users first and then adjust the security implementation as necessary. Otherwise, you could DoS yourself by trying to be too secure.


"Optimize the application to run the best for all the users first and then adjust the security implementation as necessary. Otherwise, you could DoS yourself by trying to be too secure."

I think it depends on what kind of business you are running and what a security breach means for you or your users. If it's a hobby forum, well, yes, UX matters more. But if you screw up security for anything big-money-related, you probably want to prioritize security first, not after you've lost some billions.


Zero days exist almost every day, and there's nothing you can do about it. So make sure that what they steal, if they do steal anything, is a bunch of encrypted envelopes instead of raw pictures.


Yes, however, in the limit it's also the case that you can have either secure or useful system, but not both.

And then, since the top comment was encouraging a wider view, there's even wider view: business needs. Truth is, short sessions make me viscerally hate the product, creates a desire to avoid using it, and becomes a factor in decision to switch to a competing offering when such possibility arises. Or to not switch - one big reason I'm still using my current bank account instead of another one I had to have to get better mortgage rate, is because the bank operating that other account has short session times and associated pseudo-security annoyances.


Really, the issue here is that security is never treated quantitatively. At least in my experience - are there examples of quantitative security?

In some sense, the problem is that the goal is zero controls breaking, but of course, that also provides no information on security. Intuitively, it would seem that parameterizing security (in a diagnostically useful way) would also require a number of quite different measures. For instance, it matters who (how sophisticated and motivated) your attackers are. And some of the parameterization would be based on use - every time you force someone to type a fixed password, that password is more exposed. Combining all these quantities would just be Bayes-based.

I think an exercise like this could be instructive. For instance, short sessions mean more authentications, which costs usability and adds failure modes. But they also mean that an attacker would need to wait until the next session, or has a smaller chance of hitting a live session. Again, these are quantifiable, at least in rough terms. And this kind of analysis does expose some additional questions: e.g., that short sessions make no sense without guaranteeing the security of the client system (ideally by reinstallation from a known-good reference).

At least in my experience, "security people"


The closest security generally gets to "quantitative" techniques is in applying risk management to threat models.

But the way risk is managed in the industry (multiplying likelihood and impact) is completely incoherent and voodoo. See the book "How to Measure Anything in Cybersecurity Risk" [1] for a good explanation of why it doesn't work and better ways to do it.

Which is a long way of saying, no, security doesn't use quantitative techniques mostly, but it would be possible if people understood how to measure and manage risk properly.

[1] https://onlinelibrary.wiley.com/doi/book/10.1002/97811198923...


But, you need to make sure the extra controls are actually providing depth.

I think in some (very narrow) cases, short session times and aggressive reauth do add depth and can be an effective part of a security program.

But, all too often, defense in depth is used to mean:

* Vendors In Depth, whereby every security tool under the sun has to be deployed (or at least purchased) to achieve “security”. Or, worse, the Noah’s Ark model where you buy two of everything.

* Uncoordinated and/or seemingly random layering of controls that either don’t add to the overall confidentiality, integrity, or availability of the system being protected. Or, worse, are positively counterproductive and actively reduce the real-world security of the system.


> Vendors In Depth

Vendors will always love the phrase 'defense-in-depth' more than they care about assessing whether additional layers of tooling and controls actually provide more defense, because a vague appeal to defense-in-depth is a great way to justify purchasing more security software.

It would be naive to think this doesn't affect the volume of research produced promoting and emphasizing the importance of defense-in-depth either, or how frequently papers on basically any kind of attack end with something that means 'this is why a defense-in-depth approach is needed in XYZ area' even when defense-in-depth is at best incidental to the substance of the paper.

You can trivially tack that on to the end of pretty much any paper about any exploit, which raises the question of how meaningful an observation it really is.


> A short compromised session is still a compromised session. The duration usually does not prevent the attacker from achieving their goal.

Why "usually does not"? The stolen hard drive, leaked data, etc. can happen at any point in the future - one minute to one year from now.

For most of those times, a short session will prevent the attacker from exploiting it.

> The vast majority of ways to compromise a session already give you access far beyond that session itself

For the local system, yes. But not for the remote system.


> For the local system, yes. But not for the remote system.

This is fair, although you've picked a specific case where the local system would not likely access the remote system again after compromise (because theft removes access for the normal user) and an expired session might be helpful as security.

But the other thing about theft is that it also immediately alerts the user, and having a simple "Sign me out everywhere" button is a more robust solution that causes much less user pain and mostly accomplishes the same result.

As for

> one minute to one year from now.

I'm not arguing for indefinite sessions. I'm arguing against short sessions. Rotate it after 30 days if you want (or 5 days, or 1 day - just don't do it every 15 minutes). It'll catch 99% of the cases you're thinking about solving, and it's literally 3000 times less annoying for your users while being nearly as secure.
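The "Sign me out everywhere" button mentioned above is also cheap to build server-side; one common sketch is a per-user session epoch (illustrative, in-memory only - a real system would keep this in the session store):

```python
# Bumping the user's epoch invalidates every outstanding session at once,
# without having to track and delete each one individually.
session_epoch: dict[str, int] = {}          # user_id -> current epoch
sessions: dict[str, tuple[str, int]] = {}   # session_id -> (user_id, epoch at issue)

def issue_session(session_id: str, user_id: str) -> None:
    sessions[session_id] = (user_id, session_epoch.setdefault(user_id, 0))

def sign_out_everywhere(user_id: str) -> None:
    session_epoch[user_id] = session_epoch.get(user_id, 0) + 1

def is_session_valid(session_id: str) -> bool:
    if session_id not in sessions:
        return False
    user_id, epoch = sessions[session_id]
    return epoch == session_epoch.get(user_id, 0)
```

Sessions issued after the bump get the new epoch and work normally, so the user can log back in immediately on the device they still trust.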


> theft is that it also immediately alerts the user

The user may or may not know of theft or leak.

And even if they are aware, they may not remember every remote system they were logged into.

> Rotate it after 30 days if you want (or 5 days, or 1 day - just don't do it every 15 minutes).

So we've gone from arguing that short sessions don't work, to arguing that they work for such a large % of the cases that they could be relaxed.


I don't think my argument has changed a whit - Short sessions cause more pain than they solve. They are a bad security tool for almost all products.

Arguing that short sessions are bad is not the same as arguing that rotation never has its place. Rotation can provide some benefits.

My argument is that EXCESSIVE rotation (aka: short sessions, the whole freaking conversation) is folly.

It's a bad decision usually implemented without thought or understanding (it's on the checklist...), which has a high cost to users, and actively degrades the product.

In return for the costs of short sessions, what are you proposing that your users gain?

Because personally, logging in every 15 minutes for the rest of my life is a god damn travesty of an exchange to make to cover me on the one case where my laptop goes missing. Especially since that's not a very common vector for account theft. It's SO much more likely someone just calls the help center and claims to be me and gets in just fine.


A stolen hard drive is rarely what you're defending against, and arguably might not matter at all if, say, the short-lived session is dead but the long-lived password manager one is up.

Exploits owning software on machine are far more common than machine itself being stolen.

I'd also argue that tying re-login to the sensitive actions is far better way to fight it. Basically have long session for nondestructive actions but short for any potentially harmful ones like changing payment options.


Stolen hard drive = Stolen laptop/phone/tablet


Or decommissioned drives.

I’ve seen businesses put used laptops straight onto the second-hand market without even doing a basic format of the drives, unencrypted. Even less care is taken on the private market.

I've also heard from security friends that there are examples of attacks that succeeded thanks to drives found in the trash outside an enterprise.


The shorter tokens are useless by themselves, but combined with proactive security they can make a real difference. Some concrete examples, based on places I've worked, which provide support for at least not having infinite expiration:

- Attacker steals a Google token from a regular employee. What can they do? No, they can't "make themselves admin, or wire all your bitcoin to their account", because such things are not available to regular employees. Nor can they use email to take over accounts, because all the stuff that matters uses corporate SSO without automated password resets. So all they have access to is the corporate Google Drive, which has a few terabytes of not-so-valuable stuff + important documents somewhere.

With a long-lived token, the attacker has plenty of time to browse around and download valuable stuff, perhaps even doing it slowly so it flies under the radar. But with a shorter token, this does not work: downloading everything will take too much time / trip the alarms, and manual browsing is not fast enough. (I can imagine some sort of sophisticated AI system which auto-downloads interesting docs, but I am sure most attackers have nothing like this.)

- Attacker steals AWS credentials - again, they either have to be fast and get detected (there are alarms on unusually high download rates) or go slow and worry about token expiration. Even if they manage to steal important data, at least the expiration will force them to use faster methods more likely to trip the alarm.

Even with a full machine compromise, short-lived tokens are useful. An npm package which steals all tokens once, at install time, will likely go undetected. An npm package which installs a persistent backdoor will be much easier to detect. So having a long-lived token makes the attacker's life much easier.


> You haven't actually addressed either of the author's points, though. Namely: 1: A short compromised session is still a compromised session. The duration usually does not prevent the attacker from achieving their goal.

The worst hack at a company I worked for happened because an old server that was supposed to be decommissioned was left plugged in and connected to the network. Some time later, a hacker exploited a vulnerability in an unpatched package on the machine and got in (since the server was supposed to be decommissioned, it was not patched when the rest of the machines were).

The hacker used old credentials on the machine to gain access to other machines in our network. If we had been rotating credentials more often, this would not have happened.

We had systems in place to patch and maintain our machines, but once the machine got lost in our inventory management system, it was forgotten about. It wasn’t monitored anymore, and other protections lapsed.

This category of exploit is prevented by credential rotation, because this type of exploit is only possible if a system is neglected.


> This category of exploit is prevented by credential rotation

This category of exploit is prevented in a huge number of ways - not a single one of them is "make my user's cookie/token sessions 15 minutes long"


It could also have been prevented by… not using user credentials on a server to access services? By doing regular checks of your inventory? By using an IDS on your complete network? By not trusting your network (zero-trust security)? By regularly simulating attack scenarios and then checking your readiness? By implementing ISO 27002?


> 1: A short compromised session is still a compromised session.

But an expired compromised session is not a compromised session.

If a session cookie/ID is compromised after it has expired it is no use to anyone.

But if a session cookie/ID has no expiry built in, then it remains an open compromise risk forever.

Situations where we can expect the session identifier and the context in which it is stored to reasonably be under the control of the user for a short time - while they are actively using a site, say - but where it might not remain under their control after they've finished using it, are a real part of real threat models.

If I have some cross-site-request-forgery phishing scam that works on users who are logged in to a bank website, that requires them to have an active session at the time they open their email, or visit my site, or whatever... if their banking sessions time out after 15 minutes of inactivity, they are less vulnerable to that threat than if their browsers keep them logged in all the time.


While security and usability are in some sense opposite poles of a continuum, it’s not strictly a matter of tradeoffs. It’s easy to imagine a decision that considerably decreases the usability of a system without appreciably increasing its security or even decreasing it.


When I think of short session expiry, I think of access_tokens having a limited timespan, so that if they accidentally get logged at the remote end, or the person is fired, their access lapses. This is a good idea.

If we are talking about bounding the lifetime of a refresh token, I agree with the article, but that is not what I think of when I think about credential expiry. Credentials that you send to third parties should be short-lived. I think without that underlying distinction the article is dangerously confusing. Time-bound credentials have many useful features.


> Security is ALWAYS (fucking always, yes really - fucking always) a tradeoff.

It’s helpful to me that ACLs prevent me seeing other users’ files. I’m not trading anything off that I’m aware of.


> I’m not trading anything off that I’m aware of.

Really? Really???? You're not, say... having to manage ACLs, and having to run a system that can enforce ACLs?

Because both of those are tradeoffs. They might be "completely sane and reasonable" tradeoffs! But they are still tradeoffs.

Also - in a more blunt way: Simply not being able to see other peoples files was considered a pretty big negative in some computing crowds when it was first enforced. That's a huge tradeoff: You're sacrificing visibility for privacy.

I think that turned out to be the correct tradeoff in most places - but to be clear - you absolutely are losing a feature to gain a feature.


> Really? Really???? You're not, say... having to manage ACLs, and having to run a system that can enforce ACLs?

In this case, it's a more user-friendly tradeoff, because those burdens can be borne by the infosec staff and other IT staff rather than end users. I'm happy when I can arrange things that way for my own users for sure.


> Security is ALWAYS (fucking always, yes really - fucking always) a tradeoff.

> The most secure application runs completely isolated, with no input or output, and is totally, utterly useless. But no worries - it's secure!

Yeah, as the Rust saying goes “The safest code is code that doesn't compile”


Librewolf has an option to delete cookies on exit. Then you are as secure as possible.


Except if the cookie is copied elsewhere, then that cookie/session needs to be expired?

(but yes, expiring locally reduces the window for copying that cookie)


Pretty sure every browser does

