Remember that phone numbers are only 10 digits long, so brute forcing all phone numbers is totally doable.
Considering that, if you implement any flow that involves checking if a phone number is already in use, then you are effectively leaking to an attacker a list of every phone number that uses your product.
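To make the brute-force point above concrete, here's a back-of-the-envelope sketch; the aggregate request rate is a hypothetical assumption, not a measured figure:

```python
# Rough feasibility estimate for enumerating the 10-digit phone number space.
# REQUESTS_PER_SECOND is an assumed aggregate rate for a distributed botnet.

TOTAL_NUMBERS = 10 ** 10          # all 10-digit strings (upper bound; many are unassigned)
REQUESTS_PER_SECOND = 10_000      # assumption for illustration

seconds = TOTAL_NUMBERS / REQUESTS_PER_SECOND
days = seconds / 86_400
print(f"{days:.1f} days to cover the full space")  # ~11.6 days at this rate
```

Even at a tenth of that rate, the whole space falls in under four months, which is why "only 10 digits" matters.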
It's interesting to wonder why only 5M accounts were affected by this exploit, especially if it's brute forceable. IIRC this vulnerability was widely known about for at least months before it was fixed, so I can't imagine nobody in the know had access to the resources/botnets necessary to enumerate through every account.
Have only 5M accounts linked their phone numbers on Twitter? That's less than 2% of their total accounts (~290M). I don't know what the industry average is for linking phone numbers, but this seems like an exceptionally low ratio.
What percent of mobile numbers do you think are associated with twitter accounts? I don’t know, but it wouldn’t surprise me to find out they had to try 500M or more numbers to find 5M accounts.
Independent of Hollywood, some American cars just might do that. Maybe not in such an impressive manner, but I've been through so many Dodge transmissions and Ford's reputation here is even worse.
joking aside, the 5M figure probably came from targeting like this, such as choosing a few area codes with high tech populations and testing the ~10M phone numbers for each area
Rate limiting should be used to mitigate this, although I suppose a botnet could overcome that to some extent proportional to the size of the botnet.
And for anyone who didn't read TFA, this incident goes well beyond leaking what phone numbers use the product, it leaked the usernames associated with each as well.
Rate limiting isn't meaningfully useful here. For a service we ran, we regularly saw botnets with 100k+ IP addresses making one request an hour to endpoints, which absolutely decimated the backend but tripped no limit that a real user wouldn't also trigger. Even at a couple of requests an hour, a botnet that size could enumerate the entire phone number space in a very short period.
There are "residential proxy services" offering exactly this, and you only ever pay for bandwidth. Using 100,000 unique non-datacenter IPs will only cost you a few thousand dollars as long as you're only sending tiny API requests.
And this is a service offered by a registered Israeli company that gets formal agreement from the "bots" to route traffic through them. Very shady, but a totally legal service that's used by a lot of data-collection agencies for price tracking on Amazon, getting data from LinkedIn, etc.
How do you defend against such an attack? Putting a service behind something like Cloudflare won't bring it down but it will still leak the phone numbers existence, no?
Don't leak whether or not the phone number belongs to an account. All failed login attempts should be some form of "Invalid login" regardless of whether or not it was an attempt against an actual account or not.
Usually you'd try to make the effort/cost no longer worth the data with minimal user impact. For instance, text/email the inputted address with the result instead of displaying it to the requestor through the browser
Or if this functionality needs to return the value, require an authenticated user and impose rate limits based on reputation (which could just be account age)
For instance, Facebook and Twitter used to tell you which profile a phone number belonged to when you put it in the search box (maybe it was this issue). You could restrict that to authenticated users that were 30 days+ old and impose rate limits per day on top of that. A regular user could still look up a few numbers per day but someone enumerating phone numbers would need lots of 1 month old accounts (more effort/cost)
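The reputation-gated limit described above could be sketched roughly like this; the thresholds and the in-memory counter are assumptions for illustration (a real service would use a shared store and reset the counts daily):

```python
# Sketch of a per-account lookup quota gated on account age.
# MIN_ACCOUNT_AGE_DAYS and DAILY_LOOKUP_LIMIT are assumed values.
import datetime
from collections import defaultdict

MIN_ACCOUNT_AGE_DAYS = 30   # assumed: only "aged" accounts may use the lookup
DAILY_LOOKUP_LIMIT = 5      # assumed per-account daily budget

lookups_today = defaultdict(int)

def may_lookup(account_id: str, created_at: datetime.date, today: datetime.date) -> bool:
    """Return True if this account may perform another phone-number lookup."""
    age_days = (today - created_at).days
    if age_days < MIN_ACCOUNT_AGE_DAYS:
        return False
    if lookups_today[account_id] >= DAILY_LOOKUP_LIMIT:
        return False
    lookups_today[account_id] += 1
    return True
```

The point of the design is cost asymmetry: a regular user never notices the limit, while an enumerator needs thousands of month-old accounts to make progress.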
I guess I was thinking more like "limiting the number of attempts" than "limiting the number of attempts over time" -- take time out of the equation (but then NAT causes trouble). But even so, you're right: as the threat landscape approaches the size of the result set, it breaks down no matter what.
That has some problems. If you limit the total number of attempts globally then the feature is effectively disabled, every botnet and script will blow through the attempt budget and real users can't use it. Global limits and IP address limits are not useful, and because we're assuming the user is unauthenticated (using the password reset), we have no other way of distinguishing good traffic.
Captcha comes to mind, but that's a cat-and-mouse game in the age of machine learning (not to mention actual humans working for a bad actor). Cloudflare seems to be on the cutting edge with their newest challenge mechanism, but good vs bad is somewhat distinct from human vs script.
My wife was in charge of security at MySpace back when MySpace was still a thing and there was one occasion that the MySpace team was manually feeding images to a suspected human acting as a bot. As I recall it became clear to both sides that there were humans on the other end and it ended with a picture of a scantily-clad woman and a response of “very funny.”
It's typically smaller though; not every phone number is allocated, and many are in sequential groups. Some are special-cased: you don't need to search any number matching `****555***` in North America, for example, which cuts down on the search space quite a bit.
Try the math, this is a good problem to work through. The position of the 5 doesn't impact the search space like that. 10% of the 10 digit numbers start with a 5. 10% of the 10 digit numbers end with a 5. 5... in your example shouldn't be 1%.
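Worth actually running the counting argument. Fixing 3 digits at a fixed position leaves 10^7 of 10^10 numbers, while fixing a single leading digit leaves 10^9:

```python
# Counting argument for the search-space reduction discussed above.
# Fixing 3 of the 10 digits at known positions leaves 10**7 free combinations:
matching = 10 ** 7
total = 10 ** 10
print(matching / total)   # 0.001, i.e. 0.1% of the space

# By contrast, "starts with a 5" fixes only one digit:
print(10 ** 9 / total)    # 0.1, i.e. 10%
```

So excluding a reserved 3-digit block at a fixed position removes only 0.1% of the space, a rounding error relative to brute-force feasibility.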
Maybe they should store salted hashes of phone numbers.
The purposes of phone numbers:
1. Verify you are a not a bot: no need to store anything except TRUE once verified.
2. 2FA - well use something better than SMS, but if you must, store the hash, and make me enter my number for the 2FA each time. Compare with hash and then send SMS.
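The scheme in point 2 could look something like this minimal sketch; the scrypt parameters are illustrative, not a recommendation:

```python
# Sketch: store only a salted hash of the phone number, and re-derive it
# when the user re-enters the number for 2FA. Parameters are illustrative.
import hashlib
import hmac
import os

def enroll(phone: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store instead of the raw number."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(phone.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(phone: str, salt: bytes, stored: bytes) -> bool:
    """Re-hash the user-supplied number and compare in constant time."""
    digest = hashlib.scrypt(phone.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(digest, stored)
```

One honest caveat: since the phone number space is only ~10^10 values, a leaked salt+hash pair is still offline-brute-forceable per user; hashing raises the attacker's cost but doesn't make the number unrecoverable.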
Hashing numbers has other implications, like support impact (some folks don’t know their own phone number), preventing the ability to offer SMS updates in countries that need it (or to reactivate that feature in national emergencies for countries that SMS support was pulled from), as well as making potential marketing, data mining, satisfying legal requests, and future feature development harder.
So your suggestion is a good one for a privacy-conscious service that doesn’t already depend on (or that is unwilling to relinquish) unhashed numbers, but it probably isn’t in the nature of twitter to seek to protect user data at the expense of existing or future features, even after leaks like this.
Non-geeks dislike the hassle of 2FA enough as it is, having to enter their phone number every time too sounds like it would hurt adoption quite significantly.
With technology like FIDO Passkey built into newer phones (both iOS and Android), I see passwordless multi-factor attested auth becoming the standard for most services very soon. Then, users will have to do even less to get more security.
already doable with e-mail addresses. Doing this with just a phone number is not really a problem; it becomes a problem when you can link the phone and email. Discovering a phone number in itself is nothing more than dialing random numbers and seeing who answers.
So after forcing users to enter a phone number to continue using twitter, despite twitter having no need to know the users phone number, they then leak the phone numbers and associated accounts. Great.
But it gets worse... After being told of the leak in January, rather than disclosing the fact that millions of users' data had been open for anyone who looked, they quietly fixed it and hoped nobody else had found it.
It was only when the press started to notice they finally disclosed the leak.
That isn't just one bug causing a security leak - it's a chain of bad decisions and bad security culture, and if anything should attract government fines for lax data security, this is it.
The whole announcement reeks of "Stop hitting yourself!"
What scum. They had lots of chances to fix this, the first one being not collecting phone numbers in the first place. They chose to do that, and then they didn't adequately protect it, and now they're oh so very surprised that someone might be doxing their most vulnerable users.
If anyone is harmed by this, Twitter should be held liable.
They didn't just fail to protect the phone numbers; they actively used them illegally, marketing services outside the purpose for which the numbers were gathered.
I know the answer is money in politics, SV culture, etc. But it's a near certainty Twitter will continue as they do, and in 2 weeks everyone will move on.
Maybe they get a small boo-boo in the form of a symbolic fine, managers scramble for a bit, and then the whole thing happens again and again.
Because twitter users care more about the convenience twitter provides than they do about the risks to their privacy and security as a result of using twitter. I suspect most have no idea what the risks are, or have only a very limited idea of some of them. Maybe if they had a better understanding of the risks they'd close their accounts and move to something new, but I doubt there'd be enough of them to cause twitter to invest in securing the unnecessary amounts of data they collect.
This sort of thing will only be fixed when we hold companies accountable for failing to protect customer data through regulation with many rows of sharp teeth.
Twitter is vulnerable, the most vulnerable of the big social media sites, it seems. The Musk deal has fallen through, and it seems like Musk was not the only one to lose confidence in Twitter. It could easily go the way of Myspace. How many active users does Myspace have these days?
They also refuse voip numbers. I am now at 20 back and forth emails with Discord support explaining I do not own a cell phone. They are seriously suggesting I buy one just to use Discord.
Yeah. I used to live in a semi-rural area with no mobile phone coverage, and the insane level of disbelief from places when you tell them "I have no mobile phone" was a real problem. Including banks, and other utilities. :(
Perhaps if you paid for discord. I happily pay for nitro because I see value in supporting discord. Still had to give them my number despite already paying them. I'd be happy about that sort of regulation.
I usually don't do ads, but there is a tool called SMS PVA where you can rent phone numbers, specific to a service, for a one-time confirmation. You usually get a working one on the first try.
I can't even count how many companies suggested that I should 'just get a phone number' to use their service.
> The FTC says Twitter induced people to provide their phone numbers and email addresses by claiming that the company’s purpose was, for example, to “Safeguard your account.
> ...
> But according to the FTC, much more was going on behind the scenes. In fact, in addition to using people’s phone numbers and email addresses for the protective purposes the company claimed, Twitter also used the information to serve people targeted ads – ads that enriched Twitter by the multi-millions.
So you're right, it wasn't for "no reason", but it also wasn't just for fraud and spam prevention, security, or any of the other lies Twitter told users.
They no longer use it for ads, so the value now is just fraud and security.
> if it's just to prevent bot signups, why keep it on file at all?
I mean, you need the actual number for 2FA. I guess maybe you could hash it after some amount of time just for blocking bots? You couldn't just discard it or one number could create unlimited bots.
Multiple companies have been caught using information for ads that they said they wouldn't, and Twitter has already proven that they're not trustworthy.
I have seen too many services that ask phone number for account recovery purposes and then end up using it for other purposes for which the user didn't consent. Given how insecure SMS OTP is, I try not to enable that if I can avoid it. Then, on top of it, bugs like this make the service behave like a globally accessible open reverse-directory of mobile numbers to names.
How is twitter notifying users? Has anyone posted screenshots of this notification? I want to know where this notice will appear.
Not defending them but I think a major reason why Twitter (and for example Gmail nowadays) is asking for phone numbers is to decrease spam accounts (which is of course a good thing in itself).
As I said, not defending them. They are likely doing dozens of other things as well. But using phone numbers is a quite effective method of hindering spam/bot account creation - in most countries in Europe at least getting a prepaid SIM requires ID nowadays. Not that Twitter would go as far as to inquire ownership records of phone numbers... but/so you could still go and buy 100 SIM cards if you wanted to, but it'd be way more expensive than just spawning new email addresses.
No spammer ever buys sim cards in store with ID.
5sim.net apparently has direct SS7 access and nearly infinite numbers and offers bulk purchases for receiving SMS. Even for countries like Germany, where ID authentication is mandatory to get a phone number. They have thousands of +49 numbers.
Costs only a few rubles. If you convert it to euros it’s between 1-10 cents, depending on the service and country.
The bottom line is: IDs for sim cards are useless.
Oh, that's interesting. I wonder how they get past regulation in countries like Germany as you said. I'd assume they'd have to be registered as an official operator there?
We consistently have to go through data protection practices and limit the purposes the collected data can be used for. This seems like either a blatant miss in process, or willful ignorance where the $150m fine is under the EXPECTED value of the marketing rewards.
I think you will see more of this class of attack.
Lots of companies have various 'forgot my username'/'forgot my password'/'trying to sign up for a new account with a new email address but existing phone number'/'add a friend by email or phone' flows. It's very easy to accidentally leak some info that shouldn't be leaked while implementing such a flow, since you are peering into the users database querying by email/phone/other identifier while the user hasn't properly authenticated yet.
Yes. The proper way to implement this flow is to ask for the information, and then present the exact same result screen regardless of the actions taken. Any additional information or action should be done exclusively through the contact information you have on record.
And make sure the response takes constant time. Otherwise, a slower response likely corresponds to a real phone number if the backend synchronously did more work for it, such as sending a recovery email. The backend would need to be really slow, however, for the signal to be strong enough to be useful.
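Putting the two points together, an enumeration-safe recovery endpoint might look like this sketch; `lookup` and `send_sms` are hypothetical placeholders, and the latency budget is an assumption:

```python
# Sketch: identical response body and padded latency whether or not the
# phone number exists, so neither the screen nor the timing leaks existence.
import time

RESPONSE = "If that number is registered, we've sent a recovery code."
TARGET_SECONDS = 0.25   # assumed budget covering the slowest real path

def recover(phone: str, lookup, send_sms) -> str:
    start = time.monotonic()
    account = lookup(phone)          # may return None for unknown numbers
    if account is not None:
        send_sms(account, phone)     # side channel only the real owner sees
    # Pad every response to the same duration so timing doesn't leak.
    elapsed = time.monotonic() - start
    if elapsed < TARGET_SECONDS:
        time.sleep(TARGET_SECONDS - elapsed)
    return RESPONSE
```

In practice you'd also queue the SMS send asynchronously rather than pad, but the invariant is the same: nothing observable to the requester varies with whether the account exists.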
No, the binary information too is a privacy concern. For example, one could enter a coworker's phone number to confirm that the coworker has a 4chan account. This isn't good.
> If you operate a pseudonymous Twitter account, we understand the risks an incident like this can introduce and deeply regret that this happened. To keep your identity as veiled as possible, we recommend not adding a publicly known phone number or email address to your Twitter account.
First time I've heard a company actually say this. It's obvious to people who understand a bit about tech and security, but not obvious to the layperson. Twitter actually deserves a tiny amount of credit for giving practical advice that reduces harm to users in the event of a breach.
No, that's just shifting the blame onto the user. If they are asking for something as sensitive as a mobile number, then they need to protect it properly.
They ask for a mobile number to verify you're a real human, then they say "Ha it's your fault you gave us a sensitive mobile number". 99.9% of users only have one mobile, and have no idea how to get an alternate number, so they just give the number they have.
Even so, it's the first time I've seen a company actually imply to the public in plain English that they can't protect private info, rather than maintain a facade of security that doesn't actually exist.
As you point out though, if Twitter requires a phone number to sign up and 99.9% of users use their personal number, then Twitter are basically saying "our security sucks and if you want an account you have no alternative...".
Some interesting corollaries:
- Are there any services that will sign up to twitter on behalf of users? (and would they work or would it be merely shifting trust from Twitter to a potentially less trustworthy party?)
- I wonder if Twitter could consider not requiring personal info at sign up so as to avoid this dark UX
I signed up for twitter a couple weeks ago to follow some ukraine folks. They didn't require a phone number and just double checking my account doesn't have one.
So you have a well-established account from years ago that doesn't have a phone number. Congrats. Now try to get a new account to protect your identity.
Except for a long time they shut down accounts without a phone number under the pretense of "suspicious activity". For some reason, these suspicions could be immediately allayed only by providing your phone number.
Being forced to do something and later being advised not to do that thing out of deep concern for my well-being? Yeah, that's the Twitter UX vibe: the most self-regarding, passive-aggressive person you know, in software form.
Twitter often FORCED users to enter a valid phone number by locking their accounts, and then verified the number was active before restoring access. To this day there is no way to remove the phone number or disassociate it from an account. Please do not oversimplify the offense; that does not do justice to the issues cited.
Two days ago, I tried to create an account tied only to an email. During account creation, the wizard suddenly inserted an additional step and required me to enter a phone number.
I realise this is possibly an anti-spam measure (which I'm in favour of), since I connected through Tor when creating the account. But this procedure stands in stark contrast to the advice given in the article.
Perhaps Twitter needs to make it easier to create accounts anonymously and stop the virtue signaling (i.e. suspending accounts created over its Tor onion service).
With pseudonymous usage of public services, information minimisation (to protect private user data from disclosure by external hackers or rogue insiders) is a mantra that needs to be followed religiously.
I’m six months in and they haven’t asked for a phone number yet. I dread the day when they do. This is where proficiency in the Twilio API comes in handy.
when I started liking "too many" tweets I got hit with it and my mobile carrier (canada btw) refused to deliver txt msgs from Twitter so I could never get verified.
Lucky you. I can't create another twitter account as my number is on a network unreachable by their SMS system. Worst of both worlds for me as when that number was on another network they could verify. So leaked number that I cannot even use to verify a second business account :-(.
Virtue signaling? Preventing completely anonymous accounts doesn't seem to fit the colloquial definition of that; I always assumed it meant taking an action purely for social signalling, one that has no other benefit to you.
How about the fact that Twitter recently launched an official onion service, yet users report that when they attempt to create an account with email over it, the account is locked for 'abuse' in short order?
I certainly understand why you want to use Tor to create a Twitter account, I guess the disconnect is you seem to feel it is fundamentally and obviously wrong to prevent this, but it does seem fairly clear why you'd offer a service to allow logins yet not signups. And in any case, can't speak to why an individual account got banned
$5k seems embarrassingly low for something with such horrendous impact: it potentially allows doxing, and, because phone numbers are the linchpin of so much 2FA and consumer-facing telco security is generally lax, total user hijacking across multiple platforms. What an absolute disaster.
I have found many far more serious bugs, even at larger companies, that have paid me under $500. No one feels security researchers' time is even worth that of the internal engineers creating the bugs.
Anyone have any idea how many of these bounties are collected by people who actively look (seems like a hard way to make a living) vs. say people with some knowledge who stumble across the issue and wouldn't take the time to properly report, otherwise (might convince me to take a couple of hours)?
Turkish law-enforcement authorities have abused Twitter's login system over the past several years. If an anonymous Twitter account was criticizing Erdoğan, they would try to log in, start a password reset, choose the phone-number option, and Twitter would show the last two digits of the phone number.
They also kept a list of people known to criticize Erdoğan publicly but without any insults, against whom no criminal case could be opened directly.
Then they matched the probable phone numbers (by last two digits) from Twitter against these known people's phone numbers. If there was a match, they opened a criminal case.
That person would then be visited by police officers in the morning, detained for several hours, and made to attend hearings for about 3 years, roughly once every 4 months. He also had to hire a lawyer, for around 5 minimum-wage salaries.
In the end he probably wins the case, if he is not the owner of that Twitter account, and Erdoğan pays around 1x the minimum salary toward the defendant's lawyer.
Pretty disgusting that they don't have a tool to check whether they leaked my personal information, which, let's not forget, they screamed and stamped their feet to force me to hand over in the first place.
I never wanted to give you my phone number, Twitter. You demanded it.
Well yeah. Some accounts could be two. If I see language like that in a headline, I pretty much ignore it. It's like when I see the word "may" in a headline. "New wonder drug may cure cancer." That isn't even news.
That's not unusual for a security bug; it's not like this stopped people from using the app in a way that they'd loudly complain about or that would show up in metrics.
Given they didn't think it was exploited they must have pretty poor logging and analytics around that part of their infrastructure. Someone managed to abuse it millions of times and they didn't know about it even after they'd fixed it and knew exactly where to look for abuse.
I said this before years ago about Signal, Robinhood and Coinbase [0] and right now it's 2022 and SMS 2FA is still being used despite SS7 attacks, SIM swapping, one-click zero-day SMS attacks as found in Pegasus and sophisticated SMS phishing attacks. [1]
Really. One needs to think twice about logging into any service that requires ONLY phone-number 2FA, and this should be a wake-up call.
Twitter really should get a massive multi-million dollar fine for this breach.
It's always hilarious: Whenever any company is caught not taking X seriously, the first thing they do is issue a press release that starts with "Here at COMPANY, we take X very seriously!"
A story an old coworker of mine often told was about the CEO at a previous company he had worked for. This guy was apparently pretty scummy in general, but one time he got threatened with a lawsuit for sexually propositioning his secretary.
He settled that issue with an under-the-table payout, but the first thing he did after that was to send out a stern memo to all staff warning them that "we will tolerate ABSOLUTELY NO sexual harassment at this company!"
You can pretty much read a list of company values to find out exactly the things they do only for show.
The companies I've worked for have always ignored any stated values as soon as it costs them money or gets in the way of making money. Which is, you know, always.
> When we learned about this, we immediately investigated and fixed it. At that time, we had no evidence to suggest someone had taken advantage of the vulnerability.
> In July 2022, we learned through a press report that someone had potentially leveraged this and was offering to sell the information they had compiled. After reviewing a sample of the available data for sale, we confirmed that a bad actor had taken advantage of the issue before it was addressed.
Yikes. Sounds like they either didn't dig deep enough to see if it was exploited or they don't keep records long enough to be sure.
This link is not particularly relevant, as it talks about how the phrase "no evidence" is used within a specific community and that community has little overlap with the community which writes press releases after security incidents.
Security incident response teams do not have the same strange distinction between "real" evidence and the non-published non-peer-reviewed evidence which cannot be relied on or even really mentioned.
Probably the latter: all companies operating in the EU have had short (i.e. 30-day) retention policies on anything user-identifiable (e.g. HTTP logs) for a while now.
But if they didn't keep sufficient logs, they should have alerted the users back then, not now.
AFAIK there are exceptions to the 30-day window for many purposes: taxes, law enforcement, "critical business functions", etc. Tax records, which can be quite personal and PII-heavy, need to be kept for ~7 years in the US, for instance. Anything that needs to go to law enforcement stays around until the court case is over, which can be longer.
For security reasons, IP addresses need to be available in plain text. There is no time limit on how long you can store the data, but you need to be able to justify why you keep it.
No, that's not valid at all! You must remove any trace of your ability to reverse-engineer the IPs. Hashing isn't sufficient, since it's so easy to run over the whole IPv4 space. This is one of the trade-offs.
You could probably make the argument that you need to store http logs with cleartext IP addresses for more than 30 days for operational security and fraud detection reasons. I would certainly consider 180+ days of cleartext IP addresses quite necessary to be able to react to any security or abuse incidents.
You can if the hash collides within the IPv4 address space, i.e. it's a hash of fewer than about 16 bits. That's enough to let you roughly see if something fishy is going on, but you can't reverse-engineer to any specific IP, only to a set of about 65 thousand.
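A minimal sketch of that lossy bucketing, assuming a per-log-period salt (the salt rotation scheme is an assumption, not from the thread):

```python
# Sketch: reduce each IPv4 address to a 16-bit bucket. Each bucket then
# corresponds to ~65,536 possible addresses (2**32 / 2**16), so a bucket
# value can't be reversed to one specific IP.
import hashlib

BUCKET_BITS = 16

def bucket(ip: str, salt: bytes) -> int:
    """Map an IP to one of 2**BUCKET_BITS deliberately colliding buckets."""
    digest = hashlib.sha256(salt + ip.encode()).digest()
    return int.from_bytes(digest[:4], "big") % (1 << BUCKET_BITS)
```

The bucket values stay useful for spotting repeated abuse patterns in long-term logs while destroying the ability to name a specific address, which is exactly the trade-off the sibling comments are arguing about.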
That isn't good enough. Taking that hash plus old request data, combined with your current request logs, is enough to de-anonymize a significant portion of those logs, putting you out of compliance.
Data from the past few days we do have a legitimate interest in: protecting our network. If someone is spamming us, we need to be able to find out who did it, and the only way to do that is with deanonymized logs to begin with. At least in my workplace we have worked with the DPA to ensure that we are in compliance, and there is no issue keeping around 7 days of IP logs without further anonymization. All our long-term logs are hashed below the bit minimum, and they can't be paired with old request data as easily, since we strip all but major version identifiers from User-Agents, for example.
If something uniquely identifies someone, it's considered PII, and a salted (but still useful) hash of the IP address is exactly that, at least under the GDPR. That means you would need to throw away the salt and use a different salt for every instance. At that point, you might as well replace it with a random string, and that isn't very useful.
"In the context of the European GDPR the Article 29 Working Party has stated that while the technique of salting and then hashing data “reduce[s] the likelihood of deriving the input value,” because “calculating the original attribute value hidden behind the result of a salted hash function may still be feasible within reasonable means,” the salted-hashed output should be considered pseudonymized data that remains subject to the GDPR."
Under the CCPA, I think that is enough. HOWEVER, businesses must implement processes that specifically prohibit re-identification. So again, not useful at all in this case.
The question should be whether an IP address is PII or not. Under the CCPA and GDPR it is, but only if it "identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household."
Out of curiosity, why is it only 5M and not 500M? You would think the same vulnerability applied to every server, not just one or one cluster, if they are using automated deployments
Doing it slowly over time to avoid raising an alarm while collecting the information, rather than Twitter noticing a massive uptick in password resets that don't go through?
"We have no evidence that this was exploited" is a standard psychological trick they pull in vulnerability announcements to give an unfounded impression that it hasn't been exploited.
I always wonder who "we" refers to in that usage, legally speaking. Does it refer only to a subset of employees / board members who are authorized to speak for the company? Because then even if someone analyzing logs sees something damning, if middle management is trained to stop that knowledge from reaching the top, then those speaking for the company can continue saying "we" didn't know it.
Tech needs regulation like the finance industry in this regard. Regulation that can push responsibility for breaches up the chain. There must be ways to escalate and if something is seen and reported but not acted on, then liability goes upwards.
CEOs in Finance and Banking do A LOT of compliance work and it does catch a lot of problems.
really? what do you mean 'middle management is trained to keep that from getting to the top'? intentional malfeasance?
where I work people are trying their best but dealing with complex systems, memories, and methods of communication. because of this, security issues are sometimes missed, sometimes poorly communicated, and sometimes poorly remediated.
I guess to poster's claim, "I have seen this happen" is an existential claim, not a universal one.
Fwiw, I've ended up being "middle management" at a large company, with a deep technical background, and I'm trained and incentivized to report, escalate, inform, communicate, share, and otherwise ensure it's addressed up the bloody wazoo. I get slapped on the hand for not communicating/informing enough, never for communicating too much. Over 2 decades, I've never seen my executives try to cover something up. "Manage the narrative", sure, but that's largely about how they craft a sentence, not about not reporting.
However, I have also witnessed corporate cultures elsewhere (as an embedded consultant) where each layer is terrified of the layer above, and each layer is heavily punished for reporting "bad news". They were institutionally set up to fail at project deployment, since risks are not escalated and they proudly plunge forward. I'm not sure how much of it was top-down knowing obstruction to hide things, as much as everyone having been electroshock-therapied into avoiding the bad experience. Taking the most cursory look at the most basic logs and saying "whee, no evidence of exploit!" would be par for the course :-/
Probably not 'trained' as much as 'heavily incentivized'. Nobody wants to be the messenger that gets shot for bringing bad news. Much easier to cover up and tell the big boss what they want to hear as long as you can.
This certainly happens. If you speak to a corporate lawyer about a potentially sensitive issue, they will encourage you to use the phone, don't put anything in writing, and don't tell anybody especially not higher ups in the company, until you sort things out with them first.
Seems both ethically questionable and maybe not the best strategy for the individual if they're being instructed to keep information to themselves instead of passing it up the chain in the company. Is that intended to keep just that employee responsible for whatever mess?
Right, but how so? A person or company can get into trouble with things being written down or made known to others. Having a lawyer consider it first is legally prudent and is entirely reasonable and common advice given out to any person (don't speak to police/regulator/other party/internet/newspaper/etc before consulting your lawyer). If you think that's ethically sound advice for a person, then what changes the calculus for a corporation?
> and maybe not the best strategy for the individual if they're being instructed to keep information to themselves instead of passing it up the chain in the company. Is that intended to keep just that employee responsible for whatever mess?
Probably less instructed to keep it to yourself, more encouraged to stick to "official" reporting channels, and then when you do that or come into contact with such issues by other means, more encouragement to use the phone.
And it completely depends on what it is as to the intention, I guess. Initially, it's so that the lawyers are able to consider and advise. But sure, you aren't paying the lawyer, so they are only taking care of your interests insofar as they coincide with the company's. So if you had a concern that you would be held responsible for a legal problem, or are a victim of a criminal or civil legal matter involving the company or another person in it, then I would say you should consider discussing that with your own lawyer.
It means silos and information hiding are baked in — as a matter of corporate culture — at least in part to preserve the option of plausible deniability for statements like Twitter’s.
It would be more honest to say "We aren't able to determine whether it was exploited" which could better brace potentially impacted users for the possibility they might be affected.
This is a relatively benign case but the same language is used in other breaches when people should be taking measures like freezing their credit or reviewing financial transactions.
The only thing that could happen with the data would be that it is exploited.
The only thing that happens to stolen cars is not going to the taliban.
These are not even similar in nature. They aren't saying "the data was stolen". They also aren't saying "the data was available for exploit; we are unable to determine if that occurred."
What if they never looked for evidence of unauthorized access? They wouldn't have any!
This is the same as modern science and medicine frequently using the academic phrase "no evidence" when what they mean is that there has been no investigation.
It's more like saying "I left my car in a shady neighborhood unattended for 72 hours with the doors open and the key left in the ignition, but I haven't been keeping track of the mileage or the fuel level, so I'm not aware that anyone used it while I was away."
Nothing would have stopped someone from using it. Probably best to assume that they have.
You can make positive assertions though. E.g. the attack might have been simple, in which case it's possible to produce indicators that cover 100% of variants. Or it could have been complex, and indicators either don't cover every possible attack or they produce a large number of false positives.
Another thing to mention would be how far into the past you were able to look. E.g. in this case they found out that the bug was introduced in 2021; were they able to inspect logs covering all of that period, or did they only have limited logs/other evidence, making it impossible to know whether anyone used this opportunity or not?
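To make the "indicators" point concrete, here's a minimal sketch of the kind of check an investigation could run over whatever access logs survive: flag clients that look like they're brute-forcing the contact-lookup endpoint (many distinct numbers tried, mostly misses). The log format, endpoint path, and thresholds here are all assumptions for illustration, not Twitter's actual logging.

```python
from collections import Counter

# Hypothetical log format (an assumption, not a real Twitter log):
# timestamp,client_ip,endpoint,phone_number,result
SAMPLE_LOG = """\
2021-06-01T10:00:00,203.0.113.5,/i/users/contacts,+15550000001,miss
2021-06-01T10:00:01,203.0.113.5,/i/users/contacts,+15550000002,miss
2021-06-01T10:00:02,203.0.113.5,/i/users/contacts,+15550000003,hit
2021-06-01T10:05:00,198.51.100.7,/i/users/contacts,+15559876543,hit
"""

def enumeration_suspects(log_text, min_lookups=3, max_hit_rate=0.5):
    """Flag clients whose lookup pattern resembles brute-force enumeration:
    many numbers tried, most of them misses (thresholds are illustrative)."""
    lookups, hits = Counter(), Counter()
    for line in log_text.strip().splitlines():
        _, ip, _, _, result = line.split(",")
        lookups[ip] += 1
        if result == "hit":
            hits[ip] += 1
    return [
        ip for ip, n in lookups.items()
        if n >= min_lookups and hits[ip] / n <= max_hit_rate
    ]

print(enumeration_suspects(SAMPLE_LOG))  # flags the miss-heavy client
```

Of course, this only tells you anything about the period the logs actually cover, which is exactly the caveat above.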
How about we don’t use terse language and a short blog post to describe a complex thing and instead talk about what happened, what you did to investigate, WHY you couldn’t determine if it was exploited, and what the heck you intend to do about it? How about some facts and transparency? How about some real honesty?
> instead talk about what happened, what you did to investigate, WHY you couldn’t determine if it was exploited, and what the heck you intend to do about it?
This will be read by optimistically 1% of people, the rest will just catch the summary. This way, you at least get to write the summary.
Well, “after investigating by <insert actual efforts taken here>, we were unable to find evidence it was exploited” would be a good start, as it would indicate some effort was put into disproving the hypothesis.
It provides close to nothing, because it doesn't indicate whether there was no evidence because there could be no evidence (you keep no logs), or whether there was no evidence in spite of copious information kept that definitely would have shown it if it had been exploited.
“We have no evidence” strongly implies some sort of extensive forensic dance was performed, and was fruitless. “We have no way of knowing” sounds much more like epistemological resignation. “Evidence” is a pretty loaded word to use.
"We have no way of knowing" may not be a correct statement. There could always be a way to know that you missed. It would be inhuman to claim "we have no way of knowing" in circumstances like this.
But then you might as well just assume everything is compromised, at all times, even if there's been no announcement. They could just not be telling you.
Which is maybe not the worst strategy, but it's going to be pretty exhausting.
I'd suggest that instead we should just expect and enforce a certain amount of openness and honesty from companies when they fuck up in this way, so we can make informed decisions.
Well, yes - this is the dilemma which is not resolved with empty platitudes, even though "you can't prove a negative."
In the US and elsewhere, there are already some penalties for covering up a problem, and they should be expanded commensurately with the potential harm.
I mean, in practice what it tends to mean is that the logs only had a 3-month TTL, so it really could be either way. "No evidence" implies there is at least a place there could have been evidence, they looked, and didn't find any, which is a weak but nonzero update towards it having not happened. It would be nice if they clarified exactly what they checked.
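The log-TTL point is just date arithmetic: if retention is shorter than the exposure window, some portion of it left no evidence either way. A sketch, where the 90-day TTL and all dates are assumed for illustration (TFA only says the bug dates to 2021):

```python
from datetime import date, timedelta

RETENTION = timedelta(days=90)        # assumed 3-month log TTL
bug_introduced = date(2021, 6, 1)     # assumed introduction date
bug_fixed = date(2022, 1, 1)          # assumed fix date
investigated = date(2022, 8, 5)       # assumed investigation date

# Earliest activity still visible in the logs at investigation time:
oldest_visible = investigated - RETENTION

# Days of the exposure window that fall before any surviving log,
# i.e. the stretch where "no evidence" is guaranteed regardless of exploitation:
unobservable_days = (min(bug_fixed, oldest_visible) - bug_introduced).days
print(oldest_visible, unobservable_days)
```

With these assumed dates the entire exposure window predates the surviving logs, so "no evidence" would be vacuously true.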
> "no evidence" implies there is at least a place there could have been evidence, they looked, and didn't find any
Yeah I'd never assume that any of that is true. Sure, there probably are ways twitter could find out if something has been being exploited like evidence in server logs or new batches of accounts showing up for sale on the black market, but I wouldn't trust that they looked for them, or that they looked very hard, or that the person making press statements was told about it either way.
If a company has a financial incentive to not find information it's weird to assume they'd seriously look or be trusted to be honest about what they found.
Other than being a psychological trick, what purpose could pointing out the lack of evidence at the time serve? Instead they could have written something like "We found the problem in 2021 and promptly fixed it. We first learned that it had been exploited in 2022."
That is not a normal statement if it is your company's fault the question even came up.
"We left a giant tub filled with cyanide completely unsupervised in front of our door for months. We have no evidence that it was used to murder someone."
"We left our gun outside, unsecured, but no one has complained they were shot with it and we didn't detect any fingerprints on it when we finally noticed it wasn't locked up properly"
> "We left a giant tub filled with cyanide completely unsupervised in front of our door for months. We have no evidence that it was used to murder someone."
No one would say that second sentence. If you don't have evidence of something, you don't state that, because the set of objects and events that didn't happen is infinite.
"We left a giant tub filled with cyanide completely unsupervised in front of our door for months. We have no evidence that someone accidentally fell into it, an animal died in it, it was used in a bank robbery, someone's cell phone slipped it in............"
"That person owns a gun legally, we have no evidence that he used it to murder someone"
I wonder, if you destroy all the evidence this was exploited, can you still claim you don't have any evidence this was exploited? Asking for opinions from non-lawyers only please
Don't currently have? Sure. The quote says "At that time, we had no evidence" so I think that would be harder to argue. You could maybe make the case the statement means: At that specific moment we didn't have any evidence because we already destroyed it. But it certainly implies they mean they had not found any before that point in time.
Works the same way with government. The "I am not aware of ..." is a great trick for when your organization is intentionally siloed. The folks who get subpoenaed are left out of the detailed info. It's a complete non-statement.
I could bring up examples across both sides of the aisle. It's all a big game.
haha. I am a lawyer, so sorry, but while you might be able to claim that, you are legally and ethically obligated to also divulge the intentional spoliation of the evidence.
It would be a lot more convincing if they said they put a team on to it to investigate extensively and didn't find anything indicating it was exploited.
Absence of evidence IS some evidence of absence if you look thoroughly. It sure isn't anything of the kind if you haven't actually tried to gather the evidence or are aware of giant holes in what you were able to gather.
Saying there is an absence of evidence (of a leak) isn't useful by itself unless they also indicate whether that is evidence of absence (of a leak). I.e., they should indicate whether it is likely that they would have caught it if a leak had occurred (e.g., via extensive logging).
Provide some level of detail on how they looked for evidence. "We have no evidence" could mean "we didn't bother looking for evidence" or "we looked extensively for evidence but didn't find any." In fact, the company has an incentive not to keep logs or collect evidence, specifically so they can truthfully claim they don't have any evidence of a breach.
It's not a trick. Incident response (not vulnerability announcement) is all about evidence. If you can't prove it, it didn't happen. They can probably still take precautionary measures, though, which the announcement is part of.
That's why I referred to it as a psychological trick.
They should be open and forthcoming about their level of confidence, instead of using the least worrying language they can offer while remaining technically correct.
You seem to believe "we had no evidence to suggest someone had taken advantage of the vulnerability" implies "we looked for any evidence of it", it doesn't, not in that case nor in any similar situation.
Yes I wonder about this as well. Say Musk had good reasons to suspect some private information was at risk and Twitter kept denying anything was going on. No matter how minor the actual impact would be in the end, this would not paint Twitter in a favourable light especially in a legal battle where Musk claims Twitter held back vital information.
The page isn’t loading for me and I notice Twitter itself is either slow or not loading at all right now. I also see a spike in reported problems for Twitter on DownDetector.
You know... in the last major tech bust, downsized teams working on oversized software didn't have thousands of production services to maintain. What's a company with 10k services and 10 languages going to do when it comes time to patch security vulnerabilities? Or merely keep them from emerging?