Equifax IT staff had to rerun hackers' queries to work out what was nicked (theregister.co.uk)
247 points by wglb on Sept 19, 2018 | 165 comments



I know for a fact that they also had Splunk ES bought, built, and running; then the CISO had them turn off the alerts because they were "too noisy", and sure enough, they could see all this happening when it was retroactively inspected.

There are so many things wrong in that sentence but my most glaring question is: why on earth was she even receiving the alerts? That's so far below her level. So, there's more here than just the cert issue. There's also just a complete disregard for a secure mindset from the person that is supposed to be setting that mindset in the culture, along with what sounds like micro-managing to a state of no-managing.


An Equifax exec went to Panera Bread (of all places) and later had trouble even understanding a security researcher telling them they had an API with customer data that was easily found and queried from the internet.

The executive got confused and suspicious when the security researcher offered to send additional data and talked about sending that data securely via PGP (it seems like he simply didn't know what the request for a PGP key... even meant).

https://arstechnica.com/information-technology/2018/04/paner...

It would seem a lot of Equifax executives are happy to act on their own in spite of their own ignorance.

I know a handful of folks in security; they run into complete morons in the security industry all the time. It's scary how wildly variable the talent in the industry is at all levels.

To some extent that talent variability is everywhere, but the sheer number of people who are so clueless that they could be described as a liability in some situations is surprisingly high when it comes to security. I know security folks who have changed jobs simply because they didn't want to be associated with someone who took over at their employer. Reputation among the folks who know seems to be a big deal to many of them.


Oh I remember this. Dumbass had no idea what kind of issue he had on his hands and just blew it off not realizing the level of blowback and publicity that would occur if the whitehat decided to publicly announce it. Boggles the mind how these people get such high paying jobs.


It is so confusing as the pattern of:

1. Dude contacts you.

2. You fail to work with him / or just don't.

3. It becomes public and you look like idiots.

That pattern is so established that every security executive should know you want to avoid that... but nope, some run headlong into it.

I'm no security dude at all, and I can recognize that pattern just from reading the news. What on earth is an executive reading if they have no idea?


A fair number of people wind up in executive security roles with no technical background to speak of.

That said, for every legitimate security researcher, there are dozens of people who are blowhards. Bug bounty programs, for example, are stuffed full of "[CRITICAL][ACCOUNT TAKEOVER] EMERGENCY VULNERABILITY"... which is actually some skiddie reporting someone being able to XSS themselves.

On the outside, you mostly just see the ones that go bad. On the inside, any given report is unlikely to be nearly as severe as it's claimed to be.


Oh I hear ya. The lack of knowledge extends to the "researchers" for sure.

The unfortunate thing about the Panera Bread deal was that any dude could have just pulled up Postman, hit the API, and seen the results / issue in a couple seconds, plus however long it takes Postman to start on their computer.

They didn't even try; they could have ruled out "crazy researcher" in seconds.


I do security consulting and that very situation is common. I know several BigCorps that specifically turned off Splunk alerts because they were too noisy. Some of them had valid reasoning for it, others didn't.

There's also no guarantee that if they did have the Splunk alerts enabled, that they would have caught anything. In one instance I can think of, having Splunk alerts enabled did more harm than good because it meant SOC staff were busy chasing down hundreds of false alerts (aka the "noise" in "too noisy") and missed real threats. With the alerts off, the staff could use other more reliable sources of threat notifications and catch more of the real threats.

Unless you have a big enough staff, or have Splunk (and your monitoring team) tuned well enough to actually sort through the alerts effectively, they truly can just be "too noisy".


> Unless you have a big enough staff, or have Splunk (and your monitoring team) tuned well enough to actually sort through the alerts effectively, they truly can just be "too noisy".

At some point, the size of your staff doesn't really matter, unless it's at roughly (number of alerts) / 2. Now not only do you have the alerting noise, you also have the added noise of all the intra-team communication. Cleaning up those alerts needs to be the first priority.

Not to mention, usually such a barrage of alerts means they are very poorly designed.


I don’t get it. Can’t you tweak the settings on these alerts until they’re effectively never sent during normal operations?

I mean, that might take you a day, maybe a week to catch the edge cases, and then it’s not a problem any more...
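
Conceptually the knobs look something like this toy sketch in Python (made-up thresholds, not any vendor's actual config): fire only when a condition persists, and at most once per cooldown window.

```python
import time

class Alert:
    """Two de-noising knobs: fire only after `threshold` consecutive
    suspicious observations, and at most once per `cooldown_s` seconds."""

    def __init__(self, threshold: int, cooldown_s: float):
        self.threshold = threshold
        self.cooldown_s = cooldown_s
        self.streak = 0
        self.last_fired = float("-inf")

    def observe(self, suspicious: bool) -> bool:
        self.streak = self.streak + 1 if suspicious else 0
        now = time.monotonic()
        if self.streak >= self.threshold and now - self.last_fired >= self.cooldown_s:
            self.last_fired = now
            return True  # actually page someone
        return False

failed_logins = Alert(threshold=5, cooldown_s=3600)
for suspicious in [True, True, False, True, True, True, True, True]:
    if failed_logins.observe(suspicious):
        print("PAGE: sustained suspicious activity")  # fires exactly once here
```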


Oh yeah, these tools are all trainable

In my experiences though, there’s no time to become an expert anymore

You get it set up, then a PM or manager with no clue redirects you to the next priority. Meanwhile the new monitoring tool just hangs out being underutilized at $20k/mo.

Not to say all PMs or managers are clueless, but there are more clueless ones out there than not. They are noise generators themselves.

Or, IMO worse, the sycophant coders who concern themselves with their own personal metrics and release dumpster fire after dumpster fire. And those of us in devops and secops are fighting their idiocy all day, not training tools to catch intruders.

I’ve worked with far too many people who epitomize that meme of Captain Kirk where he can’t hear you over how awesome he is


Turning off alerts is a quick way to ensure that they are effectively never sent during normal operations.

Most days, it is not incorrect.


> Most days, it is not incorrect

This feels a bit like saying a stopped clock tells the correct time twice a day.

Turning off the alerts doesn't cause a problem unless there was something worth alerting about that day.


And how did you get into this type of consulting?


Depends on how big their team is and its structure with regard to alerting responsibility. One would expect one of the big credit rating agencies to have a large IT team, but old school corporations view IT as a cost center and like to run skeleton crews (which can flatten out administration hierarchies).


I work for one of the big three and we generally are pretty lean when it comes to staffing.

Our CEO likes to claim it benefits us because then they don't have to do big layoffs during downturns.


"Can't chop off heads if there aren't enough to start with" .... now there's MBA logic for you.


Well, if there are zero consequences to failure, why spend money preventing it? Maybe it does benefit companies in that position.


I wrote a lengthy response on how suppressing false alarms is absolutely the right thing to do, then realized that the CISO was the one requesting it.

That really makes no sense. The engineers should be the ones determining which ones to adjust and which to suppress. Middle managers are already too far removed from the productive chain to even make the determination if an alert is false or not, let alone C-level.


> Middle managers are already too far removed from the productive chain to even make the determination if an alert is false or not, let alone C-level.

This is assuming the security team was big enough to have that many layers to create such a disconnect...


It's very likely that she comes from corporate and thinks she has to know everything. I worked for a guy like that. He wanted everything on his plate, and simple things like a Confluence update could take 1.5 years because of it.


> I know for a fact that they also had Splunk ES bought, built, and running; then the CISO had them turn off the alerts because they were "too noisy", and sure enough, they could see all this happening when it was retroactively inspected.

Splunk (ES or not) is not a magic bullet. It generates a lot of bullshit to justify its expense.

Finding the signal in the noise is easy when you know what the signature of the largest data breach in history was supposed to look like. What exactly does this prove?

> Why on earth was she even receiving the alerts? That's so far below her level.

Knowing the C-levels there, they likely wanted earlier warning of emerging incidents-- EFX and their subsidiaries had been dealing with a steady stream of incidents (some public, some not) for a while before the big one. Krebs covered some of them.


The incredibly frightening thing is that I'm no longer surprised about the fact that people like these still remain completely employable.


Unfortunately, management roles are often inherited, instead of being earned.


> then the CISO had them turn off the alerts

I'm reminded of the Target breach.


Meanwhile Equifax stock is at ~130, compared to the ~145 it was at before news of the hack broke, with its lowest dip at ~90 and a long time dragging along at ~110.

¯\_(ツ)_/¯

I'm seriously considering that on the next massive data breach of a publicly traded company I'm gonna buy some stock at the onset. Even though it feels like betting against my own principles, it just seems like too good of an opportunity to miss out on...


I've come to the conclusion that whatever is computerized is going to be leaked. Health records? Might as well open source them for all I care. Red team is tenacious and the steps you have to take to actually keep everything secure make you seem unreasonable.

I've moved on. It's more important to focus on election security and physical security at this point. Infosec is dead. Five years ago Bruce Schneier wrote a book called "Carry On"; now he has one called "Click Here to Kill Everybody".

Just five years.

We're in a technological growth feedback function and we're unable to predict it.


Author David Brin has (relatively) long advocated that we have two options: shoot for privacy resulting in those with power able to access the "private" info, or have it open where everyone can access. He then argues for the latter because it minimizes rather than maximizes abuse. (I'm paraphrasing and summarizing, possibly poorly)

I struggle with being _comfortable_ with that idea, but I don't have a problem believing the technical accuracy of it.


Interestingly (because the parent is about Schneier), IIRC Schneier argues against this.

The argument is that even if everybody has access to all information, only a small group is able to exploit that access effectively.

So, you still end up with a small group abusing their access to the information. You should therefore aim for privacy of sensitive information.


Yeah, I think a good example of this is thinking about something like license plate scanning.

Technically, this is not taking advantage of any private information. It is just capturing images of things happening in public. Anyone could set up a camera and capture license plates as they drive through an intersection.

So is it totally fine for a police force to scan license plates at every intersection?

The question is complicated, because the effect on people changes when you do something that is not worrisome on a small scale in an automated way at a large scale. When you add in that the people doing it have a large amount of power over others, it changes even more.


Well hold on, the difference there is the police can cross-reference scanned plates against other databases the public doesn't have access to, yeah? Or is there some way to ID plates against names? If there is I wanna know, cause I wanna ID whoever knocked my motorcycle over.


If you had near continuous location information for a license plate it would be relatively simple to use it to narrow it down to a small group of people.

I’d imagine it would be relatively easy to figure out where someone works and/or lives and just go find them.

Correlating across multiple public databases might make things even easier.
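
For instance, here's a toy sketch (entirely invented sightings) of how continuous plate reads give away where a car sleeps:

```python
from collections import Counter
from datetime import datetime

# (plate, timestamp, camera location) -- entirely made-up data
sightings = [
    ("ABC123", "2018-09-17T23:41:00", "Oak & Elm"),
    ("ABC123", "2018-09-18T08:05:00", "5th & Main"),
    ("ABC123", "2018-09-18T23:58:00", "Oak & Elm"),
    ("ABC123", "2018-09-19T08:02:00", "5th & Main"),
]

def guess_home(plate: str) -> str:
    """Where is the car most often seen late at night? That's where it sleeps."""
    late_night = Counter(
        loc
        for p, ts, loc in sightings
        if p == plate and not 6 <= datetime.fromisoformat(ts).hour < 22
    )
    return late_night.most_common(1)[0][0]

print(guess_home("ABC123"))  # "Oak & Elm" -- now go knock on nearby doors
```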

But I don’t think this is the point of the argument. The point is that those in power can gain more benefit from the information than the general population.

Let’s say all votes in elections become public. That information is more useful to a powerful political party than to the public at large, or a small unfunded group. If you don’t want that information abused, the argument goes, you should make sure it’s kept private.


In practice, the forced revelation of information makes individual privilege and power more important. When everyone has to play with their cards on the table, so to speak, then people who feel like they can be themselves without consequence do so freely -- these generally being people with support groups of like-minded people, and who are neither economically nor physically vulnerable. People who are more vulnerable to consequences use concealment as a method of protection: it makes it possible to speak freely about controversial subjects, or even about any subjects, without fear of harassment.

-- Yonatan Zunger, chief architect, Google+

https://plus.google.com/+YonatanZunger/posts/WegYVNkZQqq


For anyone curious for more information, Dr. Brin's largest work on the subject is the non-fiction collection of essays called The Transparent Society: https://en.wikipedia.org/wiki/The_Transparent_Society

Published in 1998, I think it's probably still the most fascinating work on privacy of the post-internet age.

More fun fictional explorations of the subject can be found in his novels Kiln People and Earth.


I worked in a factory that produces critical parts for both the nuclear industry and the silicon fabs. Their main software stack is old DOS software with a Win95 front end bolted on, running in compatibility mode, with an integrated database that went obsolete in the early '90s.

I asked them what protections they took against hacking, and they told me that nobody would be interested in what they do, so they didn't have to spend any money on protecting themselves.


security by obsolescence is a thing too


It's actually one of the security layers for America's nuclear arsenal

https://www.businessinsider.com/hacker-us-nukes-report-2016-...


Security by obsolescence isn't the thing where you just dump old code on internet-connected computers and steadfastly refuse to do any form of technical assessment other than "Does it run?", due to the assumption that nobody would be interested in screwing you over, though.


Was that computer connected to the Internet? If not, then no problem.


Define "the Internet". Do contractors plug laptops into the network with these "air gapped" computers? How about IoT devices? Printers? I can't recall who said it, but a venerable infosec practitioner once said "Think of an air gap as a very high-latency connection."


Air gapping is often "good enough" protection for all but the most sensitive and desirable targets, even if people plug things into them all the time. An air-gapped PC might get a virus that way, which could be damaging in some way, but it's not something that most companies need to worry about.

A counter-example is something like Stuxnet; that likely leapt an air gap, but it was exquisitely targeted for that scenario. I'd worry about this as a nuclear component manufacturer.


Stuxnet was spread via a USB stick. You know what doesn't have a USB port? A computer from 1995. :)

An ancient piece of air-gapped hardware is pretty bulletproof. I bet the NSA would have more trouble hacking it than a fully up-to-date install of <modern OS> that is on the Internet.

It's very possible the biggest threat vector is leaving the door unlocked.


Even though I'm largely against air gaps as a defensive measure, there is something to this. Though I think the breadth of the problem isn't properly appreciated. It has too many dimensions. There are too many unforeseen interactions and complications between computers and the humans in the organizations that run them. Even leaving aside moles[0], people trust too easily and often have interests that are at odds with their organization.

[0] Which, why should we? The world has moles.


Yes, of course it was.

Anything with a network port was plugged into the LAN, which was plugged into the free router that came with the business broadband package.

Which was also the only firewall and it was still on factory defaults other than the wireless password.

As far as I can tell this is more or less standard business practice in the small to medium enterprise sector here, which is to say, the vast majority of specialist engineering companies in the UK.


Stuxnet.


I work for a healthcare IT company that deals with tons of PHI (protected health information) and we have very strict security practices. New software is extensively evaluated at several points in the SDLC for security issues and access is tightly controlled. Engineers and CEOs have been held personally liable for major breaches (not my company!). Health records are WAY more protected than your SSN. If what happened to Equifax happened to a company like Epic, they would go out of business.


Epic is not hosted software, it is installed and heavily customized at each hospital. Epic doesn’t have a global db of all patient records.


None of your systems store patient data encrypted at rest.

Inadvertent data breaches are inevitable.

The very best you could possibly do is access logs and auditing. Which of your customers do any kind of auditing?


Although at-rest encryption of health data is an addressable part of US health regulations, not a required one, it's almost always done because it's easy and effective. Most electronic health records are encrypted at-rest.


I should have been more clear.

I'm advocating Translucent Database techniques, where patient data is encrypted at the field and row (document) level. Much like a salted password file.

System-level encryption still means the admins can see the data. Therefore, breaches remain inevitable.


Hold on, do you know for a fact that Epic specifically doesn't encrypt data at rest? I'm not questioning your competence/knowledge, just trying to understand specifically what you're saying.


The database or file system is likely (hopefully) encrypted. I'm referring to record and field level encryption. Like this:

Translucent Databases http://a.co/d/5855TYO http://wayner.org/node/46

Think of how password databases don't actually store passwords. Instead they (should) store salted and hashed passwords.
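
A minimal sketch of that salted-hash idea, using only Python's standard library (the patient-table comment is my own extrapolation of the book's technique, not a quote from it):

```python
import hashlib
import os

def hash_secret(secret: str, salt: bytes = None) -> tuple:
    """Return (salt, digest); the plaintext secret is never stored."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per record
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return salt, digest

def verify_secret(secret: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive and compare; the stored pair alone reveals nothing useful."""
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000) == digest

# Same trick for a "translucent" table: key rows by H(salt + patient_id),
# so someone who already knows the id can find the row, but an admin
# browsing the table can't read identities out of it.
salt, digest = hash_secret("hunter2")
assert verify_secret("hunter2", salt, digest)
```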

---

Please see my other comment about plaintext demographic data vs using unique identifiers.


Literally every single system we have stores patient data encrypted at rest.


I've worked on both election integrity and electronic medical record exchanges. I care very much about privacy.

Your conclusion is correct.

Medical records cannot be encrypted so long as demographic data must be stored in plaintext. Otherwise how would we do record matching (linking) across heterogeneous systems?

The fix is to use universal identifiers. Once you moot the record matching problem, you can use translucent database techniques to anonymize patient data.

I don't expect the USA to adopt an official centralized universal identifier for people any time soon. And none of the 3rd-party solutions (NSA, LexisNexis, ChoicePoint, Facebook, etc.) are acceptable, for various reasons.


Homomorphic encryption, anonymized data coupled with metadata, searching via k-anonymity models.

I mean, sure it has a cost, and some of this stuff is new, but to say "cannot be encrypted"... that's where the fun is: working out how to do it.
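
For instance, the core of a k-anonymity check is only a few lines (toy data, hypothetical field names):

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """True if every combination of quasi-identifier values appears in at
    least k rows, so no row is uniquely re-identifiable by those fields."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

records = [
    {"zip": "30301", "age_band": "40-49", "diagnosis": "J45"},
    {"zip": "30301", "age_band": "40-49", "diagnosis": "E11"},
    {"zip": "30302", "age_band": "50-59", "diagnosis": "I10"},
]
print(is_k_anonymous(records, ["zip", "age_band"], k=2))  # False: a group of 1
```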


Hold on, even if you keep the demographic data in plaintext, surely you can encrypt the actual HIPAA-protected data?


If you figure out how to do that, please let me know. Maybe there's some new math or tricks since I worked in this domain.


You can meet at-rest encryption requirements in a lot of environments by flipping on disk-level, filesystem-level, or database-level encryption where the encryption keys aren't stored on that disk.


Exactly. Google has a nice internal solution where data at rest is encrypted with a record-unique (or bucket-unique) key and a separate system decides whether to give you that key based on whether you give it auth tokens that entitle you to access to that record. That way, having direct access to the datastore doesn't give you automatic access to everybody's data, and all access is auditable. (And data can be "deleted" wherever it has been replicated just by deleting that key for good at the centralized key store).

Obviously, some admins/sres still need to have full access to the key store, but that can be a very small group, as compared to a situation where "every Gmail engineer can read every user's email".

Edit: on reading the summary blurb from the "translucent databases" book link that @specialist posted, what I described above is very much along those lines.
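
A toy sketch of that per-record scheme, using the third-party `cryptography` package; a dict stands in for the separate, access-controlled key service, and none of this is Google's actual implementation:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key_store = {}       # record_id -> data-encryption key (really a separate service)
encrypted_rows = {}  # record_id -> ciphertext, lives in the main datastore

def put_record(record_id: str, plaintext: bytes) -> None:
    dek = Fernet.generate_key()  # unique key per record
    key_store[record_id] = dek
    encrypted_rows[record_id] = Fernet(dek).encrypt(plaintext)

def get_record(record_id: str, authorized: bool) -> bytes:
    if not authorized:  # the key service enforces auth, not the datastore
        raise PermissionError("no auth token for this record")
    return Fernet(key_store[record_id]).decrypt(encrypted_rows[record_id])

def delete_record(record_id: str) -> None:
    key_store.pop(record_id, None)  # crypto-shredding: ciphertext is now garbage

put_record("user-42", b"inbox contents")
print(get_record("user-42", authorized=True))
delete_record("user-42")  # every replica of the ciphertext is now unreadable
```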


Leaked is the best word. It doesn't even have to be hackers or outside forces. An individual on the inside maliciously or ineptly handling records is another way things will get leaked. And "will" is the right word: in the digital age it's not a question of if something will get leaked, but when. And as systems become more complex and companies more expansive, this is only going to become more true. And I think this is the point of view people should take when deciding what information to yield to companies, or deciding on their position on things like 'surveillance for your safety.'


I'm still at a loss as to how Ashley Madison is still going after their data breach.

Every user on that site (99% male) was exposed as a complete chump -- and it's still a functioning business ...


I'm confused that they had a business BEFORE their data breach. That the same people who thought it was a good idea to publicly search and advertise for a secret affair would continue doing so after a privacy breach is less of a shock.


I once worked at a company that did about $50 million / year at 19.95/mo/customer where about 99% of those customers had forgotten they ever signed up. So it isn’t that surprising AM is still in business.


This is not actually an unreasonable business model, and is explicitly the profitability criterion for most gyms / fitness clubs.


AOL?


No, but more or less of that era. What was really interesting about it was that while I think there had been some dubious tactics to lure people to the product (before my time), the team I was on legitimately wanted to create a useful / valuable product for customers. Ironically, in the course of doing that it drove customers away as they became aware that they had signed up for this thing they had completely forgotten about. So in the course of making right with customers, that $50+MM/year stream disappeared from the company. There's a business lesson or two in there somewhere, but I'm not sure I like what it might say.


> I'm not sure I like what it might say.

To me it says economic performance isn't everything, and I like that idea.


I’m surprised that anyone thought there were a lot of women on that site. To me it seemed one step up from “Hot singles in <IP address location>!” Laughably transparent with a side order of extra skeeze.


It turns out -- IIRC -- that Ashley Madison was basically hiring women to flirt with potential customers. Something like that came out in the dump.

quick google: https://splinternews.com/ashley-madison-comes-clean-about-al...


I understand that, I’m just shocked anyone fell for it.


"Nobody ever went broke underestimating the intelligence of the American public." - H. L. Mencken


Where does this 99% figure come from? And how did they get exposed? Did someone contact everyone's friends and family that got leaked?


>Where does this 99% figure come from?

The data is public now -- AM were blackmailed and didn't pay -- I guess you can take a sample of the email addresses and compare with public profiles that match the emails. There were virtually no women -- in fact they were so paranoid the men would realize there weren't any women to have extramarital affairs with that they had bots that flirted now and again, just to keep the men paying.

Plenty of reporting was done on it after the breach. I recall saying to people that this would be a good example of a breach that would destroy a company by exposing that its value proposition was nothing. I guess I was very wrong.


What if there is a value proposition in paying to have a bot flirt with you now and again?

I'm reminded of the entire "robot boyfriend/girlfriend" genre of TV/Movies/Comics from Japan/Korea, plus the "dating simulator" genre of video games. These are more fantasy-fulfillment, though.

I honestly think it would be a net good for society to have simulators/bots that teach people how to have real, healthy relationships with other humans. (There are anecdotes about kids with autism who became more verbal once they started talking to Siri or the Google Assistant, for example.) But I feel like working on or using such tools would probably be as stigmatized as sex toys in the near/medium term.


There's a difference between seeking out human interaction and then being deceived by a company's bots, and intentionally seeking out bots.


It’s more sad and funny to outside observers, that’s all.

Also, their target audience is nominally married people. So either these people have a fetish for cheating (weird, but not my problem), or they already are in relationships of some sort or another. One would hope that a bot couldn’t teach them anything new at that point.


Maybe it's the perfect place to cheat on your wife with a dude? I've never looked into their site, but maybe it is in practice a gay hookup place?

Or maybe the bots are so good they should just sell those as a service? Why bother chatting with a real woman when the bot experience is better? As I say this, why isn't it a thing already? Chatbot hookers. It has to be a thing already. The interactive experience for a fraction of the price. Or do camgirls set the price floor too low to make this viable?

It seems like our NLP AIs should be advanced enough to make pretty compelling chatbots by now. People were fooled by Eliza 40 years ago. A lot of porn is pretty formulaic too, so getting the common case right shouldn't be that hard.
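
Case in point: the guts of an Eliza-style bot fit in a few lines (toy rules, nowhere near production quality):

```python
import random
import re

# Swap pronouns so "I feel X about you" reflects back as "you feel X about I/me".
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i (?:want|need) (.*)", ["What would it mean to you to get {0}?"]),
    (r".*", ["Tell me more.", "I see. Go on."]),  # catch-all
]

def reflect(text: str) -> str:
    return " ".join(REFLECT.get(w, w) for w in text.split())

def respond(message: str) -> str:
    msg = message.lower().strip(" .!?")
    for pattern, replies in RULES:
        m = re.fullmatch(pattern, msg)
        if m:
            return random.choice(replies).format(*(reflect(g) for g in m.groups()))
    return "Go on."

print(respond("I feel nobody listens to me"))
# e.g. "Why do you feel nobody listens to you?"
```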


Several Wells Fargo executives came right out and openly confessed to fraud and theft of consumers' money, and none of them are in jail and the company has not been adversely affected in the slightest.

They paid some fines equivalent to what they make in profit every 13 hours. That's it. Nothing else.

If you're running a large profitable company why should you care about security at all? Why should you care about the rule of law at all? There is no accountability, there is no culpability, nobody will ever get in trouble.


The Wells Fargo thing I really don't understand -- even without government action, I can't believe that they still have customers. How does that happen? I know that when a car dealer does the financing, you may not know who the lender is until you get your first bill, but even so I can't see anyone willingly going with them for any accounts (savings/checking, loans, etc).


I'm a Wells Fargo "customer" (more like a hostage) because they bought my mortgage from the local lender I originally went through. I have no legal way to stop being a Wells Fargo "customer" that does not result in direct and long-term financial penalties. Considering that they hold one of the largest mortgage portfolios in the US, I'd guess that many other "customers" of Wells Fargo are in the same boat.

I hate both Bank of America and Wells Fargo with a passion, and the outcome of 2008 didn't dissuade my disgust at them. Unfortunately both hold loans of mine that I did not originally take out with them.


I've been a customer since before the fiasco and haven't changed because they have a ton of branches in the area and they have been great at quickly solving any issues I run into.

I've started using Chase and Cap One online but they have zero branches in my entire city...


I can't recommend e-banking enough. I use Fidelity, but Schwab is also similarly good. Totally free checking, no minimum balances, easy access to high rate CDs, bill pay, and you can use any ATM in the country, they pay all your fees. I keep a few hundred dollars in a savings account locally because I'm paranoid I might want to see a human being, but I've literally never had to use it.


The biggest problem with online-only banking is that there is really no way to get a large sum of money out immediately. The only way to withdraw a large sum is to transfer it to another bank account (1-2 days) and then go to that bank's branch.

Of course, you can use checks, but sometimes you need hard cash (e.g., buying a used car).


> Of course, you can use checks, but sometimes you need hard cash (e.g., buying a used car).

I use "cashier's checks" [1] to buy used cars, since that still offers more security and convenience (of not handling physically bulky bundles of currency).

A trick I learned from reading sites about how to buy at real estate auctions (where the final price can't be known up front) is to use multiple checks made out to oneself, in varying amounts. The ones needed for payment can be endorsed over to the seller, and the remainder can be re-deposited.

My credit union will send me an effectively unlimited number of these through the mail, if they're requested through the automated system and each is for at least some (reasonable) minimum amount.

Still, your point holds. If you need it in a hurry, a physical branch is required.

[1] check drawn on the financial institution, not my personal account
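
As for picking the amounts: doubling denominations work nicely, since a subset of checks can then cover any multiple of the minimum (my own sketch, not anything the auction guides prescribe):

```python
def check_amounts(max_total: int, minimum: int = 500) -> list:
    """Doubling denominations (500, 1000, 2000, ...) so any multiple of
    `minimum` up to their sum can be covered by endorsing a subset."""
    amounts, next_amount = [], minimum
    while sum(amounts) + next_amount <= max_total:
        amounts.append(next_amount)
        next_amount *= 2
    return amounts

def checks_to_endorse(price: int, amounts: list) -> list:
    """Greedily pick checks, largest first; exact because each denomination
    is double the previous one."""
    chosen, remaining = [], price
    for amt in sorted(amounts, reverse=True):
        if amt <= remaining:
            chosen.append(amt)
            remaining -= amt
    if remaining:
        raise ValueError("price not representable with these checks")
    return chosen

amounts = check_amounts(10_000)           # [500, 1000, 2000, 4000]
print(checks_to_endorse(3_500, amounts))  # [2000, 1000, 500]
```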


I've bought a couple of used cars in private sales. Give them a couple hundred dollars deposit, get a receipt, then make arrangements to do the full payment during business hours when you can meet at the seller's bank and do a bank wire, cashier's check, etc. It would be pretty rare that you can't get this worked out. And worst case scenario, you keep that local bank account open and do an EFT to it when you start shopping.


Direct deposit works fine in AU. If what you are actually doing is getting cash to a person, then you can stay electronic.


Often the effort or cost to get out of a home mortgage or car loan makes you stuck by inertia.


Wells Fargo wasn't really defrauding their customers, at least not at significant scale. Wells Fargo reps were defrauding their internal evaluation targets. The customer damages were incidental and generally small.


> I'm seriously considering that on the next massive data breach of a publicly traded company I'm gonna buy some stock at the onset

Buy call options, that way if it doesn't go your way you're not out as much (in theory, anyway). Worked for me with Equifax and United Airlines, as two examples. Didn't work so well buying puts on Kodak when they announced crypto-whatever; market stayed irrational longer than the life of my contract.

Keeping in mind that the next Equifax might be the one that finally gets put against the wall.


That emphatically didn't work out for me with Valeant. Remember, options have expiries. And it also only makes sense if the market-implied volatility of your options contract is way below the upside move you expect.

I.e. buying calls at 50% IV when you only expect a 30% move before expiry is... suboptimal.

In other words, there's no substitute for truly knowing what you're doing when trading options. (Learned this the hard way, AMA)


It did work out for me with GEO (doubled my position on the sharp drop) but I bought stock because I had no idea when it might rebound.

Options are far less forgiving if you're wrong on timing.


I saw a screen full of company logos on the news during the oil well blowout in the Gulf of Mexico and checked into their stock prices. I did OK on Halliburton. Did OK with Merck after the Vioxx news too. Buying on bad news is a pretty safe bet when the news is limited to a small part of a very large, diversified business.


This is an old investing strategy summarized by Rothschild in the anecdote, "The time to buy is when there's blood in the streets." I think that maxim captures the crass greed of the strategy.


Why is this strategy “crass greed”?


You're supposed to say "Be fearful when others are greedy and greedy when others are fearful", and then you sound kind and wise like Warren Buffett.


Indeed, it puts a floor on prices in freefall. If someone needs to sell, and nobody is willing to buy, the price goes to zero.


I'd be a bit worried that was like playing Russian roulette. At some point, the companies have to actually correct, right? I imagine the reason they were able to take the hit was the number of people waiting to buy low already. That number of people has to be consumed to a certain extent each time a cycle happens.

I mean, if there are fundamental problems, it eventually has to reflect in the stock price... right?

Please?


>I mean, if there are fundamental problems, it eventually has to reflect in the stock price... right?

From a business perspective, poor IT security doesn't have to be a fundamental problem. Occasionally beating up your customers may be fine, and being voted the worst company in America isn't necessarily that bad either.

Are there fines high enough to impact business? Are you on the receiving end of new legislation because of the event? Is the market both capable and willing to punish you? Depressingly often the answer to all of those is "no".


Well, that's the thing: they don't _have_ to correct. Look at Enron. They were huge, and I can clearly remember people buying in post-revelations, saying: "oh, it's going to come back".


Sometimes, maybe. Perhaps at insolvency. Worldcom looked pretty healthy in 2002, then they were chapter 11.


Just like the previous mention of Enron, Worldcom engaged in systematic large-scale accounting fraud. Very different from the Equifax breach situation.


The thing about roulette is that you can bet most numbers except a couple, and do pretty well in the short term, albeit with small winnings. But the odds are still against you long term.


Just make sure you time it right. The problem with EFX was they announced on September 7, dipped, then announced additional badness on September 13, so double-dipped (https://s.yimg.com/uc/fin/chart/18/08/39d3a24.png) and you were out ~30% immediately.


After looking at the stock drops and recoveries of other companies involved in major PII breaches in the last 10 years, I came on here recommending people look into options as an opportunity to profit off the reactionary price tumble from that breach. IIRC EFX, with a market cap of ~$16B, went from ~$140 to ~$90 over a few days. As a primarily B2B data broker that sells information without concern for the data subject's wishes, in an established and disruption-resistant market with only 2 other competitors and decades of purchasing-friendly regulations, a ~36% drop seemed unlikely to persist for very long. It's just betting against a large majority sharing, and sacrificing $, in order to adhere to those principles.


>it just seems like too good of an opportunity to miss out on...

unless everyone's doing the same thing and it's already priced in the moment the news goes public.


Always bet against your principles. If you win, you get a better society! If you lose, at least you're richer.


Speaking of investments against principles -- you might want to check out Facebook stock as of 9/20/18, it's trading at something like 20% under, if you think it will rebound. (I'm not a FB shill, just read my post history).


It's because the U.S. government is so corrupt right now that none of the existing large organizations/corporations will ever be punished the same way Enron was, unless the political landscape and system changes significantly.

That's why, regardless of how large a company scandal is, you can make a pretty good bet on the company surviving and recovering easily from the scandal, because the government will protect it/issue a tiny fine to save some face. In this case, Congress literally passed a law to save Equifax from private lawsuits. Absolute insanity. If this isn't a sign of a corrupt government, I don't know what is.

It's a very sad state of affairs that will likely get much worse before it starts getting better.


The difference was that in Enron's case it was clear criminal intent to defraud, as opposed to security failures.

As far as "Congress literally passed a law to save Equifax from private lawsuits", what they did was part of the larger sweep of allowing companies to force arbitration on consumers. It was unrelated to the breach.


The allowance of private arbitration is a pretty clear example of corruption, though; class-action lawsuits exist for the purpose of punishing companies more than delivering assets to the suffering parties, and that route has been taken away in a lot of circumstances.


Can't argue there. The rise of arbitration clauses has done a huge amount of damage to consumer rights.


Perhaps this fits into the cliche about there being "no such thing as bad publicity."


Too big to fail. Substantially all loans are backed by credit scores from the big 3.


We are in one of the greatest bull markets in history. There are dollars, euros, yen, yuan, pounds, etc. overflowing everywhere thanks to lax monetary policy all over the world.

Considering equifax is one of 3 monopolistic credit reporting agencies, you can be sure they won't go out of business. So you buy the dips.

Same thing with Tesla. Same thing with any established legit stock of a legit company. In a major bull market you buy the dips as long as the music is playing. But once the music stops, you don't want to be dancing.

Same thing with major bear markets. Short the bumps.


Isn't the thing the article presents as sensational -- that the investigators replayed what the attackers did -- just kinda good practice?

It'd just make sense to me that, during an investigation, one would replay what the attackers did to get a good understanding of the results. This just seems like responsible investigation.

(The flipside would instead be claiming that X was compromised, and not being able to honestly answer questions as to whether one retraced the attacker's steps to provide assurance.)


Yes, it's a good practice, even if you believe you have the necessary technology in place to fully log what was exfiltrated. That being said, it's good to hear that they're actually doing so.


Yes. Responding to this and some of the comments below: this has been a standard practice for some time. There was a DEF CON talk about this years ago.

https://www.defcon.org/images/defcon-20/dc-20-presentations/...

How do you record what was accessed by a query? Record the results of every query run somewhere? Imagine the resulting data volume explosion you would endure. Instead, you maintain transaction logs or database snapshots so you can approximate database state, and you record the queries that are run. That's a more efficient means, and it lets you handle DR, compliance, and investigative needs together.
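
In miniature, with sqlite standing in for the real warehouse and made-up file/table names:

```python
import shutil
import sqlite3

# Nightly: snapshot the database (stand-in for WAL archiving / point-in-time recovery).
shutil.copyfile("prod.db", "snapshot-2017-05-13.db")

# Always: log the statements that run (most databases can emit this natively).
query_log = ["SELECT name, ssn FROM consumers WHERE region = 'GA'"]

# Investigation: re-run the logged queries against the snapshot to approximate
# what the attacker saw, without ever having stored every result set.
conn = sqlite3.connect("snapshot-2017-05-13.db")
for stmt in query_log:
    rows = conn.execute(stmt).fetchall()
    print(f"{len(rows)} rows would have been returned by: {stmt}")
```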


You would expect a serious firm to have auditing and logging that would provide the necessary forensics without having to rerun an exploit.


The headline doesn't say they reran the exploit, it says they reran the queries. Big difference. Auditing and logging will generally record what queries were run, but definitely won't record the results of those queries, so this is as expected.


They were able to identify and re-run the attack, so auditing and logging are likely present enough to capture what the attackers did, which in most breaches needs to be analyzed to see what they were able to accomplish.

This doesn't shout amateur hour to me directly; it seems more like kicking a dead horse while it's down. But that doesn't mean it isn't amateur hour or a complete company-wide failure to prioritize security and secrecy of customer data at Equifax.

I'm not trying to defend them, just stating this sounds somewhat misleading and sensationalized.


> or a complete company-wide failure to prioritize security and secrecy of customer data at Equifax.

I haven't seen any proof that Equifax operates in any other way. I am not attempting to be rude, only stating fact.


Unless you work for them, you haven't seen any proof of anything, outside of the tiny amount of information revealed since the breach.


I have seen proof that they have a complete disregard for the secure storage and transport of sensitive financial and PII of a large swath of the American public. I don’t need to be inside their org to see that.

Do you work for Equifax?


> I have seen proof that they have a complete disregard for the secure storage and transport of sensitive financial and PII of a large swath of the American public.

Can you share that proof?


> Can you share that proof?

https://www.consumer.ftc.gov/blog/2017/09/equifax-data-breac...

> If you have a credit report, there’s a good chance that you’re one of the 143 million American consumers whose sensitive personal information was exposed in a data breach at Equifax, one of the nation’s three major credit reporting agencies.

> Here are the facts, according to Equifax. The breach lasted from mid-May through July. The hackers accessed people’s names, Social Security numbers, birth dates, addresses and, in some instances, driver’s license numbers. They also stole credit card numbers for about 209,000 people and dispute documents with personal identifying information for about 182,000 people. And they grabbed personal information of people in the UK and Canada too.

Sensitive personal and financial data of half of American adults, as well as UK and Canadian citizens. For 90 days. If that isn't proof, I don't know what is.


That's proof of a security breach, not proof of "a complete disregard for the secure storage and transport of sensitive financial and PII of a large swath of the American public".


ISTR the details of the exploit have been reported, and relied on a known vulnerability which they had failed to patch, despite ample opportunity. I could be wrong, though.


This article from Bloomberg has the best and least clickbait reporting on the breach: https://www.bloomberg.com/news/features/2017-09-29/the-equif.... While it's true that the breach entry can be attributed to an unpatched vulnerability, it was unpatched due to a process failure, not a "complete disregard for the secure storage and transport of sensitive financial and PII" as the other poster claimed. According to the article, Equifax processed the vulnerability: they got notification and applied the update. But their process was flawed and missed at least one spot, and that was their undoing. Saying "they had failed to patch, despite ample opportunity" implies they knew about the issue and ignored it, which is not the case.


You would keep a copy of every query result set? The storage requirements alone would be insane. Seems perfectly reasonable to load that day’s snapshot/rewind the WAL and rerun the query.


I work at a similar-sized company. It depends. If you ran a query on any of the schemas that have access to security- and compliance-sensitive data back in 2013, I can probably tell you what it returned (sans any PII; obviously we don't log stuff that would have compliance implications if we logged it), but I'd have to go digging in backups to find it.


> You would keep a copy of every query result set?

I am familiar with systems that do, and do not believe it to be an unreasonable ask depending on GRC [1] requirements. Storage is cheap, compression effective.

[1] https://en.wikipedia.org/wiki/Governance,_risk_management,_a...


You do not store query results in your audit log. It's not only a nightmare from a performance and storage standpoint, it's also disallowed by the same GRC requirements you are touting.

The last thing you want is PII or other regulated data sitting in Splunk. In fact, each large organization that handles such data will have systems to ensure that this does not happen, since the regulatory requirement is simple: do not log sensitive data. There are tons of both client-side and server-side plugins and tools for common logging frameworks and log aggregators that search out and delete sensitive data, or sanitize it before it gets logged, and usually you run a combination of both.

On the storage/performance side, since you sometimes need to keep access logs for years, storing query results would require storage thousands upon thousands of times the size of your DB. This isn't feasible.

What you log is typically the query, how many results it returned, and the user and/or app that ran the query.
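
Something like this toy wrapper, in other words (a sketch, not any particular product's API):

```python
import logging
import sqlite3

logging.basicConfig(format="%(asctime)s %(message)s", level=logging.INFO)
audit = logging.getLogger("db.audit")

def run_query(conn, user: str, sql: str, params=()):
    """Run a query and audit who ran what and how many rows came back --
    never the rows themselves, which may contain regulated data."""
    rows = conn.execute(sql, params).fetchall()
    audit.info("user=%s rows=%d query=%s", user, len(rows), sql)
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT)")
conn.execute("INSERT INTO patients VALUES ('example')")
run_query(conn, "app-billing", "SELECT name FROM patients")
# logs: user=app-billing rows=1 query=SELECT name FROM patients
```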


Have you ever honestly worked on a system that logged the results of every query run?


Yes. In e-health you have to be able to say who queried and what was queried for. There's also https://en.wikipedia.org/wiki/Reliable_messaging -- we store every message and every response, so you can tell what query was run and when.


You are confusing two concepts. Under HIPAA you are explicitly disallowed from logging any EHR information.

Record systems must be able to reproduce the exact EHR presented to a query, e.g. to check if there was a human error like someone misreading the record.

This is achieved by keeping record versions and a record history; it isn't achieved by having system logs and audit trails of query results.

Essentially your EHR system would work like git: you'd be able to query the same commit as the original query, but that doesn't mean your database audit trail records any part of the electronic health record. That is simply not allowed.


We're Canadian, so we don't fall under HIPAA. We record in a database the full request and response for each transaction submitted, indexed by the message's UUID. With that request and response I can tell you exactly what each query returned, because we only return the data that goes in the message. I can't tell you what the database contained at that point, but I know what is in the message.


The approach you are describing seems to extend the same access protection and audit requirements to the audit logs themselves. Otherwise, you have created a covert channel to access the private content by examining these logs rather than the original database.

The other approach, of being able to replay a historical query described in a log while disallowing the private content in the logs allows the audit logs to be stored in a way that can be biased for better storage durability, without quite the same recursive audit nightmare. The logs are much smaller and can be easily replicated and archived, without quite the same level of risk from exposure of log content. Not to say those other logs are not also worth protecting, but there are different ways to balance the risks and costs...


Yes. Financial services.


The observable functionality (what “requirements” ought to be specifying) is the same, no?


And yet, it does not mean you would not re-run queries to confirm, even if you did.

I have encountered very few companies (not saying it's good, just that's how it is) with detailed database query+results logging that was stored for long enough, in any usable way...


While your data is being stolen and sold on the dark web, Equifax is happy to sell you identity protection for your business: https://www.equifax.com/business/credit-monitoring-and-ident...


An extract from an email I received from BA after their latest breach:

We deeply apologise for any worry and inconvenience this criminal activity has caused. For your reassurance, we’re offering you 12 months of free credit and identity monitoring services, provided by Experian, one of the UK’s leading Credit Reference agencies.

Your free ProtectMyID membership...


If you accepted the free membership, you waived your rights to sue them. It was in the agreement.


They tried it, but after backlash they backed off. https://www.forbes.com/sites/dianahembree/2017/09/09/consume...


I always figured that the smarter scammers would just wait 15 months and THEN use the data, or even sell it to (many) others that would.


15 months is enough time to change your mother's maiden name and the name of your first pet.


You can't change the past


So, someone forgot to input this certificate into their Outlook Calendar, Slack /remind, or whatever, and as a result 150mm people are at risk for identity theft. Awesome. I'm so glad I have no option to prevent my data going to this super-competent company and there's no oversight by anyone external.


Imagine all the companies Equifax has probably contracted with that need bulk query access to their data. It's easy for the crowd here to poo-poo this kind of behavior, and it is bad, and they should be punished for their incompetence. They should not be in business any longer; it's not like there aren't other companies operating in this space to fill the void.

That said, how do you seriously prevent such a thing from eventually happening? In this case it was their systems that were compromised, but it could just as easily have been a downstream user or similar that had enough direct access. I'm curious to know what technical solutions, if any, could be possible.

I’d never take a tech job protecting such a thing. The only way I could think would be to have some very trusted people manually reviewing all access to the primary data store, and even that probably wouldn’t be enough. Miss one unauthorized query and you’re toast.

The entire system of social security numbers is flawed by design from a security perspective, and therein lies the problem.


The Apache Struts vulnerability is easy enough to detect -- Java runs programs it shouldn't. If a bigcorp like that doesn't have a nextgen AV to detect that, executable logging + SIEM correlation would have done the trick.
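
A crude sketch of that detection idea (using the third-party psutil module; real EDR hooks process creation in the kernel, this just polls the process table):

```python
import psutil  # pip install psutil

# Flag shells or download tools whose parent is a java process -- a crude
# stand-in for "java runs programs it shouldn't".
SUSPICIOUS = {"sh", "bash", "cmd.exe", "powershell.exe", "curl", "wget"}

for proc in psutil.process_iter(["pid", "name"]):
    try:
        parent = proc.parent()
        if parent and parent.name().lower().startswith("java") \
                and proc.info["name"] in SUSPICIOUS:
            print(f"ALERT: java (pid {parent.pid}) spawned "
                  f"{proc.info['name']} (pid {proc.info['pid']})")
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue  # process exited or is off-limits; skip it
```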

They detected the traffic after the TLS inspection box was fixed; that was the box that detected it, not the point of entry, from what I understand. Regardless, TLS inspection has its place (this is why you can't have end-to-end crypto in a corporate environment).

From my experience, most bigcorps do IT like it's still 2009. There is so much architectural bloat, bureaucracy, and unseen system complexity that it reduces security controls to mere cosmetic theatrics.

It's like having a 200ft-tall, 50ft-thick iron wall around your castle, with 100k foot soldiers armed with the best weapons and training. The problem is that your soldiers (IT staff) can't act fast due to bureaucracy, and half of the duties are someone else's problem due to over-segregation of duties. Your fancy wall (security solutions and controls) is neat, but there are holes wide enough to fit ten people all over it.

In the end the enemy is complexity. You can't solve that by adding more security vendors, solutions, and staff, which is exactly what everyone seems to be doing.


Working at companies like this, you come to realize that they are so insulated in their special (monopolistic or almost) positions that they become blind to anything other than quarterly earnings per share.

On the rare occasion that something does blow up badly enough that executives must leave, they are still almost permanently cemented at their tier. So they just migrate to another poorly run large company to focus on their own short-term gain.

Applying logic to this scenario doesn't work. It's really about balancing cost-benefits. Since the costs to being an idiot are very low once you reach a certain level, the downsides can be completely forgotten in pursuit of the upsides.


"Nextgen"AV would not have caught this. AV is still AV and looking at executables and binaries. Exploiting a vulnerability does not fit that bill. This incident had nothing to do with a 'virus'. Now many tools that include AV, say Crowdstrike, may have caught this...but AV is on the low-end of the hierarchy of needs when it comes securing your assets, especially servers. Much more so for Linux. AV is more of an end-user issue where they want to download files and execute them.


CrowdStrike, Carbon Black, Windows ATP, etc. are "nextgen AV". You can call them by their self-proclaimed category as well.


Carbon Black and CrowdStrike were traditionally EDR platforms. CrowdStrike had no AV capabilities until last year. They have now morphed into the EPP category as per Gartner. They are now a suite of tools, where AV is just a single part, and the least important one.

And beyond that, "nextgen AV would have caught this" is the original premise. Nextgen tools were not needed. All they had to do was patch a well-known vulnerability within a period of MONTHS.

You can buy the fancy tools. If you can't do the basics, they are worthless.


Yeah, but you'll always have unpatched vulns or 0-days. You should patch, but you should also account for when patching isn't enough. Sometimes servers get missed. Some places even have forgotten servers that aren't part of asset management. Some places don't have asset management.

The fancy tools are not a one-size-fits-all solution, just like patching and good security hygiene aren't.


It's worth noting that such a display of negligence didn't stop the IRS from awarding Equifax a 7 million dollar contract:

https://www.politico.com/story/2017/10/03/equifax-irs-fraud-...

And Equifax is the same incompetent company providing identification services for healthcare:

http://www.specialtycreditreports.com/equifax-contract-healt...

Nor did it prevent 18F from awarding a similar multi-million dollar contract to Equifax for login.gov:

https://federalnewsradio.com/reporters-notebook-jason-miller...

There is no incentive for Equifax to take security seriously.


The best, and possibly only way of preventing the theft of personal data is to not have it.

The amount of surface area a large organization needs to always protect is just too large; we can just assume at some point all that info will be taken.


PSA: On Sept 21st freezing or thawing your credit will be free.

https://krebsonsecurity.com/2018/09/in-a-few-days-credit-fre...


I just tried to unfreeze my credit with Equifax and couldn't, after several visits to their website and multiple phone calls. They are asking me to mail in my SSN, address, birthday, etc.


Nicked... British English is fun. Is there a compendium of phrases like this?




I actually read the GAO report and it does not say this.

Thought I should point out a major error right there in the title.


Just wondering: are the queries executed against the state of the databases at that moment?


The headline is by far the least concerning part of this.


At a certain point (age, level of wisdom, whatever) you realize when a company is focused on short term gain vs everything else. At that point, unless you just don't give a shit, you move on.

Not to bring politics into this, but there's a variant of capitalism that runs rampant in developed economies that is based almost exclusively on short term gain.

Equifax, like so many other companies, illustrates this story in painful detail - especially for those who work there.

There are so many poor executive behaviors that go on, affecting not just their employees but their customers and beyond, that you might think by now there would be more attention paid to this. But the people who should be paying attention are quite like the executives who are overly focused on short-term gain.



