I'm Testifying to Congress about Data Breaches – What Should I Say? (troyhunt.com)
637 points by Ajedi32 on Nov 21, 2017 | 230 comments



To me, the main issue is accountability.

A citizen's data can be collected, badly secured, stolen, and used by criminals without the user ever being aware of step 1. Just like a citizen can get ill from swimming in a river without ever being aware of the factories upstream.

The solution is not to force citizens to constantly be on the lookout. It's to severely punish polluters and leakers.

When a CTO says "let's collect geolocations", the CEO should have legal and business reasons to say "no way, it's not worth the financial risk of losing them; it could destroy our company."

---

Update: I do not think the government should mandate specific practices; it's too complicated, too fast-changing, and too hard to police.

It should be entirely results-based. You lose people's data, you pay big bucks. Figuring out how not to lose it is your problem. The government sets the rules, and the market plays the game.


I had a non-technical friend whose fatalistic impression was that "these things happen and there's nothing that can be done given a determined attacker." Well, look, I said, these hackers aren't going in like Mission: Impossible. Equifax was incompetent, and there's zero penalty for utter incompetence. There must be.

The Equifax hack was the equivalent of data malpractice. It's like a hospital mixing up the labels on saline and hydrogen cyanide and then saying, "Whoops. Sorry about that." The cavalier attitude that companies have about data security infuriates me. Americans will be dealing with the repercussions of this for the rest of their lives.

Meanwhile, Equifax keeps making money.

Equifax's entire business was trading off of our data, but protecting that data was evidently not a priority for them. They should be fined into oblivion.


> It's like a hospital mixing up the labels on saline and hydrogen cyanide

I don't even think that's a strong enough comparison. It's more like a hospital not bothering to label their bottles at all, and then going "oh, sorry about that" when the inevitable happens.


If you think that is bad, you should read about the Toyota "unintended acceleration" case from several years back. It's a preview of exactly what will occur when the first self-driving cars hit the road. Toyota's gross negligence surrounding software development (through following standard business practices that are still in place and still followed by every company you can think of) resulted in the deaths of multiple people. And the courts tossed up their hands and said "we can't convict them of criminal negligence because there are no software standards that they broke."

Hire the cheapest 'talent' you can find, deny them the training, tools, environment, and time needed to do a competent job, and give an MBA the reins to ignore their concerns and protests and push something out the door. It's a recipe for disastrously flawed systems and big piles of money for executives.


I thought the unintended acceleration issues had been largely disproven as user error.

That is, many other brands of cars had been reported to have the same issues by drivers. Basically, a driver was put into a stressful situation, thought they were hitting the brakes, but was actually hitting the gas. Then, panicking that they couldn't stop the car, they hit the "brakes" harder, exacerbating the problem.


Malcolm Gladwell goes over this story on his Revisionist History podcast.

http://revisionisthistory.com/episodes/08-blame-game


This was what originally led me to do some research on the topic. I really thought the podcast was well done.


I don't think this is true. If you google for Toyota and MISRA (a C coding standard for safety-critical systems) you can find many reports on an audit that was performed on their code and the _many thousands_ of violations that were found.


There may be a bit of both at work here, because I remember seeing a lot of issues with hacking the Prius at Defcon[0], but I vaguely recall the SUA incidents being mostly attributed to pedal misapplication.

I know Wikipedia isn't exactly a great primary source here, but:

> From 2002 to 2009 there were many defect petitions made to the NHTSA regarding unintended acceleration in Toyota and Lexus vehicles, but many of them were determined to be caused by pedal misapplication, and the NHTSA noted that there was no statistical significance showing that Toyota vehicles had more SUA incidents than other manufacturers.

https://en.wikipedia.org/wiki/Sudden_unintended_acceleration...

In any case, I believe companies can definitely be guilty of criminal negligence (and Toyota did a lot of bad things during their SUA crisis). But I think the use of SUA in the comment I originally responded to sort of misrepresents the situation and mostly spreads a lot of FUD around self-driving cars.

[0]: https://www.engadget.com/2013/07/28/auto-hacked-ford-toyota-...


IIRC Toyota did have a real problem with clearance between the accelerator pedal and floor mats (their own aftermarket parts). It was possible for the pedal to be wedged against the floor mat.

I don't think that was conclusively shown to be the cause of any of the incidents though.


Malcolm Gladwell goes over this story on his Revisionist History podcast.

http://revisionisthistory.com/episodes/08-blame-game


My opinion is that information about you should belong to you. Other entities should not be allowed to keep or trade information about individuals without paying them directly.

That way people will realise that their personal info is worth something.


HN is all like "speech should be free and unlimited and protected always... Unless someone's talking about me."

That sort of hard stance seems extremely unworkable. If I sign up for Hulu they need my email and billing information at the least. So they are supposed to pay me now because I gave them that to get a service? So then it evens out and my streaming is free now? Goodbye any useful paid service. Goodbye any free service.

What you're really advocating here though is abolishing free society as we know it.

You think I'm kidding? All court proceedings would have to be secret and unrecorded, as saying "so and so convicted of manslaughter" is now illegal. All public records abolished entirely, no accountability anymore. No journalists could publish a story about anyone ever. Harvey Weinstein sexually assaults hundreds of women? Can't tell anyone or publish that information, Harvey Weinstein owns it. Even telling someone else about your date last night would become illegal.


I don't believe that's at all what the OP meant by:

> "Other entities should not be allowed to keep or trade information about individuals without paying them directly.

I interpreted the comment to mean that if I subscribe to, say, Hulu they are not allowed to sell that information to third parties, and also that if I cancel my Hulu account my data is not kept around since it's no longer needed.


I think the GP meant commercial data, with the aim of preventing resale rather than collection.


Though, it would seem like data in general is more commercializable now than it was a century ago.

I certainly don't want to be the person to have to draw a line in the sand between "non commercial data" and "commercial data".

Then again, maybe I want to sell access to brain scans of people using hardware I made to a 3rd Party in the future… :P


It's not any harder than labeling things for tax purposes. If you sell it, it is commercial and should have restrictions on how it's held and transmitted. It's not unreasonable to require part of that process to involve payment to the data originator, as a royalty for the unique data they created. This would also allow those creating the data an easy way to see how it is ultimately used.


>It's not any harder than labeling things for tax purposes.

And in how many countries are companies able to skirt taxes through financial engineering?

Yeah, people can create all the rules they want, but I'm more interested in how people plan on enforcing them… because from where I sit, that's where countries are lacking (especially with politicians in all of them accepting some form of bribes/kickbacks/revolving-door jobs/election financing/lobbying), and it isn't getting any better…

Even when the hardware for brain scans gets cheaper and more open, the raw data itself is useless in the hands of the average WeChat/Facebook/Line user unless one understands how to process it into something usable…

Companies that can attract talent will increasingly start making all the data that they collect public by default… sidestepping the pain of data breaches by keeping costs down and making it public from the start… how they process it, that's a different story… maybe those companies will get their users pissed off enough to actually think about their choices of platforms and not use them… though, that last part seems increasingly unlikely.


> Other entities should not be allowed to keep or trade information about individuals without paying them directly.

To be the devil's advocate for a moment, what about the other side of that? Should businesses not be able to keep and share records saying "This person owes us money but refuses to pay"?


>Should businesses not be able to keep and share records saying "This person owes us money but refuses to pay"?

No, because you would have practically no recourse if you were wrongly on such a list, and might not be able to prove you even were on it or that it exists. It's called "blacklisting".

You have a common name and share a birthday with a felon? Say goodbye to your chances of getting a credit card!


The biggest issue there is screening for gun purchases and some employment screening.

If your name is John Smith, good luck!


That's not how credit reporting laws work in the United States.

It's true that they could/should be stronger/tightened/held to a higher standard, but there is a process to dispute your credit file. Creditors are legally required to give you certain information when they deny you for credit.


That is not how the credit reporting laws work, but it's how the system works.

That process is extremely flawed and unreliable. Have you ever tried disputing your credit? Good fucking luck. I've been arguing with Equifax for over a year because they "have it on record" that I lived somewhere I've never lived. I have no legal recourse unless I want to spend thousands of dollars on lawyer fees.


You shouldn't need a lawyer to pursue them. Assuming you're within the time periods prescribed in the FCRA, you can file your own complaint in your local small claims court.

Before that, I'd send them a letter clearly marked "Notice of Intent to Sue" (on the letter and envelope) stating what information is incorrect, and that you plan to file a suit in [insert name of small claims court] on [insert reasonable date] seeking $1,000 in damages for each violation per 15 USC 1681(n), unless the disputed information is removed before that date. And enclose any supporting documentation, and history of previous fruitless correspondence.

IANAL, but this has worked for me before with the credit bureaus, and I've never actually had to file the threatened suit. My best guess is that threatening litigation gets the case assigned to a legal team that is actually empowered to correct errors. It's a shame that that's how they've chosen to run their business, but then again, I'm not their customer, I'm their product.


The problem is that this is effectively a reputation score, yet you have very limited rights to see it (once a year, per agency, and only if you ask... Oh, and BTW, I pulled /all/ of mine just before it came out that the breach had happened. :( )

Also a problem; they use simple knowledge of a single magic number as authentication ('identification').

The three current companies in the US could have gotten together, worked around the lack of a national ID, and created a private cryptographic SIN... but that would cut into their profits. Though I hear other countries that have done this also have issues with 'business individuals' being contacted and scam attempts rampantly abusing their data...


Posting my long comment here because the Disqus bot and I don't get along for some reason.

Troy, I think you have done a lot of good for infosec. Thank you. Here are my thoughts: You will be in a room full of lawyers. Examples that they understand are important. For example, (Full credit to @strandjs 2017 DerbyCon keynote) in the Crowdstrike v. NSS labs case, they sued to prevent "third parties to access or use the products" and prohibited "any competitive analysis on the product".

Sorry to get into the weeds here, but the TL;DR is the following.

- Delaware District court Judge Gregory Sleet's ruling supporting product performance assessments.

- Government funding to support projects like ModSecurity that contribute to US economic security.

- Whitelisting (that works) is the future.

Today's security vendor marketing seems to have a free pass to lie. Thankfully, on 2017-02-13, Judge Gregory Sleet of the Delaware District court ruled against Crowdstrike writing "The consumer review fairness act of 2016 underscores the public's interest in performance assessments. The new law voids provisions of form contracts that restrict a party to that contract from conducting a performance assessment of or other similar analysis of..." "The court finds the public has a very real interest in the dissemination of information regarding products in the marketplace." It goes on to say if NSS's data is inaccurate, Crowdstrike could publicly rebut that data with evidence and the public would benefit from the exchange helping inform the public if they should trust future NSS reports. He concludes "the public interest weighs strongly in favor of denying Crowdstrike's motion." https://www.csoonline.com/a...

Security is hard. I have been researching the Apache Struts2 exploits that Equifax was hit by. Assume another vulnerability like this exists right now. It could be Struts or some other web framework. Webshells used in the Struts hack are really hard to stop for many reasons. As far as I understand, if ModSecurity's open source web application firewall was installed and properly configured (not a simple task), the CRS (core rule set) would have prevented the Apache Struts2 exploit from working. Open source projects that make major contributions to protect United States national and economic security should receive more support and funding.

As a recognition for the contribution of all open source, Richard Stallman and Linus Torvalds should be recommended to receive the Presidential Medal of Freedom.

ModSecurity works by looking for known malicious patterns and blocks them. I hope one day we can get web application firewalls to work well using a whitelist setup. Instead of trusting everything and blocking things that look bad, on highly sensitive systems like Equifax, I hope to see a way to trust nothing and allow traffic that is known good. For example..

import re

def findWords(string1):
    # Grab runs of letters only (no digits, no punctuation) from the log.
    return re.findall(r"\b[^\d\W]+\b", string1)

with open("apache2-access.log", "r") as f:
    data = f.read()

data = data.upper()
answer = findWords(data)

for a in answer:
    print(a)

Now use bash to sort and get counts:

$ python3 words.py | sort | uniq -c | sort -n

The next step is to create a ModSecurity rule that uses the same regex "\b[^\d\W]+\b" to only allow REQUESTS that contain words that are on an approved list, using the @pmf parameter file as in the following example. Note I just started looking into this, so I will leave the rest as an exercise for the reader :)

SecRule REQUEST_COOKIES|!REQUEST_COOKIES:/__utm/|REQUEST_COOKIES_NAMES|ARGS_NAMES|ARGS|XML:/* "@pmf windows-powershell-commands.dat

See github owasp-modsecurity-crs/rules/REQUEST-932-Application-ATTACK-RCE


I don’t know if you lost format or context moving from Disqus but I found this largely unreadable.

Of the parts that were readable, I find myself strongly disagreeing with a majority of your paragraphs, notably:

- I don’t see how Crowdstrike vs NSS has any relevance in this at all, especially given the preamble on Troy’s site. Reasonable people can disagree on the outcome of that case (as a former Crowdstrike employee, I can acknowledge my own biases there), but I just don’t see how it’s relevant.

- Similarly, I don’t see how formally recognizing Torvalds and RMS does anything meaningful, and I can think of a dozen other researchers who have had objectively more concrete contributions to SECURITY than those two.

- Whitelisting every URL pattern isn't ever going to be a viable solution, in large part because the whitelists will eventually be regexes and people are bad at regex.


Replying to own comment..

whitelisting with modsecurity tutorial...

https://www.netnea.com/cms/apache-tutorial-6_embedding-modse...

Step 8: Writing simple whitelist rules "Using the rules described in Step 7, we were able to prevent access to a specific URL. We will now be using the opposite approach: We want to make sure that only one specific URL can be accessed. In addition, we will only be accepting previously known POST parameters in a specified format. This is a very tight security technique which is also called positive security: It is no longer us trying to find known attacks in user submitted content, it is now the user who has to prove that his request meets all our criteria."

"Our example is a whitelist for a login with display of the form, submission of the credentials and the logout. We do not have the said login in place, but this does not stop us from defining the ruleset to protect this hypothetical service in our lab. And if you have a login or any other simple application you want to protect, you can take the code as a template and adopt as suitable."

SecRule REQUEST_FILENAME \
    "@rx ^/login/(displayLogin|login|logout).do$" \
    "id:10250,phase:1,pass,nolog,tag:'Login Whitelist',\
    skipAfter:END_WHITELIST_URIBLOCK_login"

# If we land here, we are facing an unknown URI...


Folks seem to like my "polluting a river" analogy in the parent comment. I thought of a related point.

People have said that if there are consequences for losing customer data, companies will be motivated to cover up their mistakes. Part of the solution there would be "whistleblower" laws similar to what we have for OSHA violations. But another part would be legitimizing white hat hacking.

Suppose my apartment has some hazardous problem, like exposed wires. It's perfectly legal for me to notice that and tell my landlord. I can take pictures for proof, and if necessary I can report it to the government. The landlord will not be allowed to ignore me.

If, however, I notice a glaring security problem on a web site I use, there's no government agency to tell. If I tell the site owners, there's a good chance that they can ignore me or even punish me for noticing.

Now, going along with my "results-based" argument, unlike building codes, we don't want our laws to specify security practices. But if an outsider can demonstrate that they can obtain personally identifiable information from a computer system, the owners of that system should be fined and required to fix it, and the person who found the problem should be legally protected.

Imagine the mess a landlord would be in if somebody died because of a hazardous condition that they'd been notified of six months earlier. Now imagine that web sites were held to the same standard. "You had a massive data breach, and this security researcher has proof of notifying you six months earlier of the vulnerability. You're in big trouble."


> Folks seem to like my "polluting a river" analogy in the parent comment.

Yes, definitely!

Eben Moglen makes exactly this point in his lecture, Snowden and the Future Part III. He puts it as privacy being 'ecological not transactional', and uses 'pollution' as you have.

I can't resist an extended quote; Moglen puts it very eloquently:

> Those who wish to earn off you want to define privacy as a thing you transact about with them, just the two of you. They offer you free email service, in response to which you let them read all the mail, and that's that. It's just a transaction between two parties. They offer you free web hosting for your social communications, in return for watching everybody look at everything. They assert that's a transaction in which only the parties themselves are engaged.

> This is a convenient fraudulence. Another misdirection, misleading, and plain lying proposition. Because — as I suggested in the analytic definition of the components of privacy — privacy is always a relation among people. It is not transactional, an agreement between a listener or a spy or a peephole keeper and the person being spied on.

> If you accept this supposedly bilateral offer, to provide email service for you for free as long as it can all be read, then everybody who corresponds with you has been subjected to the bargain, which was supposedly bilateral in nature.

Full transcript+video+audio at http://snowdenandthefuture.info/PartIII.html


I love the polluted river analogy.

You often hear today that "data isn't oil", but from a breach perspective it is a good analogy when you consider its toxicity.


You have the key points: some significant (edit: strict, not negligence-based) liability for data breaches is essential. And regulation of security practices is the worst idea ever; it will ossify current architectural mistakes at best and turn into a complete regulatory capture nightmare at worst.

Ideally the liability would actually go to compensate victims (for example, via class action). But even if the government keeps it it might be better than nothing.

A potential improvement would be to require companies to carry insurance (or be able to solvently self-insure) for the maximum possible liability if all the personal data they store was disclosed. That way a company like Equifax has to price the full risk of the data they store even if it is larger than their whole market cap. And the insurance industry might learn to do some due diligence, and be a source of "regulation" with much better incentives to be optimal than a government regulator has.


If we were to implement some form of insurance, I worry that executives who make bad decisions about user privacy won't ever see jail time. This needs to be something you can go to prison for.


Criminal law is a very blunt instrument, and for good reason is usually reserved for deliberate actions, not mistakes. If you want to make willfully concealing a data breach (to avoid liability or just bad publicity) a crime, that would be pretty reasonable.


We have laws for criminal negligence. If a company is building a bridge and the CEO ignores warnings from one of their engineers, or deprives them of the tools necessary to do their job, or hires inexperienced engineers because they are cheaper - that CEO goes to prison. This should be the case for any company which deals with the public. White collar crime causes more economic damage and kills more people every year than street crime does, but we punish it as if it doesn't matter and such crime has become utterly normalized and it needs to be stopped.


Sorry, I just don't agree. Strict liability is where it's at, because the reality is that standard practice in these matters is very risky, so you're never going to be able to pin negligence on anyone even in a civil case. Making companies fully internalize the expected cost of data breaches will actually solve the problem, even if it doesn't make you feel as good. What you propose will most likely be totally ineffective, and even if it does anything it will just be to create some kind of wasteful butt covering behavior totally orthogonal to actually solving the problem.


The financial penalty needs to be in two parts.

1. a cost for losing personal data.

2. a cost per person whose data was stolen.

That ensures there is a minimum penalty. Enough to make it CHEAPER to take a lot of security measures, even if you don't have many people's data yet. But also a gigantic penalty for the next Equifax.
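
As a back-of-the-envelope sketch of that two-part structure (the base amount and per-record amount below are made-up placeholders, not proposed figures):

def breach_penalty(records_exposed, base_fine=1_000_000, per_record_fine=100):
    # Hypothetical two-part penalty: a fixed floor for losing personal data
    # at all, plus a per-person amount that scales with the size of the breach.
    return base_fine + per_record_fine * records_exposed

# A tiny startup leaking 1,000 records still pays the floor;
# an Equifax-scale breach of ~145 million records becomes enormous.
print(breach_penalty(1_000))        # 1,100,000
print(breach_penalty(145_000_000))  # 14,501,000,000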


Be careful not to set the fine so high that it motivates companies to cover up breaches instead.

No one would have known about Equifax if they had not self disclosed. It doesn't even have to be the CEO - if I am the employee who is about to cost my company $$$$, I might just quietly fix it and say nothing.


If we had the will to implement law, this would be laughably easy to solve: A 'checks and balances' approach of [personal criminal liability of the individual] vs [financial liability of the company] would work quite well:

Person(s) act to cover up a data breach -> Individual criminal liability -> A long time in federal PRISON.

A data breach is disclosed -> Massive FINES on the company (up to and including liquidating and shutting the company down), but no personal liability or criminal charges.

So, the only really hard part of this problem is the political will (vs lobbying / powerful people) to implement law at all to address this. Until we start taking privacy and data leaks VERY SERIOUSLY from the perspective of liability, nothing will happen. Sadly, to me this means nothing will happen :(


So basically if I disclose a breach I lose my job because my company no longer exists?

Vs. pretend I never saw any breach (how will you prove otherwise?) and have a chance to avoid all penalties?


If you disclose a breach you might lose your job as the company might go down (financial liability for the company for losing personal data). But if you try to cover up the breach (pretend you never saw it) then you risk going to jail (personal criminal liability).

You get the idea. If the legal system were set up like that, you would be better off disclosing the breach and losing your job rather than risking prison time. You can always find another job.


No, if you disclose a data breach you get a small part (1-10%) of the total liquidation of the company before it goes out to pay the rest of the claims. Perhaps have some sort of national unemployment insurance for the rest of the employees paid for by another small part of the liquidation.


> No, if you disclose a data breach you get a small part (1-10%) of the total liquidation of the company before it goes out to pay the rest of the claims.

That creates an incentive for employees to cause data breaches and then disclose them for massive profit.


> That creates an incentive for employees to cause data breaches and then disclose them for massive profit.

I think this is a false premise. What is to stop an IT employee from installing an Oracle database in production without a license (assuming technical leadership is incompetent like at Equifax), waiting for a bit, and going to the BSA or Oracle and saying, "hey, come get your payday"?

One, I think anyone with a shred of ethics won't do it, and two, I think if we put the CEO AND the board in prison for the actions of the corporation and its employees (while following directions), then we create a strong incentive for corporations to be of manageable size. Now, I hear our politicians crow a lot about how much they love small business and how much they hate "too big to fail". This would be a big boon for that. However, I am not holding my breath.


How about just straight unemployment but with higher benefits (initially 100% of salary/wages) for employees whose companies shut down or restructure?


This is a very legitimate question if the corporate death penalty is on the table. Maybe a better option would be C-level or VP and above have to go?


Covering up data breaches should be straight-up illegal, as in prison time. If I am the employee who is about to cost my company $$$$, I'd much rather report it than risk going away.


> Covering up data breaches should be straight-up illegal, as in prison time.

Is it also illegal to cause a breach? Do you understand the backwards motivations you are giving people?

He can disclose the breach, cause himself (or his co-workers) to risk prison, and cost his company $$$$, which is likely to get people fired.

Or,

Cover the whole thing up (i.e. quietly fix the bug), and no one will ever know.

And if they do find out, he can say "I didn't know anything about a breach, I proactively fixed a bug, I didn't realize someone had taken advantage of the bug" - i.e. he's safe either way.


There are two separate infractions here that shouldn't be conflated: allowing a data breach to occur, and failing to disclose a breach.

As far as I can tell, the perverse incentive you're describing is that if the penalty for a breach occurring is too harsh, it could lead companies to cover up a breach rather than disclose it. A penalty for failing to disclose a breach would be meant to discourage that, and I don't see how that penalty is itself a perverse incentive.

The penalty for a data breach must be harsh enough that it would cost a company more than it would to guard against it. I don't believe that there exists a penalty that satisfies that condition yet is also light enough that a company wouldn't cover up any breaches that occurred.


Triple the fines in the event of a coverup would probably help too.


Under the European GDPR regs coming in next May, it will be illegal not to disclose.


And GDPR regs apply not only to EU citizens, but also to US citizens residing in the EU.


And to US companies who do business with EU citizens and to the data of US citizens who have business with EU companies

Or anything that touches the EU, really.


Have the standard penalty for if a third-party discovers and reports the breach. Reduce the penalty if the breached company finds and discloses it themselves. Increase the penalty if they knew about the breach but did not disclose, and give the difference to the whistleblower.
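
A rough sketch of that schedule (the base penalty and multipliers are illustrative assumptions, not numbers anyone has proposed):

def breach_penalties(base_penalty, found_by_third_party, knew_but_hid):
    # Hypothetical tiered schedule: self-disclosure is cheapest, third-party
    # discovery is the baseline, and a known-but-concealed breach costs the
    # most, with the difference going to the whistleblower who exposed it.
    if knew_but_hid:
        company_pays = 2 * base_penalty
        whistleblower_gets = company_pays - base_penalty
    elif found_by_third_party:
        company_pays = base_penalty
        whistleblower_gets = 0
    else:  # the company found and disclosed the breach itself
        company_pays = 0.5 * base_penalty
        whistleblower_gets = 0
    return company_pays, whistleblower_gets

# Concealment doubles the bill and funds the whistleblower.
print(breach_penalties(10_000_000, found_by_third_party=False, knew_but_hid=True))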


Ha, this would actually totally work.


This is a good point. The higher the penalty, the more incentive we provide for sweeping things under the rug. We have to have a sane balance.


I understand the problem but I am not sure what a sane balance might be. In general a data breach is something companies want to avoid already.


This is a bad idea; it just raises the cost of entry for startups unnecessarily. Liability should be in proportion to harm actually done, and harm actually done is in proportion to how much personal data is compromised.


Accountability is a petty side show. Who, in the first place, gave the providers consent for all this, and who appointed them owners of PII?

There is a gross lack of information and consent, and a violation of the proper ownership of PII, which should rest with the user. Right now data is collected under the cover of overly broad user agreements that are never read (if they are even presented), and data is dispersed among third parties without further user knowledge. The user is completely in the dark. They did not really agree to the collection of data, and cannot really see what was collected, where it ended up, and how it was used, not even at an aggregate level. Furthermore the user finds it very difficult to withdraw, even if they had actually given informed consent, since it's so hard to delete anything off of premises you do not own. It's a distraction to talk about how to punish people for not safekeeping something they stole from you in the first place.

After all, a so-called "breach" is just one thief stealing from another -- would you expect a thief to care that much on your behalf?


I'm not a big fan of contracts of adhesion. I think that to give up important rights by contract should require a correspondingly conspicuous degree of solemnity, not a link at the bottom of a web page to terms of service that state they can be changed at any time by one party.

But the reality is that no matter how much contractual solemnity you require to let other people trade in your personal information, most people will do it in order to get credit. If to get a home loan you had to copy down a contract giving the bank the right to do credit monitoring in longhand with a fountain pen and sign it sixteen times in your own blood, my guess is that people would still be doing it, because credit monitoring really does make loans cheaper. So as a practical matter, I don't think the Equifaxes of the world would go away if the problem you point to was solved. Whereas there's no reason they shouldn't at least internalize the cost of losing your information through incompetence.


Fully agree! This is the important point to stress here in my opinion. Was a bit surprised to not see more comments about this aspect of the problem here.


Penalties would need to be based on the size of the company or breach, otherwise this would result in smaller companies being unable to risk competing, right? If effective features or revenue require user data collection, but smaller companies are terrified of data loss, they're less likely to compete.

Admittedly, smaller companies will have minuscule teams and probably an increased risk of slipping up, but also smaller databases to lose or target. But I don't like the idea of only the largest companies having the ability/confidence to power on.

If Uber is getting a $20k fine, I doubt that's a deterrence for them, but it would be for a tiny startup.


Penalties need to impact CEO compensation. Punish them with a $200K max cash salary and no options or security instruments to get around the penalty.


No, just make them pay the fine directly, lose their jobs and right to be company directors, and bad cases, where they turned a blind eye, actually go to jail.


+1. The debate needs to be reframed. Right now we implicitly accept that corporations have a right to scrape and store our sensitive personal data with zero regard for security best practices, and we hem and haw over what the penalty should be, if any, when the leaks inevitably happen.

We need to look at it from the other direction. Leaking personal data harms consumers. If you do it and are shown to have been negligent, there should be a meaningful penalty. The legal/financial calculus needs to force corporations to take this stuff seriously, or not at all.

Corporation X does not have a right to exist in the future just because its business works in the present.


That's effectively what the European General Data Protection Regulation covers.

The good news for European citizens is that global organizations are accountable for safeguarding their data. The good news for global citizens is that European organizations are accountable for protecting their data too.

The gap? Non EU custodians of non EU citizens' personal data.

That's a big gap!

We need a Global Data Protection Accord.


Do you really think China and/or Russia would ever get in line with something like that, given that they're responsible for (or at least turning a blind eye to, in the cases where it's not government-sanctioned) almost all of this crap?


> given that they're responsible for almost all of this crap

Where'd you get that impression? Almost all public data breaches concern Western companies and Western hackers. And people have been warning against the lax security standards and worse coding practices for decades.

Your trying to shift the blame to the russkies is just another example of denialism.


I see millions of hack attempts coming from Russia and China every day even on sites that see a tenth of that traffic, and have to deflect them constantly. Who do you work for?


A potential downside to "entirely results-based" enforcement is that companies now have a negative incentive to monitor for and report data breaches.

Why install an intrusion detection system if it could potentially cost the company millions when it triggers? Better just to not know that data was exfiltrated. You don't even have to cover anything up if you never learn there was a problem to begin with.


Twelve months ago I'd have called that a solved problem - c.f. entities like the EPA, OSHA, or NHTSA that conduct results-based inspections and investigations ("how much crap is actually in the water coming out of your pipes?", "was this injury/crash/accident a result of a violation") and come down on violators like a ton of bricks - but the suicide party is doing an excellent job of sabotaging our civilization's guardrails and sawing away at its fall-arrest lines.


Agreed. However, the punishment shouldn't just be financial, otherwise it will just be risk-assessed.

How about a ban on data collection for a period, or losing the ability to use it?


I disagree. "You can't use the collected data anymore" is a million times harder to enforce than "pay us ten billion dollars."

> otherwise it will just be risk-assessed.

The key is to make the cost so large that the risk isn't worth it.

What's my geolocation history worth to Google? $20? $200? Make them pay $2,000 if they lose or misuse it.

Sure, maybe they'll buy "data loss insurance". But even then, there will be motivation to do things better. Eg, the insurer will refuse to cover claims where proper encryption wasn't used, will prorate policies based on data collected, etc.


From May 2018 in the EU the maximum fine for a data breach is €20m or 4% of global annual revenue. This is due to the GDPR.[0]

[0] https://en.wikipedia.org/wiki/General_Data_Protection_Regula...


So after a certain point, the bigger the company, the lower the fine? 20 million is 0.01% of Apple's turnover.


It's whichever is greater, so Apple would probably pay about 20 million * 400 (since 20 million is 0.01% of its turnover and the cap is 4%) => 8 billion dollars.

Which I think would hit Apple rather hard.
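
A quick sketch of the GDPR cap rule being discussed (the revenue figure below is just an illustrative round number, not Apple's actual turnover):

def gdpr_max_fine(global_annual_revenue_eur):
    # Article 83(5) GDPR: up to EUR 20 million or 4% of worldwide annual
    # turnover, whichever is higher.
    return max(20_000_000, 0.04 * global_annual_revenue_eur)

print(gdpr_max_fine(200_000_000_000))  # 8,000,000,000 -- 8 billion euros on a 200bn turnover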


4% or 20 million, whichever is greater.


>>>I disagree. "You can't use the collected data anymore" is a million times harder to enforce than "pay us ten billion dollars."

Not especially hard; you send in auditors to delete the collected data from all servers. Oh, that kills the company? Too bad.


It wouldn't fly. And the data may have been "laundered" intentionally or not - used to send emails, get new customers and social media contacts, etc. It may be hard to even trace anymore.


Define 'misuse'.

If they sell ad space based on your geolocation, is that a misuse?

If they sell your geolocation, ip address, and search history together, is that a misuse?


Doesn't that still end up being financial? If someone is banned from collecting data and is later found to have been doing so anyway, the most likely recourse is fines, right? Or are you suggesting this would imply criminal consequences (i.e. jail)?


How about financial plus the CEO, and any management lower provably found negligent, can't work in a role where they have responsibility for data handling (which includes not being superior to anyone who is data handling).


What's wrong with that? You get the socially optimal amount of data collection.


> When a CTO says "let's collect geolocations", the CEO should should have legal and business reasons to say "no way, it's not worth the financial risk of losing them; it could destroy our company."

Sounds like the EU Data Protection law, which says you need a legitimate reason to store & process personal data.


Let's say that Party A collected private data as in your step 1. Then Party B stole that data as in your step 3.

Do you punish at step 3?

Party A's step 3 is Party B's step 1.

I don't much see the difference between them.

What if Party A willfully gave the data to Party B? No punishment? Equifax is a good example of Party B in this case. Can't we punish all the Parties A?


You just summarized the principle of GDPR legislation across the EU, coming into effect May 2018.

Fines of up to 4% of global revenue of the offender.

Looking forward to massive lawsuits against Facebook, Google, Uber, etc.


"I do not think the government should mandate specific practices; it's too complicated, too fast-changing, and too hard to police."

I disagree.

Breaches are inevitable: "when", not "if". Today, for interop, all demographic data must be stored as plaintext, because we don't have national identifiers.

The only fix is to issue globally unique identifiers. Then we can encrypt demographic data at rest, greatly mitigating the damage of breaches.

That's why we need a federal level solution.


> The only fix is to issue globally unique identifiers. Then we can encrypt demographic data at rest, greatly mitigating the damage of breaches.

How does that follow? I work in a country with such identifiers, and I don't see the connection.

By the way, just because such identifiers exist, doesn't mean people are keen on giving them out to every company, and in fact, asking and storing it is frowned upon by our national data protection commission, unless you have a good reason to do so (just like any other personal data).


Just like passwords, you don't store the identifiers as-is, which is a rule that is easy to explain and enforce.
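
A minimal sketch of that idea (the key name and identifier below are made up; this assumes a keyed hash rather than a plain salted hash, so records can still be linked across systems without the raw identifier ever being stored):

import hmac
import hashlib

# Secret "pepper" held separately from the database (e.g. in a secrets
# manager); without it, stored tokens can't be recomputed from guessed
# identifiers.
PEPPER = b"load-this-from-a-secrets-store"

def pseudonymize(national_id):
    # Store this token instead of the raw identifier. The same input always
    # maps to the same token, so systems can link records, but the raw ID
    # never sits in the database.
    return hmac.new(PEPPER, national_id.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("123-45-6789"))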

Translucent Databases: Confusion, Misdirection, Randomness, Sharing, Authentication And Steganography To Defend Privacy

http://a.co/hTfRPP9

http://www.wayner.org/node/39


There is no need for a single centralized identifier. Each need for an identifier has its own requirements: things like ensuring a person has been served only once, ensuring every person has been served at least once, ensuring every participant meets some criteria, etc. There is no need for, for example, the Social Security Administration to be able to cross-reference with the driver's license registry. It makes no sense, and would be pretty dangerous, to create a single identifier that invites cross-relation of a person's entire existence into an easily profileable bundle.


You perfectly articulated the opposition to Real ID. I used to share your views.

Then I created regional health exchanges and worked in election integrity policy (voter registration databases).

Without GUIDs, there is no way to both link demographic records across systems AND encrypt those records at rest.

I’d be okay with a handful of id (guid) issuing authorities per use case. Voting, health care, pension, etc.

I very much wanted infrastructure for faceted identities (personas) which could not be correlated. But now I don’t think such a thing is possible. I believe, but cannot prove, that Big Data will always win the privacy vs deanonymization arms race.

PS- Corps like Facebook, LexisNexis, NSA, etc. have already uniquely identified everyone living and dead. My proposal daylights that fact, restoring power to the people, and insisting that our data is encrypted at rest.


In the Netherlands we've had a meldplicht (reporting obligation) for data leaks for a while now. We also have laws about how user data should be protected, and an agency actively checking whether businesses uphold said laws.

It's not complicated for government rulings; the law just states what information is considered private (and this doesn't change too often), and makes it obligatory to protect it. Which it already is, I'm sure.


Perhaps a more pragmatic solution would be something like truth-in-labeling laws on food cans, i.e. the company collecting your data must explicitly say what it is collecting, in a standardized format specified by the government.

Like if an app is going to transmit your email address book back to headquarters, it must specifically disclose it is doing so, otherwise the app maker is subject to enforcement action.
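
One possible shape for such a standardized disclosure, sketched as a machine-readable record (the field names are invented for illustration and not part of any existing standard):

# Hypothetical "data label" an app would have to publish, analogous to the
# nutrition label on a food can.
data_collection_disclosure = {
    "app": "ExampleApp",
    "collects": [
        {"field": "email_address", "purpose": "account login", "shared_with": []},
        {"field": "contact_list", "purpose": "friend suggestions",
         "shared_with": ["analytics-vendor.example"]},
        {"field": "geolocation", "purpose": "local recommendations", "shared_with": []},
    ],
    "retention_days": 365,
}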


That won't solve the problem, because there are plenty of good reasons for people to share information with companies that they don't want to share with everyone. In the case of the Equifax breach, people shared personal information with their lenders; do you think they're going to stop doing that if they're informed?

See also my reply to @pishpash below.


Mandating specific practices with regard to technologies is probably not possible, but we can certainly come up with a general framework for deciding the degree of recklessness. For example, the penalty for allowing data to be stolen via an exploit for which a security fix was issued months prior should be more severe than via a zero-day.


The government shouldn't mandate any specific practice, but it could create a regulation: if you lose people's data that you use for identification, that data can no longer be used for identification. If it is used, you are liable for any damages caused by a false identification.


Well put and a great analogy. I hope Troy uses it. Politicians pay attention to soundbites like that because they realize how those soundbites could be used against them if more people hear about them, too, and then find out their representatives did nothing to stop the companies from "harming the data environment."


Staying with accountability: the vast majority of these attacks come from China, Russia, and Eastern European countries. Why shouldn't the people committing the crimes be punished? Currently those governments and criminals almost always get away with it. There is no punishment for bad behavior. As most of us technical engineers on HN know, it is nearly impossible to completely prevent a breach or attack if a savvy group wants to put in the effort and time.

I'm not saying that Equifax and equivalent companies were not grossly inept and incompetent, but eventually we have to start putting pressure on the criminals and reduce the incentives. Cyber crime currently does pay. Extremely well.


> Currently those governments and crimals almost always get away with it.

> but eventually we have to start putting pressure on the criminals

Hmm. On the one hand you are certainly right - but on the other hand I'm sick of the consequences of decades of US interventions in foreign countries. To make matters worse, China, Russia and Eastern Europe are, basically, one bloc. If the US were to intervene in ANY of those countries, even if only with economic sanctions, I can easily see either a hot war or more stealthy "counter interventions" like the Russian support for Trump on the horizon.

Pressure and (military?) intervention only really works against tiny/poor-ish countries in Africa and South America. The rest can do whatever they want and are accountable to no one.


There are already enough laws covering the unauthorised use of computer systems.

The US Congress (and for that matter European Parliament) are limited to acting upon companies and people within their borders.

Who should you go after? The people who keep stealing drugs from the pharmacy, or the pharmacist who keeps the keys to the drug locker on a chain beside the door?


While I agree that accountability matters, I believe you are stripping blame from U.S. actors when there is blame on the U.S. front as well. To think our country doesn't do these exact same things to other nations is blatantly ignorant.

It has been proven that U.S. agencies hold on to information about flaws so that they may use them against foreign or domestic actors rather than informing the entities responsible for these vulnerabilities or the greater public.

This is truly a global issue and it is a mistake to focus blame on any single nation state. You must not let the U.S. propaganda machine cloud your judgement.


Yes and hiding a leak should also be more severely punished.


“Seems like the only winning move is not to play.” - WOPR


Not just big bucks. Jail time for those responsible for oversight of data security sounds very appropriate. Call it jail or something else, but it needs to impact individuals and not just company money, and it should have dire consequences for their future job prospects in that field.


I've seen it mentioned below, but I feel this is deserving of its own comment.

If security researchers could report vulnerabilities with impunity it would drastically reduce the incentive to sell vulnerabilities to black market buyers.

This is a full-blown national security issue. If security researchers could work with our three letter agencies in defending our infrastructure it would go a long way towards securing the US against increasingly advanced opponents.

As it stands right now, if I found a critical vulnerability in a government system I think I would just drop it and tell nobody. I think it's more likely that I would be punished rather than rewarded, which destroys any incentive I would have to help.


Yeah, if only the three-letter agencies actually cared about cybersecurity. The agencies don't want all devices and servers to have "iPhone-level" security or better -- that much is clear. They're still fighting Apple over it and they try to get them to weaken their security "so law enforcement can get in" whenever it's convenient to them. Cybersecurity compromises be damned.

This is how they think. Their priorities are pretty far removed from "cybersecurity".

They also blew their chance with cybersecurity, when they passed a supposed Cybersecurity Act of 2016, that was meant to stop all of these data breaches from happening. But as all the critics said back then, the law was nothing more than additional surveillance powers given to the NSA and DHS/FBI. They never actually intended to use it to stop any of the data breaches - they just wanted to collect more data on people and add to the big stack of hay, in which they want to find their needle.


The iPhone isn't as secure as Apple wants people to believe.


Do you have any examples of how the iPhone is insecure? Genuinely curious.


In the age of the IME, and with the hundreds of billions of dollars the US spends on covert operations, I'll let you guess what the path of least resistance is.


I think “examples” refers to hard proof. Anyone could make an unsubstantiated claim about anyone or anything else if they wanted to, so it is not very productive to include.


"Data maximisation is the norm" is a big point. Emphasize that. If they are looking for a punitive legislative approach to curb breaches, remind them it's not just the "how", but the "what". Between HIPAA and SOX and the like, they are familiar with the burdens that come with different data types. Hopefully, between differentiating types of data and discouraging storage of data for other (law enforcement?) use, we can just have less data stored in general. At some point, companies should see data as a liability.


> At some point, companies should see data as a liability.

This is key. Currently, there appears to be no business downside to collecting PII, so companies collect as much as they possibly can. If, through regulation or some other means, data became a liability (or at least came with some downside risk) then perhaps companies would become more thoughtful about what they tried to collect and store.


Seems to differ between companies. We have seen PII as a liability since early on, but more so as we move closer to GDPR.

GDPR forces us to justify and be transparent about each data point we collect.

This has increased the urgency of cleaning up any non required data points, adding expiration to data and moving data from subcontractors in house.

Guess that a lot of companies are in our position.


GDPR seems to be particular to europe. It might be better to say it differs between countries, rather than companies.


GDPR affects any company that has European customers. So a lot of non-EU companies will be affected.

Guess it will be harder to enforce when a company does not have an EU sub though.


I think this is key. All the conversations we're hearing are about data breaches. In my opinion, that's merely a predictable and unavoidable symptom of data hoarding.

We need to move toward a recognition that collecting PII is inherently dangerous. Holding on to PII forever is inherently dangerous. And that's before the not insubstantial privacy risks these databases incur. I worry that any policy that solely concentrates on breaches is just going to lead companies to gamble that they won't be in the relatively small minority of companies that actually get hacked.

What can we do to incentivize a CTO not to have a customer database, or failing that to keep her company's database as minimalist as possible, even if she is 100% certain that database will never be stolen?


I agree, this is a nice convergence point between non-technical, comprehensibly regulatable, and high impact. It's for sure the message I'd like to leave lingering in their minds.


Say there is no such thing as identity theft. Identity is not a material thing that can be stolen.

The issue is bank fraud. When someone uses stolen personally identifiable information to make fraudulent purchases/accounts/whatever, it's the bank's liability for allowing the wrong person to perform those actions.


Yes, this is the key issue, but it is broadly ignored.

In the old days you needed to walk into a bank with photo-ID and talk to a teller to open an account, let alone get credit.

The big banks have saved money by closing many branches.

These days, you can get a credit card online easily, no face-to-face interaction involved. Hackers pretending to be you can do it as easily.


Yes. Surprised to see this is only halfway up the page.

Data breaches wouldn't be the issue they are if banks were liable when they were fooled. This is definitely Don Quixote windmill territory though, the banks have a well-funded lobby.


There's also the libel issue. The bank reports (falsely) in writing that you have a debt: that's libel. The credit reporting agency reports (falsely) that you have the debt. Still libel.



The article says: “Incidentally, I've decided not to mention specific data breaches but rather to focus on the patterns...”. When trying to influence people who won’t be subject matter experts, I tend to think the opposite approach would be more effective. People readily latch on to tangible examples of what goes wrong, and while legislation is too often a knee-jerk reaction to single events for that reason, here is a chance to use that phenomenon to your advantage. If my measure of success was positive influence rather than pure education, that’s how I’d go.


I would seriously consider taking the list of Congresspeople who are going to be running this hearing, extracting out the information you already have on them from your dumps, and preparing them a personal sheet of paper for each one showing them what you have about them online.

If it's in public, you can't read it to them, but you might be able to hand it to them personally.

This will be far more powerful than almost anything else you could do. It's not a problem "Americans" have... it's a problem they have.


I think sharing congresspeople's private login information from leaked sites as public record would light a useful fire under their asses.

If it's already freely available on the dark web, the bad guys already have this data. Might as well level the playing field and make the public aware of the extent of the issue.

At least prepare the number of .gov and .mil accounts that have been compromised; that's a huge blackmail risk for people who have access to top secret info.


Produce that list with a note next to each piece of info saying which company lost it...


And when. And the lag between when the breach occurred and when the public was notified of the breach, along with all relevant information the company provided to help customers determine the extent of their information that is now public.


You'd want to be careful doing that. When people are upset, they sometimes shoot the messenger. It seems to happen a lot in security.


Francesca Polletta's book "It Was Like a Fever" brings forward some really interesting points of discussion about storytelling as a political device.

Some relevant quotes from the text:

"If the fact that everyone can tell his own story makes it easier for people to challenge the assurances of the powerful that certain policies are to everyone’s advantage, the fact that narrative is seen as less authoritative than other discursive forms may weaken that challenge."

"We are ambivalent about storytelling, not dismissive of it. There is a strong vein of skepticism toward professional expertise in American culture. Against that skepticism, the authenticity of personal storytelling makes the form trustworthy—sometimes more trustworthy than the complex facts and figures offered by certified experts."

"...those wanting to effect social change must debunk beliefs that have the status of common sense, familiar stories are more obstacle than resource."

"when feminists brought sex discrimination suits against employers, they struggled to prove that women were underrepresented in higher-status, traditionally male jobs because they were discriminated against and not because they had no interest in those jobs. Arguing that women wanted the jobs put them at odds with a familiar story in which girls grow up to want women’s work, and men grow up to want men’s work. To judges, feminists seemed to be saying that women were just like men, something that flew in the face of good sense."


I have one area / worry that I would like Troy to introduce as a question and to give his advice / opinion as a subject matter expert:

1) The role and responsibilities of security researchers in discovering and investigating data breaches (maybe also discuss the spectrum from white hat to black hat, noting that the tools they all use are effectively the same).

2) The role and responsibilities of journalists in reporting the said breaches.

3) The impact of laws and litigation leveraged by governments and corporations to protect themselves in the event of data breaches.

4) Why a fair and happy balance between the interests of 1), 2) and 3) is a necessary part of mitigating and reducing the possibility of data breaches, along with unhappy examples and their consequences.

I'll admit that the four questions above are a kind of scope creep on the intended discussion, but my concern is pretty real. The laws and norms that we have today, while imperfect, are the reason why Troy is being asked to appear as a subject matter expert. Whatever the laws and norms of the future are, they will need to be sufficiently flexible to allow future subject matter experts to learn and operate, so that they too can make meaningful contributions to the issues of their time.

Edit: formatting


Not to forget the DMCA anti-circumvention provisions which make it illegal to report on security breaches you discover on your own devices.


> I'll admit that four questions above are a kind of scope creep to the intended discussion

I think they are absolutely relevant, specifically around the context of disclosure. Frame it as "people shouldn't be afraid to report crimes or to be vigilant in watching for them" and you can get support. Yes, Troy should make it clear how he and his ilk are chilled by these same offending companies.


Trying to make data breaches much harder has some unsettling implications. "Make CEOs responsible" or "make companies in general responsible". Ok. Sucks if you're a startup, but ok. The big ones are going to turn around and put a clause into every engineer's contract to the effect of "you are now directly responsible for the code you write; if the company is sued because of a bug you caused, we're suing you". Then we'll either accept it, or go against our profession's history and form a union to fight it, which will inevitably lead to the union representatives accepting a compromised version on our behalf that's the same at all companies. On top of that, there will be various additional requirements that increase the barriers to entry for the profession and slow down getting work done. Either way has its downsides.

An alternative approach: assume everything is compromised all the time. Identify the material harms of such compromise, and work to minimize those harms. The SSN-is-proof-of-identity system is obviously a big source of harm, not so much applied to whoever was deceived, but inflicted on the actual person trying to put everything right again. There are many many changes to this one system that would help minimize the damage, ranging from complete overhauls to just minor things like being able to change one's SSN with more ease, or putting more pressure on companies to verify people instead of trusting the SSN. This is probably one of the few cases where doing just about anything to improve the system even a little is much better than doing nothing.

I doubt anything will come of it though. Congress routinely talks to domain experts who warn about the problems in the future should they not do what the expert suggests. Nothing gets done, and the problems manifest, sometimes worse than predicted.


>"you are now directly responsible for the code you write, if the company is sued because of a bug you caused, we're suing you"

You're implying HN doesn't want this. If companies pin liability on the engineers, engineers will respond by doing due diligence and ensuring nothing happens that is their fault. This will slow down development near security-critical features, which companies would hate but which engineers are begging for an excuse to do.

Companies would try to hire poor engineers who don't know any better but to take a cavalier approach to security while accepting monetary liability. This is where software engineers would need to organize, not as a union but as a profession. They wouldn't seek to regulate wages, just require that every practicing SE be a member of the profession. This would ensure that everyone who accepted responsibility for breaches was capable and willing to write secure code.

Of course, this is all extremely unlikely. If a breach costs a company several million dollars, then the responsible engineer couldn't pay the fine if they tried. The company would still be on the hook for the rest of the fine, which they would pay out of pocket. They would have accomplished screwing over an employee, and scaring future employees away, to no benefit. If companies were to pin blame on engineers, it would be because they were serious about security, and wanted their employees to be as serious as them, not because they expect employees to be able to cover the fines.

Not every data breach has a material harm. What if private pictures were released and made public (à la the "fappening")? Nobody gains money by having these pictures, and the original owner doesn't lose money either. Despite the lack of material loss, the owners of the data are still interested in preventing it from becoming public.


> Trying to make data breaches much harder has some unsettling implications. "Make CEOs responsible" or "make companies in general responsible". Ok. Sucks if you're a startup, but ok. The big ones are going to turn around and put a clause into every engineer's contract to the effect of "you are now directly responsible for the code you write, if the company is sued because of a bug you caused, we're suing you".

Contracts aren't laws and, in particular, don't have unlimited ability to create new forms of liability. Even if this would be effective, it can simply be negated by a safe harbor clause protecting individual employees without a specified degree of authority from liability for breaches, outside of actively concealing vulnerabilities from the employer or revealing them to exploiters.


> if the company is sued because of a bug you caused, we're suing you

It's totally implausible that companies are going to start regularly suing their engineers personally for writing bugs. They'd recover very little money and make it impossible to hire. Programmers already write bugs that create liability in businesses all over the world every day. Worry about something else.


Probably so, but an indemnification/hold-harmless agreement becomes a much more important part of the contract negotiation, and individuals and businesses have to pay a lot closer attention, when so far everyone's been happy enough with the "don't sue me, this software has no warranty" clause we slap on just about all our software, open or proprietary. If that is ever threatened, do you think a professional liability insurance industry won't pop up and become effectively mandatory even if you're just building a CRUD app? Now each company's gotta pay up. And you would have to pay for your own individual coverage too if you don't plan to work for the same big company your whole career like classical engineers often do, or don't want the risk of a company laying you off and then hiring you back as an independent contractor, as is happening to some electrical engineers.

> Worry about something else.

Oh, this is a very long way down my list of worries. But programmers should look very carefully at all the drawbacks of professional engineering before trying to shape software engineering into it, which is what regulation on something as cross-cutting as data is doing. The fallout of GDPR will be interesting to watch. With the potential fines being in the millions, have any insurance companies appeared yet to offer insurance for US companies wanting to do business in the EU but not wanting to spend the time and money making sure they're fully compliant with each rule?


I'm not sure I think it would be a bad thing if proprietary software licenses stopped disclaiming all liability, but there's no evidence that creating liability for data breaches (for the entity that collected and stored the data, not whoever wrote the software they used to do it!) is going to change that. Plenty of companies already use software in ways that creates liability for them, and yet as you say shrink wrap software rarely comes with any warranty.

Individual liability seems much less likely to me, in the absence of statutes explicitly forcing this liability onto individuals. If you are a consultant, and the market equilibrium actually shifts so that customers are demanding security warranties, your insurance costs would go up, but unless you are less competent than average your equilibrium income should go up to match. The legal incidence, and plausibly the economic incidence, is all on companies that actually handle personal data.


When Troy Hunt is testifying before congress you know that at least some part of the government is functioning :)

I don't know where it fits in but I wish congress understood how silly it is that knowing our birthdays and SSNs is still treated as proof of our identity in 2017


> When Troy Hunt is testifying before congress you know that at least some part of the government is functioning

That's if it's not just a ruse to lure him to the US where they can then arrest him for being in possession of too much illegally-obtained personal info.


That would be an international scandal. Heck, it's not like we're unfriendly with the Australians anyway.


To take a different perspective from the others here:

I think it is fundamentally problematic to try to regulate the sending / receiving of information. That's why the movie and music industries have had so much trouble, that's why classified info leaks are more common, and that's why cryptocurrencies haven't been squashed (yet). Trying to regulate this will have far-reaching bad side effects for all parties.

I also think the vast, silent majority is not actually as concerned about this stuff as perceived. They'll say so in polls, but if they're allowed to choose between the status quo or paying (even a little) to use Facebook, Google, etc. without data collection, they'll choose the free option almost every time. People don't like that their data is collected, but they like not paying for things more.

That said: I think the fundamental issue that needs to be addressed is that the data needs to be valueless.

The Equifax hack is a problem because the data leaked has value. Some of it is public record stuff (effectively valueless), but SSNs follow you forever and cannot be changed. If they could be changed (or destroyed and replaced), then the SSN would hold little value.

Mailing addresses are similar. In 2017, we should have a way of giving anonymous (perhaps even re-assignable) addresses to the organizations we interact with so we're not explicitly telling each of them where we live. If I could generate a new "pseudo mailing address" for each of these companies, I could then destroy it if it is ever compromised--and as a bonus, I'd be able to see how it's being shared (since I'd see other companies using the same one). At that point, having someone's "mailing address" becomes nearly valueless.

Some of this responsibility falls on the consumer, too. Obviously companies are still going to stockpile data to cross-reference and provide value, but that's reality. We've got ad / tracking blockers, anonymizers, and VPNs for the people who truly care, which help to make their own collected data valueless. But the way I see it is that the government needs to ensure they're not creating a system where citizens' data must have value, which is what has been done with immutable SSNs and mailing addresses.


I would add two things.

1. Encryption is the only way to secure data, and without it, data will always get stolen. And encryption with a backdoor is not encryption.

2. Data needs to be owned by the people. I should be able to go to Target or American Express and ask them to delete everything they know about me. Them not doing so should result in, say, a $10,000 fine that goes entirely to me, along with an admission of guilt.

A 200 million dollar fine may still have banks and telecom gaming the system, but if they had to admit guilt, the math becomes easy. Their egos won't permit it!

And with that, finally our data will be kept safe.


I’d suggest:

- Regulate that pseudonymous use of transactional services must be permitted (with exceptions for, e.g. banks, universities). You do not need my legal name or telephone number to ship me a package, or accept payment, for example.

- Get explicit, arduous permission from users whenever PII is obtained, if you intend to store it in any way, stating exactly what it will be used for.

For existing data, over some long period of time (say, a couple of years) get users to explicitly agree to each use, or delete it.

This could work pretty much like default-off OAuth2.0 scopes. For example, you could give Facebook the ability to know your name for display on the site, but not for exchange with third parties for additional data on you, or for the purposes of advertising.

While the ‘default off’ requirement would reduce the amount of PII in circulation, it would also reduce the _false choice_ between privacy and security that Facebook, Google, Amazon, and others present. Want my telephone number for 2FA? Possibly, but that doesn’t mean that you can use it to buy/exchange a file on me from Equifax/Acxiom/MasterCard/whatever.


> You do not need my legal name or telephone number to ship me a package, or accept payment, for example.

That's not always true in my country. Sometimes if a package is not labelled with a name connected to the delivery address, the post office will simply return it once it reaches their sorting facility.

Which has annoyed me on two occasions. And I can't see a legitimate reason for it. And I don't know if it's connected to actual law, or just a practice.


One thing to make clear about the Equifax breach is that it wasn't caused by a single security update that was never applied, as has been put forward by Equifax. It was caused by an insecure information architecture that let web front ends have direct database access. Good security requires defense in depth, not an unbreachable perimeter. Equifax's security was a failure long before it was hacked.

Obviously the technical details may not be the appropriate level to discuss with Congress, but an analogy that they can understand might be helpful. What Equifax did was the equivalent of keeping the country's nuclear codes at an outpost in Afghanistan and then blaming a sentry for falling asleep when an enemy slipped in and stole them.


I would suggest to be very visual.

Bring a few things:

1. A box with a lock and key
2. A cardboard box + tape
3. An adult education device + a clear plastic bag

Put the adult education device (Texas terms) inside the clear plastic bag. Put that bag in the cardboard box and seal it with tape. Put the box in the lock box. (Kind of like a Russian doll situation.)

(If you're allowed to do this at all.. do it after security)

Once you're up and presenting mention:

This is the best analogy that I have: This lock box illustrates an honest attempt at protecting data. What you have in this box is your data and is potentially embarrassing. It's yours but should the public know about it?

Tell them how you and only very specialized individuals know how to open the box. Open it.

Then explain that the next layer illustrates a company that doesn't know how, or doesn't want, to invest much in protecting the data: anyone can tear open that box.

Once you have the box torn open and the bag + item is out on display, ask Congress: this is how well Equifax protected your data. Are you going to make them pay for the FCC fine that C-SPAN is going to be hit with?


tomorrow on hackernews:

"Does anyone know a good lawyer?"


I think there are really only two possible ways to fix the problems with data breaches.

1. Impose a fine for every individual account leaked. Even if it were only $10, the recent Equifax loss of 143 million records is a $1.4 billion fine. It should probably be higher if gross negligence is involved. This would create a new industry (data breach insurance).

or

2. Make it so simply having all the leaked information on an individual isn't enough to cause any harm. I'm specifically thinking of some PKI-based scheme where I verify my identity by signing something with my private key (a rough sketch of the idea is at the end of this comment).

There may be other variations, but the choice seems to be between forcing data brokers to be responsible and making it so that their irresponsibility is harmless.
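As a rough illustration of what option 2 could look like in practice (a minimal Python sketch using the third-party 'cryptography' package; the registration step and all names here are illustrative assumptions, not any existing scheme):

  # Option 2, sketched: the verifier issues a random challenge, and only the
  # holder of the registered private key can produce a valid signature over it.
  import os
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # The citizen generates a key pair once; only the public key gets registered.
  private_key = Ed25519PrivateKey.generate()
  public_key = private_key.public_key()

  # The lender/bureau issues a one-time challenge for this transaction.
  challenge = os.urandom(32)

  # The citizen signs the challenge with the private key they alone hold.
  signature = private_key.sign(challenge)

  # The verifier checks the signature against the registered public key.
  try:
      public_key.verify(signature, challenge)
      print("identity verified")
  except InvalidSignature:
      print("verification failed")

Having someone's leaked SSN/DOB/address is then useless for opening an account, because none of it substitutes for the private key.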


FWIW data breach insurance exists and is relatively inexpensive since breaches are rare and cheap to remediate. These changes would have a profound impact on that market.


How do you get a grannie to use public/private key cryptography? It would probably have to be a physical device.


Yeah, some kind of card. I think Estonia was issuing something like this.


The discussion is a valuable one but the only buttons congress has are "regulation" and "funding". Some regulations might not make much sense, although I think the accountability one is a good place to start.

But the bar for gathering my data is so low! Read a few tutorials on how to write software with Your Favorite New Web Framework and off you go on making a new site that will happily leak data once discovered. The core competencies of my power company and my local cinema are definitely not IT -- nor security. Can we expect good results here? (I hope the answer is 'yes' but I think it's unlikely).

So to take a different tack -- could funding help here? What if there were funding and/or accreditation of some libraries or frameworks? These would use best practices for minimizing data loss (e.g. salting), regular auditing of actual deployments of this technology, fuzzing of the underlying software, etc. A marketing/branding effort regarding the accreditation could also be helped w/funding. It needn't be a US-local solution, nor even a US-local agency. Though that would certainly minimize the bureaucracy to "only" the level of US Congress.

Instead of "Stop, Drop, and Roll" or "Only You Can Prevent Forest Fires", it could be "Never Roll Your Own User Account Database" (and OMG please help us if you rolled your own crypto).
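To make the salting point concrete, this is the kind of thing an accredited library would bake in so nobody has to roll their own (a minimal sketch using only the Python standard library; the scrypt parameters are illustrative assumptions, not a vetted recommendation):

  # Per-user random salt plus a memory-hard KDF; the plaintext password is
  # never stored, only (salt, digest).
  import hashlib
  import hmac
  import os

  def hash_password(password):
      salt = os.urandom(16)  # unique per user, stored alongside the digest
      digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
      return salt, digest

  def check_password(password, salt, digest):
      candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
      return hmac.compare_digest(candidate, digest)  # constant-time comparison

  salt, digest = hash_password("correct horse battery staple")
  print(check_password("correct horse battery staple", salt, digest))  # True
  print(check_password("password2017!", salt, digest))                 # False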


Tell them to look no further for a solution than Europe's GDPR. The GDPR gives individuals the right to the privacy and security of their personal data. Companies that collect this data do so at their own risk, and have an obligation to secure and not proliferate that data.

This legislation has huge, sharp teeth. It comes into full effect in May 2018, and every single multinational is running around in a "pants-on-fire" panic trying to figure out how to comply.

If Equifax faced a fine of up to 4% of their global annual turnover for this kind of non-compliance, would they act? Yes, of course.

---

> How is the fine calculated?

> Article 58 of the GDPR provides the supervisory authority with the power to impose administrative fines under Article 83 based on several factors, including:

> The nature, gravity and duration of the infringement (e.g., how many people were affected and how much damage was suffered by them)

> Whether the infringement was intentional or negligent

> Whether the controller or processor took any steps to mitigate the damage

> Technical and organizational measures that had been implemented by the controller or processor

> Prior infringements by the controller or processor

> The degree of cooperation with the regulator

> The types of personal data involved

> The way the regulator found out about the infringement

> The greater of €10 million or 2% of global annual turnover

> If it is determined that non-compliance was related to technical measures such as impact assessments, breach notifications and certifications, then the fine may be up to an amount that is the GREATER of €10 million or 2% of global annual turnover (revenue) from the prior year.

> The greater of €20 million or 4% of global annual turnover

> In the case of non-compliance with key provisions of the GDPR, regulators have the authority to levy a fine in an amount that is up to the GREATER of €20 million or 4% of global annual turnover in the prior year. Examples that fall under this category are non-adherence to the core principles of processing personal data, infringement of the rights of data subjects and the transfer of personal data to third countries or international organizations that do not ensure an adequate level of data protection.


Data Breach Transparency

  - At a minimum, there should be a penalty that grows with the time between when the breach was discovered and when it is disclosed publicly.
  - There should also be penalties for not being transparent about what exact data was leaked for which users.
Social Security Number

  - SSN is similar to a password- you want to keep it hidden, and if it leaked, you should change it. However, we can't change it. Perhaps it should be considered more as a password?
User Data Rights

  - People should know what personal data companies have on them. A good example of this is Equifax storing people's home addresses - this could be disclosed. On the other hand, it is probably fine to exclude other types of data, such as an advertiser storing your zip code - people probably don't care as much.
  - Should people have a right to have certain kinds of data (e.g. SSN) removed from websites?
Adaptation of the USA Nutrition Label

  - Is it a good idea to mandate that companies disclose the security they use? For example, at one time reddit had their passwords stored as plaintext and they got hacked. Disclosing basic security hygiene (e.g. password storage) somewhere standardized on the website would make it much less outrageous.
Technology Improvement

  - Certain technologies enable hackers more than others. SQL seems to enable a lot of hacking. Should we discourage it? (See the sketch at the end of this comment.)
  - Get rid of Intel ME technologies     - https://schd.ws/hosted_files/osseu17/84/Replace%20UEFI%20with%20Linux.pdf
  - Get rid of Intel hidden instructions - https://www.youtube.com/watch?v=KrksBdWcZgQ
  - Get rid of Simon and Speck           - https://www.reuters.com/article/us-cyber-standards-insight/distrustful-u-s-allies-force-spy-agency-to-back-down-in-encryption-fight-idUSKCN1BW0GV
  - What is "best for National Security" is actually worst for our own. It feels like people don't have a democratic say in the right balance either.
(edit: trying to figure this formatting out)
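The sketch referenced above: the usual way SQL ends up "enabling" hacking is queries built by string concatenation (injection) rather than parameterized queries. A minimal sqlite3 illustration, with made-up table names and data:

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE users (email TEXT, ssn TEXT)")
  conn.execute("INSERT INTO users VALUES ('a@example.com', '123-45-6789')")

  user_input = "' OR '1'='1"  # attacker-controlled value

  # Vulnerable: the input is spliced into the SQL statement itself,
  # so the injected OR clause matches every row.
  rows = conn.execute(
      "SELECT * FROM users WHERE email = '" + user_input + "'").fetchall()
  print(len(rows))  # 1 -- the whole table leaks

  # Parameterized: the input is treated purely as data, and nothing matches.
  rows = conn.execute(
      "SELECT * FROM users WHERE email = ?", (user_input,)).fetchall()
  print(len(rows))  # 0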


> SSN is similar to a password- you want to keep it hidden, and if it leaked, you should change it. However, we can't change it. Perhaps it should be considered more as a password?

I assume you meant

> SSN is similar to a password- you want to keep it hidden, and if it leaked, you should change it. However, we can't change it. Perhaps it should be considered more as a username?


Yeah, that is a good point. Either way we need a universal password.


Actually, yes. On second thought, that is a much better idea.


The important thing to remember is that everything of substance has already been written up and will be read by congress members and their staff. Keeping that in mind, the more important objective is to make the most of the five minutes of microphone time.

I think studying how other people have effectively used their five minutes is instructive. I'd start with the testimony of Fred Rogers where he gave a Senate statement on PBS funding: http://www.americanrhetoric.com/speeches/fredrogerssenatetes...


Having technical knowledge and entering a room of less technical policy-makers, it can be particularly important to leverage existing industry messaging rather than winging it.

I would focus on the CIA triad + Accountability + Assurance. It's helpful to use standard terminology that is understood by existing privacy practitioners.

Personal information should be Confidential from unwanted disclosure.

Personal information should have Integrity with the creation, modification, and deletion of personal information only as authorized and intended.

Personal information should be readily Accessible by authorized parties.

Personal information should have Accountability, with traceable ownership to a party responsible for Confidentiality, Integrity and Access.

Personal information should have Assurance, with appropriate audit of Confidentiality, Integrity, Access and Accountability; including the right to inspect.

Just as the Amendments to the Constitution form a latticework of protection for each other -- e.g. that freedom of press helps ensure other rights are not eroded -- the elements of CIA+A+A do the same.

Recommendations can then be framed for direct implementation:

* Confidentiality: Requirements for timely breach notice

* Access: The right of the consumer to be aware of and to have access to data about them

* Integrity: The right of the consumer to repudiate data about them and demand removal

* Accountability: Direct ownership and legal teeth (fines, jail, and barring of eligibility from data or business management roles, etc.) to compel the presence and adherence of an appropriate privacy management program

* Assurance: Standardized audit reporting, guaranteed consumer right to inspect, etc.

Folks noting "accountability" often mean the entire CIA triad + A + A, not the technical term "Accountability". This is likely the gap to bridge -- turning a sentiment that businesses are not operating appropriate privacy management programs into an actionable path to compel the existence, adherence, reporting and audit of such programs.


That CEOs should be accountable for lazy security.

That everyone does security theater, and no one does real security. Mostly for convenience and for marketing: data is not encrypted at rest, data is not segmented into different data stores (a separate store for passwords, etc.) but stored in the same MySQL database, employees can dump millions of records instead of one record at a time, people let unencrypted Excel sheets fly everywhere, etc.


Why? What does a CEO know about InfoSec? Why make CEOs punching bags for fields they aren't experts in?


Then they should manage the company and make sure they have someone in charge of overseeing InfoSec who does.


What does a CEO know about accounting? Should they be held liable for irresponsible accounting practices performed by their subordinates if they are not an expert in accounting?


There was a meme a while back that the USA should act upon Data Piracy the same way it acted upon High Seas Piracy at the turn of the 20th Century - acting unilaterally to clean up the high seas, making trade safer, cheaper and faster for all nations.

A similar approach might be the best tack to take here

* Have a public register of breaches, with auditor sign off of the details of the event so we can all learn

* publicly registering the breach gives some degree of protection from liability / punishment, but there is an expectation of competence and good practice (very much like accounting)

* Work with the EU over Data Protection definitions and approaches - if both the US and EU are singing off the same hymn sheet, it will become the de facto global standard

* probably the biggest area to push in that is that personally identifiable information should belong to the person identified - and be treated like an asset held in trust by those who hold it...

* beef up whistleblower laws and roles of researchers

* have the NSA buy back some of the world's trust by identifying and hunting down cyber criminals the same way actual violent terrorists are


Point out that the criminalization of "hacking" creates strong incentives towards negligence and both weakens and distorts the pool of cybersecurity talent, while making people afraid to report. Responsibility for data needs to be with its owners, not people who see it through open windows.


The biggest problem is that we've become conditioned to using our name/address/SSN/birthdate/etc for everything, so we give this info, without batting an eye, to services who have no legitimate need for it. We're basically using the same username/password for all of our most sensitive accounts.

For example, an orthodontist in my area asks for SSN, employer, marital status, spouse's SSN, spouse's employer, and states "you must complete the entire form". I only enter name/phone/insurance info, but I bet most people will just do what the form asks.

So part of the responsibility is on the user to not willingly give away irrelevant data. Part of the responsibility is on services to be good stewards of data.

What should Congress do? How about unifying PII and IP? Give Equifax the Napster treatment.


My first intention was to set a high penalty too, but after a few minutes I thought there might be an alternative.

How about prohibiting breachers (companies which have had data breaches in the past) from collecting data which is not essential for their business, for a limited time span?

Something like: Hey, you have not secured your customers' data? So why do you want to store that data anyway? You want to promote only the relevant products to your customers? Then we will give you some time to get your storage security right, and therefore you are not allowed to collect any non-essential data for the next two years.

Yes, the essential data is probably the more important data, but at least it would force companies with low security to store less data for a while.

Just an idea, what do you think?


Two years is too short. I'd try maybe five.


I’d advocate mandatory disclosure at all levels: if you know of a breach, you should be compelled to disclose it. I’d close the legal loopholes that use attorney-client privilege to hide breaches. I’d impose extremely stiff fines for coverups, potentially criminal ones. The same for willful ignorance of a breach.

I suspect that the unintended consequence would be that EULAs and various business relationships would be adjusted to attempt to limit liability. Maybe let liability be a court matter; just knowing about the breaches would be a huge step for consumers, though.


So I'm going through this thread and what I'm reading over and over is "huge penalties", "personal responsibility for people involved" and what I'm wondering to myself is if you've all gone mad.

Okay look, I get it, it is absolutely despicable what these companies are doing. But think about ordinary website operators for a minute. A lot of the proposals in this thread would basically criminalize running a basic web forum unless you're some kind of security ninja. Please, think before you write.


Ability to delete all data related to me is a key. It should be possible for me to go to Google or Facebook and ask them to delete all data they have collected on me over the years (emails, photos, text messages, list of friends, phone number, job history, geo location, search history etc). And by delete I mean actually get rid of the data so it is not recoverable. NOT soft delete.

If a company fails to do so (or doesn't provide some relatively simple way to do it like an online form), there should be harsh financial penalties.


My personal opinion is that in order to make sure that breaches like the Equifax one are preventable and handled correctly, these measures should be imposed:

1) An entity storing personal data must be audited by a government-approved 3rd party, and their rating must be made public.
2) Any breach (or suspicion of a breach) must be reported to this 3rd party within 24 hours.

The issues I saw with the Equifax breach were that there was NOTHING that told us how bad their security was, and they were allowed to not report the breach for a month.


The immutable data they hold on us has always scared me. I have always imagined a middle man or "middle service" would be helpful. This service would be like a database; for my example I'll just use email addresses, but imagine also passwords, birthdays, maiden names ("encrypted of course", but that's beyond the detail I'll go into). This email service would let me tell the mailman my email is somethingUnique:myRealEmail@domain

Well, if I start getting spam at somethingUnique:myRealEmail@domain then I can call out the party I gave it to ("I'm looking at you, Facebook"). Some spammers now have that address, but I can shut it down or change it; I also could have a list of everyone who had that address, and if they still use it I will still get messages. There's accountability, and even though something static was given away, it was only a "pointer" to a static item, so I can change the pointer. I should know who knows what, how they think they know it, and have the option to take it away, etc. But that's not really possible when companies like Equifax collect data on us; it never really seems like they ask. Then again, I'm sure we signed our lives away while buying a car, signing papers in a daze while thinking about picking up chicks in it.
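A rough sketch of the "pointer" idea in Python (the domain, secret, and naming scheme are placeholders I'm assuming for illustration):

  # Derive a distinct alias per company from a private secret, so any single
  # alias can be revoked or traced without ever exposing the real address.
  import hashlib
  import hmac

  SECRET = b"known-only-to-the-middle-service"  # placeholder
  ALIAS_DOMAIN = "alias.example"                # placeholder

  def alias_for(company):
      tag = hmac.new(SECRET, company.lower().encode(), hashlib.sha256).hexdigest()[:10]
      return "{}-{}@{}".format(company.lower(), tag, ALIAS_DOMAIN)

  print(alias_for("facebook"))  # something like facebook-3f1c...@alias.example
  # If spam ever arrives at that alias, you know exactly who leaked it, and
  # you can disable just that one mapping without touching anything else.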


I've been doing this for a few years using my own domain and "recipient_delimiter = -" in postfix, and actually (annoyingly) have yet to see an email address leaked. Considering all the spam I got on my Gmail address, I was thinking it would happen fast, but I guess 1) I'm probably being more careful and 2) these things take time.


I experimented with different email addresses for different purposes and came to the same conclusion.

Then again, perhaps spam isn't that lucrative anymore. You might earn much more by selling profiles these days. This only dawned on me after Disqus was hacked and 'lost' personal information such as e-mail addresses (my personal one among them).

What criminals could do now is simply collect breaches, put them into a database and make "e-mail" the join criteria for a query. Based on the output they can generate comprehensive profiles of people that weren't available on the market before.

Data from 'shop a' may reveal my DOB and my real name, data from Disqus allows them to extract what values I hold, and so on...

In a way they bank on not sending you spam, because if they did you might change your e-mail and they would be thrown off the track ;)


I got one leaked! The provider of my VPS sold (or leaked?) their entire customer base, by the look of the CC field. Lots of +galaxyhostplus


Treat it like HIPAA. You will be banned from doing business if you fail to secure data properly.


I'm happy to see that the context for this testimony is related to ID verification. The bank example Troy gives is a simple & understandable problem that we all can wrap our heads around (Congress too!), which provides a concrete scenario to explore.

The way I always approach this problem is one of ownership. Congress, implicitly, assumes that individuals own their data. That is why it used to be possible to ask individuals to prove their identity by asking them to verify data they own, and presumably, haven't shared too broadly.

Identity theft is really a robbery then, because data which belongs to you is stolen. The government's opening position then needs to be that data belongs to consumers.

What isn't clear is the physical metaphor for what happens when you give your data to a third party, explicitly or implicitly. Am I 'leasing' my data to Facebook? Am I granting them shared ownership? Or is it more like a Bank... I'm allowing Facebook to "re-invest" my asset, they can make money off of it while they have it, but in exchange, they have to keep it safe.

I personally really like the Bank metaphor, and like banks, you can get certified as safe by the government to be a Bank, and have that certification taken away... We already have data protection rules like this, BTW, called HIPAA.


I think I also agree with @nathan_long's point, in that perhaps the government shouldn't be concerned with how data is kept safe, but rather should track breaches and complaints... that would be what leads to a revocation of status.


The security triad is the following:

1. Something you know
2. Something you are
3. Something you have

Unfortunately, the "security" of an SSN is that it's something the government gave you, which doesn't fall into the security triad.

So when a loan is approved, it's approved with data that may have been made public, either through public records or through data breaches. The SSN and birth date were never meant to be used to secure financial loans, and should never have been used that way in the first place.


It is time for a federal breach notification law. There are 30+ state breach disclosure laws, and Congress has been working on a federal law to supersede them since 2005 with no success. A federal law must be stricter than California SB 1386 or Massachusetts's law in order to be relevant. It will make compliance easier, and companies like Uber will think twice about covering up breaches. Nothing as drastic as GDPR, but something.


We need a suite of “HIPAA/FOIA”-inspired bills that radically shift power over private/financial details and information security with these changes:

- decommodify personal details by making them illegal to resell or distribute without permission

- restore individual control: require giving distribution and update/modify/delete rights back to the individual

- mandatory incident reporting requirements, with personal criminal liability for executives who fail to disclose breaches

- mandatory compensation, determined by independent government risk management, to customers based on the expectation of risk incurred plus insurance against losses for 5 years

- ensure minimum security requirements, similar to PCI-DSS, by formation of a federal information security standards agency that produces practical, effective configuration and architectural standards and collects external/internal audits and conducts spot-check compliance audits similar to the IRS for taxes

- institute an opt-in national identity virtual & physical card with provably secure public/private key management, open-source infrastructure that isn’t based on social security numbers. Perhaps managed by a non-profit which includes security researchers and consumer advocates, with industry advisers with less power.

- phase out use of social security numbers as a primary key, eventually making them illegal to use, and require a unique identifier, generated for each service by the system above, that is not shared with any other system. Connecting two identifiers requires the person’s approval; the person can change their per-service identifier at any time, and it changes once per year anyhow
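To illustrate the last bullet, one way such per-service, per-year identifiers could be derived (a hedged sketch; the root secret, naming, and rotation details are assumptions for illustration, not part of the proposal above):

  # Per-service pseudonymous identifiers derived from a root secret, so no two
  # services ever see the same identifier and each one rotates annually.
  import hashlib
  import hmac

  def service_id(root_secret, service, year):
      msg = "{}:{}".format(service, year).encode()
      return hmac.new(root_secret, msg, hashlib.sha256).hexdigest()[:16]

  root = b"issued-by-the-identity-authority"   # placeholder
  print(service_id(root, "acme-bank", 2017))
  print(service_id(root, "acme-bank", 2018))   # changes with the yearly rotation
  print(service_id(root, "other-corp", 2017))  # never shared across services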


Troy, you have bulleted out most items of interest. I would also advise you to add mitigation methods: as a computer scientist I am shocked that more conglomerates are not legally obligated to encrypt information, and also to ensure that information is only decrypted while it's being used and looked at. Data that is not currently being used should be encrypted and inaccessible to someone without the credentials (unless you're the NSA and have the brute force to work the search space). Data, like a person's address and social security number, can be encrypted such that we only need a small portion of the correct data to decrypt the entry, but we need to tie the ends of the conversation together: "can I have your secret answer and last 4 digits of your SSN?" should actually initiate the decryption of the data, and should be a real preventative layer instead of just a delay for the clever social engineering ploy.

So please recommend that conglomerates be forced to encrypt data in ways that protect the vast majority of accounts in case of a data breach.
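A minimal sketch of the "the verification step is the decryption step" idea (Python, third-party 'cryptography' package). Note that secret answers and SSN digits have very little entropy, so this only illustrates the control flow described above, not a complete secure design:

  import base64
  import os
  from cryptography.fernet import Fernet
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

  def key_from_answers(secret_answer, ssn_last4, salt):
      kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                       salt=salt, iterations=600000)
      return base64.urlsafe_b64encode(kdf.derive((secret_answer + ssn_last4).encode()))

  # At rest, the record exists only as ciphertext; a bulk dump of the database
  # yields nothing readable.
  salt = os.urandom(16)
  key = key_from_answers("first pet: rex", "6789", salt)
  record = Fernet(key).encrypt(b"123 Main St / 123-45-6789")

  # Only a caller presenting the right answers derives the key that decrypts it.
  print(Fernet(key_from_answers("first pet: rex", "6789", salt)).decrypt(record))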

Thanks a lot for taking the initiative to ask the community, regards to you my friend in freedom.


Define 'breach'.

Did Equifax obtain information about me via a breach?

Show me where in my mortgage contract I gave the bank permission to disclose anything to Equifax.

Show me any agreement with my employer that authorizes them to disclose salary information to Equifax.

Out of the thousands of points of data Equifax has collected about me, how many of those were obtained through 'breaches' by some definition?


Companies have no inherent right to data collection; if your business is based on information about other people, then it's entirely on you if that information gets compromised. In the case of Equifax et al., nobody ever walked out their door and asked some shadowy company to track various data points about them. Equifax decided to do that of their own volition, so all the damages related to this issue should be put entirely on their shoulders.

In addition, incentivized and high-pressure opt-in have the same issue: companies are taking data about customers to improve their business without offering compensation in kind. The business should be liable for any mishandling of this information and any fallout from it; if a loan was taken out in your name falsely due to the Equifax breach, they should have full liability and, due to their voluntary role in this incident, they should be presumed guilty unless they are able to prove their innocence.


I think the starting point should be some sort of ownership of personal data: all my data is mine, and any company processing that data should do so only with my approval. It is not their data to give away. Obviously one cannot physically own data, but the similarities are closer than with, for example, intellectual property.

If anything happens outside the boundaries of what data processing is allowed, by negligence or malice, this then becomes a matter of civil law. In the case of data leaks it would be up to a court to assess any economic value to the data. A multiple of the value it could have been sold for is what settlements are based on in similar matters, and this also scales well to multiple claimants.

European data protection laws have taken small steps in this direction and I think it is a sound principle.


Not to put you out of a job, but I do think it's important for the government to take responsibility to coordinate and inform citizens of known issues (via the CFPB) and to provide a clearinghouse of information and research for citizens. CERT, which tracks vulnerabilities, seems almost useless today, but it could coordinate with the CFPB to determine the impact on citizens from various businesses, creating a proactive service for business as well as citizen. Information on the vulnerabilities the CFPB and NSA find floating their way through the internet would go to CERT and the CFPB, and then be used both to inform companies of their issues and to make for speedier fixes/notifications/reparations. While the NSA wants to exploit what it finds for cyberwar advantage, it has a duty first to protect its own citizens.

In such a system both business and citizen gain value. This arrangement gives businesses the advantage of outside expertise, and lets them use the information quickly to resolve issues they may not have the $$ (expertise) to find on their own, so they appear proactive and responsive to customers. Customers gain visibility and accountability: the same information is made public for companies that don't resolve their issues. I'm not opposed to reasonable grace periods between telling a company they have a problem and making it fully public.

Companies generally only focus on security to the extent they understand the risks of the impact. So make the impact very clear, and give those impacted a strong selection of reparations (a credit monitor being only a pale solution - useless to many today). I think the CFPB could come up with additional recommendations on reparations from companies that citizens would accept.

Weaknesses of this argument: government is slow, evidently even with information. Coordination and competing purposes between government orgs are nothing to sneeze at - they are hard to resolve. Businesses who feel they cannot quickly or inexpensively resolve a problem given to them will attempt to hide it, working against the idea that business will gain value.


What I find important to talk about is the narrative that "if you think your company hasn't been broken into, you just don't know it".

Of course no system is 100% secure, but the narrative that it's inevitable anyways is often used as a defense for bad practices (like in the Equifax case).

Breaches might happen, but they are not inevitable. And good practices can still have an impact on how often breaches happen, how much is stolen, and how useful the data is to the intruder.

I always get angry when I see some company head say, in effect, "we will be broken into anyway, why even try?". They wouldn't take that attitude with their physical company location, so why with their network? Because they don't really care about other people's data.


I'd love to see a discussion around data breaches that happen through the use of public records laws. There's a huge lack of responsibility and auditing around them, which leads to large batches of information being wrongly released to the public. This is a systemic problem whose fix likely sits somewhere within a strongly enforced legal process and accountability framework.

For example, when Seattle accidentally gave me millions of emails: http://crosscut.com/2017/10/seattle-information-technology-d...


- Current penalties do not weight the cost-benefit equation sufficiently to overcome the financial and agility costs of implementing strong data security policies.

- Current remedies are designed for the convenience of the entity (write a check to a credit monitoring service and issue a public apology). The burden of action remains on the person whose data was collected, possibly without their knowledge or consent.

- Lack of accountability. It is difficult to overstate the impact of a breach like the Office of Personnel Management's SF-86 database, or any of the NSA leaks. That degree of negligence could arguably have been treated as a treasonable offense.


Power and prestige are another aspect.

If companies like Equifax held only collections of public records (facts), then a 'breach' like this would have no consequences; all of the data would already be public.

What presently gives them power and what makes this breach so bad, is that these facts are used as proxies for an actual form of identification/authentication.

A national ID based on strong cryptographic solutions and issued to all citizens, preferably with their own private keys being also signed by the government if they desire, is how we properly enable digital signatures and progress to an age where forgery of identity is far more difficult.


Shouldn't consumers have privacy and anonymity by default? Unless there is specific, explicit permission to collect data that can be personalized, it shouldn't be created.

It's fine for companies to collect basic usage and telemetry data -- but when they start personalizing it without a user's permission/consent (i.e. when they start tying it together with personally identifiable information, as Facebook and Google do), it becomes weaponized and can then do great harm to individuals. Privacy by default -- and anonymity by default -- would essentially prevent this.


Storing passwords in plaintext or without salting is something even teenagers don't do.

I'm not kidding - some teenager using Django will have a site with better security than what we've seen from some large companies in their data breaches. This is inexcusable. The narrative often is "smart hackers", when it's really "we did less than my teenage kid did in securing the data"


Extend existing product liability/tort legislation to data privacy.

Product safety in the US is the world standard precisely because plaintiff attorneys extracted enough cash from manufacturers that shareholders, banks, investors, board members and insurers forced reform.

Make failure expensive. Limit liability for small business. Make officers personally liable in cases of gross negligence.

This and about 5 years will deliver results.


The CEO and company management should eat their own dogfood. Their personal data should be stored in the same insecure systems as their victims.


I don’t think LifeLock’s founder regrets making his Social Security number public. Sure his identity was stolen over a dozen times [1]. But he made millions. Making businesses liable for data loss is the only stable long-term solution.

[1] http://www.businessinsider.com/lifelock-symantec-ceo-identit...


A similar thing happened to Jeremy Clarkson of Top Gear fame in 2008, when he said the theft of bank numbers wasn't a big deal and published his bank details.

Someone used the details to set up a £500 direct debit from his account to a charity, proving him wrong.

http://news.bbc.co.uk/1/hi/7174760.stm


Was it stolen because he literally broadcast it, or because it was stored in his service? Big difference.


The point is that the benefits he derived from the company outweighed the cost of putting his information at risk.


Advise AGAINST reference solutions or examples for how to achieve compliance; they will cause paralysis and market stagnation.

An example of this can be seen in data-science related to FDA activities; there is an incredibly heavy bias towards specific proprietary software and data-storage formats (and I don't mean Microsoft).



Start by typing the names and email addresses of every congressman/woman into https://haveibeenpwned.com/. If the results are scary, share them in your speech.
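If you want to do that in bulk rather than through the web form, something along these lines should work against the HIBP API (a hedged sketch: the v3 breachedaccount endpoint requires an API key and is rate limited; the key and email list below are placeholders):

  import requests

  API_KEY = "..."                      # from haveibeenpwned.com
  EMAILS = ["someone@example.gov"]     # placeholder list of addresses to check

  for email in EMAILS:
      r = requests.get(
          "https://haveibeenpwned.com/api/v3/breachedaccount/{}".format(email),
          headers={"hibp-api-key": API_KEY, "user-agent": "breach-survey"})
      if r.status_code == 200:
          print(email, "appears in:", [b["Name"] for b in r.json()])
      elif r.status_code == 404:
          print(email, "not found in any known breach")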


One thing that could limit the exposure would be to allow for individuals to limit and set controls on the information about them that is being collected. Less information collected and stored means less of a liability for both parties.


DATA BREACHES by an unauthorized third party ARE A likely RESULT OF forcing companies to use weak encryption and BACK DOORS in order to allow government access.

Also, if you can throw something in about continuing net neutrality, that would be great.


Tell them to pass a law that makes it illegal to set obvious passwords. That is negligent and it harms others.

It's illegal for a bus driver to get drunk and drive a bus full of people. Why is it not illegal for a sys admin to set 'password2017!' or a developer to set 'developer2017!' as the admin password on a website and increment the year when it's time for a change? I've seen first-hand (on multiple occasions) how bad passwords like these harm people on-line. It ought to be against the law.

If you do the security basics right (patching, passwords and logging) you'll be fine 99% of the time. But people won't even do that (it's tedious and boring and not sophisticated). Instead, they obsess over APTs, zero-day exploits and nation-state actors, when they really just need to start by patching and setting decent passwords.


SIGINT resources are under-allocated for domestic defence of civilian assets.


Tell them that the data security of US companies is a matter of national defense. The US military would not stand for foreign powers attacking US interests abroad (shipping ports, oil wells, fisheries, etc), and they should not stand for it here.

A reasonable amount of money should be dedicated to an intelligence service attempting to penetrate companies which are of significant national interest. Fines with increasing severity should be assessed to the responsible parties - and the vulnerabilities should be communicated in private.

For cases of the most persistent gross incompetence and negligence, companies should not be permitted to continue operation. Such powers exist in other agencies.


Tell them the truth:

All demographic data everywhere is vulnerable, because it must be stored as plaintext, because we don't have nationwide unique identifiers.


Make it like health records: huge penalties, based on what you store /gets hacked. Then watch them try to get insurance without security.


Please make sure they understand if they mandate any encryption 'backdoors', these breaches will happen much more often.


Social Security Numbers are broken as a "unique secure identifier". We need some sort of certificate-based solution.


Encourage them to start using password managers so that it becomes more difficult to steal the identities of congressmen.


We never signed anything giving permission to sell our private lives and personal data to anyone that is selling it.


It should be illegal to use knowledge of ID numbers to authenticate things that will affect your credit report.


Why should a company have all of our user data to begin with? Why do you store it in the first place?


Tell them you have their private browser history.


Say "Well I don't know, but I asked hacker news and here is what they said"


Not even a single company should have our private data. Probably this.


tell them to return to paper voting


Look to GDPR for ideas. The EU is moving into a "privacy by default" direction of making companies increasingly more liable for hoarding user data that's not absolutely necessary for the functioning of the service. I believe a technical committee of the EU Parliament has even proposed to encourage end-to-end encryption use for services.

Here's an excerpt:

> The providers of electronic communications services shall ensure that there is sufficient protection in place against unauthorised access or alterations to the electronic communications data, and that the confidentiality and safety of the transmission are also guaranteed by the nature of the means of transmission used or by state-of-the-art end-to-end encryption of the electronic communications data. Furthermore, when encryption of electronic communications data is used, decryption, reverse engineering or monitoring of such communications shall be prohibited. Member States shall not impose any obligations on electronic communications service providers that would result in the weakening of the security and encryption of their networks and services

http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2f%2f...

And here's an article, too:

http://www.bbc.com/news/technology-40326544


Most of our representatives are so subservient to money that this seems more like an exercise in exploit-the-nerd for CYA than anything constructive.

I think the best you could do is condense some of the worst offenses into tweet-worthy "sick burns" that will hopefully be remembered for more than a few minutes.


"People or companies that need to be held accountable need someone specific to be made accountable - for example a CIO/CSO."


Tell them that data breaches will be a thing of the past if they pass two laws:

1. Criminalize moving PII out of the country.
2. Personal liability for every person involved in gathering and protecting the data, and those involved in managing the teams and companies.

The fact that people can get paid while externalizing the downsides of their failures is why this is a problem. Make them personally responsible and it goes away.


> 1. Criminalize moving PII out of the country.

Please, please, don't. This forces global-scale operations to build computing in a number of places around the world to comply. National boundaries are meaningless on the Internet, and we already have enough of those jurisdictions in existence that doing global-scale operations with PII or even GIS data is an international legal minefield. Do you really want to suppress startup development by forcing global services to talk to 200+ different lawyers about what they're doing, in the long term?

I've worked on products where entire datacenters are forced to exist in order to comply with a law somewhere. Like this law, which I bet you didn't even know existed:

https://www.quora.com/Why-is-Google-Maps-unavailable-in-Sout...


> Personal liability for every person involved in gathering and protecting the data

So basically you want a law requiring all employees to hide any evidence of a breach?

It reminds me of the law requiring all TVs to be quietly dropped off at night at a random neighbor's, because it's no longer legal to throw them away, and the legal methods are expensive and inconvenient.


> Personal liability for every person involved in gathering and protecting the data

I think it should only be the business that incurs the penalty. That way it is the business that is motivated to sufficiently train and oversee the employees.

Otherwise, you have employees on the hook, with employers incentivized to make them take shortcuts.


What's the point of "Criminalize moving PII out of the country" ?

Most of the risk comes from people who are already inside your country, or from people who break into your servers and therefore won't care about moving it outside your country.


Point 2 becomes meaningless if a company can just move their operations offshore while fully operating in your country.


You are out of your mind. Please just stop and think about what you are saying before posting.

You essentially want to go to prison for something your co-worker overlooked.



