Evaluating the GCHQ Exceptional Access Proposal

The so-called Crypto Wars have been going on for 25 years now. Basically, the FBI—and some of their peer agencies in the UK, Australia, and elsewhere—argue that the pervasive use of civilian encryption is hampering their ability to solve crimes and that they need the tech companies to make their systems susceptible to government eavesdropping. Sometimes their complaint is about communications systems, like voice or messaging apps. Sometimes it’s about end-user devices. On the other side of this debate are pretty much all the technologists working in computer security and cryptography, who argue that adding eavesdropping features fundamentally makes those systems less secure.

A recent entry in this debate is a proposal by Ian Levy and Crispin Robinson, both from the UK’s GCHQ (the British signals-intelligence agency—basically, its NSA). It’s actually a positive contribution to the discourse around backdoors; most of the time government officials broadly demand that the tech companies figure out a way to meet their requirements, without providing any details. Levy and Robinson write:

In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved—they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

On the surface, this isn’t a big ask. It doesn’t affect the encryption that protects the communications. It only affects the authentication that assures people of whom they are talking to. But it’s no less dangerous a backdoor than any others that have been proposed: It exploits a security vulnerability rather than fixing it, and it opens all users of the system to exploitation of that same vulnerability by others.

In a blog post, cryptographer Matthew Green summarized the technical problems with this GCHQ proposal. Basically, making this backdoor work requires not only changing the cloud computers that oversee communications, but it also means changing the client program on everyone’s phone and computer. And that change makes all of those systems less secure. Levy and Robinson make a big deal of the fact that their backdoor would only be targeted against specific individuals and their communications, but it’s still a general backdoor that could be used against anybody.

The basic problem is that a backdoor is a technical capability—a vulnerability—that is available to anyone who knows about it and has access to it. Surrounding that vulnerability is a procedural system that tries to limit access to that capability. Computers, especially internet-connected computers, are inherently hackable, limiting the effectiveness of any procedures. The best defense is to not have the vulnerability at all.

That old physical eavesdropping system Levy and Robinson allude to also exploits a security vulnerability. Because telephone conversations were unencrypted as they passed through the physical wires of the phone system, the police were able to go to a switch in a phone company facility or a junction box on the street and manually attach alligator clips to a specific pair and listen in to what that phone transmitted and received. It was a vulnerability that anyone could exploit—not just the police—but was mitigated by the fact that the phone company was a monolithic monopoly, and physical access to the wires was either difficult (inside a phone company building) or obvious (on the street at a junction box).

The functional equivalent of physical eavesdropping for modern computer phone switches is a requirement of a 1994 U.S. law called CALEA—and similar laws in other countries. By law, telephone companies must engineer phone switches that the government can eavesdrop, mirroring that old physical system with computers. It is not the same thing, though. It doesn’t have those same physical limitations that make it more secure. It can be administered remotely. And it’s implemented by a computer, which makes it vulnerable to the same hacking that every other computer is vulnerable to.

This isn’t a theoretical problem; these systems have been subverted. The most public incident dates from 2004 in Greece. Vodafone Greece had phone switches with the eavesdropping feature mandated by CALEA. It was turned off by default in the Greek phone system, but the NSA managed to surreptitiously turn it on and use it to eavesdrop on the Greek prime minister and over 100 other high-ranking dignitaries.

There’s nothing distinct about a phone switch that makes it any different from other modern encrypted voice or chat systems; any remotely administered backdoor system will be just as vulnerable. Imagine a chat program that added this GCHQ backdoor. It would have to add a feature that added additional parties to a chat from somewhere in the system—and not by the people at the endpoints. It would have to suppress any messages alerting users to another party being added to that chat. Since some chat programs, like iMessage and Signal, automatically send such messages, it would force those systems to lie to their users. Other systems would simply never implement the “tell me who is in this chat conversation” feature, which amounts to the same thing.
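A toy sketch can make the mechanics concrete. Everything below is purely illustrative (the group-chat class, the XOR stand-in cipher, and all names are hypothetical, not any real messaging protocol): because the service provider controls the membership list, it can fan each message out to one extra key while suppressing the membership-change notification.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real per-member encryption (NOT secure by itself)
    return bytes(a ^ b for a, b in zip(data, key))

class GroupChat:
    """Toy model: the provider controls membership, so it decides
    which keys a message is encrypted to."""
    def __init__(self):
        self.members = {}        # name -> key held by that member
        self.notifications = []  # membership-change messages shown to users

    def add_member(self, name, silent=False):
        self.members[name] = os.urandom(32)
        if not silent:
            self.notifications.append(f"{name} joined the chat")
        return self.members[name]

    def send(self, plaintext: bytes):
        # Fan-out: one ciphertext per current member key
        return {name: xor(plaintext, key) for name, key in self.members.items()}

chat = GroupChat()
alice_key = chat.add_member("alice")
bob_key = chat.add_member("bob")
# The provider inserts a "ghost" participant and suppresses the alert
ghost_key = chat.add_member("ghost", silent=True)

ct = chat.send(b"meet at noon")
# Alice and Bob see an end-to-end encrypted chat with no new-member alert...
assert "ghost joined the chat" not in chat.notifications
# ...but the ghost decrypts every message all the same
assert xor(ct["ghost"], ghost_key) == b"meet at noon"
```

The point of the sketch is that nothing about the encryption changes; only the client’s honesty about who holds keys does.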

And once that’s in place, every government will try to hack it for its own purposes—just as the NSA hacked Vodafone Greece. Again, this is nothing new. In 2010, China successfully hacked the back-door mechanism Google put in place to meet law-enforcement requests. In 2015, someone—we don’t know who—hacked an NSA backdoor in a random-number generator used to create encryption keys, changing the parameters so they could also eavesdrop on the communications. There are certainly other stories that haven’t been made public.

Simply adding the feature erodes public trust. If you were a dissident in a totalitarian country trying to communicate securely, would you want to use a voice or messaging system that is known to have this sort of backdoor? Who would you bet on, especially when the cost of losing the bet might be imprisonment or worse: the company that runs the system, or your country’s government intelligence agency? If you were a senior government official, or the head of a large multinational corporation, or the security manager or a critical technician at a power plant, would you want to use this system?

Of course not.

Two years ago, there was a rumor of a WhatsApp backdoor. The details are complicated, and calling it a backdoor or a vulnerability is largely inaccurate—but the resultant confusion caused some people to abandon the encrypted messaging service.

Trust is fragile, and transparency is essential to trust. And while Levy and Robinson state that “any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users,” this proposal does exactly that. Communications companies could no longer be honest about what their systems were doing, and we would have no reason to trust them if they tried.

In the end, all of these exceptional access mechanisms, whether they exploit existing vulnerabilities that should be closed or force vendors to open new ones, reduce the security of the underlying system. They reduce our reliance on security technologies we know how to do well—cryptography—to computer security technologies we are much less good at. Even worse, they replace technical security measures with organizational procedures. Whether it’s a database of master keys that could decrypt an iPhone or a communications switch that orchestrates who is securely chatting with whom, it is vulnerable to attack. And it will be attacked.

The foregoing discussion is a specific example of a broader discussion that we need to have, and it’s about the attack/defense balance. Which should we prioritize? Should we design our systems to be open to attack, in which case they can be exploited by law enforcement—and others? Or should we design our systems to be as secure as possible, which means they will be better protected from hackers, criminals, foreign governments and—unavoidably—law enforcement as well?

This discussion is larger than the FBI’s ability to solve crimes or the NSA’s ability to spy. We know that foreign intelligence services are targeting the communications of our elected officials, our power infrastructure, and our voting systems. Do we really want some foreign country penetrating our lawful-access backdoor in the same way the NSA penetrated Greece’s?

I have long maintained that we need to adopt a defense-dominant strategy: We should prioritize our need for security over our need for surveillance. This is especially true in the new world of physically capable computers. Yes, it will mean that law enforcement will have a harder time eavesdropping on communications and unlocking computing devices. But law enforcement has other forensic techniques to collect surveillance data in our highly networked world. We’d be much better off increasing law enforcement’s technical ability to investigate crimes in the modern digital world than we would be to weaken security for everyone. The ability to surreptitiously add ghost users to a conversation is a vulnerability, and it’s one that we would be better served by closing than exploiting.

This essay originally appeared on Lawfare.com.

EDITED TO ADD (1/30): More commentary.

Posted on January 18, 2019 at 5:54 AM • 46 Comments

Comments

Denton Scratch January 18, 2019 6:57 AM

Law enforcement’s persistent demands for access to encrypted material are self-defeating; they are already incapable of processing the plaintext material on a seized laptop within the timescale of a criminal investigation. Seized computers just languish in the evidence locker for years, long after the case has been dropped. There aren’t enough cops with technical chops to examine all that irrelevant data. Modern multi-terabyte hard drives are not making their problem more tractable!

At least, that is in the UK. Both of the seizures I have direct personal knowledge of in the UK were in connection with charges that were dropped; in both cases it took years to recover the seized equipment – which was (a) largely obsolete by the time it was recovered, and (b) had anyway long ago been replaced. So really these seizures are a kind of extra-judicial punishment.

So having ways to break encrypted systems and channels wouldn’t help law enforcement collect evidence; it would just enhance their ability to intimidate the populace. That is politically undesirable, IMO.

de la Boetie January 18, 2019 9:38 AM

Are we supposed to applaud the restraint of GCHQ here? When they have serially lied over the years (your point on trust applies). When it is clear that they have every intention to do bulk intercept and hacking? When the belated post-hoc enabling legislation (the IPA) is repeatedly found to be unlawful?

Their statements and analogies are disingenuous: “exceptional access” is routine. Crocodile clips will be attached en masse, probably algorithmically. IF the mass surveillance powers were abandoned, then you wouldn’t need these exceptional access mechanisms because their hacking powers would easily enable them to subvert the relevant keys and credentials on the client, end-to-end encryption or not.

The point being that the network metadata will already link the interlocutors, so in order to listen in, they just need to hack the clients. Putting dangerous backdoors into group mechanisms is still a backdoor which puts everyone at risk. And as with end-to-end encryption (where it seems they’ve accepted the inevitable), it will spur the development of group communications where identity and conversation metadata is also better protected.

We also already know that they optimise for themselves, and the business cases for these things do not include the very large cost of the iatrogenics – the sufferers of which have no effective redress.

Gweihir January 18, 2019 10:21 AM

Also keep in mind that law enforcement must always be carefully limited in what they can do, or they will do anything that is possible and thereby destroy free society. It is just a kind of blindness to reality that people who go into law enforcement have. Their purpose is not to solve or prevent every crime, or even most crimes. Their purpose is to make sure crime stays an annoyance and that society and trust, as its most important basis, keep working. Sabotaging trust by mandating backdoors is about the worst thing they can do.

Of course, historically, law enforcement did not serve the people at all. Its purpose was to keep the unwashed masses under control, nothing else. It is important to keep that in mind as well.

Clive Robinson January 18, 2019 10:40 AM

@ Bruce,

A recent entry in this debate is a proposal by Ian Levy and Crispin Robinson, both from the U.K.’s GCHQ … It’s actually a positive contribution to the discourse around backdoors; most of the time government officials broadly demand that the tech companies figure out a way to meet their requirements, without providing any details.

Please do not give any credit to this pair of insulting liars.

As I’ve already noted, if you read down their spiel far enough you come to this,

    We also need to be very careful not to take any component or proposal and claim that it proves that the problem is either totally solved or totally insoluble. That’s just bad science and solutions are going to be more complex than that.

This is not just insulting to honest scientists, which the pair obviously are not; they are deliberately lying to sell a false premise that, as far as electrical communications go, was very much known to be a lie back in the Victorian era. Worse for this pair, the idea of theoretically unbreakable ciphers (the OTP) and their deficiencies was known over a century ago, before GCHQ even existed.

Put simply, they are trying to distract people from realising that their proposal fails miserably unless there is some “Man In The Middle” to fritz with the issuing of keys. That is, they are talking about genuine End-2-End encryption, but one where the service supplier can force a third party to be given keys without either the first or second party being aware of it.

Not only do we know this to be a very bad idea, we also know it has a significant failing attached, which completely destroys their “NOBUS Good Guys” argument outright.

They talk about “lawful access”, but what does that actually mean to the “service provider”? What it means is a piece of paper turns up at their front door with a court stamp saying the request is legal within the court’s jurisdiction.

What that really means is that a country writes legislation to give courts the authority –as the UK has done with RIPA– to “lawfully” demand access to any network in the world that can be reached from their nation (as the UK has done).

So lawful access and judicial oversight are completely meaningless and very, very easy to subvert.

True end-to-end encryption has been used since the Victorian era, back in the mid-1800s. They used insecure codes back then; today we have theoretically secure ciphers. However, a hundred years ago the idea of theoretically secure ciphers was proposed and proved, which is what we have with the One Time Pad and One Time Phrases, both of which were used extensively towards the end of WWII, when the UK’s Secret Intelligence Service’s (SIS / MI6) deliberate weakening of the communications of SOE and other officers behind the enemy front was defeated[1].

The pair are trying the same trick of “selling the big lie” to whoever will listen.

If they have any technical competence they will know that secure comms that even GCHQ cannot break were not only invented but proved, deployed, and used successfully even before GCHQ existed.

The second thing they are trying to do is hide the idea of the failure of “smart devices” when it comes to end point security, which I’ve mentioned several times in the past on this blog and other places. It’s the primary reason none of the existing messaging apps can ever be secure.

Both of these people have in this one article proved themselves to be of extreme bad faith, and worse, to be denigrating honest scientists and engineers who can tell people what they really need to do to be secure against not just LEOs but all the various SigInt agencies…

My advice: consign their words to either the wastebin or that drawer reserved for “Snake Oil Merchants, fraudsters and con artists”.

[1] This has been written up by a number of those involved, perhaps the easiest to get hold of is “Between Silk and Cyanide: A Codemaker’s War 1941–1945” by Leo Marks who had to fight SiS and others to get the required “silks” printed with OTPs for the French Resistance and SOE (ISBN 0-00-255944-7)

Wael January 18, 2019 12:03 PM

@Clive Robinson,

Please do not give any credit to this pair of insulting liars [edited].

That’s clear as daylight. You didn’t expect less from them, did you[1]?

get hold of is “Between Silk and Cyanide: A Codemaker’s War 1941–1945” by Leo Marks

The book has been recommended by a few members here. Someone described it as “an astonishingly brilliant book,” and you subsequently recommended it. I got it and tried to read it, but with the exception of a few sections it turned out to be a supremely boring book. It’s one of the books I use as a sleeping pill. Somehow I’m not tuned to UK books, especially technical and historical ones.

[1] I wonder if the time has come to narrate the two stories:

Two stories during college days. One is not that interesting about a chess game, and the other is funny / strange and hard to decipher, but could potentially be offensive to some.

I know I’ll get some flak for them, hence I am hesitant.

SF January 18, 2019 1:02 PM

Won’t change a thing. Bad guys will just either A) cook their own crypto or B) use the strong cryptography already out there, but without the Internet.

For example, if I’m a bad terrorist, I could just as well post my fellow terrorist a USB stick with content encrypted with one of the strong ciphers that existed before the Internet (yes, there is such a thing), with a unique, changing encryption-key scheme.

For example: a table of pre-generated, random and unique, long-enough keys, one for each day of the year (365 unique keys), exchanged in person once a year with the fellow terrorist.
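A minimal sketch of such a key-book scheme (all names and sizes here are hypothetical; each day’s key must be truly random, at least as long as that day’s traffic, and never reused):

```python
import datetime
import secrets

KEY_LEN = 1024  # each daily key must be at least as long as that day's message
# Hypothetical key book: 365 independent random keys, exchanged in person
key_book = [secrets.token_bytes(KEY_LEN) for _ in range(365)]

def todays_key(book, date):
    # Day-of-year selects the key; both parties hold identical books
    return book[(date.timetuple().tm_yday - 1) % 365]

def otp(data: bytes, key: bytes) -> bytes:
    # XOR one-time pad; encryption and decryption are the same operation
    assert len(data) <= len(key), "key must be at least message length"
    return bytes(d ^ k for d, k in zip(data, key))

day = datetime.date(2019, 1, 18)
key = todays_key(key_book, day)
ciphertext = otp(b"attack at dawn", key)
assert otp(ciphertext, key) == b"attack at dawn"
```

As the comment says, with truly random, never-reused keys no statistical analysis helps; but sending two messages under the same day’s key (VENONA-style reuse) destroys the guarantee completely.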

Absolutely no statistical analysis will break that. And if I don’t trust that the post will deliver my USB stick? No problem, use damn pigeons.

People think that they’re so clever when they’re missing the obvious low-tech workarounds….

AnonCoward January 18, 2019 1:56 PM

Shouldn’t organizations like banks and health record companies be heard arguing for ironclad security? Is it just an attitude of “We don’t want to be bothered” or “That sounds like it would cost us money”? Anyone have any insights?

Humdee January 18, 2019 2:49 PM

@Bruce writes, We’d be much better off increasing law enforcement’s technical ability to investigate crimes in the modern digital world than we would be to weaken security for everyone.

The counterpoint is that this type of investigative technique doesn’t scale efficiently. So that means either that more criminals will slip through the cracks, with results some would deem socially unacceptable, or that law enforcement resources must be increased, with the opportunity costs that such a diversion of resources imposes.

FWIW as a substantive matter I am with Bruce but I have been and remain skeptical that American culture is ready and willing to go down the privacy road.

Clive Robinson January 18, 2019 2:59 PM

@ Wael,

I wonder if the time has come to narrate the two stories…

If in doubt “duck, cover and run”.

Clive Robinson January 18, 2019 3:13 PM

@ Humdee,

I have been and remain skeptical that American culture is ready and willing to go down the privacy road.

Which part?

1, Government / guard labour.
2, Corporations.
3, Citizens.

Slow as it might appear, I get the feeling that ordinary citizens increasingly want privacy when they understand why they are not getting it.

Not really annonymous January 18, 2019 3:34 PM

The counterpoint is that this type of investigative technique doesn’t scale efficiently.

That is a feature, not a bug. The cost will act as a limit on inappropriate surveillance.

Clive Robinson January 18, 2019 4:08 PM

@ AnonCoward,

Anyone have any insights?

As our host @Bruce observed some years ago, “Banks externalize risk”. They also limit liability by collectively not being proactive, especially when the cost is not just pointless it actually reduces “deniability”.

Back in the 1980’s and into the 90’s considerable work was done on something called the Secure Electronic Transaction (SET) protocol[1]. It gave quite good security and privacy via dual signatures, but in the wrong places. Deniability was an issue for,

1, Issuing bank.
2, Merchant
3, Card holder.

We have seen the same issue with electronic contracts on a blockchain, where the only way to perform dispute resolution was to fork the blockchain prior to the theft, both of which were not supposed to be possible…

But cost in defence is one of those awkward questions. It’s easy to know you have spent too little or in the wrong places, because the barbarians breach the gates of your citadel. Conversely, you have no idea if it’s too much, or whether anyone has noticed you’ve spent too little or in the wrong places, if they either don’t attack, or attack not in a pillage-and-plunder way but covertly. Further, you likewise have no idea when you are spending too much on defence.

But worse, all defence spending is regarded as “Sunk Costs” with no direct return now or indeed in the future. Which, with the Western trend towards “no skin in the game” management whose only real interest is the next couple of quarters’ profit, usually means defensive spending does not get much of a look in, except where liability is considered and cannot be externalized.

[1] https://en.m.wikipedia.org/wiki/Secure_Electronic_Transaction

Cassandra January 18, 2019 4:16 PM

@Clive Robinson

Thank-you for pointing things out clearly and succinctly.

Anyone who understands these things knows that the two gentlemen in question are putting lipstick on a pig. The audience is not people who know their pigs, but rather those who look at the lipstick and think that ‘this is not so bad’, in other words, the ignorant who think that by this, GCHQ are being reasonable and ‘willing to compromise’.

It is a shame that the CESD was merged into GCHQ – the result is that GCHQ, like the NSA, has a multiple-personality-disorder-like affliction in being charged with both communications (and data) protection as well as signals intelligence. The signals intelligence side takes priority.

Cassandra

Sancho_P January 18, 2019 4:27 PM

@Bruce: Excellent essay, thank you.

Re the original (Levy and Robinson) paper at lawfare:

  • A very bad beginning:
    To include intelligence agencies in the term “law enforcement” is troublesome, to say the least.
    Law enforcement should be the good guys, clean and transparent, visible to all parties, to prosecution, court and defense.

The shady business of espionage has nothing to do with trust in justice.
Conflating the two spoils the case before any discussion can even start.
See @Bruce’s Athens example (again, thanks for the boldness).

Point 4) ”… the service provider—in the form of a real human—is involved in enacting every authorized request, …”
– We left the Strowger exchange already, nothing to attach crocodile clips to.
– Who would be the poor chap at the provider to decide what is authorized or not?
– How to detect unauthorized access?
– Would the shady business present their individual authorization?
To talk about that solution is disingenuous, forget it.

Point 5) is inconsistent, a dream.
Point 6) is OK in the headline, but the rest is moot, if nothing else.

However, their move to go public is appreciated.
Discussion is good, but we must not start with a “solution” until we define the problem.
Criminals will always find a way around.

Encrypted content is not the problem:

a) LE gets complete metadata; that’s more valuable than content.
And
b) This is much more than they had in the times of Strowger exchanges.

The content belongs to the participants; it’s off limits without their knowledge.

Clive Robinson January 18, 2019 4:49 PM

@ SF,

People think that they’re so clever when they’re missing the obvious low-tech workarounds….

The two GCHQ authors are either hoping you don’t know a hundred and fifty years of crypto history to do with electrical/electronic communications or are too lazy to read it… So they can lie to people.

Which is why I maintain the point that the Five-Eyes in particular are more interested in spying on each others citizens than Serious Organised Criminals or the more intelligent terrorists etc…

For those that want to set up their own highly secure system, they could read up on the Soviet system using codes superenciphered by OTP in an “out station to home station” or “web” arrangement. Oddly, it is because the Soviets made the cardinal mistake of “reusing KeyMat” in the later stages of WWII that we can now read about how they encrypted and transmitted information (see project VENONA).

Wael January 18, 2019 9:06 PM

@Clive Robinson,

If in doubt “duck, cover and run”.

The first story doesn’t require these maneuvers.

One late night in the dorms, I picked up my chess board and intended to go over a game between Alekhine and Capablanca (two of my favorite players). A guy came by (this was at 2:00 AM) and asked if he could play a game. Told him sure.

He said he hadn’t played in a long time and had forgotten how to play, so I told him I’d remind him. We chatted about our majors and stuff. He introduced himself as Richard, a political science major.

I took it easy on him in the opening but noticed that his moves were pretty solid. In the middle game, my position became hopeless, so I told him: you play pretty well! I thought you didn’t know how to play! He checkmated me, then said: I’m a chess master.

Told him: let’s play another game then, because I took it easy on you; you’ll likely win anyway, but I want to play a fair game. He never gave me that opportunity and was very happy that he had pulled a fast one on me.

That was my first encounter with a “Politician”.

Gerard van Vooren January 18, 2019 10:10 PM

@ SF,

“Won’t change a thing. Bad guys will just either A) cook their own crypto B) use already strong cryptography out there but without Internet.”

With less than 50 lines of code and a USB stick I can write some pretty secure system, as long as you use a gigantic OTP and off-line systems.
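For what it’s worth, a sketch along those lines (file names and sizes are hypothetical) does fit well under 50 lines: a big random pad on the stick plus an offset file that only ever moves forward, so no pad byte is used twice.

```python
import os
import tempfile

# Hypothetical file locations; in practice both would live on the USB stick
PAD_FILE = os.path.join(tempfile.gettempdir(), "pad.bin")
OFFSET_FILE = os.path.join(tempfile.gettempdir(), "pad.offset")

def make_pad(n_bytes: int) -> None:
    # One gigantic pad of OS randomness, shared out-of-band with the other party
    with open(PAD_FILE, "wb") as f:
        f.write(os.urandom(n_bytes))
    with open(OFFSET_FILE, "w") as f:
        f.write("0")

def _take_key(n: int):
    # Consume n fresh pad bytes and advance the offset so they are never reused
    offset = int(open(OFFSET_FILE).read())
    with open(PAD_FILE, "rb") as f:
        f.seek(offset)
        key = f.read(n)
    if len(key) < n:
        raise RuntimeError("pad exhausted: generate and exchange a new one")
    with open(OFFSET_FILE, "w") as f:
        f.write(str(offset + n))
    return offset, key

def encrypt(message: bytes):
    # The offset travels in the clear; it reveals nothing about the pad bytes
    offset, key = _take_key(len(message))
    return offset, bytes(m ^ k for m, k in zip(message, key))

def decrypt(offset: int, ciphertext: bytes) -> bytes:
    with open(PAD_FILE, "rb") as f:
        f.seek(offset)
        key = f.read(len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, key))

make_pad(4096)
off1, ct1 = encrypt(b"first message")
off2, ct2 = encrypt(b"second")
assert decrypt(off1, ct1) == b"first message"
assert decrypt(off2, ct2) == b"second"
```

The code really is the easy part; the security rests entirely on the pad being truly random, exchanged offline, and never reused.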

The GCHQ / NSA and smaller TLAs were never meant to really deal with terrorism. That was probably a joke made by G.W. Bush that unfortunately was welcomed by his successor. No, these agencies are made to get intel that matters, such as wiretapping Angela Merkel. And they are serious about that.

65535 January 19, 2019 1:42 AM

@ Clive Robinson and others

“Please do not give any credit to this pair of insulting liars.”

I agree with Clive Robinson.

I have a suggestion. Why not do an independent analysis of exactly how efficiently the money budgeted to GCHQ [and the NSA] is spent? I don’t see evidence that GCHQ has actually stopped any terrorist bombings on London’s railway and bus lines. Could said money be better used elsewhere? I think it can.

Are the GCHQ [and NSA] mission objectives of protecting their citizens against true nuclear war still needed? Probably not.

If not, then why not simply cut their budgets by, say, 30 percent and see if they can do more with less money? I bet they can do more with less money if necessary.

Further, is it necessary for GCHQ and the NSA to be in a continuous state of “mission creep”, or are they trying to justify their well-entrenched positions to keep their lush budgets intact?

My guess is that much of their original objectives are no longer valid and their “mission creep” into law enforcement/mass surveillance roles is not justified – nor are their budgets. Also, I am sick of the old line that it is “National Security” and you cannot know what we are spending – just give us blank checks to spend.

Other agencies can probably do a better job than these huge, overpaid dinosaur agencies. Maybe it is time these old cold-war beasts were put out to pasture – so to speak.

Jim January 19, 2019 3:28 AM

@ Clive Robinson wrote,

“Please do not give any credit to this pair of insulting liars.”

I did not catch where the lie is. If anything appeared fishy, it was their blatant honesty.

As a system grows, its data morphs into various states geographically; thus, in order for this to work at a massive scale, it would have to be done at system genesis. Think of it as the crypto genesis block. If the pair are going back to ask service providers to clip this feature onto an existing messaging system, it will most definitely fail and probably won’t go unnoticed. IMHO

That said, without the ability to analyze our content, how else would social apps harvest enough data for “marketing purposes” and justify massive AI spending to their investors?

Clive Robinson January 19, 2019 6:21 AM

@ Jim,

I did not catch where the lie is.

It’s in the little snippet from their article I give above in my first comment on this thread, that I’ve also repeated below.

Once you see it for what it is, other parts of their weaselly words hit you in the eye, as it unravels into a propaganda exercise to sow FUD and make people accept weak, insecure systems that give no privacy whatsoever.

The pair, working for who they do, should darn well know they are pushing easily disprovable FUD. In their circles they call this sort of snow job “finessing”, but this is way too inept to be given that title. Whilst some would say this sort of lying is par for the course, actually insulting others by claiming what they do is “Bad Science” brings not only the pair of them into disrepute, but also the organisation they work for, which is very dependent on scientists.

    We also need to be very careful not to take any component or proposal and claim that it proves that the problem is either totally solved or totally insoluble. That’s just bad science and solutions are going to be more complex than that.

We know factually that the problem of secure communications is solved without question and it’s not as they say “bad science”.

What we have known, and done, for just over a century (Vernam / Mauborgne)[1] is that, for electronic communications,

1, A provably secure cipher, correctly used, is secure.

There is no argument with this; it is in no way “bad science”. To argue otherwise is to deny the truth (which most people call lying).

What we also know is,

2, Incorrect placement of the security end point with respect to the communications end point will always be insecure, no matter how secure the cipher in use is.

There is again no argument with this; it is in no way “bad science”. It’s difficult to say who proved the issues of end-point ordering, as history shows the correct ordering has been used in diplomatic circles for a number of centuries. However, its proof is obvious when you draw up layered Shannon channels. Again, to argue otherwise is to deny the truth (which again is what most people call lying).

These two facts have been known since before both GCHQ and the NSA, or any other modern National SigInt agency came into existence. Thus I find it difficult to believe that the pair are not fully cognizant of these facts.

Even the NSA in their museum point out the security of the One Time Pad and its equivalents, the One Time Tape and One Time Phrases, all of which were used during WWII and have been well written about in the public domain.

[1] Gilbert Vernam was an AT&T engineer looking to solve the security problem of telex communications. He patented, back in 1917, the machinery for making and using an electronic cipher. Captain Joseph Mauborgne laid down certain conditions for the use of that equipment that gave it the required security proofs. The NSA say that the OTP actually goes back to 1882 and Frank Miller, a Yale graduate who wrote the idea in the front of a code book of the time but never went forward with it; the book was only discovered in 2011. Claude Shannon proved mathematically in the early 1940s that, if you work within all of the restrictions of an OTP (key generation with truly random numbers and no reuse of the pad), all messages of equal length are equally likely, so you have no way to know which is the actual message sent. This was published secretly in 1945 and publicly in 1949. Apparently Vladimir Kotelnikov, while working at the Moscow Engineering Institute, came up with the same or a similar proof, and it too was published as a secret paper that remains classified still.
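Shannon's perfect-secrecy argument is easy to demonstrate in a few lines of Python (illustrative only, not operational crypto): for any ciphertext, every plaintext of equal length is reachable under some pad, so the ciphertext alone says nothing about which message was sent.

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # XOR with a truly random, never-reused pad of equal length.
    assert len(pad) == len(data)
    return bytes(d ^ k for d, k in zip(data, pad))

msg = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(msg))   # truly random, used once
ct = otp_xor(msg, pad)

# Perfect secrecy: for ANY candidate plaintext of the same length there
# exists a pad that "decrypts" the ciphertext to it, so an eavesdropper
# holding only ct cannot tell which message was sent.
decoy = b"RETREAT AT TEN"
fake_pad = otp_xor(ct, decoy)         # the pad that maps ct -> decoy
assert otp_xor(ct, fake_pad) == decoy
assert otp_xor(ct, pad) == msg
```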

Clive Robinson January 19, 2019 6:55 AM

@ Jim,

Having that said, without the ability to analyze our contents, how else would social apps harvest enough data for “marketing purposes” and justify massive AI spendings to their investors?

Well, all those services are “Man in the Middle” attackers: if you expose your plaintext, they will take advantage of it.

But crypto is just part of the problem; there is also the meta-data that our current networks generate, and “Traffic Analysis” will reveal a great deal of information without having to see the message content as plaintext.

But not talked about much is meta-meta-data, which is information that can be used to show that hidden meta-data exists. In effect, if you set up a covert channel that traffic analysis cannot itself show, meta-meta-data can reveal from that absence that there must exist a comms path somewhere that still has to be found.

The existence of meta-meta-data has been visible to anyone who has read about the “Ultra Secret”. The use of German Enigma intercepts and decrypts was so important it had to be kept hidden; any oddity in allied behaviour could reveal that Enigma had been broken. Thus every time Enigma or other German cipher-system intercept traffic was used, it had to have its believable, if not verifiable, cover story, to stop meta-meta-data indicating to the Germans that their communications systems were not sufficiently secure. Admiral Karl Dönitz was far from daft, and he repeatedly suspected that his U-Boat traffic was being read, hence the changes in rotors and the introduction of the fourth rotor. Eventually the Admiral became suspicious enough to completely change the way the U-Boat radio traffic was sent, finally locking the allies out, but by that point it made little or no difference to the course of the war.

Wael January 19, 2019 10:37 AM

@Clive Robinson,

If in doubt “duck, cover and run”.

The right time for the second story has not come.

Neil Rest January 19, 2019 11:58 AM

Meanwhile, in The Real World(tm), any scheme like this, regardless of details, depends on propagating tens or hundreds of millions of copies of the compromised code.
So within hours it would have been decompiled, and various retro-patches would go into circulation.
I want the one that plays the soundtrack of “What’s Opera Doc” to any eavesdropper.

Clive Robinson January 19, 2019 12:18 PM

@ Wael,

The right time for the second story has not come.

Maybe some day, but don’t worry if it doesn’t happen soon.

Speaking of happening, the sock puppet that is “a sour old wine” is still trying “new bottles” against @Bruce’s rules… Worse, whoever it is appears to be getting desperate… A New Year’s resolution of mine, “not to kick them back under the bridge”, appears to have got them neurotic…

Wael January 19, 2019 1:00 PM

@Clive Robinson,

Speaking of […] SockPuppet that is “a sour old wine”

Two things,

1: The story cannot be told by a sock puppet. Not my style, unless it’s amusing.

2: I recognize the writing style behind the disguise. What cannot be disguised is the mindless thinking, and I’m using “thinking” loosely here. Apparently there is more than one. I know one:

Kind of like your assertions about the bitcoin theft motive, what with attribution being “so very difficult” and all.

With a high degree of confidence, and it’s disappointing. I would not expose any because the moderator doesn’t like sock puppetry accusations.

is still trying “new bottles” against @Bruces rules

This one… just ignore. I also recognize the style. This one isn’t disappointing; it’s expected. If you threw a stone at every dog that barked at you, stone prices would surpass zero-day vulnerability prices. The thing is, some will disagree with your disposition on some subjects, but they lack the mental capacity either to defend their point of view or to mount a structured attack on yours. They take the easy and cowardly way out. Not worth your time.

Clive Robinson January 19, 2019 2:09 PM

@ Wael,

This one… just ignore. I also recognize the style.

As I said, they are not worth kicking back under their bridge.

But you are correct, there appear to be a couple; one is used to try and bait our host, which is possibly why we get much shorter intros on a number of the threads.

There is a Hebrew word for a person who adds more to the room by leaving it. I know it’s not balaganist, as that is derived from Russian, meaning someone who creates a mighty mess/SNAFU, but for the life of me I can’t remember it. I’ll put it down to thinking loftier thoughts 😉

MichaelG January 20, 2019 12:07 AM

If we make the backdoor one that is implemented physically (you must have a machine with a specific chip to read/access, AND a machine with a specific chip serving, AND the two must be physically connected), then things start to look possible, maybe even more secure than our current systems if implemented correctly.

But like all security considerations this involves money and would require subsidy.

Lorenzo January 20, 2019 8:14 AM

Bruce,

From a technical point of view, lawful interception of e2e messaging can be done already without government “databases” and the risks associated with master keys. It should not be too difficult to do, and I’d be surprised if it wasn’t already in place.

Let us take WhatsApp as an example. In a hypothetical scenario, let us imagine a lawful intercept and gag request filed against WhatsApp in the UK, mandating forward interception of a single chat between two individuals. In this case, the company could comply by creating a new version of the application as an upgrade.

Such a version has the following characteristics:

  • It would contain a trigger function which, upon receipt of a specific, signed message from WhatsApp servers, will turn the targeted private chat into a group chat. Group chats in WhatsApp are implemented with shared keys, distributed to all members of the group (when a new member joins, keys are re-generated).
  • It would prevent the UI from showing the “ghost” member, or showing any trace that the chat had been transformed into a group chat.
  • The ‘trigger’ message would be encrypted with a private key, only in possession of WhatsApp. This would prevent anyone from mounting global interception unless they acquire the private key. In this scenario, the government would not ask WhatsApp to turn in the key; in fact, WhatsApp could even claim they do not infringe on users’ security, since blanket interception of e2e messages remains impossible. This significantly reduces the risk of unlawful use of the interception mechanism.

The intercepted conversation would not contain any past history, due to the group chat feature of the protocol explained earlier.

Unless the users are extremely tech-savvy, perform active monitoring of their phone messages, and can spot the change in metadata (if any!), this won’t be discovered. I’m afraid I’m not familiar enough with the WhisperSystems protocol in detail to know whether group chats are significantly different from single-user chats.

Reverse engineering the application could give away this behaviour, and while there are attempts at reverse engineering the WhatsApp web messaging protocol, I’m sure this can be hidden for long enough by sheer obfuscation of the code. This “update” could even be deployed as a targeted update, only for the individuals named in the court order, making the ‘main’ application clean. For example, instead of updating itself through the App or Play store, this specific version of WhatsApp could upgrade itself by downloading components from WhatsApp servers, in a manner similar to what Telegram does when it downloads a ‘blob’ in the desktop version.

Ultimately it all boils down to trusting a closed-source application and its implementation of the protocol.
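The ghost-member idea above can be caricatured in a few lines of Python. This is only a toy model of the membership and key bookkeeping, with invented class and member names; the real WhatsApp/Signal group protocol (sender keys, pairwise sessions) differs substantially:

```python
import secrets

class GroupChat:
    """Toy bookkeeping model of the hypothetical ghost-member scheme.
    Invented for illustration; NOT the real WhatsApp/Signal protocol."""

    def __init__(self, members):
        self.visible_members = list(members)      # what the UI displays
        self.key_holders = list(members)          # who actually holds the key
        self.group_key = secrets.token_bytes(32)  # shared symmetric key

    def add_member(self, who):
        # Honest path: the membership change is visible and triggers a
        # re-key, both of which clients can notice.
        self.visible_members.append(who)
        self.key_holders.append(who)
        self.group_key = secrets.token_bytes(32)

    def add_ghost(self, agency):
        # Compromised path: the 'ghost' receives the key, but the client
        # suppresses both the UI entry and any re-key notification.
        self.key_holders.append(agency)

chat = GroupChat(["alice", "bob"])
chat.add_ghost("intercept-endpoint")
assert "intercept-endpoint" in chat.key_holders          # can decrypt
assert "intercept-endpoint" not in chat.visible_members  # invisible in UI
```

The point of the sketch is that the entire backdoor lives in the gap between `key_holders` and `visible_members`, i.e. in the identity layer, exactly as Levy and Robinson propose.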

Clive Robinson January 20, 2019 8:24 AM

@ Wael,

It appears we are both on the right side of,

    But you are correct, there appear to be a couple; one is used to try and bait our host,

The current squid page appears to have been visited twice overnight.

The first is a wooly foot cover naming himself after “Geppetto’s son” and showing all the signs of a “Do It Yourself” merchant.

The second attack, with a quite related name, has put in an “endless page”, which has caused problems with the squid page, and would have done with the Recent Comments page if it did not have a “Displayed character limiter”.

The latter is obviously directed at either our host or the community in general.

Obviously being ignored is not what they want, so a fit of pique has resulted…

Wael January 20, 2019 9:06 AM

@Clive Robinson,

The latter obviously being directed at either our host, or the community in general.

Some juvenile, unimaginative script kiddie. I don’t see this as any different from the usual spam, with the exception that one doesn’t need to read it. Immediately detectable, so in a way it saves time 🙂

Clive Robinson January 20, 2019 9:27 AM

@ Wael,

Definitely somebody is upset, and wants to ruin it for others in a rather obvious and pathetic way.

I did want to post a reply to some queries on the squid page, but I’m guessing that is not happening today; it can wait until things clear up a bit.

Add it to the list of reasons I do not have my own blog.

Steven Wittens January 20, 2019 12:57 PM

I wish these agencies would have the smarts to see how self-defeating this all is.

By compromising the trust in encrypted apps and making everyone live in fear of mass surveillance (which they’ve redefined to mean only a human reviewing data that’s already been collected), they make it so nobody with a real sense of the dangers wants to work with or for them. Free expression sacrificed on the altar of national security is such a cliché that numerous trite quotes exist about it.

Furthermore, by leveraging the failure modes of the political system to enact this agenda, they’re also enabling the worst excesses of opportunistic governance and ass-covering. These systems have already been abused by mediocre bureaucrats whenever possible.

It’s simply not worth it. The argument that they’re losing access is not credible to me, because the abundance of digital communication is a luxury they didn’t have until a decade or two ago. Sure, they could intercept phone calls, but they still needed to figure out whose calls to intercept. That required physical policing, while a desk job was a post of shame. Maybe it’s time they got up, instead of expecting to Google the wrongdoers from their cushy offices.

Clive Robinson January 20, 2019 2:09 PM

@ Steven Wittens,

I wish these agencies would have the smarts to see how self-defeating this all is.

I’m sure they do, but that is the long term, past their knighthoods and lucrative pensions.

In the short term, “Empire is the name of the game”: build it or die forgotten is the ethos.

To get to the top for the big prizes, to be put in your hand out of sight behind your back, you have to not just climb the internal tree; you have to be known not just by insiders but, importantly, by outsiders. It is these outsiders who have product to sell with profits extraordinaire, some of which can be “bunged back” in the great game of “nest feathering” that is the jobs turnstile for senior civil servants and government ministers. Such is “representational politics” (which, whatever else you call it, “ain’t democracy”).

Thus to get a big spending budget, you not only need a department large enough to be considered an empire, you must have a never-ending “defensive” mission, through which, by “significant agency” (which we might otherwise call “first strike” war mongering), you protect some ill-defined part of National Security.

Why defensive? Well, it’s kind of simple,

1, If you are being attacked, then you are not devoting sufficient resources to the defence of National Security; thus you should spend more, a lot more.

2, If however you are not subject to current attack, this is not due to sufficient or even excessive spending, but because imminent threats have not yet built to overwhelming odds among the potential attackers. Thus you must spend even more to “nip it in the bud” with more “first strike deterrent”.

3, Thirdly, and colour me stupid if you don’t, you need even more resources and spending to “ensure national integrity” and to defend against the retaliation that your first-strike action caused…

Whichever way you try to frame it, they will always come up with reasons to do a “Please Sir, can I have some more” to the Treasury Ministers; such is the way that mighty bureaucratic empires “survive and thrive”…

Look at it this way: J Edgar Hoover was, as they say, “a very bad man”. He blackmailed, and possibly even organised “wet work” hits, for his own personal gain. Having done so much harm, he eventually “died on the job”, but still got buildings named after him etc…

Only the truly evil know how to plan not just their empire, but also how to ensure their name lives on for ever, and not just on park benches and grave stones…

justinacolmena January 20, 2019 2:12 PM

Basically, the FBI — and some of their peer agencies in the UK, Australia, and elsewhere

Basically, NAZIs, but we never mention Germany.

lawfare.com

Russian thieves. They are always “at law,” and the United States has always been at war with Eastasia.

gordo January 21, 2019 3:24 PM

A reasonable summary . . .

Translated Article: Campaign of the Spy Alliance “Five Eyes” against WhatsApp and Co
The current scattered news and reports on “encryption” belong together. The military secret services of the “Five Eyes” conduct a global campaign; in Australia they’ve already reached their first milestone.
by Erich Moechel | posted on January 8, 2019

Every two years, around the same time, a campaign of the espionage alliance “Five Eyes” against encryption programs takes place.

[. . .]

In the meantime, further traces of this campaign have been discovered in international standardization committees.

https://blog.deepsec.net/translated-article-campaign-of-the-spy-alliance-five-eyes-against-whatsapp-and-co/

Clive Robinson January 21, 2019 5:15 PM

@ gordo,

With regard to the quote you give, this snippet is important,

    In the meantime, further traces of this campaign have been discovered in international standardization committees.

I can verify first hand I’ve seen it in action. As for the “every couple of years”: the first I can remember was Louis Freeh, then director of the FBI, on his “secret European tour”. He knew darn well that the citizens of the US would not give him what he wanted (unfettered access at all times, without notice or request). So he came to Europe and tried the old “If you knew what I know, but I can’t tell you…” routine. The purpose being that if he could get some movement in Europe, he could use that as leverage in the US, then use that as leverage in Europe, and go round and round until he got his way.

In this endeavour he was a failure; likewise he was one of the worst FBI directors of more modern times. Perhaps if he had stuck to the job he was being paid to do, things would have been more as they should have been under his tenure, and fewer people would have suffered needlessly…

This is not the first time I’ve mentioned this on this blog; I think I first mentioned it before this blog existed, back when our host did things differently.

gordo January 22, 2019 12:28 AM

Ten years ago . . .

January 2008: FBI begins briefing lawmakers about the encryption threat

The first instance of the FBI using its now-famous name for encryption shrouding criminals’ communications—“going dark”—appears to have been in early 2008. In January, then-FBI Director Robert Mueller testified before both houses of Congress and included a “going dark” page in his briefing book, according to the Electronic Frontier Foundation, which obtained the documents through the Freedom of Information Act.

https://www.dailydot.com/layer8/encryption-crypto-wars-backdoors-timeline-security-privacy/

gordo January 22, 2019 1:35 AM

@ Clive Robinson,

I can verify first hand I’ve seen it in action.

There does appear to be a “controversy” afoot. Moechel’s follow-up piece is here, in German: https://fm4.orf.at/stories/2958984/. This is the article’s first paragraph per a browser translator tool:

There has been a controversy between the Internet Engineering Task Force (IETF) and the European Telecommunications Standards Institute (ETSI) about the new encryption standard TLS 1.3 for secure Internet communication. At ETSI, the technical committee TC Cyber had created, almost in parallel, a variant version called “eTLS”, which included a backdoor for surveillance.

See also (as was linked and quoted in the Moechel article):

[Statement from the IETF SEC Area Directors regarding “enterprise TLS” . . .]

We took this to mean that you had agreed to not use TLS in the name of this ETSI work product. The “Middlebox Security Protocol, Part 3” specification continues to use TLS in its protocol’s name in ways that are likely to confuse the technical community about the security properties of the proposal or of TLS.

We agree with the statement in the document that what it calls “eTLS” is an “implementation variant of Transport Layer Security (TLS) version 1.3” and that unmodified TLS 1.3 clients can interoperate with MSP servers. At a protocol level, the main area of divergence from TLS 1.3 to this MSP profile is the replacement of the server’s “ephemeral” DH key with a “static” DH key, which suffices to violate the design and operational assumptions of TLS 1.3 and render this MSP profile as a qualitatively different protocol that should be named accordingly. [pars. 4-5; emphasis as per Moechel]

https://datatracker.ietf.org/liaison/1616/

5G infrastructure, and “who has whose hooks in whom”, seems to be another part of the larger picture.
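The technical point the IETF objects to, replacing the server’s ephemeral DH key with a static one, is easy to sketch. A toy finite-field Diffie-Hellman in Python, with deliberately small illustrative parameters (not a TLS implementation, and far too small to be secure):

```python
import secrets

# Toy DH group, for illustration only (real deployments use 2048-bit+
# groups or elliptic curves).
p = 0xFFFFFFFB  # the prime 4294967291
g = 5

def keypair():
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

# TLS 1.3 proper: the server uses a FRESH (ephemeral) key pair per
# connection, so a later key compromise exposes only one session
# (forward secrecy). The ETSI "eTLS"/ETS profile instead fixes a
# static server key across connections:
server_priv, server_pub = keypair()

def handshake(srv_priv, srv_pub):
    """One client connection; returns that session's shared secret."""
    cli_priv, cli_pub = keypair()
    shared = pow(srv_pub, cli_priv, p)
    assert shared == pow(cli_pub, srv_priv, p)  # both sides agree
    return shared

# Whoever holds the static private key (e.g. a middlebox it has been
# escrowed to) can recompute EVERY recorded session's secret after the
# fact, which is exactly the property the IETF says breaks TLS 1.3's
# design assumptions.
s1 = handshake(server_priv, server_pub)
s2 = handshake(server_priv, server_pub)
```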

Clive Robinson January 22, 2019 2:58 AM

@ gordo,

With regard to the snippet,

    January 2008: FBI begins briefing lawmakers about the encryption threat

It goes back way further than that; Louis Freeh was still director of the FBI at the turn of the century.

If you look at his page on Wikipedia,

https://en.m.wikipedia.org/wiki/Louis_Freeh

you will find the following,

    In testimony to the Senate Judiciary Committee, Freeh said that the widespread use of effective encryption “is one of the most difficult problems for law enforcement as the next century approaches”.[16] He considered the loss of wiretapping to law enforcement as a result of encryption to be dangerous and said that the “country [would] be unable to protect itself” against terrorism and serious crimes.[17]

I’m aware you should not treat Wikipedia as a “reliable source”, but with the “cite_note”s [16] & [17] dated as being from 2000 and 1995 respectively, there are external references. I’ve not chased them up, but the date and transcript of such “testimony” will be in the public domain in the official records, so verifiable.

But realistically, “Crypto Wars I” was back then, with all its orchestrated FUD. Which means that the NSA, amongst others, had been “pump priming” at least half a decade earlier, so going back to 1990 or before.

Louis Freeh’s behaviours have made people wonder if he is a member of the “Opus Dei” organisation that many are suspicious of. Apparently Louis is not, but at least one of his children has been educated by them. The same “I am righteous” and “doing the Work” behaviours would have made him an ideal “target of conviction” for the NSA etc. to “convert” to their point of view relatively easily. If they did, the question would be when and how; my feeling is that it would be by the “drip-drip” point-of-view modification technique, which takes the “instilling route” to alter a person’s outlook, a relatively slow but safe process. Thus it might have started in the 1980s, when, according to Wikipedia,

    Freeh was an FBI Special Agent from 1975 to 1981 in the New York City field office and at FBI Headquarters in Washington, D.C. In 1981, he joined the U.S. Attorney’s Office for the Southern District of New York as an Assistant United States Attorney. Subsequently, he held positions there as Chief of the Organized Crime Unit, Deputy United States Attorney, and Associate United States Attorney. He was also a first lieutenant in the United States Army Reserve.

As a speculative guess, being in charge of the “Organized Crime Unit” would have been a time when he would have “come to the attention” of various IC entities, and he would have bumped up against criminal use of crypto.

The NSA and GCHQ are certainly used to “playing the long game”, and yes, ETSI would be an extended Five Eyes target for “tweaking standards and protocols”. The CCITT got the “finessing” behaviour based on the old “safety argument”, which is so very, very hard to argue against, especially when two or three committee / working-group members are secretly working in concert…

John Beattie January 22, 2019 7:36 AM

It bears repeating that adding crocodile clips is a manual process requiring human beings; eavesdropping by network hacking can be done by computers, which is a completely different thing.

Good to discuss this publicly, but the argument supporting the proposal is still at the disingenuous level.

gordo January 29, 2019 6:34 PM

Controversial Internet “security standard” eTLS is being renamed
The controversial ETSI encryption standard was renamed from “eTLS” to “ETS”. This is what the IETF internet standards experts demanded, to avoid confusion with their secure TLS 1.3 standard.
By Erich Moechel | Published on 29.01.2019

“eTLS” is now called “Enterprise Transport Security” (ETS), but technically it has not changed.

[. . .]

An invisible third instance pushes in between the two communication partners, encrypting in both directions and pretending to be one of the two communicating parties.

[. . .]

A secure TLS connection is currently mostly signaled by a green lock (as described here for the ETSI website). Which symbol will be chosen for ETS is currently unknown.

https://fm4.orf.at/stories/2961307/

[The above is a translation from German to English per a browser translator tool]
