Posted at 2016-02-19 14:40
Tags: apple fridayqna security
Friday Q&A 2016-02-19: What Is the Secure Enclave?
by Mike Ash  
This article is also available in Lithuanian (translation by Giedrius Sadauskas).

The big tech news this week is that the FBI is trying to force Apple to unlock a suspect's iPhone. One of the interesting points around this story is that the iPhone in question is an older one, an iPhone 5c. Newer phones have what Apple calls the Secure Enclave, which some say protects against requests of this nature; even if Apple wanted to break into your phone, they couldn't. Which then brings up an interesting question I've seen a lot of people asking: what exactly is the Secure Enclave, and what role does it play here?

A quick note before I get started: my usual approach to writing articles is to dig all the way down to the bits and bytes and then discuss what's there. This article is necessarily somewhat different. By its very nature, the Secure Enclave is inaccessible to mere mortals like myself. Instead, most of the knowledge here comes from the information in Apple's iOS Security Guide, plus some general theory. The intent is to extract the relevant bits from that guide, explain them, and think through the implications. This article must assume Apple's information is accurate, since there's no practical way to check it from the outside. The result will only be as good as the product of the guide's accuracy and my own understanding, so reader beware.

Also, this is intended to examine the technical aspects of this case and the Secure Enclave technology. No opinions on the merits of the FBI's request or Apple's response or any other political matters are stated or implied. If you want to discuss the political aspects of this case, there are many other places where you can do so.

With that out of the way, let's get started.

Review
The iPhone in question is protected by a passcode, which isn't stored anywhere on the device. The only way to get in is to brute-force the passcode. The computation needed to verify a passcode is deliberately slow, requiring about 80 milliseconds per attempt. Still, brute force cracking is feasible. For a four-digit passcode, trying 10,000 combinations at 80 milliseconds each would take less than 15 minutes. A six-digit passcode would take about a day.

This means that passcodes are not terribly secure. Apple mitigates this with additional delays after too many failed attempts. After a few failed attempts, the iPhone will make you wait before you can try again, starting with a one-minute delay, then a five-minute delay, and beyond. This makes a brute force attack impractical.

One might think that you could work around this if you pulled the flash memory out of the iPhone, copied its contents, then tried to crack it on a fast computer. You wouldn't have any software imposing additional delays. As a bonus, the 80 milliseconds needed for each attempt could go a lot faster with a faster computer, and you could run many attempts in parallel. However, this doesn't work. The data encryption is tied to the hardware, so the brute force attack must be run on the original device.

On the older iPhones without the Secure Enclave, there is a weakness in this system. The escalating delays prevent brute force cracking of the passcode, but these delays are just a feature of the phone's OS. The 80 millisecond key derivation is an inherent property of that computation, but the additional minutes or hours delay after too many failed attempts is just the OS code refusing to accept additional input until some time has passed. The FBI wants Apple to build and install a special OS version that doesn't enforce these delays and which allows passcodes to be submitted electronically. This would allow the FBI to crack the passcode within a few minutes or hours. iPhones won't accept OS updates from anyone besides Apple, so this system is secure from outside attackers, but it's not secure against Apple itself.

Note that this is based on the user using a numeric passcode. A good complex password is still secure even on these older iPhones. An eight-character alphanumeric password would take about 550,000 years to try all possible combinations, for example.
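For concreteness, here's the arithmetic behind those figures as a quick Swift snippet, using the roughly 80 milliseconds per attempt from above. The numbers are simple multiplication, nothing more:

    import Foundation

    // Back-of-the-envelope brute force timings, assuming about
    // 80 milliseconds of computation per passcode attempt.
    let secondsPerAttempt = 0.080

    func timeToTryAll(_ combinations: Double) -> String {
        let seconds = combinations * secondsPerAttempt
        switch seconds {
        case ..<3600:   return String(format: "%.0f minutes", seconds / 60)
        case ..<86_400: return String(format: "%.0f hours", seconds / 3600)
        default:        return String(format: "%.0f years", seconds / 31_557_600)
        }
    }

    print(timeToTryAll(10_000))          // four-digit passcode: ~13 minutes
    print(timeToTryAll(1_000_000))       // six-digit passcode: ~22 hours
    print(timeToTryAll(pow(62.0, 8.0)))  // eight alphanumeric characters (62 possibilities each): ~550,000 years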

Unreadable UIDs
Each iOS CPU is built with a 256-bit unique identifier or UID. This UID is burned into the hardware and not stored anywhere else. The UID is not only inaccessible to the outside world, but it's inaccessible even to the software running at the highest privilege levels on the CPU. Instead, the CPU contains a hardware AES encryption engine, and the only way the UID can be used is to have the hardware load it into the AES engine as a key and use it to encrypt or decrypt data.

Apple uses this hardware to entangle the user's passcode with the device. By setting the device's UID as the AES key and then encrypting the passcode, the result is a random-looking bunch of data which can't be recreated anywhere else, since it depends on both the user's passcode and the secret, unreadable, device-specific UID. This process is repeated over many rounds using the PBKDF2 function, feeding each round's output back into the next round's input, performing the heavy computation needed to force 80 milliseconds of work for each passcode verification attempt.
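As a rough illustration of this entanglement, here's a sketch in Swift. Apple describes the real construction as PBKDF2 with the UID-keyed hardware AES engine mixed in; the sketch collapses it into a plain iterated loop, and the hardwareAESEncrypt function and the round count are purely hypothetical stand-ins, not Apple's actual implementation:

    import Foundation

    func hardwareAESEncrypt(_ data: Data) -> Data {
        // Stand-in for the AES engine keyed with the unreadable device UID.
        // Software only ever sees the engine's output, never the UID itself.
        fatalError("this operation only exists inside the hardware")
    }

    func derivePasscodeKey(passcode: String, rounds: Int) -> Data {
        var state = Data(passcode.utf8)
        for _ in 0..<rounds {
            // Feed each round's output into the next round's input. The result
            // depends on both the passcode and this specific device's UID.
            state = hardwareAESEncrypt(state)
        }
        return state
    }

    // The round count is calibrated so that one full derivation takes about
    // 80 milliseconds on the device's own hardware.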

Secure Enclave
The Secure Enclave was introduced with Apple's A7 system on a chip. All iPhones starting with the 5s have it. The 5/5c and below do not. On the iPad side, everything from the iPad Mini 2 and the iPad Air onward has it.

The Secure Enclave is a separate CPU within the A7 (or later) that's responsible for low-level cryptographic operations. It doesn't run iOS or anything resembling iOS, but instead runs a modified L4 microkernel. L4 is intended to run as little code as possible in the kernel, which should theoretically make the system more secure by reducing the amount of potentially buggy code running with elevated privileges. The Secure Enclave uses a secure boot system to ensure that the code it runs can't be modified, and it uses encrypted memory to ensure that the rest of the system can't read or tamper with its data. This effectively forms a little computer within the computer that's difficult to attack.

The Secure Enclave contains its own UID and hardware AES engine. The passcode verification process takes place here, separated from the rest of the system. The Secure Enclave also handles Touch ID fingerprint processing and matching, and authorizing payments for Apple Pay.

The Secure Enclave performs all key management for encrypted files. File encryption applies to nearly all user data. Most system apps use it, and third party apps all use it by default if running on iOS 7 or later. Each encrypted file has a unique key, which is in turn encrypted with another key derived from the device UID and the user's passcode. The main CPU can't read encrypted files on its own. It must request the file's keys from the Secure Enclave, which in turn is unable to provide them without the user's passcode.
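Here's a sketch of that key hierarchy in Swift. Every type and function in it is hypothetical, invented for illustration; the point is the shape of the flow, with the main CPU holding only ciphertext and a wrapped key until the Secure Enclave agrees to unwrap it:

    import Foundation

    struct EncryptedFile {
        let ciphertext: Data
        let wrappedFileKey: Data  // unique per-file key, encrypted with a key derived from UID + passcode
    }

    protocol SecureEnclaveService {
        // Fails (or stalls) if the passcode is wrong or attempts are being rate limited.
        func unwrapFileKey(_ wrapped: Data, passcode: String) throws -> Data
    }

    func aesDecrypt(_ ciphertext: Data, key: Data) throws -> Data {
        // Stand-in for ordinary AES decryption performed on the main CPU.
        fatalError("illustrative stub")
    }

    func readFile<Enclave: SecureEnclaveService>(_ file: EncryptedFile, enclave: Enclave, passcode: String) throws -> Data {
        let fileKey = try enclave.unwrapFileKey(file.wrappedFileKey, passcode: passcode)
        return try aesDecrypt(file.ciphertext, key: fileKey)
    }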

The escalating delays for failed passcode attempts are enforced by the Secure Enclave. The main CPU merely submits passcodes and receives the results. The Secure Enclave performs the checks, and if there have been too many failures it will delay performing those checks. The main CPU can't speed things along.
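To illustrate, here's what such a policy might look like in Swift. The thresholds roughly follow Apple's publicly documented escalating delays, but this is illustrative only; the real policy lives in the enclave's firmware and its exact behavior isn't public:

    import Foundation

    // One way the Secure Enclave could map failed-attempt counts to enforced
    // delays. The main CPU can't skip the wait, because the enclave simply
    // refuses to run the next check until the delay has elapsed.
    func requiredDelay(afterFailedAttempts failures: Int) -> TimeInterval {
        switch failures {
        case ..<5:  return 0
        case 5:     return 60           // 1 minute
        case 6:     return 5 * 60       // 5 minutes
        case 7, 8:  return 15 * 60      // 15 minutes
        default:    return 60 * 60      // 1 hour
        }
    }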

Implications
What does the Secure Enclave mean for the security of the system as a whole?

On most systems, if you can get into the OS kernel then you own the entire system. The kernel can do anything. It can read and write every byte of system memory, it can control all of the hardware, and it's in charge of all of the application code the system runs, which it can subvert at will.

Since the Secure Enclave is a separate CPU mostly cut off from the rest of the system, it isn't under the kernel's control. On an older iPhone, owning the kernel means owning everything done by the system, including the passcode verification process. With the Secure Enclave, no matter who is in control of the main CPU, no matter what code is in the OS running on it, the basic security functions remain intact.

This system essentially allows arbitrary code to be placed in front of cryptographic functions such that this arbitrary code can't be bypassed. It's a bit like a super-sized version of the 80 millisecond computation time for passcode attempts. That delay is enforced by using a calculation that inherently takes that much time, so it can't be bypassed, but there are limits to what you can build out of the inherent properties of calculations alone. For example, you can't add a one-minute delay to the fifth attempt, because raw cryptographic constructs don't have a concept of a "fifth attempt." With the Secure Enclave, that one-minute delay can be enforced: even with the rest of the system subverted, the delay code in the Secure Enclave remains intact.

Software Updates
The iPhone 5c (and other pre-A7 iPhones) can be subjected to a brute force attack by creating a new OS without the artificial delays, loading that onto the device, and then testing passcodes as fast as the hardware can compute. The Secure Enclave prevents this. But what about carrying out the same kind of attack one level further down, by loading new software into the Secure Enclave which eliminates its artificial delays?

Apple's guide contains this discussion of software updates for the Secure Enclave:

"It utilizes its own secure boot and personalized software update separate from the application processor."

That's it! There are no details whatsoever. What is the actual situation? Here, we must enter the realm of speculation, because as far as I can dig up there is no information out there about how Secure Enclave software updates actually work. I see two possibilities.

The first possibility is that the Secure Enclave uses the same sort of software update mechanism as the rest of the device. That is, updates must be signed by Apple, but can be freely applied. This would make the Secure Enclave useless against an attack by Apple itself, since Apple could just create new Secure Enclave software that removes the limitations. The Secure Enclave would still be a useful feature, helping to protect the user if the main OS is exploited by a third party, but it would be irrelevant to the question of whether Apple can break into its own devices.

The second possibility is that the Secure Enclave's software update mechanism does something more sophisticated to protect against an attack even from Apple. The whole idea of the Secure Enclave is that it enforces additional rules that can't be bypassed from the outside. This could include additional rules about its own software updates. Given the goal of protecting the user's data, it would make a lot of sense for the Secure Enclave to refuse to apply any software update unless the device has already been unlocked with the user's passcode. For a case where the user has forgotten the passcode and wants to wipe the device and start over, the Secure Enclave could allow updates but delete the master keys protecting the user's data.
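To make that second possibility concrete, here's a purely speculative sketch in Swift of what such an update path might look like. None of these names correspond to anything Apple has documented:

    import Foundation

    enum UpdateAuthorization {
        case unlockedWithPasscode  // user entered the passcode before updating
        case signedButLocked       // valid Apple signature, but no passcode
    }

    func applyEnclaveUpdate(_ firmware: Data, authorization: UpdateAuthorization) {
        switch authorization {
        case .unlockedWithPasscode:
            installFirmware(firmware)   // user data stays intact
        case .signedButLocked:
            eraseMasterKeys()           // device remains usable, but the data is gone
            installFirmware(firmware)
        }
    }

    // Hypothetical stand-ins for operations inside the enclave.
    func installFirmware(_ firmware: Data) {}
    func eraseMasterKeys() {}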

Which one is true? For now, we don't know. Apple put in a lot of effort to protect user data, and it would make a lot of sense for them to take the second approach, where updates wipe the device if applied without the user's passcode. This would be fairly easy to implement, and shouldn't affect the usability of the device. Given Apple's public stance on user privacy, I would find it extremely weird if the Secure Enclave's software update mechanism wasn't implemented in this way. On the other hand, Tim Cook's public letter implies that all iPhone models are potentially vulnerable, so perhaps they haven't taken this extra step.

When it comes to the matter of law enforcement forcing Apple to attack an iOS device, this is the key question. If Secure Enclave updates are secured even against Apple, then the FBI's ability to make these requests stops at the iPhone 5s. If they're not, then even the latest 6s could potentially be attacked. I am deeply interested in learning the answer to this question.

Conclusion
The Secure Enclave adds an additional line of defense against attacks by implementing core security and cryptography features in a separate CPU within Apple's hardware. This separate CPU runs special software and is walled off from the rest of the system, placing it outside the control of the main OS, including the kernel. The Secure Enclave implements device passcode verification, file encryption, Touch ID recognition, and Apple Pay, and enforces security restrictions such as the escalating delays applied after excessive incorrect passcode attempts.

The iPhone 5c that the FBI is asking Apple to break into predates the Secure Enclave, and so can be subverted by installing a new OS signed by Apple that removes the artificial passcode delays. Whether newer phones can be similarly subverted depends on how the Secure Enclave's software update mechanism is implemented. If software updates erase the master encryption keys when installed without the passcode, then newer iPhones can't be attacked in this way, even by Apple. If updates are allowed without the passcode and without erasing keys, then the Secure Enclave can potentially be subverted just as older iPhones can be. As far as I'm able to determine, whether this is the case remains an open question.

That's it for today! Come back again for more exciting adventures, probably with fewer inaccessible and opaque CPUs. Friday Q&A is driven by reader ideas, so if you have something you'd like to see covered here, please send it in!


Comments:

"Secure Enclave could allow updates but delete the master keys"

Does the Secure Enclave contain some sort of secure non-volatile memory? If not, I'm not sure how it can delete something in a verifiable way.

I guess you could make the passcode hash depend on the passcode itself, the internal secure secret key, and some property of the current OS (e.g. a checksum). Then when you do a software update, the Secure Enclave would have to recalculate the passcode hash...
While everyone is talking about Apple hardware since that's what brought this whole conversation about, I'm wondering about Android users (let's not forget that Android represents the larger market share). Since there are hundreds of Android-powered phone models out there with many different hardware designs, it's a tricky conversation to have. Perhaps, though, we could focus for a moment on comparisons of modern hardware: if I'm buying a current-generation flagship (say, a Nexus 5x, Nexus 6p, or Samsung S6) I'm probably buying a phone that has something built on ARM's TrustZone technology (such as Qualcomm's SecureMSM/Haven or Samsung KNOX). How do these compare to Secure Enclave? Do they provide similar protections? Do they have similar vulnerabilities?
There's been some speculation about using electron microscopes/etc to read the UID right off the chip to allow offline brute forcing. Do you think anyone is capable of doing that?
ech, I'm pretty sure the UID is covered in epoxy; if you tried to open it to read the keys, the whole enclave would be destroyed.
Thomas:

From what I've read, Android has nothing like this. If someone gets your device and they know what they're doing, they get everything.

There was a report a while back when some hacker-for-hire company had its data leaked. They said they could hack any Android or jailbroken iOS device ... but not a standard iOS device.
I remember reading something between speculation and a petition that Apple would replace Mach with L4 in OS X several years ago. Now I wonder if that was based on Apple research activity that ultimately became the Secure Enclave (or if some of those people got jobs at Apple and drove its adoption themselves).

Apart from the size, one interesting security feature of the seL4 implementation (which I assume Apple are using) is that the code has been formally proven correct.
According to a former Apple security engineer it's likely that it can be updated without wiping the keys. https://twitter.com/JohnHedge/status/699892550832762880
vasi: You're right, it would need its own nonvolatile storage in order to erase keys. It wouldn't need much, just 256 bits to store a master key. It wouldn't need to be very durable either, since it wouldn't be rewritten often. So it seems like it would be totally feasible, but I have no idea if the real thing actually has any. Using OS checksums as part of the key could help protect against outside attackers but wouldn't do anything against Apple, since they can just include the checksums from another version in their custom software build.

ech: That sort of hardware stuff is way outside my expertise, but speculation is fun, so.... I'd guess that it is absolutely possible, but the question is how much money it would cost. This is definitely not a weekend project for a couple of guys at the lab. You'd have to remove the packaging from the chip without destroying the guts, scan the chip to figure out where the UID is stored, then scan that area with an electron microscope. The individual components are 28 nanometers wide on the A7, and even smaller on newer ones, so everything is delicate and tiny. I wouldn't be surprised if the part that encodes the UID is somehow built to make it difficult to remove the packaging without destroying it.

There's some discussion about it here:

https://www.theiphonewiki.com/wiki/GID_Key

That's mostly about the GID key, not the UID key, but the principle is the same. The GID key is another key baked into the hardware in a way that makes it difficult or impossible to retrieve, like the UID, but the GID is shared across all devices of a given model.

asdf: Apple's paper says they're using L4 but doesn't mention seL4. I have no idea if that's because they're not using seL4 or if that's just loose terminology.

dkasper: I saw that, and it's pretty convincing. I think there's still some room for key erasure when applying updates given that information. For example, the Secure Enclave could store a secure checksum of its firmware (again, assuming the SE has any of its own storage at all) and the hardcoded secure boot sequence could erase the master key (again, assuming there's storage in the SE) if the checksum doesn't match but the firmware still has a valid signature. But this is just a story I'm making up about how it could work, it says nothing about how it actually does. Right now I'm leaning towards the idea that it doesn't erase anything on update, but I'm still not sure.
The OS implements the minutes/hours delay and the potential wiping setting on the iPhone 5c. But where does the OS store this state? If there is no Secure Enclave, what's to prevent someone with physical access to the chips on the phone from resetting the OS and its memory and starting over again every few attempts, to avoid the minutes/hours delay and prevent it from potentially wiping the phone?
How is the delay on the SE enforced? If you try once, cut the power, then power it back on, does it forget you failed once? Thus you could try really quickly by powercycling the SE?
"Each iOS CPU is built with a 256-bit unique identifier or UID. This UID is burned into the hardware and not stored anywhere else. The UID is not only inaccessible to the outside world, but it's inaccessible even to the software running at the highest privilege levels on the CPU."

The premise of this security design is that these 256-bit UIDs are truly random (and nobody keeps a list that matches them to the phone's S/N).

Would it be possible to (indirectly) test their randomness by setting the passcode to e.g. "0000", encrypting some specific data with the hardware AES engine and then collect the encrypted data from many iPhones and check for duplicates? I, for one, would participate in such a test if there was an app for that.

Of course, if there are no duplicates the above mentioned list could still exist.
bob: I see no problem with that approach in theory. In practice, this stuff is so tightly packed I'd bet it's hard to get access to the flash from outside without breaking the phone in some way. Resetting the timer probably means rebooting the phone which will also slow things down a lot.

There was actually an example of something like this from about a year ago:

https://www.intego.com/mac-security-blog/iphone-pin-pass-code/

This takes advantage of a vulnerability where the phone didn't write out the information about the failed attempt right away, so by cutting power at just the right moment, it bypassed the artificial delay. This vulnerability has since been fixed.

delay: As you can see, a vulnerability like you describe did exist. I imagine they've fixed it now. Apple's guide actually mentions this briefly:

"On devices with an A7 or later A-series processor, the delays are enforced by the Secure Enclave. If the device is restarted during a timed delay, the delay is still enforced, with the timer starting over for the current period."

I imagine it writes out information about the failure before reporting it back to the main CPU, so you can't know to cut the power until it's too late.

kaw: I'm not sure if you can get access to the AES engine normally. I'd assume you could access the one in the main CPU if you jailbroke. I don't think you could access the one in the Secure Enclave at all (barring a vulnerability in its code).

I'd be surprised if there were any duplicates. Even with bad randomness, it's more likely that they're all different, but more predictable. For an extreme case, imagine if the UIDs were just a counter starting from some random value. Totally predictable, but all unique, and impossible to detect from the outside.

And as you say, they could be random but the manufacturer could keep a list of all of them.

I'd love to know exactly how the UIDs are generated and written into the chips. I'd hope it uses some sort of inherently random physical process which doesn't require the information to exist outside the chip, but I really don't know. Because of its very nature, we have to trust Apple and their chip suppliers that it works the way they say.
Why does the secure enclave need nonvolatile storage? Can't it just use its internal UID as a key to store whatever it wants, encrypted, on the phone's normal flash memory?
Jakob: In theory, if all of the enclave's memory were encrypted with the same UID all the time, its memory could be rewritten with a replay attack. Let's say the Enclave writes out its keys, as encrypted data, to the filesystem. If this file were encrypted with the same UID each time, a malicious agent could copy that file and then rewind subsequent changes to the keys in the file, and the Enclave wouldn't have a way of knowing; the replayed file would look just like one it had created (this might allow someone to circumvent failed-password backoff intervals, or use an old password to unlock protected data, among other things). So, at the very least, the Enclave has to be able to keep a counter or nonce to mix with the UID, so replay attacks can't be run against its flash file.

It also needs enough memory to keep its own RTC, to prevent replays of saved RTC values from interfering with things like password intervals and cert expirations.

This is all in a regime where you trust the Flash and RAM that's connected. It's completely possible that if the Flash (or the intervening kernel and APIs) is compromised, when the Enclave demands a key be deleted from the FS, the kernel simply lies to the Enclave, saying it did when it really didn't. On an Apple device, though, the boot firmware and the hardware configuration are signed by an Apple public key cert, and the Secure Enclave needs to do this validation (I think!), and it has to do it at a stage in boot when the Flash isn't available. (This is related to the whole Error 53 business. If the security infrastructure thinks a piece of its trusted hardware has been fiddled with during boot, it dumps out.)
Both speculations have their own problem:

- Not deleting keys: security failure, makes brute force attack viable.
- Deleting keys: anyone can wipe your device.

Possibly, the first option is better for anyone who needs to hide some info, but the second one is worse for most people.
luiX_: If you can upgrade the OS on a phone, you can wipe it regardless of the Secure Enclave's desires. You don't have to be able to decrypt data to be able to erase it.

They could prevent wiping the device without the passcode, but then you've made it so that forgetting the passcode permanently bricks your device.
I don't understand all this 'can they update it without wiping the iPhone' musing.

Let's say they have a good deal of confidence in their secure software, but not enough to make it automatically wipe your phone when updating. Why not just, when there is a secure enclave update, make the person authenticate, then decrypt their drive with the old software (perhaps writing it to the drive encrypted by the method that the 5c uses), then update, then reencrypt with the new? Sure, it will take a while, but it will not be too disruptive and it won't compromise security except while it is actually happening... and, given apps that can hold the iPhone unlocked for as long as they want, and given the apparently-quite-secure way the iPhone has of erasing flash after its data is deleted, I don't see this as inherently posing a security issue that isn't there already.
Fred Fnord: There has to be a way to update the phone without the passcode, otherwise the hardware becomes trash if you ever forget your passcode.

I don't understand what problem your decrypt-then-reencrypt procedure is solving. If the user enters the passcode then the Secure Enclave can just update without wiping anything in the first place. The question is what happens if you try to update it without the passcode.
"There has to be a way to update the phone without the passcode, otherwise the hardware becomes trash if you ever forget your passcode."

There has to be a way to erase the phone and set a new passcode without the passcode, but I don't see why there has to be a way to update the secure enclave software while leaving the data intact without the passcode. Obviously you do: what is it?

(This is why my response made no sense to you... obviously, I was completely mistaking your argument in the first place.)
Sounds like "Trusted Platform Module" Can something like this will work against enclave, using electron microscope https://gcn.com/Articles/2010/02/02/Black-Hat-chip-crack-020210.aspx
Fred Fnord: I still don't get it. I clearly don't think there must be a way to update the secure enclave without the passcode while leaving data intact. I laid out the two possibilities in the article: either there are no checks and it leaves data intact, or there are checks and it wipes data. Which one is actually true is unknown. But it clearly can be updated without the passcode, since iPhones don't become un-updateable if you forget your passcode and wipe the device. So the question is just whether data wiping is enforced in that case.

anurag: Yes, that's the sort of thing you'd have to do. It's hard to estimate the difficulty, Apple's chips could be much harder to get into than that one, or much easier. In this particular case there's also the problem that you need to break into one particular device, not just any random iPhone. That means that your technique needs to be pretty reliable, because if you break the chip while trying to get into it, you're screwed.
Does the software fix for Error 53 give you any more clues on how this works?
I don't think the Error 53 story has much to say here. The error itself was about a failure to authenticate the Touch ID hardware. It seems to confirm that the Secure Enclave software can in fact be updated, but we pretty much knew that already. The big question is how that update is implemented, and I don't think there are any clues about that.
Zaph: That is true, but I'm not sure why you'd bring that up.
The article contains the following:

"The Secure Enclave also handles Touch ID fingerprint processing and matching, and authorizing payments for Apple Pay."

But it is the Secure Element, not the Secure Enclave, that is used for authorizing payments for Apple Pay.

So the statement is at least too broad.
From Apple's security guide, under "Apple Pay components":

"On iPhone and iPad, the Secure Enclave manages the authentication process and enables a payment transaction to proceed."

Both the Secure Element and the Secure Enclave participate.
Hi! The ARM architecture describes the TrustZone technology (http://www.arm.com/products/processors/technologies/trustzone/). It describes separate virtual CPUs for trusted and non-trusted worlds, with those CPUs running on one physical CPU. Starting from ARMv8, TrustZone has full hardware support. The Apple A7 is the first ARMv8-based Apple CPU. The appearance of the Secure Enclave and of hardware-based TrustZone coincide. Don't you think that Apple could just use TrustZone as the backing for the Secure Enclave? If that is true, Apple would not need any additional physical CPU, and could just run the Secure Enclave kernel on the main CPU.
The difference is "authorizing payments" is done by the Secure Element, the Touch ID is done by the Secure Enclave. Essentially the Secure Element is a crypto replacement for using the SIM card for payments. SoftCard was a payment method being pushed by the phone carriers (it was owned by three of them, later sold to Google) that used the SIM card and the Carriers would not allow any other entity to use them for payments. The option Apple took was to create the same SIM Card payment functionality in new Secure Element hardware. (Now other payment schemes are using HCE). I think that if you look at the Apple Pay documentation you will see the different functionality.
Kamil: Apple had to meet the requirements of the card issuers; they did not have a free hand. Being the first, they were subjected to more requirements than later entrants.
Zaph: I see the confusion. By "authorizing payments" I didn't mean communicating with the payment terminals. I merely meant that it is the one which manages the user's authorization to make a payment. The Secure Element handles the details with the NFC terminal, including making the authorization there.

Kamil Borzym: That's an interesting question. Perhaps TrustZone wasn't available early enough and Apple went their own way. Or perhaps it's ultimately less isolated from the rest of the system. I'm not familiar enough with it to say.
From Apple iOS Security Guide https://www.apple.com/business/docs/iOS_Security_Guide.pdf

Secure Element: The Secure Element is an industry-standard, certified chip running the Java Card platform, which is compliant with financial industry requirements for electronic payments.

Secure Enclave: On iPhone and iPad, the Secure Enclave manages the authentication process and enables a payment transaction to proceed. It stores fingerprint data for Touch ID.
Secure Enclave is really a great security feature for Apple devices.

No one really knows if Apple can bypass it, because Apple's statements about this are not clear.

However, it's good to know that I'm safe, at least from third party users.
So... apparently it's unlocked now, anyone know how they did it?
Excellent article; I did not know how the Secure Enclave works.

It sounds very secure: the combination of UID + passcode + salt, plus the timers on failed attempts.
Without replay-protected memory, no limitation on the number of attempts can be enforced. There are a few options available. Embedded flash is very versatile, but it adds extra masks and thus cost, and is normally not used for processors, though it could be possible with the margins Apple has. Flash chips with replay protection, cryptographically tied to the processor, are commonly used by mobile phones in the form of eMMC. That requires trusting two chips, though. If the NAND controller is not embedded in the NAND die, depackaging attacks can allow replay.
