Swift-ly Secure

With the recent open-sourcing of Swift, the barrier to entry for creating iOS and OS X apps has been lowered, but old vulnerabilities still exist, and developers still make mistakes that violate users’ privacy and expose an organization to additional risk. In this talk, Seth Law covers common vulnerabilities as they exist in Swift applications, examples of failures, and how to prevent them.


My name is Seth Law; I’m VP of Research and Development at a small application security consulting firm named nVisium. All we do is application penetration testing and developer training. I do a lot of open source work, and I wrangle the research and development efforts at nVisium, where we’re trying to improve the state of security in the development community.

Considering Security With Swift (1:01)

Coming from the days of Objective-C, Swift is nothing but awesome. From a security perspective, it mirrors Objective-C pretty closely, which is not surprising given that the APIs and platform features we call are the same ones Apple provides. The whole point of my talk tonight is to make your application a fortress. The last thing we want is for your application to end up like some of the mobile apps out there that have been completely burned to the ground.

You don’t want your data exposed to people who shouldn’t have access to it. When it comes down to it, do you think of security when you think of Swift? Has it ever crossed your mind? In general, there’s not a huge backlog of security information out there yet. We know Swift is a compiled language, and it’s built on some of the C-isms that aren’t secure; there are things you can still call from Swift that you probably shouldn’t. So, what does this mean to you? The vulnerabilities inherent in these C-style languages are addressed by Swift’s own functions, just as they are in Objective-C.

If you’re using the APIs that Apple provides, they tend to protect you from the things in C that would be a problem. But the bad news is that your code is only as secure as you make it. This means you can still make mistakes. There are things you can do that would result in a buffer overflow if you handle your data in an insecure manner. If you’re passing an Int into some C library that has problems, guess what? You’re still going to have that problem in your Swift application.
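As a minimal sketch of the kind of C-ism you can still reach from Swift (my example, not from the talk), here strcpy performs no bounds check, so an over-long input silently overruns a fixed-size buffer:

```swift
import Foundation

// Swift's own collections are bounds-checked, but nothing stops you from
// calling straight into C. strcpy does no bounds checking, so any input
// longer than 15 bytes overruns this buffer.
func copyUnsafely(_ input: String) {
    var buffer = [CChar](repeating: 0, count: 16)
    input.withCString { source in
        _ = strcpy(&buffer, source) // classic buffer-overflow risk
    }
}
```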


When you Google Swift language and security, you find 25 million results. Number one is what the new language did not fix. The second one is SecuriTay, or @SwiftOnSecurity. I don’t know if anyone has ever followed the @SwiftOnSecurity account, but it has nothing to do with the Swift language. Nevertheless, more than half of these results are that account. It’s pretty entertaining regardless.

Common Security Gotchas (4:10)

We thus have a problem when it comes to Swift and security: there’s just not a lot of institutional knowledge around Swift, not a lot of libraries, and not a lot of prior work exposing these issues up to this point. If we look at the security checklists that are out there, they step through items similar to these:

  • Unvalidated Input – what is my application taking in, and what is it trusting from the user? (See the sketch after this list.)
  • Access Control – what can users get to when they access things through my application?
  • Secure Storage/Encryption
  • XSS and Injection
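As a minimal sketch of the first item (my example; the helper name is hypothetical), whitelist what you accept rather than trusting raw user input:

```swift
import Foundation

// Hypothetical helper: accept only 3-32 alphanumeric characters,
// rejecting anything else before it reaches storage or a backend call.
func isValidUsername(_ candidate: String) -> Bool {
    let pattern = "^[A-Za-z0-9]{3,32}$"
    return candidate.range(of: pattern, options: .regularExpression) != nil
}

// isValidUsername("alice42")                      // true
// isValidUsername("alice'; DROP TABLE users;--")  // false
```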

Trust: The Security Mindset (6:20)

When you’re building an application, the security mindset you need to take on comes down to some level of trust: trust that you can defend against an attacker with a certain skill set. This doesn’t mean you’re going to protect against a malicious insider who already has access to your database and all that information. But when your application is in its normal environment, you want to make sure you are not the lowest-hanging fruit. That’s where we see the huge problems.

If you’re Facebook, you want to make sure you’re not storing that Facebook token in an insecure manner, which they did for a while. You’ve got to trust that you can recover from what you can’t prevent, with some sort of monitoring, especially on the back end. Your users have to be able to trust your product not to expose them. If I put my credit card number in there, the last thing I want is for it to end up in some other country.

Most importantly, your product does not trust its users. If there’s one thing you take away from this, it’s that you cannot trust your users. That’s where the majority of these attacks happen and where the vulnerabilities get exposed. The two vulnerabilities I want to talk about, the ones we see most often when we’re analyzing mobile and Swift applications, are insecure data storage and a lack of secure network communications.

Preventing Insecure Data Storage (8:50)

If you’re attacking an application, or if you’re trying to protect your application, the last thing you want is somebody stealing those cookies. You don’t want somebody coming in and getting access to things they shouldn’t be able to see. In the OWASP Mobile Top Ten Risks, No. 2 is insecure data storage (No. 1 is actually weak controls on your back end), which makes it the highest risk within the mobile application itself. That lines up pretty closely with what we see, because there are so many places where we can fail by storing things insecurely.

The phone itself in iOS is encrypted by default. However, once you get past that layer, it’s very easy to extract things from Core Data, plists, and more. That’s what we’re trying to protect against. Digging through Swift code, it’s very easy to identify where an application actually uses Core Data; in this case, there’s a managed object context. I’m using Swift.nV as an example here. Swift.nV is an intentionally vulnerable application that we’ve released on GitHub.
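A minimal sketch of the pattern in question (the entity and attribute names are hypothetical, echoing what Swift.nV demonstrates): credentials written through a managed object context land unencrypted in the backing store.

```swift
import CoreData

// Hypothetical "User" entity with "username" and "password" attributes.
// Whatever is saved here ends up in plaintext in the SQLite file that
// backs Core Data on disk.
func saveCredentials(username: String, password: String,
                     in context: NSManagedObjectContext) throws {
    let user = NSEntityDescription.insertNewObject(forEntityName: "User",
                                                   into: context)
    user.setValue(username, forKey: "username")
    user.setValue(password, forKey: "password") // stored unencrypted
    try context.save()
}
```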

Since Apple stores Core Data in the form of SQLite, I can step into the core simulator device directory from the Terminal, open the database with the sqlite3 client, and actually take a look at how it structures the data. In this case, there are usernames and passwords in there. It’s very easy to find, very easy to discover, and very easy to manipulate.

Apple has done a couple of things to make this harder to manipulate on a non-jailbroken device as of iOS 8.3: they limit access to the file system, so I can’t see into those application data folders without specific permissions. What I can see is the backups. What does that mean for your application? If you’re storing data and your user backs up their phone, that data ends up in the backup folder. It’s protected within their iTunes account, but it’s still very easy to identify and extract.

For this reason, jailbreaking your primary iPhone is insecure: root access cuts out all the separations built into iOS and makes it very easy for an attacker to jump between applications. When we test applications, we do it on a jailbroken device because it gives us full access to what the application is doing. However, you shouldn’t do it with your main phone.

Candy Crush Insecurity (12:35)

Candy Crush is another example of storing things insecurely. Candy Crush keeps a binary data file in the documents directory. Do you want to know what’s in that binary data file? Everything and anything you want to do within Candy Crush. You want to give yourself extra lives? Go for it. All you have to do is change one byte of the file to FF, and all of a sudden you have 255 lives. Or, if you get stuck on a particularly hard level, it’s easy enough to go in and just skip past it.

This has actual monetary consequences for the developers of Candy Crush: if someone releases a tool that bypasses all this, constantly resets the lives, or gives players whatever they want, then they’re not going to make as much money, because their whole business plan revolves around those purchases. Why tap the “Buy More Lives” button when you can just modify that binary file? It’s very easy to do. It has been made harder as of iOS 8.3, because you no longer have access to that file; on a jailbroken device, however, you can still do it.

Encryption: Data Storage Defense (14:37)

From a defense perspective, you must use encryption. However you need to handle that, don’t store sensitive data in a property list. Anything you want to store securely, store in the Keychain. Here I’ve called out Valet by Square, which does a pretty good job of abstracting away the old Keychain API, and it’s trustworthy, which is something to be wary of with these types of libraries.
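For illustration, here is a minimal sketch using the raw Keychain Services API; libraries like Valet wrap calls of exactly this kind (the helper function name is mine):

```swift
import Foundation
import Security

// Store a secret in the Keychain rather than a plist or Core Data.
// Hypothetical helper; returns true on success.
func storeSecret(_ secret: String, account: String) -> Bool {
    let baseQuery: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account
    ]
    SecItemDelete(baseQuery as CFDictionary) // replace any existing item

    var attributes = baseQuery
    attributes[kSecValueData as String] = Data(secret.utf8)
    // Readable only while the device is unlocked, never migrated in backups:
    attributes[kSecAttrAccessible as String] =
        kSecAttrAccessibleWhenUnlockedThisDeviceOnly
    return SecItemAdd(attributes as CFDictionary, nil) == errSecSuccess
}
```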

When defending, plan for the worst: assume your application is being used in the worst possible manner on a jailbroken device. How do you make that secure? If we plan for that case, it’s a lot easier to protect the application when somebody is not using it in that sort of situation.

Network Communications (15:25)

Insecure network communication is M3, the third of the OWASP Mobile Top Ten Risks. You’ve got to make sure those communication mechanisms are secure. Sometimes the data doesn’t matter and we can turn encryption off, but it’s very easy to sniff traffic on free WiFi.

For example, allowing arbitrary loads is great when you’re in development, but don’t forget to disable it before it goes to production, before you load anything into the App Store. The “nice” thing is that there are 20,000 instances of Allow Arbitrary Loads on GitHub. Not all of these are Swift, obviously, but a fair number of them are.
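For reference, this is the App Transport Security override in question, shown as a minimal Info.plist fragment; if it ships, every connection in the app is allowed to fall back to plaintext HTTP:

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <!-- Development convenience only; remove before shipping. -->
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
```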

As far as certificates and encryption go, there’s little reason not to turn encryption on for your backend communications. If you haven’t seen letsencrypt.org, it’s a free CA that is trusted on all the major platforms at this point, and it will give you a certificate to encrypt your traffic and keep people from sniffing it. From a trust perspective: Good is an internal CA, some internal certificate authority loaded onto your different testing platforms. Better is an external CA; go use Verisign or somebody like that to sign your certificates. Best is certificate pinning. Certificate pinning means identifying the signature and attributes of the issued certificate and checking those in the code itself, making sure the certificate presented matches the one your CA actually generated.
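A minimal pinning sketch (my own, with a hypothetical bundled resource named pinned.der): a URLSessionDelegate compares the server’s leaf certificate byte-for-byte against a copy shipped with the app.

```swift
import Foundation
import Security

// Rejects any TLS connection whose leaf certificate does not byte-match
// the certificate bundled with the app as "pinned.der" (hypothetical name).
final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition,
                                                  URLCredential?) -> Void) {
        guard let trust = challenge.protectionSpace.serverTrust,
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0),
              let pinnedURL = Bundle.main.url(forResource: "pinned",
                                              withExtension: "der"),
              let pinnedData = try? Data(contentsOf: pinnedURL),
              SecCertificateCopyData(serverCert) as Data == pinnedData
        else {
            // No match: refuse the connection entirely.
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        completionHandler(.useCredential, URLCredential(trust: trust))
    }
}
```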

The second you do that, the application can’t be put behind something like a proxy to sniff that connection as it goes back and forth. Granted, there are ways to get around it on a compromised device, but it becomes very, very difficult. From a testing perspective, it becomes very difficult to actually break that encryption and test those devices. Lastly, use TLS 1.2.

Dos and Don’ts of Security Resources (20:55)

From a resource perspective, I wanted to talk about Swift Toolbox, GitHub, Stack Overflow, OWASP, and Swift.nV.

In general, if you haven’t audited it yourself, don’t use external libraries for security purposes. Remember that any time you import those libraries, you’re trusting those people with your application and with your application’s data. If they happen to change something and you just pull it down wholesale, all of a sudden they could be extracting data; they could be malicious. They’re probably not, but they could be. That’s the danger when it comes to trusting those third parties. The same thing goes for GitHub and Swift Toolbox: there are hundreds of Swift projects out there, and they may or may not be trustworthy. As far as Stack Overflow goes, be cautious with copying and pasting code; always make sure you understand what you’re doing.

Finally, check out OWASP if you never have before. Most of the compliance industry depends on OWASP and on its lists. They have a lot of good advice out there, such as how to handle input validation and other things. Lastly, if you’ve never played with any of these issues before, go take a look at Swift.nV, because that’s what it steps through: it demonstrates a significant majority of the OWASP Top Ten. You can look at the code itself, change it, modify it, play with it.

Conclusion (23:32)

Security is hard. Try harder.

In general though, I’m excited about Swift from a security perspective. We don’t have all the resources out there, but the community is growing. I’m glad to see that people are interested in it.

Q&A (26:16)

Q: What’s the single biggest thing you feel the community needs but doesn’t have yet from a Swift-oriented security standpoint?

Seth: I would like to see some sort of Swift security group. If I look at Node, for instance, there’s nodesecurity.io. They go through the different modules and the different popular resources that are out there and audit that code for vulnerabilities. They check to see that, hey, you are encrypting data before you store it on a device, and they publish the vulnerabilities they find.

It came about when a couple of guys got together, started looking at different libraries, and began posting the vulnerabilities they found.

Q: Apple enforced a static build requirement on Objective-C, but now with Swift and frameworks, I’m assuming it can be dynamic. How does dynamic linking affect security in Swift based upon executing code within a certain context?

Seth: Charlie Miller actually went through and did that in Objective-C after Apple implemented their static build requirement; he was able to bypass it and load libraries. A lot of what you’re getting at, dynamic library injection, is what they use for jailbreaking: if you find a way to bypass that, you’re able to jailbreak the full device. I don’t think there’s been a lot of research done on it within the application context, that is, doing it within one application and trying to take that over, without worrying about taking over the overall operating system. This would be an interesting area to explore, because it is definitely possible. Any time you can load dynamic code, you should be able to do anything and everything within that context.

About the content

This content has been published here with the express permission of the author.

Seth Law

Seth Law is the VP of Research & Development at nVisium, where he wrangles research efforts across all areas of application security. An experienced application security professional, Seth has worked in multiple disciplines, from software development to network protection, as a manager, contributor, and speaker.
