Thousands of new businesses are born every day. And more of them than ever before are offering a mobile application as their primary product.

When it comes to developing the app itself, a range of technologies can be used: from native Android and iOS development - using Kotlin and Swift respectively - through cross-platform React-Native, Xamarin and Flutter, to web-based hybrid apps.

It turns out that each mobile app development framework has its own pros and cons from a security standpoint.

So, in this article, we’ll shed some light on these considerations and we’ll provide you with a comparison to help guide you when it comes to developing your own mobile application.


A quick note about creating a threat model

Forward-thinking development houses and vendors acknowledge the importance of apps to overall business success. And so they also recognize that it’s vital to analyze the threats their app might be up against after publication. That way they can apply a necessary set of security solutions to keep it safe.

This process of surveying potential risks is called creating a threat model and is a must regardless of the mobile app development framework you end up choosing.

Let's imagine for a moment what a threat model could look like for a modern neobanking app. This is a deliberate choice given that banking apps hold an end user’s money and so are particularly attractive targets for bad actors.

First things first, the app should be protected from unauthorized use. Indeed, that’s why most mobile banking apps use biometric authentication and a PIN code, so that only the legitimate owner of the account can gain access.
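The PIN part of that access control can be sketched in a few lines. This is an illustrative sketch only - the class and method names are hypothetical, and a production app should use a slow key-derivation function (e.g. PBKDF2) plus attempt limiting rather than a single SHA-256 pass:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Illustrative sketch: verify an entered PIN against a stored salted hash.
public class PinCheck {
    static byte[] hashPin(String pin, byte[] salt) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(salt);                       // salt prevents precomputed-hash attacks
        return digest.digest(pin.getBytes(StandardCharsets.UTF_8));
    }

    static boolean isPinValid(String entered, byte[] salt, byte[] storedHash) throws Exception {
        // MessageDigest.isEqual is a constant-time comparison,
        // which avoids leaking information through timing differences.
        return MessageDigest.isEqual(hashPin(entered, salt), storedHash);
    }
}
```

The important detail is the constant-time comparison: a naive byte-by-byte `equals` can leak how many leading bytes matched.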

Another threat comes in the shape of traffic sniffing, or the app talking to a malicious server instead of the original one. This threat exists if apps don’t leverage application-level encryption, certificate verification and SSL Pinning.

The banking application might also be subject to reverse engineering attempts. This in turn could lead to repackaging with altered logic, traffic dumping, and other even more frightening changes.

The dependencies which make up a significant part of every modern app can be tampered with, too. This is known as a supply chain attack.

And our neobank app can also be run on a device infected with malware. This malware can try to steal banking data, intercept text messages containing one-time passwords (OTPs), or show phishing screens.

You might be reading this and thinking “wait a second, the threats covered here aren’t necessarily specific to a mobile banking application.” Well, you’re completely right to think that way. After all, insecure networks pose a threat to any application which exchanges sensitive data over the network.

It just happens to be the case that mobile banking apps are a particular target right now.

We highlighted the major threats facing mobile apps in our State Of Mobile App Security Report a few months ago. But in this article about threats for specific mobile development frameworks, we’re going to focus on just four of them:

Reverse engineering, tampering, man-in-the-middle attacks, and software supply chain attacks.


Mobile app development frameworks

So, which frameworks will we cover?

We’ll explore native applications - whether that’s Kotlin for Android or Swift for iOS - where you largely rely on the libraries and frameworks shipped by Google and Apple respectively.

We’ll also look at building an application for two platforms and maximizing code reuse: so, React-Native, Xamarin and Flutter. Flutter is offered by Google, too, whereas React-Native is a Meta solution, and Xamarin is owned by Microsoft. These apps are developed in programming languages that are either run through a dedicated virtual machine or compiled to native code.

And then there’s always the option of creating an application using HTML/CSS and JavaScript and running the code in a WebView inside the app. Those apps are called hybrid apps, and we’ll cover them here as well.

If you’re ready, let’s jump right in.


Security risks with Swift and Kotlin for app development

Native Applications

Every app in the App Store and Google Play is a native application at its core. Native applications are built with languages and tools officially supported by the OS vendor. And that’s a positive right off the bat because it means that Google and Apple pay close attention to the security aspects of the application and ship security controls and tools to review it. For example, Android Studio comes with a set of inspections covering both general code quality and security checks.

That said, this doesn’t mean these platforms will solve every security concern for you. They do their part of the job, ensuring the OS has a minimum attack surface and providing you with timely updates. But there are other security issues that are the responsibility of the developer - that means you.

Here’s what you need to know:

Kotlin and Swift are compiled into binaries, but that doesn’t mean they can’t be reverse-engineered. Such an attack can expose the internal logic of the application and give up the keys and tokens accidentally left in the app. Decompiled files can also be modified; for example, Android’s smali files are plain text. An attacker can alter the logic of the application, unlocking features, adding remote logs, and compromising security in general. After such modification the app can be redistributed via social engineering and third-party app stores. App protection tools - beyond what Apple and Google provide - are needed to counter these risks.

Native development can also be prone to supply chain attacks. If somebody sneaks a patched version of a popular library into Maven Central or an alternative JAR repository, the application will find itself at risk. Make sure to verify the checksums of the packages you use, scan those packages for known vulnerabilities, and consider limiting the number of dependencies you use in the development process.
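The checksum verification step can be sketched as follows. This is a minimal sketch with hypothetical names; the expected hash would come from the library’s official release notes, obtained over a trusted channel (and note that Gradle also ships built-in dependency verification that automates this):

```java
import java.security.MessageDigest;

// Minimal sketch: verify a downloaded artifact against a published
// SHA-256 checksum before trusting it.
public class ChecksumCheck {
    static String sha256Hex(byte[] bytes) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(bytes);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    static boolean verifyArtifact(byte[] artifactBytes, String expectedSha256) throws Exception {
        // Reject the artifact outright if the hashes don't match.
        return sha256Hex(artifactBytes).equalsIgnoreCase(expectedSha256);
    }
}
```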

When it comes to man-in-the-middle attacks, both Android and iOS have lists of trusted certificates. All the network connections to https URLs will be checked against those and will be denied if the remote server certificate is not trusted. But it’s not enough to rely on this system alone. Attackers can trick the user into installing certificates on the device, and then the application might be subject to a MitM attack. Please be sure to implement SSL Pinning to block this particular attack vector.
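The check at the heart of SSL Pinning can be sketched like this: hash the server certificate’s DER-encoded public key and compare it against pins shipped inside the app. The class name is hypothetical, and in production you should prefer a maintained implementation (such as OkHttp’s CertificatePinner or Android’s Network Security Configuration) over hand-rolling this:

```java
import java.security.MessageDigest;
import java.util.Base64;
import java.util.Set;

// Sketch of the comparison behind SSL Pinning.
public class PinningCheck {
    // Compute the pin for a public key in the common "sha256/<base64>" format.
    static String pinOf(byte[] publicKeyDer) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(publicKeyDer);
        return "sha256/" + Base64.getEncoder().encodeToString(hash);
    }

    // The connection should be dropped if the server's key matches no pin.
    static boolean isPinned(byte[] publicKeyDer, Set<String> pins) throws Exception {
        return pins.contains(pinOf(publicKeyDer));
    }
}
```

A common practice is to pin two values: the current server key and a backup key, so that a planned certificate rotation doesn’t lock users out.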


Security risks with React Native for app development

React-Native

React-Native uses JavaScript (and TypeScript) as programming languages. All the source files get transpiled (the source code is transformed into another, more compact form, without types and using only vanilla JS). The source code itself exists in the form of a single-file bundle which is interpreted when the app is running on the device. Yes, the source code is available in the application itself. What’s more, React-Native supports over-the-air bundle updates. That means the number one risk for a React-Native application is bundle modification and exposure to reverse engineering.

What about the risk of supply chain attacks? Well, the React-Native framework itself is shipped and maintained by Meta. React-Native is used for some Meta apps, but those are different versions really. You see there’s an internal one and a public one, and the latter gets updates from the former. At the end of the day it’s a question of how comfortable you are with another company - however well respected it is - being part of your application supply chain.

React-Native apps also use NPM packages as their dependencies. The problem is that, compared to Maven Central, publishing NPM packages is much easier. And even if the packages are genuine, there’s no guarantee that crucial ones are implemented correctly or ship with sensible configuration and default parameters.

Let’s go back to our neobanking app example again. Say we store our auth token in the Android Keystore via react-native-keychain: the default configuration allows a “null initialization vector” attack, which could potentially result in a stolen banking account.
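The mitigation for that class of attack boils down to one rule: every encryption must use a fresh, random IV, never a fixed or zeroed one. A minimal sketch, assuming AES-GCM with the IV prepended to the ciphertext (on Android the key would live in the Android Keystore; here it’s passed in so the snippet stays self-contained):

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Sketch: authenticated encryption with a random IV per message.
public class TokenCrypto {
    static final int IV_LEN = 12;  // standard 96-bit IV for GCM

    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);  // never a fixed or null IV
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = cipher.doFinal(plaintext);
        byte[] out = new byte[IV_LEN + ct.length];
        System.arraycopy(iv, 0, out, 0, IV_LEN);          // prepend IV
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);  // then ciphertext + tag
        return out;
    }

    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(128, Arrays.copyOfRange(blob, 0, IV_LEN)));
        return cipher.doFinal(Arrays.copyOfRange(blob, IV_LEN, blob.length));
    }
}
```

Because the IV is random, encrypting the same token twice yields different ciphertexts, which is exactly the property a null or reused IV destroys.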

React-Native consists of the JS part, the React-Native bridge, and the native part for both Android and iOS. This architecture undoubtedly expands the attack surface. For example, if a vulnerability were found in the bridge itself or in the JS VM, then all the applications which use React-Native would be affected. Consider remote code injection: your app’s logic and data could be in danger. In the case of our neobanking application, your customer’s hard-earned money might end up on an account different from the target one.

The aforementioned JS part, known as JS Bundle, is the source code of the React-Native application, which is combined into a single file, compressed and slightly obfuscated. Still, the source code is in plain text which means one can read it as regular code with little effort as there will be a lot of anchors in the code like React Hooks, Components, Reducers and other parts. The logic of the application will be obvious to the reverse engineer.

Although React-Native uses JavaScript APIs for network connectivity, they still call platform APIs under the hood: in this sense React-Native is still a native application. That means the risks of a man-in-the-middle attack are the same as they are for a native app. And the countermeasures are the same as well.

If you want more details, see the article on React-Native application security by Cossack Labs.


Security risks with Flutter for app development

Flutter

React-Native is a cross-platform framework to make apps for iOS and Android (among others). Flutter does the same thing but in a different way. It uses the Dart programming language and its own VM. But the Dart VM is only used in debug mode; for production applications, Dart is compiled into native libraries for the required CPU architectures. This approach significantly reduces the risk of reverse engineering by increasing the skill level required to perform it. But the risk is not eliminated entirely: reverse engineering tools do exist for Flutter. These could allow an attacker to recover the project structure and Dart class names, and infer the app logic quite conveniently. Some tools also allow traffic interception for Flutter apps.

With a native Android app you could easily recompile the artefacts of reverse engineering into a new application; fortunately, this is not the case with Flutter. As of now, we’re not aware of any tooling which can conveniently rebuild an application from decompiled Dart code. Over-the-air updates of the Dart part of the application are intentionally not supported either.

As with React-Native, the Dart code responsible for network connectivity still uses platform APIs. And so the framework doesn’t bring any additional protection against man-in-the-middle attacks by itself. Use HTTPS, apply encryption to the traffic in transit, and verify the server origin by using SSL Pinning.

Another thing worth noting about Flutter is that the framework is relatively young. That means there aren’t many security-related materials, infrastructure and products. If you take a look at the official security page for Flutter, you won’t find much beyond short, general recommendations. We’re also not aware of any quality security scanners for the framework. But this can also be viewed from another, more positive perspective. Namely, that the novelty of the technology, combined with its rapid pace of change, makes for a challenging environment for reverse engineers. As the binary format of an executable changes fast, the tools to reverse it have to change fast too, which can be expensive to keep up with.

Flutter pulls its dependencies from the pub.dev resource: it’s a repository of Dart and Flutter packages. That means the supply chain concerns we mentioned for native and React-Native applications apply for Flutter as well. In order to publish to pub.dev you only need a Google account, which means that virtually anybody could upload a malicious package. Be aware of the packages you use, verify their origin, and make sure they don’t come with any known vulnerabilities.


Security risks with Xamarin for app development

Xamarin

Xamarin is an open-source technology from Microsoft; applications are developed using C# and the .NET framework. The code is compiled into a DLL and then shipped as part of a native application. At runtime the app bootstraps the Mono VM, which in turn loads and executes said DLL.

The techniques to reverse engineer such an app are well known: it’s enough to decompile the APK or IPA file, find the DLL within the assemblies folder, then extract it and decompile it as a regular .NET DLL. The extra extraction step is needed because the DLL is compressed for size optimization, but it can be done with a one-line script. And once you’ve obtained the DLL, the whole application is available for your inspection.

So, in other words, the application logic is revealed, as well as any hardcoded secrets and auth tokens. Surprisingly enough, some developers put Azure tokens straight into the application code because it’s convenient to use the same technology ecosystem for mobile, server side and infrastructure. But clearly this comes at the cost of basic security measures.

It’s also easy to replace the DLL file inside the app and repackage the whole application. This leaves it vulnerable to tampering which can eventually be used to trick your user into using the wrong version.

Xamarin proxies the network connectivity calls to the OS through System APIs. The framework doesn’t introduce any measures to control the internet connection out of the box, thus not adding protection against MitM attacks. So, as always, make sure to use SSL Pinning, and verify the origin and the date of the certificate of the remote server your app works with.

As with every other technology out there, nothing in a mobile application tends to be built completely from scratch. And Xamarin is no exception. Being part of the .NET ecosystem, Xamarin uses the NuGet package manager to leverage dependencies. It’s relatively easy to submit a package there, so we can’t call those packages inherently secure. To avoid becoming a victim of a supply chain attack, check the packages, scan them for vulnerabilities, and watch for unexpected changes.


Security risks with web-based frameworks for app development

Web-based Hybrid Apps

Hybrid apps are native applications with a single screen which holds the WebView component. The whole app is basically a web app, developed using HTML, CSS and JavaScript, loaded into a WebView engine. Using WebView in mobile applications brings a whole new set of technology risks.

The first of these is that WebView doesn’t limit the URLs you can navigate to. That means that failing to filter user input, or just an error in the code, can result in your user being taken not to a part of your application but somewhere else entirely. That’s why it’s a good idea to explicitly enforce an allow list of permitted URLs.
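Such an allow-list check can be sketched as a pure function. The host names below are illustrative; in an Android app this would typically be called from `WebViewClient.shouldOverrideUrlLoading`:

```java
import java.net.URI;
import java.util.Set;

// Hypothetical allow-list check for WebView navigation.
public class UrlAllowList {
    static final Set<String> ALLOWED_HOSTS =
            Set.of("app.example-bank.com", "help.example-bank.com");

    static boolean isAllowedUrl(String url) {
        try {
            URI uri = new URI(url);
            // Require HTTPS and an exact host match; this rejects plain http,
            // javascript: and file: URLs, and any unknown host.
            return "https".equals(uri.getScheme())
                    && uri.getHost() != null
                    && ALLOWED_HOSTS.contains(uri.getHost());
        } catch (Exception e) {
            return false;  // unparseable input is never allowed
        }
    }
}
```

Note the default-deny design: anything that isn’t explicitly on the list, including malformed input, is blocked.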

Hybrid apps still need to communicate with the underlying operating system. There are different frameworks to do that, like Cordova, which provides this via a native bridge. The problem is that you need to restrict which code can access it. For example, an iframe shares this access with any code loaded inside it, which is why it’s generally recommended not to use iframes in hybrid apps.

Other problems are similar to those we’ve covered for the previous technologies. As with React-Native, the hybrid app bundle format carries the application logic in the form of JavaScript code, which is really easy to read, understand, and modify. This results in a high level of risk from a reverse engineering and repackaging point of view, especially if the application doesn’t apply any obfuscation.

Hybrid apps are also subject to the same problems as React-Native apps when it comes to supply chain attacks as dependencies use the same NPM ecosystem.

And then there’s network interception. The fact that the app uses a WebView doesn’t help with certificate validation here; the process has to be implemented manually. However, with Cordova this is not that easy, so please consider that aspect carefully.


Comparing mobile app development frameworks

As we said at the outset of this piece, a vital part of setting out to develop a mobile app is understanding the type of attacks it will have to defend against. We’ve covered some of them in this article, but the comparison table below gives you a colour-coded relative probability of each threat by framework.

For example, the probability of tampering for a React-Native application is high because the JS-bundle, while obfuscated, is still in plain text format and so doesn’t require a deep level of expertise. If you compare it to the risk of tampering with a native application, you’ll find the latter much harder, and thus the risk is shown as low.

But remember, low doesn’t mean impossible.

Comparison chart to show threat probability by framework

A mobile application - especially one dealing with sensitive data or money - should be properly and robustly protected against the security threats we’ve covered in this article.

Whichever framework you pick for your app, you’ll definitely face the threats of reverse engineering, your app being redistributed, shipping vulnerabilities within the app, or being a victim of supply chain attacks.

Mobile platforms do a good job of thinking about the security of the platform, shipping hardware and software solutions like encryption tools, secret stores, certificate checks, and so on. Whatever technology you use, you need to know what these tools are and how to use them properly.

But you also need to know the tools offered by external app security providers that you can use (for each particular framework) to block the threats covered above. These include mechanisms to detect dynamic instrumentation tools like Frida as well as RASP checks of your app’s immediate environment. Not to mention tools to security check your dependencies, code hardening (robust encryption and obfuscation), and integrity control solutions that can detect whether a bad actor has tampered with your application since you published it.

Modern mobile app security relies on all of these interconnected layers working together to frustrate attackers. And that goes for any of the mobile app development frameworks we’ve covered in this article.

So, if you’re deciding which technology to go with for your build, we hope this piece will prove helpful when it comes to making a choice.

This article is just an overview. To dig deeper into mobile security, please follow the OWASP Mobile Application Security Verification Standard.

P.S. A big shout-out to Anastasiia Vixentael for answering millions of my questions and helping to prepare this article.

Like the article? Consider helping to run the blog at Patreon or Boosty. The funds go to pay for the hosting and some software like a Camo Studio license. Patrons and Boosty subscribers of a certain level also get access to a private Architecture Community. Big thanks to Nikita, Anatoly, Oleksandr, Dima, Pavel, Robert, Roman and Andrey for already supporting the newsletter.