October 2018

Volume 33 Number 10

Xamarin - Augmented Reality in Xamarin.Forms

By Rajeev K R

Augmented reality (AR) is quickly emerging as an incredibly useful tool for solving everyday problems. AR is an interactive, reality-based display environment that blends virtual objects with real ones to create an immersive UX. Enabled by advanced hardware like Microsoft HoloLens, it employs virtual 2-D/3-D objects, sound, text, effects and tracking.

AR has found a ready home in mobile devices and smartphones, thanks to high-resolution cameras, fast processors and high-bandwidth wireless networks. Support for AR experiences in mobile OSes like iOS and Android lowers the barrier to entry. Broad mobile support makes AR an attractive target for Xamarin.Forms developers, who can combine the framework's code-sharing benefits and mature tooling with the compelling capabilities of AR.

Enterprises are already using AR to engage customers and improve experiences. Harley-Davidson has created an iPad app that provides a virtual shopping experience, letting customers view body types, seats and lights, and add other options to their custom bike designs. Hyundai's Virtual Guide app uses AR to teach owners how their cars work, while IKEA has developed an app that lets shoppers see how furniture might look in their living spaces.

Currently, AR is a device-specific feature that requires a native platform to execute. iOS exposes AR to developers via ARKit, while Android does so via ARCore. ARKit requires an iOS device with iOS 11 or newer and at least an A9 processor. ARCore requires Android 7.0 or newer and access to the Google Play Store. You can refer to the official documentation from Google (bit.ly/2BY68oS) and Apple (apple.co/2PcSVdm) to learn more about the hardware and software requirements.

While learning a new technology, it’s important to understand the fundamental concepts behind it. Let’s consider these concepts for AR.

Tracking locates your device's position in the real world in real time. To create a real-time relationship between physical and virtual spaces, ARKit uses a technique called visual-inertial odometry (VIO), while ARCore uses concurrent odometry and mapping (COM).

Environmental Understanding is the process of detecting feature points and planes in the real world. ARCore and ARKit are capable of determining each plane’s boundary. This can be used for placing a virtual object inside a real-world plane boundary.

Anchor refers to a fixed position and orientation in physical space. To place a scene, you need to find an actual position in the real world; ARKit and ARCore are both able to maintain this position while the camera moves around.

Light Estimate determines the amount of light in the physical environment and applies the correct amount of lighting to virtual objects embedded within it to produce a more realistic effect. ARKit and ARCore use the camera sensors to estimate the light.
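
For example, on iOS, ARKit exposes the estimate on each camera frame. Here's a minimal sketch in Xamarin C# of feeding that estimate into SceneKit's lighting environment (it assumes sceneView is an ARSCNView, like the one used in the sample later in this article, and that LightEstimationEnabled is set on the session configuration):

// ARFrame is disposable, so release it promptly to avoid stalling the camera feed
using (var frame = sceneView.Session.CurrentFrame)
{
  if (frame?.LightEstimate != null)
  {
    // AmbientIntensity is expressed in lumens; ~1000 corresponds to neutral lighting
    sceneView.Scene.LightingEnvironment.Intensity =
      frame.LightEstimate.AmbientIntensity / 1000f;
  }
}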

Interactive Environment enables ARCore and ARKit to map the physical environment to the device screen, employing hit testing to determine X, Y coordinates on screen. This gives users the ability to interact with the environment through the screen interface, using, for example, gestures like tap and swipe.
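
To make this concrete, here's a minimal sketch of a hit test on iOS in Xamarin C#. It assumes sceneView is an ARSCNView and that OnTap is wired up to a UITapGestureRecognizer; a tap on a detected plane becomes a real-world anchor:

void OnTap(UITapGestureRecognizer recognizer)
{
  var point = recognizer.LocationInView(sceneView);
  // Ask ARKit which detected planes lie under the tapped screen point
  var hits = sceneView.HitTest(point, ARHitTestResultType.ExistingPlaneUsingExtent);
  if (hits.Length > 0)
  {
    // Anchor virtual content at the real-world position that was hit
    sceneView.Session.AddAnchor(new ARAnchor(hits[0].WorldTransform));
  }
}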

Enter Xamarin.Forms

Xamarin is one of the most widely used cross-platform tools for enterprises developing mobile applications. With the introduction of Xamarin.Forms, enterprises could build native UIs for iOS and Android from a single, shared C# code base, overcoming a key obstacle to adoption and making it a compelling platform for developing enterprise applications. ARKit and ARCore are fully supported by Xamarin, and the Xamarin website provides in-depth documentation for both.

From my experience in mobile application development, it's possible to achieve a considerable reduction in both lines of code and number of bugs by paying more attention to the design stage of Xamarin.Forms applications. A proper application architecture in Xamarin.Forms enables maximum code reuse across platforms, as well as faster development, easier automated testing, smoother integration of changes and fewer bugs. In the Agile world, for an application to succeed, it should allow new functionality to be added with minimal code changes and without introducing new bugs. This is why Xamarin.Forms is a preferred approach for mobile development, especially when executed with Agile methodology.

As a Xamarin developer, it's interesting to see how the power of C# and the advantages of Xamarin.Forms can improve development of exciting AR-based applications.

Consider AR as a UI layer in Xamarin that handles all the user interactions. Because AR is device-specific and hardware-sensitive, there will often be situations where native AR platform functionality isn't exposed in the Forms API. This is where the real power of Xamarin.Forms comes in handy.

To develop Xamarin applications, Visual Studio must be installed on the machine. I can code Xamarin on a Windows machine with Visual Studio, but I need a Mac for running and debugging iOS applications. Install the latest iOS and Android SDKs on your development machines.

While Xamarin Live Player can be used to test Xamarin.Forms applications, it's not a professional-grade tool and won't support custom renderers, effects or third-party Model-View-ViewModel (MVVM) products. Microsoft recommends using the emulators built into Visual Studio for testing Xamarin applications, and many developers also use Genymotion. For testing AR-based applications on iOS and Android, it's recommended to install them on physical mobile devices.

Create Your Xamarin.Forms AR Project

In this sample, I’ll show you how to place a 3-D aircraft image in a physical space using Xamarin.Forms and ARKit. You’ll also learn how to animate this 3-D object. Figure 1 shows the final AR app running on iOS.

Figure 1 Running the Application on iOS

I'll start by creating a Xamarin.Forms application with the name ARApp. As shown in Figure 2, a default Xamarin.Forms project named ARApp is generated for .NET Standard, along with the platform-specific projects ARApp.iOS and ARApp.Android. Next, I create a folder named art.scnassets and add 3-D files and texture files to the ARApp.iOS project. These assets will be used to load the AR scene.

Figure 2 Project Structure

Now I’ll create the UI for initiating the AR functionality. For demo purposes, I’ll define a button in the MainPage.xaml file, like so:

<Button Text="Click Me" VerticalOptions="Center" HorizontalOptions="Center"
  Clicked="OnButtonClick"></Button>

Next, I’ll implement the button click in codebehind, with this code:

public partial class MainPage : ContentPage
{
  public MainPage()
  {
    InitializeComponent();
  }
  private void OnButtonClick(object sender, EventArgs e)
  {
    // Handle the button click from XAML UI here
  }
}

I now have the Xamarin.Forms UI ready. Next, I'll move to the native AR layer implementation.

Platform-Specific AR Implementation

It’s time to implement AR using platform-specific APIs for either ARKit or ARCore, depending on the platform. I’ll start by creating a platform-specific UI with the AR UI implementation, which can be invoked using DependencyService, the Xamarin implementation of dependency injection (DI) that allows apps to call into platform-specific functionality from shared code.

I'll get started with an implementation of the UI on iOS. First, I create a view controller named ARViewController in the iOS project and add the code shown in Figure 3.

Figure 3 Implementing a UI on iOS

// Requires: using ARKit; using SceneKit; using UIKit;
// The scene view that renders the AR session
ARSCNView sceneView;

public override void ViewDidLoad()
{
  base.ViewDidLoad();
  StartAR();
}
// Initialize AR scene view
public void StartAR()
{
  // Create the scene view for displaying the 3-D scene
  sceneView = new ARSCNView();
  sceneView.Frame = View.Frame;
  View = sceneView;
  CreateARScene(sceneView);
  PositionScene(sceneView);
}
// Configure AR scene with 3-D object
public void CreateARScene(ARSCNView sceneView)
{
  // Loading the 3-D asset from file
  var scene = SCNScene.FromFile("art.scnassets/ship");
  // Attaching the 3-D object to the scene
  sceneView.Scene = scene;
  // This is for debugging purposes
  sceneView.DebugOptions = ARSCNDebugOptions.ShowWorldOrigin | 
    ARSCNDebugOptions.ShowFeaturePoints;
}
// Position AR scene
public void PositionScene(ARSCNView sceneView)
{
  // ARWorldTrackingConfiguration uses the back-facing camera, tracks the
  // device's orientation and position, and detects real-world surfaces
  // and known images or objects
  var arConfiguration = new ARWorldTrackingConfiguration
  {
    PlaneDetection = ARPlaneDetection.Horizontal,
    LightEstimationEnabled = true
  };
  // Run the AR session
  sceneView.Session.Run(arConfiguration, ARSessionRunOptions.ResetTracking);
  var sceneNode = sceneView.Scene.RootNode.FindChildNode("ship", true);
  sceneNode.Position = new SCNVector3(0.0f, 0.0f, -30f);
  sceneView.Scene.RootNode.AddChildNode(sceneNode);
  // Add some animation
  sceneNode.RunAction(SCNAction.RepeatActionForever(
    SCNAction.RotateBy(0f, 6f, 0, 5)));
}
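
Note that the configuration enables horizontal plane detection, but nothing in Figure 3 reacts when a plane is actually found. Here's a minimal sketch of how you might listen for detected planes through an ARSCNViewDelegate (assign an instance to sceneView.Delegate before running the session; the class name ARDelegate is my own):

public class ARDelegate : ARSCNViewDelegate
{
  public override void DidAddNode(ISCNSceneRenderer renderer, SCNNode node, ARAnchor anchor)
  {
    if (anchor is ARPlaneAnchor planeAnchor)
    {
      // A horizontal plane was detected; Center and Extent describe it
      Console.WriteLine($"Plane found at {planeAnchor.Center}, extent {planeAnchor.Extent}");
    }
  }
}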

I'm now set to invoke AR from Xamarin.Forms. But how? This is where the real power of Xamarin.Forms comes in. DependencyService is used to call native platform code from .NET Standard or, optionally, Portable Class Library (PCL) projects. Let's see how this is implemented.

Dependency Services

The DependencyService functionality allows apps to call into platform-specific functionality from shared code. This functionality enables Xamarin.Forms apps to do anything that a native app can do. There are four components needed to use DependencyService:

  • Interface: Define the functionality as an interface in shared code.
  • Implementation Per Platform: Add classes implementing the interface to each platform project.
  • Registration: Register each implementing class with DependencyService via a metadata attribute. Registration enables DependencyService to find the implementing class and supply it in place of the interface at run time.
  • Call to DependencyService: Request implementations of the interface by explicitly calling DependencyService from code.

I'll use DependencyService in a Xamarin.Forms application to implement AR.

Interface First, design an interface that defines the interaction with the platform-specific ARKit/ARCore APIs. It's important to give extra attention to this layer in the design, as it's the API layer exposed to external classes. For example, you can create an interface named IARApp and define a method named LaunchAR:

public interface IARApp
{
  void LaunchAR(); // Note that interface members are public by default
}

Platform-Specific Implementation for AR Interface The interface must be implemented in the project for each platform you target. Because the call is made from shared code, it can be invoked on any platform; any platform lacking an implementation will generate a NullReferenceException at run time.
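
To guard against that, you can check the result of DependencyService.Get<T> before invoking it. Here's a minimal sketch, a defensive variant of the button handler shown at the end of this article:

private async void OnButtonClick(object sender, EventArgs e)
{
  // Get<T> returns null when no implementation is registered for this platform
  var arApp = DependencyService.Get<IARApp>();
  if (arApp != null)
    arApp.LaunchAR();
  else
    await DisplayAlert("AR", "AR isn't available on this platform.", "OK");
}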

Figure 4 shows the implementation of the IARApp interface on iOS. Here's the code implementing the same interface on Android:

[assembly: Xamarin.Forms.Dependency(typeof(ARDemo.Droid.ARAppImpl))]
namespace ARDemo.Droid
{
  public class ARAppImpl : IARApp
  {
    public void LaunchAR()
    {
      // Launch AR in Android
    }
  }
}
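
The Android body is left empty here. As a rough sketch of what LaunchAR might do, you could start a native activity that hosts the ARCore session (ARActivity is a hypothetical activity you'd build with ARCore; see the HelloAR tip at the end of this article):

public void LaunchAR()
{
  var context = Android.App.Application.Context;
  var intent = new Android.Content.Intent(context, typeof(ARActivity));
  // NewTask is required when starting an activity from the application context
  intent.AddFlags(Android.Content.ActivityFlags.NewTask);
  context.StartActivity(intent);
}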

Figure 4 Implementation for IARApp Interface on iOS

[assembly: Xamarin.Forms.Dependency(typeof(ARDemo.iOS.ARAppImpl))]
namespace ARDemo.iOS
{
  public class ARAppImpl : IARApp
  {
    public void LaunchAR()
    {
      // This is native code; invoke the native AR UI
      ARViewController viewController = new ARViewController();
      UIApplication.SharedApplication.KeyWindow.RootViewController.
        PresentViewController(viewController, true, null);
    }
  }
}

Note that the [assembly:] attribute must be declared above the namespace; otherwise, the dependency service won't be invoked. With all this done, I can now invoke AR from a Xamarin.Forms button click, calling DependencyService with this code:

public partial class MainPage : ContentPage
{
  public MainPage()
  {
    InitializeComponent();
  }
  private void OnButtonClick(object sender, EventArgs e)
  {
    DependencyService.Get<IARApp>().LaunchAR(); // Launch AR
  }
}

Wrapping Up

Figure 5 offers a look at the application architecture for the sample app. The AR implementation for iOS is complete, and you can complete the Android implementation using the same approach.

Figure 5 The Application Architecture

Here's a tip for those who intend to implement this for Android: Use ARCore through DependencyService for implementing AR in Android. HelloAR is Google's sample AR project for Android that's been ported to Xamarin, and it will help you get started with the AR implementation on Android. Refer to the following links for more information: bit.ly/2ojBZq4 and bit.ly/2PKcomX.

One last thing: When designing an engaging, immersive AR experience for mobile, it's important to prioritize the user interaction design. The ARKit and ARCore documentation provides useful tips and guidelines for creating optimal immersive experiences for end users. Let's go through some important interaction design guidelines for AR:

  • Interactions should be intuitive and simple. Avoid needless complexity.
  • Use audio and haptic feedback to enhance the immersive experience.
  • Use the entire screen space (display) to engage the user.
  • Scenes, objects and animations should be convincing to the user.
  • Consider how people must hold their device when using your app and make sure that it provides an optimal immersive experience for them.
  • Provide intuitive context-based guidance for the user whenever needed.

AR is a game changer. It's a main ingredient of new technologies around digital twinning, which leverages digital replicas of physical assets, processes and systems to benefit a wide range of activities. The growth of this technology in the enterprise will happen very quickly, and Xamarin.Forms stands to be an important enabler of digital twinning going forward. Be sure to explore the magic of AR on different platforms and get yourself equipped.

Happy coding!


Rajeev K R is a solution architect at TCS Interactive in Tata Consultancy Services. He concentrates mainly on shaping end-to-end enterprise digital channel strategy and building enterprise architecture for mobile solutions. He's passionate about the latest digital technologies and enjoys experimenting with combining immersive mobile technologies with AI/ML and conversational UI platforms.

Thanks to the following Microsoft technical expert for reviewing this article: Dan Hermes

