
Why You Should Care About The New Open Source .NET Core

Dec 19th, 2014 7:56am
Feature image via Flickr Creative Commons.

Open sourcing .NET to take it cross-platform means shifting to a modular design that Microsoft can develop in an agile way, and that means a better .NET. But making sense of the change means thinking about both the new technology and the strategy behind it.

Why We Need .NET Core And What You Get From It

Twelve years after the release of the first .NET Framework, developers have ended up with multiple, fragmented versions of .NET for different platforms. From the .NET Compact Framework to Silverlight, Windows Phone and Windows Store applications, every time Microsoft has taken .NET to a new platform, the supposedly ‘common’ language runtime has ended up with a different subset: each time there’s a different runtime, framework and application model, with development happening separately at each layer, and APIs that trace back to a common code base but haven’t always stayed common.


Microsoft could talk to developers about common skills and re-using what you know, but fragmentation was the reality, along with a certain amount of confusion and frustration.

Yes, different platforms are always going to have different features and capabilities, but as .NET goes open source and spreads beyond platforms Microsoft controls, getting a common core rather than a set of loosely coupled subsets becomes even more important. That’s the basis for .NET Core and Microsoft’s .NET open source strategy.

Microsoft has tried tackling the problem before, with portable class libraries and shared projects (to let you at least group the code for your multiple .NET versions together and share what you can), and then universal apps (which also organize your shared code to make it easier to add your per-platform code).

Both are based on the concept of contracts that cover a single, well-defined area of the .NET APIs and have to be supported completely on a platform (or not at all). Confusingly, those were introduced in the Windows 8 timeframe, but they’re not the same as the contracts WinRT apps use to access the File Picker or sharing. They’re a way of abstracting the APIs so you can use them as if they were the same for each platform.

Write something that only uses the APIs available to a universal app and it will (theoretically) run on Windows and Windows Phone and (eventually) Xbox One. You could take the same shared project or universal app, re-use the common code in Xamarin Studio and wrap it with a UI that works on other platforms like iOS and Android and Mac OS X, to get a cross-platform .NET app. But in each case, the implementation of those APIs is different. Also, Universal apps don’t cover Windows Server.

Not that you’d want phone or tablet apps running on a server where you care more about ASP.NET, but it’s a good example of how divided the .NET story is in practice. And if you’re working in .NET for a desktop or mobile application and in ASP.NET for your server back end, being able to have components that work across all those different .NET platforms makes it a more attractive development environment.
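
To make the shared-code idea concrete, here’s a minimal sketch, with hypothetical names, of the kind of class that can live in a shared project or portable class library: it sticks to APIs available on every target and leaves storage and UI to the per-platform projects.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

namespace MyApp.Shared
{
    // Hypothetical shared logic: it only touches APIs that exist in
    // every targeted profile, so the same source compiles for Windows,
    // Windows Phone and, via Xamarin, iOS and Android.
    public class ArticleFilter
    {
        // Pure computation; anything platform-specific (file access,
        // UI) belongs in the per-platform head projects instead.
        public IEnumerable<string> RecentTitles(
            IEnumerable<Tuple<string, DateTime>> articles, int days)
        {
            DateTime cutoff = DateTime.UtcNow.AddDays(-days);
            return articles.Where(a => a.Item2 >= cutoff)
                           .Select(a => a.Item1);
        }
    }
}
```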

Agile, Modular, Open

The problem that portable class libraries and universal apps are trying to tackle isn’t just the fragmentation of .NET across platforms; it’s the fact that it was never designed to be modular.
The core of the .NET Framework is mscorlib, which has Windows-specific features (like remoting and AppDomains). That means every time .NET goes to a new platform, it needs a new core.

Add in the fact that .NET on each new platform is built and maintained by a different team, with its own versions, shipping at different times, and you get a lot of divergence (you can criticize Microsoft for that but it’s a fact of life).

Portable class libraries started to push the platforms closer together, but with different code bases the same thing still gets implemented multiple times.

And that doesn’t help with the compatibility issues between different versions of .NET on the same platform – or more often, with the way .NET apps work with different versions of the framework. Adding an interface to an existing type can cause problems, because an application might not get the interface it’s expecting. Adding an overload to an existing method can cause problems for code that wasn’t designed to pick between methods – because it didn’t have to when there was only one.
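
A small hypothetical example of that overload hazard: in version 1 of a library, Log(object) is the only candidate, so calling it with a string binds to that method; once version 2 adds Log(string), recompiling the same call silently picks the new, more specific overload.

```csharp
using System;

public static class Logger
{
    public static void Log(object value)
    {
        Console.WriteLine("object overload: " + value);
    }

    // Added in version 2 of the library. Nothing in the calling
    // code changes, but overload resolution now prefers this method.
    public static void Log(string value)
    {
        Console.WriteLine("string overload: " + value);
    }
}

public static class Program
{
    public static void Main()
    {
        // Against v1 this printed "object overload: hello";
        // recompiled against v2 it prints "string overload: hello".
        Logger.Log("hello");
    }
}
```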

That doesn’t just mean apps having to install the right version of the framework. These backward compatibility requirements freeze the design of a new version of the .NET Framework very early on and mean Microsoft has to concentrate on thoroughly-tested big-bang releases. As Immo Landwerth of the .NET team explains it in a Channel 9 video, not only do they take a long time to ship, but there’s little chance to take feedback on a beta. That’s because by the time they’re confident enough to let developers test it “it is so locked down and so close to shipping that we can only really address super impactful bugs; as far as a design change goes, the first time the public gets the bits is when it’s already too late to provide serious feedback.”

The only real feedback you can get in that situation is on code you’ve already shipped, because that’s what customers are using; you can’t get feedback on the code you’re writing now, because none of your customers have it — so you never get feedback at the point when it would help the most for getting a feature right, only once it’s locked down.

A couple of years ago, the .NET team started shipping some libraries ‘out of band’ on NuGet instead of as part of the framework, and it was a big success. For the immutable collections library, Landwerth says, “It was in beta for eight months, and we did make API changes based on customer feedback. The design we ended up shipping as a stable release was a lot better than what we could have shipped the traditional way.”

Plus developers get to call the version of the library they want without having to keep their own copy of the code (old versions stay on NuGet forever), and each application gets the version of a library that it needs.
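
As an illustration, with a made-up version number, a packages.config pins exactly the library version one application wants, independent of what any other app on the machine uses:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Each application declares the package versions it needs;
     old versions remain available on NuGet indefinitely. -->
<packages>
  <package id="System.Collections.Immutable"
           version="1.1.32-beta"
           targetFramework="net45" />
</packages>
```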

You can’t break up the .NET Framework itself that way, though. When .NET Native builds a Windows Store app by compiling the .NET code, the compiler merges the framework with your application and strips out the parts of the framework your application doesn’t need – but there are parts it doesn’t need that can’t be removed. The .NET Framework just isn’t modular enough for that, or for ASP.NET 5 (which is designed to be a stack small and simple enough to be XCOPY-ed onto the server by a web developer, not deployed by an IT team).

The .NET Core Stack

Enter .NET Core. The combination of wanting to build .NET in a more agile, modular way, and not wanting to implement the same thing multiple times when it could be written once, explains why the .NET Core stack looks the way it does.

Based on what Microsoft has said about .NET Core, here’s how the layers fit together

There’s the new runtime, CoreCLR; that’s got the just-in-time compiler and the VM, type system, garbage collection and other run-time pieces. That’s the same level as the Mono runtime, or the .NET Native runtime.

Above that are the Base Class Libraries and the framework: streams, components, reflection, networking — plus the libraries from NuGet like immutable collections. And on top of that is the app model, like ASP.NET, which bootstraps the CoreCLR and adds features like dynamic compilation and NuGet resolution.

It might sound a lot like the .NET Framework architecture, but between the runtime and the libraries is a thin layer that exposes what the libraries expect to get from the runtime — like objects, strings and delegates — through a common interface. The .NET Core CoreCLR and the .NET Native runtime are implemented very differently, but because this runtime abstraction layer presents a common interface, you don’t have to care about that.

To get the stack, you get NuGet packages: a package for the runtime, a package for each library and so on. Some packages will say they provide the runtime, and have different implementations for CoreCLR and for .NET Native. A framework like Json.NET can sit on top of the stack, at the same level as ASP.NET, and describe what it needs in terms of assemblies and packages without caring whether it’s going to run on .NET Core or .NET Native.
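
In practice that description lives in the project file. Here’s a sketch of an ASP.NET 5-style project.json under this model, where the package names, versions and framework moniker are illustrative rather than a definitive manifest:

```json
{
    "dependencies": {
        "System.Runtime": "4.0.20-beta-*",
        "System.Collections": "4.0.10-beta-*",
        "Newtonsoft.Json": "6.0.6"
    },
    "frameworks": {
        "aspnetcore50": { }
    }
}
```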

Portable Class Libraries sit at the level of the runtime abstraction layer; unlike the current PCLs, which get a fixed set of APIs, you’ll be able to bring down newer versions of libraries that add additional APIs. “You don’t suffer from being restricted to what’s the lowest common denominator on all the platforms,” Landwerth points out. And when you want to use new features, like long file path support, you can do that by pulling in a new package without changing the rest of the stack underneath it.

And Microsoft can stop building multiple runtimes and multiple versions of the base class libraries. “The BCL becomes a single BCL, this thing that we have on GitHub, and these libraries will just naturally float over the top of the runtimes,” he explains. “We get out of the business of having to special-case runtimes. Not only are there no duplicate versions, but you would have a single package that represents something. So when I reference System.Collections.dll the package, then I get a version that works across all my stacks; not just the Microsoft stack but iOS and Android, and across Mono as well.”

Developers can choose how up to date they want to be, or how stable. Landwerth describes the options, starting with pulling the code off GitHub yourself (there’s a sketch of that workflow below): “Tier number one is open source; you can build locally without ever talking to us. The next layer is you submit a pull request and it gets applied.” The .NET team has accepted over 100 pull requests already, and more than 40 of those have been from the community.

“The next layer is we ship these code changes as NuGet packages and do some testing.” But those libraries are tested individually. “If you take 15 of those, they may not play nicely together, so the next tier is that we test the set and that’s the set we ship with Visual Studio in the same way it happens today.” Those sets of updates will continue to ship about four times a year, he predicts.
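
That first, build-it-yourself tier looks roughly like this; the exact steps may differ, and the repository’s own documentation is the authority:

```
# Clone the open source corefx repository and build it locally,
# without ever talking to Microsoft.
git clone https://github.com/dotnet/corefx.git
cd corefx
build.cmd
```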

The idea is to give developers choice. “You don’t compromise your ability to be agile and fast for the benefit of the slowest moving target. However you still retain the quality of the slowest target; it just means it lags behind somewhat. You have this sliding window of innovation happening here and stable things are happening here.” That’s a model that will be very familiar to developers who already use open source projects, and it’s something the .NET libraries on NuGet have begun to introduce .NET developers to. And if you don’t want to work that way, you can carry on using the .NET Framework.

But if some developers will just carry on using .NET Framework, why introduce .NET Core at all? It’s all about the cross-platform opportunity.
