Flatpak – a security nightmare (flatkill.org)
377 points by stryk on Oct 9, 2018 | 257 comments



> Sadly, it's obvious Red Hat developers working on flatpak do not care about security, yet the self-proclaimed goal is to replace desktop application distribution - a cornerstone of linux security.

Sheesh, while the issues raised are all valid, this does not actually justify such a conclusion about the intent of the Red Hat developers. Telling people what their side of the story is for them in a dismissive fashion like this is not going to make it more likely that they admit their mistakes.

A bit of Hanlon's Razor[0] goes a long way to resolve problems involving human cooperation (of any kind) more smoothly.

[0] https://en.wikipedia.org/wiki/Hanlon%27s_razor


> A bit of Hanlon's Razor[0] goes a long way to resolve problems involving human cooperation (of any kind) more smoothly.

I don't like the snarky tone of the article, but I really think you're being naive here. Let's examine the facts:

- Flatpak's intent is clear and stated first on their homepage [1]: "Flatpak is a next-generation technology for building and distributing desktop applications on Linux"

- Red Hat is the company other companies hire to manage their linux systems.

- In the flatpak homepage there's a huge quote stating "The Fedora Workstation team are very excited about Flatpak and the prospects it brings for making the Linux desktop easier to develop for. We plan to continue supporting this effort going forward."

- The FAQ page states "We are explicitly using many features of the linux kernel (bind mounts, namespaces, seccomp, etc) to create the sandbox that Flatpak apps are running in." ... and then it turns out that they aren't, or at least they aren't enforcing it.

I don't think this is stupidity. This is flat out lying to try and get mindshare.


Fedora and Red Hat are not the same thing. Sure, there is a relation between the two, but inclusion in Fedora is not an endorsement or guaranteed inclusion in RHEL.

I just wonder why the author of this page does not disclose his name... And even hides this from the whois info. This could have been written by a person with a grudge for all I am concerned.

Sure, improvements might be needed, but has the author tried to make this happen in a different way, like a discussion? Creating a webpage and screaming fire is not the best approach in my opinion.

The issue raised about security updates for packaged apps is the same as for container images and packages like .deb or .rpm in general. Is generating a runtime a possible solution? Something like an automated way to regenerate packages is needed, like Flathub?


> I just wonder why the author of this page does not disclose his name...

> And even hides this from the whois info.

The registrant country is listed as France and the registrar is OVH. They are simply adhering to the GDPR. I have domains that have been like this since May, even though I would be quite happy to have everything public.

> This could have been written by a person with a grudge for all I am concerned.

Your comment could have been written by somebody with a grudge against the person you are claiming could have had a grudge for all I am concerned. A vested interest can make a difference, but I don't really see how any of the points are hard to verify.

Does this being written by a person with a grudge really negate the points made?

- Flatpak is claiming to be secure

- Flatpak is sandboxing apps with full access to your home directory.

(What is the worst thing that an app can do? Nuke your home directory or run malicious software. The sandboxing here does little to mitigate this.)

- Apps are not being updated as quickly as their distribution repository counterparts.

- At some point in its past, a (presumably) privileged component that could create setuid binaries blindly copied setuid bits without checking them.

It seems to me that this is more about easy delivery of software with a security claim that is arguably pretty weak at the moment against likely attack vectors. I can't see that changing without first thinking of what you're actually sandboxing for.

There are already software delivery systems, such as 0install and appimage, that do not make claims about being sandboxed and yet provide a similar (or greater in the setuid case) level of security for the main threats.

I find it hard to understand who the project is for. If you're a developer on your own machine then using your distro's package manager is probably more secure. If you're a sysadmin, letting users use flatpak serves only to increase the attack surface for privilege escalation.


The problem is that distro package managers are many and often complex to satisfy, which is one of the reasons people don’t develop “real” desktop software for Linux.

The new generation of installers (flatpak, snap etc) takes a cross-distro approach that is supposed to result, more or less, in “package once, run everywhere”: you build your app in a certain way and every distro should be able to run it, regardless of package managers. It’s basically a way to offload the compatibility headaches to flatpak developers and distribution builders.

Obviously that approach works only if flatpak does actually get good support in all distributions and becomes a de-facto standard, which is a challenge because there are many competitors (Ubuntu Snap being the most relevant one). If it remains just a glorified rpm (i.e. a redhat-specific tool), then there is no point.


> The problem is that distro package managers are many and often complex to satisfy, which is one of the reasons people don’t develop “real” desktop software for Linux.

If by real you mean the kind of proprietary {ad,spy,toolbar}-ware encrusted "real" desktop software that Windows has become notorious for then I think we may be better off without it. If you're talking about OSX, then I think you're missing that it's a market skewed towards those with the money and will to pay for proprietary software. Plenty of people are developing "real" desktop software for Linux.

> The new generation of installers (flatpak, snap etc) takes a cross-distro approach that is supposed to result, more or less, in “package once, run everywhere”: you build your app in a certain way and every distro should be able to run it, regardless of package managers. It’s basically a way to offload the compatibility headaches to flatpak developers and distribution builders.

This isn't novel. 0install and AppImage have been around for a while now. What is novel is that Flatpak combines this with sandboxing, which in practice doesn't do much to mitigate vulnerabilities that aren't in the kernel. I actually think they've got the sandboxing right, in that they really are starting with minimum privileges and adding only what is needed.[0] The problem, as they state, is that it is fine to drop all privileges for something like a single-player game, but it sucks for a file manager. At least according to their wiki, they're already doing most of what you could imagine to improve the sandboxing. [0]

It's always this interaction with the outside world that breaks sandboxing. It's rather easy to make a sandbox that runs as nobody and has no privileges, no capabilities, and no file access. It's a bit harder to make something which has any of those things in a way that lets the user grant sensible permissions.

It sounds like in Portals [1] the Flatpak team have gone in pretty much the only workable direction, although the Wiki is hopefully out of date and there are more portals. The problem they will have found is that to package anything useful and get feedback and validation requires that they implement the most difficult bits first. So I suspect the situation is "secure file picker is coming".

Sandboxing is a lot of work, not in the closing down but in the opening up. One of the more innovative approaches is to be found in Sandstorm [2]. Although marketed as a way to run web applications on your own server easily, it is actually a tight sandbox built on top of Cap'n Proto, which means that E-style capability-based RPC is intrinsic.

Sandboxed apps have no access to anything, cutting down on kernel attack surface. The only connection to the outside world is through a UNIX socket file descriptor opened by the supervisor. The capability-based security model means that a sandboxed app could request the ability to connect to "drive.google.com", or even a specific URL. This could also be passed to other apps through the supervisor, subject to user approval. Capabilities can be stored on disk to be later restored from the supervisor.

In Sandstorm the UI for granting capabilities between apps is working, but progress has slowed as the team have all had to find other ways to pay the bills.

> Obviously that approach works only if flatpak does actually get good support in all distributions and becomes a de-facto standard, which is a challenge because there are many competitors (Ubuntu Snap being the most relevant one). If it remains just a glorified rpm (i.e. a redhat-specific tool), then there is no point.

I think the problem is that if your target is casual users, they're not going to necessarily understand the nuances and limitations of the sandboxing.

[0] https://github.com/flatpak/flatpak/wiki/Sandbox

[1] https://github.com/flatpak/flatpak/wiki/Portals

[2] https://sandstorm.io/how-it-works#capabilities


"If by real you mean the kind of proprietary {ad,spy,toolbar}-ware encrusted "real" desktop software that Windows has become notorious for then I think we may be better off without it."

You're ignoring a huge amount of business software that is created for, and runs on, Windows. That's like judging Android or iOS based upon the consumer crapware that gets sold/given away in their app stores, as opposed to the many business apps that are used to automate and improve all sorts of tasks on mobile devices.

I keep seeing this pattern repeated over and over again: someone (this has been me, several times) mentions on HN that Linux needs a better way to allow for binaries to be dropped on a system and run without requiring re-compilation or overly-complicated containers/sandboxes, and the answers invariably end up being "don't want it, don't care". But, the reality is that there are a very large number of people that would jump to Linux in a heartbeat if they could target the OS for distribution in that fashion. It just seems like a bunch of ideology run amok with zero thought given to the actual needs of professional developers/companies. And, there's a lot of evidence that this is one of the primary things holding back Linux adoption on the desktop.

The rest of your reply was very informative, thanks.


> You're ignoring a huge amount of business software that is created for, and runs on, Windows. That's like judging Android or iOS based upon the consumer crapware that gets sold/given away in their app stores, as opposed to the many business apps that are used to automate and improve all sorts of tasks on mobile devices.

Thanks for this. Indeed I was responding a bit flippantly to the whole "real" desktop software, as if there isn't any on Linux. I appreciate now it probably wasn't meant in that way, and so that aspect of my response was flippant and unhelpful.

Indeed, it seems for every example we find of serious expensive proprietary software that runs on Linux, there is another that doesn't.

We have Maya running on Linux, and Softimage. Both sold by Autodesk (albeit acquired). And yet other Autodesk products, like 3dsmax and Autocad, do not have Linux support. Asking about this on their forums appears to result in a rather curt response to read the system requirements where it says "Windows."

These are tools that people learn and take with them between jobs, and I can well imagine that these are people who could work on desktop Linux without too many problems as long as those tools moved with them. Replying "but blender can do that!" completely ignores the reality that people have invested significant time into these tools and know them well. Whilst blender is an amazing piece of free software, it's by no means an industry standard. Similarly, while Gimp is quite phenomenal, it seems many professionals find it lacks features that Photoshop has.

I don't think the "web browser" response cuts it in other areas either, even if a lot of the less technically demanding software is going cloud-based.

> It just seems like a bunch of ideology run amok with zero thought given to the actual needs of professional developers/companies. And, there's a lot of evidence that this is one of the primary things holding back Linux adoption on the desktop.

I think you're right, and I'm probably in that category. I've also seen what bad vendors can do to an ecosystem though. The windows ecosystem is still full of dodgy vendors.

I'm quite convinced that package-once-with-sandboxing will happen, and I will admit that Flatpak is probably in a prime position for that to happen. I think they've probably got the most correct direction of all the attempts in the space, it just doesn't seem to be there yet.

From a commercial vendor's point of view, it's also not a problem if the user's home directory is bind mounted, and over a year ago there was a bug with the sandboxing that let malicious vendors install software that could run as root. Let's not forget that most Windows apps don't go anywhere near a sandbox in the first place and their installers need system-wide access. There's plenty of low-hanging fruit still in this space.

Things have definitely got better in recent years. One of the biggest things to improve this is that 32-bit x86 is dying off. There were a number of vendors that did the "we can just build 32-bit, like windows" without realising or caring about the horrors of multi-arch dependency hunting that they were inflicting on the sysadmin.

So thank you for your comment. It made me realise that I was probably being quite flippant about software which does fulfil a niche need and does it well, and that you only need one piece of such software to tie someone to an operating system they may otherwise have no affinity for.


Thanks for the thoughtful comments, they are very much appreciated.

The thing that a lot of Linux developers don't seem to get about Windows is that its binary distribution model is an opportunity/productivity multiplier, and it all hinges on three major points:

1) Backward-compatibility is real, and, contrary to what most people say about MS/Windows, this is very pro-consumer. Just chucking code at people and saying "but, you can just re-compile it" ignores a lot of realities, including the fact that even developers don't want to be forced to recompile binaries that they are not, themselves, working on as their actual work.

2) You can deploy 32-bit binaries on 64-bit systems and they work just fine. It really made the switch to 64-bit versions of Windows a non-event, whereas this is another issue that developers need to worry about when targeting Linux. And, there are a lot of applications that simply don't benefit from being 64-bit because they don't need, or can't use, the increased address space, so forcing developers to specifically target 64-bit is just another unnecessary hurdle.

3) You can actually create binaries for Windows that don't require any dependencies other than what is available in the OS. This means that you can create binaries that work on tablets, laptops, desktops, and servers without recompilation. This also ties in to your comment about the installers, but the reality is that you really only need admin access for an installer to put things into the "Program Files" folder tree, or to install services. Apart from that, most professional software doesn't touch anything other than the user-specific folders (if Windows doesn't delete them first ;-) ).

I don't think that this point can be overstated: if you create a neutral OS platform where desktop developers can deploy their software with minimal fuss and no gatekeepers, you will win the desktop space, period. More than anything else, desktop developers want to make money, and an open OS that does its thing and stays out of the way is the ticket to that money. But, in order to do this, a lot of current Linux developers will need to get used to sharing Linux with commercial vendors and proprietary software, and I think that's still a hard sell. And, unfortunately, a large part of that mindset is wedded to this bizarre idea that desktop developers like Windows because of Windows. Desktop developers like Windows because a) that's where the users are, and b) they can make money without being forced or required to expose their source code. If the same can be done with Linux, then Linux will have no problem taking the OS space away from MS completely.

Linux is stuck in "server/appliance mode" right now, and it's a shame because if there was ever a time for Linux to take over the desktop, it's now. As you say, web applications are just not a suitable replacement for all desktop applications. MS keeps trying to lock down how development is done on Windows and, if they succeed, then everything that I stated above will go away and there will be a lot of developers looking for a new home. Any OS that starts out with "you can only use our development tools/languages, and no others", is going to wither away and die. It happened to Apple once, it appears to be happening to Apple again, and MS has apparently decided that it should emulate Apple. The whole reason why Windows became popular in the first place was because there were a ton of development tools/compilers/languages that could be used to produce binaries for Windows, and Windows gave zero preference to any of them. This was carried over from MS-DOS. And, none of this precluded any vendor from going ahead and providing/selling their source code along with the product. Linux developers have just got to let go of trying to control how people use the OS, and just try to make it as amenable to as many usage/deployment scenarios as possible.


I like what Microsoft did in UWP. An app is allowed to read its own files and files explicitly selected by the user from the Open dialog. For anything else the app developer has to ask for permission and the user has to approve it (like in iOS, Android, ...). Especially the second option is nice, because as a user you have the choice to grant access only to the specific files and dirs you want the app to have access to.


UWP sounds like it gets it right, but a lot depends on the granularity of the permissions. I'm not even sure you can win: too fine grained and the user ends up with permission fatigue and just clicks through, too coarse grained and the user has to choose between full access or no access. I think android gets this very wrong, but has improved somewhat.

I'm not sure if it's been tried but I think a better way might be to create some set of capabilities that can be applied, so you get complete confinement by default but can bulk set permissions of your choosing.

According to design docs, this is what is meant to happen with Flatpak. I'm not a user so I don't know how close it comes.

It appears that Flatpak falls short of this with quite a few apps, which seems weird if they did implement the file portal they spoke of.


Flatpak does not claim to be secure. This is also clearly stated in the FAQ. They provide a means of separation of applications. Sandboxing is not per se a form of security, just as was the case with Docker in the 'early days'.


>>I just wonder why the author of this page does not disclose his name...

That is a weak argument to rebut the article. The author should not matter, only the facts.

Either what this person has claimed about Flatpak is true, or it is not.

You do not need the person's name, address, and background to form that conclusion.

The only reason to demand that is to exert public pressure on them, likely in an effort to silence them. Anonymous speech is a cornerstone of a free society.


> This could have been written by a person with a grudge for all I am concerned.

Or by a person who does not wish to be on a no-hire list. The job market for Linux developers working on this sort of stuff is not exactly enormous.


> This could have been written by a person with a grudge for all I am concerned.

So glad that you can judge his points regardless of that irrelevant information, then.


Nowhere on the page do they claim to be secure. See also https://flatpak.org/faq/; they are aware that the security benefits are minimal at the moment.

Perhaps the term 'sandboxing' is being misunderstood, as it was with Docker. This is not per se a means of providing security.


I'm going to stay on the meta-discussion level for a bit (all other points in this debate have already been made anyway). Hanlon's Razor is about assuming intent, about railroading others into perceived roles. You bring up interesting points, but they were not part of the article we are responding to. The conclusion was supposed to follow from the earlier arguments given. Within that context there is a big assumption being made.

So what am I being naive about, exactly? Note that I have merely called out the assumptions made by others. I have not stated anything about Red Hat's intentions myself, so if I'm being perceived as naive about that, it would be making another assumption. EDIT: I suppose the "mistake" wording does imply good faith behavior. But there too: bad faith should be looked out for, tested for, but not presumed about others.


> The FAQ page states "We are explicitly using many features of the linux kernel (bind mounts, namespaces, seccomp, etc) to create the sandbox that Flatpak apps are running in." ... and then it turns out that they aren't, or at least they aren't enforcing it

They are using those features. flatpak has network namespaces for applications that don't need access to the network and bindmounts for applications that use very limited parts of the filesystem.

Sure, in reality many desktop apps have more far-reaching permissions, but all they're saying is that flatpak can make use of those features in some cases.
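
To make that concrete: a packager grants (or withholds) those features at build time. A minimal sketch using the real `flatpak build-finish` tool; the app ID, command name, and granted items are illustrative:

    # grant only a display socket and a single directory; note the absence
    # of --share=network and --filesystem=home, so the app gets its own
    # network namespace with no network access, and sees only the paths
    # that are explicitly bind-mounted in
    flatpak build-finish build-dir \
        --socket=wayland \
        --share=ipc \
        --filesystem=xdg-download \
        --command=exampleapp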

> This is flat out lying to try and get mindshare

Usually that's just called marketing. I don't think any of the above statements are actually lies in any way though. Can you point to what's actually a lie, or what lie is implied in your mind?


The part where they say it's a sandbox, and it doesn't even attempt to be a sandbox. That's a lie.


It's a sandbox like Android has a sandbox: each app lists a set of capabilities; the user gets a dialogue on installation where they have to grant those capabilities (or else cancel the install); and then for anything the app tries to do that's not in that set of capabilities, it fails.

A sandbox doesn't mean "you can never do [foo]." A sandbox means "you can never do [foo] unless the user lets you." Even web browsers (the classical "true sandboxes") have an API that gets you access to the user's microphone, and another for access to their GPS data. There's just a dialogue in between that the user can say "No" to; and, having said no, the content of the tab can't ask again, and just gets denied automatically. That's what makes it a sandbox.


Is it also like Android's "sandbox" in that every application asks for every permission and the only choice is between "no security" and "can't install anything useful" and inures the user to just click "accept" on everything?


Android's permission system was overhauled in version 6 so that permissions are now generally asked when they are needed, for a specific type of operation, instead of the big dialog when installing.

I personally use several apps where I have granted one set of permissions and denied another (because it was for a feature I don't use). It has gotten a lot better than what it used to be.


For now, that's as far as the GUIs go, as far as I know, but there is a `flatpak override` command that seems promising for changing an installed application's permissions.
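
For instance (the app ID is hypothetical; check your flatpak version for supported options):

    # show the permissions the app's manifest requested
    flatpak info -m org.example.App

    # persistently revoke home-directory and network access for this app
    flatpak override --user --nofilesystem=home --unshare=network org.example.App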


The system is configured to run apps in a sandbox though - that's completely true. It's the app itself that declares it needs full access to your home directory to work. If that weren't possible, we'd get people complaining about apps not working as expected instead. The balance may not currently be on the right side, but they don't lie about what's provided.

It's like standard Linux permissions. If you install an app which creates its directories with mode 0777, or install a package which has a suid binary, you don't complain that Linux offers no file access control. That's the author's or packager's fault.


“Sandbox” and “full access to your home directory” doesn’t compute. That’s literally one of the biggest reasons to sandbox applications.


Nobody wants a packaging system that can't usefully package existing apps that weren't designed to be sandboxed.

In a traditional app packaged as a deb/rpm, the developer releases the source, which then must be packaged and made to work with each distribution/platform. If the app is malicious, or is sold to or compromised by someone who is, then you are 100% hosed.

In a flatpak not designed to be properly sandboxed you are in fact no worse off than in the alternative deb/rpm situation, save that the issue of packaging for distribution has been made easier.

It's in fact probably extremely challenging to package all sorts of applications without giving the user the option to provide an individual app elevated permissions.

At best we are relying on the user to decide which app ought to get those permissions.

If you think people can't be trusted to do this then the logical solution is to rely on packagers to decide what belongs in the official repo and keep malicious content out.


Well, the ideal solution is to fix the application to use the special file chooser that gives the app permission to access whatever files the user chose. I only know the basics about Flatpak, but I know it has such a file chooser; does Gimp not use it, or is there some other issue that makes it require full home directory access?

In the meantime – sure, package the app, but it shouldn’t show up as “sandboxed” in the GUI if the sandbox isn’t meaningful. Instead it should come with a nice scary warning that the app has access to all of your files… you know, for everyone to ignore and click through. (You can lead a horse to water…)


In flatpak you are worse off, though, since - as the article indicates - they lag behind on security updates. If they get compromised by unpatched exploits, that sandbox is a valuable line of defense.


The writer tries to blame Flatpak for app maintainers' mistakes. That isn't fair.

If an app doesn't get a security fix, whoever maintains that package should be the one to blame.

Disclaimer: I don't like flatpak either, I'm just trying to be fair here...


This is a fair point. Tons of people said that flatpaks won't get security updates because you would end up with 7 versions of libfoo getting updated, or not, on different schedules.

Lo and behold, this is true.

The security gains, even in the future, are also probably mostly imaginary. You can't trust average users to understand the implications of granting permissions. By default, if they are installing an app, they trust the dev.

Further, it's not like malicious actors can't test against the sandbox and do the extra work to discover ways through the fence. Getting your target to run your malware tends to be game over outside of very heavily restricted environments.

If the browser had a built-in way to ask the user for full control of the machine that didn't look like malware, 20% of users would end up with compromised devices.


It depends on your use case. If you sandbox tar (for example), you'd do it by removing all network access and a few fancy caps, but you'd leave the rw access to the whole system (within its standard privileges). If you sandbox netcat, you'd do the opposite: remove all fs access (unless you care about pipes) and leave networking open.

There's nothing about the idea of a sandbox that requires a specific approach.
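
A sketch of both profiles using bubblewrap (`bwrap`, the tool Flatpak's sandbox is built on); the exact binds and targets are illustrative:

    # tar-style sandbox: keep filesystem access, remove networking
    bwrap --bind / / --dev /dev --proc /proc --unshare-net \
        tar czf /tmp/backup.tar.gz ~/documents

    # netcat-style sandbox: keep networking, hide the filesystem
    bwrap --ro-bind /usr /usr --symlink usr/bin /bin --symlink usr/lib /lib \
        --symlink usr/lib64 /lib64 --ro-bind /etc/resolv.conf /etc/resolv.conf \
        --dev /dev --proc /proc --tmpfs /tmp \
        --unshare-all --share-net \
        nc example.com 80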


Have a look at AppV on Windows. It isn't great, but all I/O is redirected, so in your example, tar would think that it's writing to /home/voltagex but it might actually be writing to /run/sandbox/blah/home/voltagex - so if something ran rm -rf it'd only delete the sandboxed home.


> It isn't great, but all I/O is redirected, so in your example, tar would think that it's writing to /home/voltagex but it might actually be writing to /run/sandbox/blah/home/voltagex - so if something ran rm -rf it'd only delete the sandboxed home.

Then how do you get it to operate on your actual home directory when you want it to? Making it operate on some different structure has been possible with chroot() or LVM snapshots or a number of other things for a long time.


Well, Flatpak is designed for GUI applications, where the user indicates their intent to grant access to a file by picking it in the file chooser (which is magic, runs outside the sandbox, and grants the sandbox access to whatever file the user picked). This is nice because the sandbox can be invisible from the user’s perspective. I think AppV is similar.

For a CLI application like tar, this would be a bit harder because every program has its own command line syntax and you can’t always tell what arguments are supposed to be filenames. Still, you can do reasonably well by just granting access to any argument that looks like a filename. The Plash shell, a forerunner of modern sandbox designs, took this approach, but as an additional security measure only granted read access by default; if you wanted to run a command that writes to a file, you had to use special syntax before the filename [1]. Still reasonably usable, although there are other issues, like the fact that many Unix programs default to reading and/or writing to the current directory…

[1] http://www.cs.jhu.edu/~seaborn/plash/html/shell.html#shell-d...


> Well, Flatpak is designed for GUI applications, where the user indicates their intent to grant access to a file by picking it the file chooser (which is magic, runs outside the sandbox, and grants the sandbox access to whatever file the user picked). This is nice because the sandbox can be invisible from the user’s perspective.

That works where it works, but it seems like the sort of thing that will cause so much trouble that people end up turning it off. For example, a lot of files come with meta files containing thumbnails or other data with the same name but different extensions, or an associated directory with related data, and the app will want to access those too but a framework that doesn't understand they're related won't provide it.

You also end up with lots of little bugs, like breaking apps that predict what file you might open next and pre-load it or generate their own previews of files in the app's format.

A possible fix is that if the app tries to access a file you didn't expect it to, display an access prompt. But if that happens a lot it only conditions the user to just click yes every time.

> For a CLI application like tar, this would be a bit harder because every program has its own command line syntax and you can’t always tell what arguments are supposed to be filenames.

You also don't know what's implied. Most commands default to operating on the current directory when not otherwise specified (e.g. as the output location for a tar extract), but half the time that's the root of the user's home directory.


It's true, but I'm not sure how e.g. VS Code would even work in a truly sandboxed environment. There'd be no file browser.


macOS does this via the open dialog — opening a directory gives an application full access to that directory


And you can reproduce the same issue the author described in the article by requesting access to the home directory and running exactly the same command the author wrote.


You'd need to open your entire home dir for that. In VSCode, you normally open the specific directory that contains the source for one particular project you're working on, so in that model, it would only be granted permission to that directory.


Same way it works on the web: Use a broker process that grants access via a file select dialog.


The point is that for vscode, picking a file is not enough. It would cause a lot of fun opening multi-root workspaces you just checked out...

There are many more applications, not just IDEs, where picking a file or folder is not sufficient: for example, apps like Rapid Photo Downloader or Darktable would be significantly crippled.


Virtualised paths with specific access - i.e. VS Code would only see one path.


"sandbox" is an ill defined term, but what flatpak provides fits it perfectly well.

A sandbox is an environment where capabilities can be restricted in a set of ways.

The javascript sandbox lets you manipulate the website and make network requests, but not access arbitrary files.

The flatpak sandbox is configured per-app and can prevent all fs access, all file io (with seccomp), all networking, etc.

The article is simply pointing out that most popular applications do not use the sandbox features well.

That doesn't mean flatpak does not have the sandbox they claim to, merely that it does not mesh well with many popular apps currently.
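
And the same sandbox controls are exposed per-invocation, so you can see this for yourself (app ID hypothetical):

    # tighten a single run beyond what the app's manifest requested
    flatpak run --unshare=network --nofilesystem=host org.example.App

    # or ignore the manifest's permissions entirely and run fully sandboxed
    flatpak run --sandbox org.example.App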

I still see nothing that merits the word "lie".


The sandbox is there for portability. I don't see a promise of being _more secure_ than other app distribution channels on linux anywhere. Though the packaging & sandboxing model certainly opens a path for improvements in the area.

Is it any less safe than installing something via aptitude or Ubuntu's app store?


The alternative when using Ubuntu is called Snap. It has a sandbox, and it is better implemented, at the very least the part about reading and writing in the home dir.


As far as I was told, snap sandboxing only works with a specially-patched (and apparmor-enabled) kernel [1], though I am not sure what the current status is.

I would like to know what's better implemented in snap; it seems this is simply a case of most applications requesting r/w permission in the home directory. It might get complicated sandboxing VS Code without that, don't you think? Or at least lead to a subpar user experience.

I am hopeful it will improve, though. Sandboxing needs to become the default.

[1]: https://web.archive.org/web/20170615042616/https://github.co...

Edit: Similar echo here https://news.ycombinator.com/item?id=18180877


Yes, his other major gripe is that non-official flatpaks take a while to get security releases out. I have the same problem when I run applications that have an official RPM repo, and a volunteer packages the deb and pushes it to the official Ubuntu/Debian repos. The same thing with Alpine Linux packages. It's not a problem with the tech, it's a lack of volunteers (or not yet enough adoption by the developers to maintain it).

In fact, it could be nicer if it catches on, as developers won't need to maintain deb files, rpm files, pacman files, etc.


> his other major gripe is that the security updates for non-official flatpaks

I don't know whether it is true or not, but the author explicitly states that it is the official applications AND runtimes that aren't properly maintained.

> I have the same problem when I run applications that have an official RPM repo, and a volunteer packages the deb and pushes it to the official Ubuntu/debian repos

No you don't, because either (a) the package was not uploaded to the official (main) debian repository or (b) the debian security team is in charge of fixing it if the maintainer is no longer available.


The Debian security team being responsible may help you figure out who to blame, but it doesn't magically help the update actually happen. The Debian security team is a volunteer team, and it's entirely realistic that someone may actually have seen delays in the packages they care about getting security updates. "No you don't" is arrogant - you have no way of knowing that.


> The Debian security team being responsible may help you figure out who to blame, but it doesn't magically help the update actually happen.

Actually, they do. Debian carries local patches when necessary. It will also backport patches to older releases which may not be updated upstream. They literally make the updates happen.

In this case, it's valid to ask: where/when did you see this not working?


...and, as I recall, that got them in trouble with openssl.


Parent was talking about Debian upstream patches making it into his platform of choice; in this case RHEL/CentOS.

That requires paid/volunteer package maintainers.


Most of the issues that are explicitly mentioned are fixable bugs and not fundamental design errors. In the end it seems to come down to poor package repository maintenance, which many repositories for Linux distros suffer from. While I do not like the idea of flatpak very much myself, this criticism seems too harsh to me.


The fact that many desktop applications need access to $HOME and that $HOME also conveniently provides arbitrary code execution via .bashrc/.profile/etc is kinda fundamentally at odds with doing filesystem sandboxing of desktop apps.

Sure, flatpak isn't making things worse by not being able to fix the fact that desktop users expect to open $HOME/screenshot.png in gimp, but it's also not going to easily fix that.
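
To spell out why rw access to $HOME defeats the sandbox, a deliberately simple illustration (the payload path is made up): a confined app that can write dotfiles can persist code that later runs unconfined:

    # executed inside the "sandbox", which was granted filesystem=home
    echo '"$HOME"/.cache/payload.sh &' >> ~/.bashrc
    # whatever payload.sh does, it runs outside the sandbox the next
    # time the user starts a shell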

This is fixable on android / chromeos by specifically having applications request access to data which is isolated from arbitrary code execution (e.g. "user files" which don't include .bashrc). I think flatpak may need to ultimately have a custom file-browser where the user can "share" subsets of files into a sandbox and then patch applications to use that file browser... or to otherwise build a new filesystem abstraction.

Until then, this issue will be tricky to fix. I, of course, agree with your main point that things like more timely package updates are fixable, and that the post is overly harsh and critical of what's effectively "things aren't perfect" with no empathy for how complicated stuff can be.
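
For what it's worth, today's tooling can already approximate the "share a subset of files" idea, it just isn't the default. A sketch with `flatpak override` (app ID and paths hypothetical):

    flatpak override --user \
        --nofilesystem=home \
        --filesystem=~/Pictures:ro \
        --filesystem=~/Projects/scratch \
        org.example.ImageEditor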


> This is fixable on android / chromeos by specifically having applications request access to data which is isolated from arbitrary code execution (e.g. "user files" which don't include .bashrc). I think flatpak may need to ultimately have a custom file-browser where the user can "share" subsets of files into a sandbox and then patch applications to use that file browser... or to otherwise build a new filesystem abstraction.

It does have that through portals:

https://github.com/flatpak/xdg-desktop-portal

If you Flatpak a Gtk+ 3 or modern Qt application you get portals for free. E.g. I packaged a Qt application and I am not sharing the home directory - when the user opens a file it uses the Qt/KDE portal (similarly to macOS, ChromeOS, etc.).

As far as I understand, the problem is that portals are only available for Gtk+ and recent Qt versions. Some of the applications that the post mentions use toolkits that probably don't support portals (Java JDK, wxWidgets, etc.).

The situation for Linux is a bit different than e.g. macOS, where practically everything uses Cocoa and Apple could just throw the switch.

So, applications that do not use vanilla Gtk+ or Qt still need to make the home directory visible, or they would not be Flatpak'able.


The situation is similar on Win10 (and the approach is also the same - sandbox apps have to use a certain API to invoke the file browser and get access to some files or folders).

But apps that don't do that because they're too old, just don't get access to the Store... or at least they didn't use to. Now you can ship non-sandboxed Win32 apps through the Store, and it doesn't even seem particularly obvious which ones are and which ones aren't. Windows 10 S only lets you install the sandboxed ones, but how many people use that?

So basically Windows couldn't solve that - the users ultimately decided that they care about stuff working as it did more than they did about security. I don't see why it would be any different on Linux.


> I don't see why it would be any different on Linux.

There will always be legacy applications that will stay on old toolkits and they cannot fully benefit from sandboxing.

However, a lot of widely-used Linux applications that are on older toolkits are currently working on upgrades. E.g. Inkscape and The Gimp will be Gtk+3 applications. There are often other carrots, such as proper support for scaling for HiDPI screens, etc.


Such views can also be provided by userspace filesystems (fuse, coda, nfs, 9p, etc).

I have a fuse filesystem that shows different users different views of the filesystem, as a basic example.


Forcing applications to use a separately defined and maintained file dialog is impractical in my opinion. There are lots of ways in which file access can be presented in a user interface.

The problem is rather caused by filesystem layout conventions. I think that a better solution would be to split user home directories into two classes of files: data (that is, files that the user typically wants to see and work with as part of the normal workflow) and "user profile" kind of stuff (.bashrc, configuration files, etc...) that should be "privileged" and require special access rights. These might require interactive confirmation before write access is granted (similar to the split user accounts in Windows with UAC). However, I wonder if these sets of files can be separated cleanly. Marking files as "privileged" should be doable using extended attributes, though.
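
Marking files via extended attributes could look something like this; the `user.privileged` attribute name is made up, and a sandbox broker that honors it would still need to be written:

    # tag profile files as privileged
    setfattr -n user.privileged -v 1 ~/.bashrc ~/.profile

    # a broker could check for the mark before granting write access
    getfattr -n user.privileged ~/.bashrc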


A poor package repository is a consequence of too many packaging formats and a lack of volunteer maintainers, though (I, too, could have done more to help). So coming up with new package formats all the time - .deb, .rpm, Alpine's .apk, pkg, .dmg, flatpak, snap, docker and whatnot - is exactly the problem (cf. https://xkcd.com/927/). Maybe Slackware got it right after all by only building from upstream .tgz source archives. We'll see in a decade or two what software is even remotely in a usable state still.


Those aren't "package formats"; they're different names for a very small set of actual container formats (e.g. tar, zip), with the names there to namespace different incompatible OS file-layout hierarchies and sandboxing technologies.

If it was just about packaging, everyone would just have a build server that creates their binary once and then slams it through https://github.com/jordansissel/fpm.

But there are more fundamental differences between OSes than how the insides of their packages look. The packages are "differently shaped" to prevent you doing something stupid and damaging your OS by installing a package that will spew files in all the wrong places, and rely on classes of protections that aren't there.


Hmm... it's almost like maybe applications shouldn't spew their files all over in the first place, then you wouldn't have to worry about putting them in the wrong places...


If you can tell me what a self-contained equivalent to, say, the libpam package would be, I’m all ears.

Oh, also, a subset of case 2 is: packages that contain servers need to register the server as a service with the OS’s init system, and every OS has its own init system that expects a service definition file in its own distinct format and location.


libpam should be part of the base system, obviously. Like the widget library, ssl, and the display server. It's only really in UNIX world that this separation between "system" and "application" doesn't exist.


OS packaging formats exist for the "base system", in your terminology. "Applications", in the UNIX world, install in /opt.

Keep in mind that the separation you're talking about is very thin, often non-existent, in any OS. Consider, say, an RDBMS daemon. Is that an application, or part of the base system?

An application? You're sure? But what if components of the base system rely on its presence?

This isn't some wacky idea, and it's not a UNIXism, either. Some components of Windows Server rely on MSSQL. So MSSQL has to be built as a Windows component, rather than a standalone application.

This example helps a lot in understanding what OS packages really are. For example, why do Linux distros package scripting languages like Perl, Python, Ruby? It's not for you to use them to write system administration scripts. It's definitely not for applications to use as their runtime. (All the major distros recommend that you vendor your own runtime into your application, if you're creating a standalone application.) No, the point of these language-runtime packages is that there are system components written in these languages, and the OS package is there to support them.

And this is why you're supposed to install your own copy of these when using them for development: the OS copy isn't maintained for you. These OS-packaged runtimes are frequently not up-to-date, and they bundle with them some of the language's package ecosystem, but not all—and frequently not even the most popular language packages.

That's because the runtime isn't there for you. It's there for the OS to use. It may as well be hidden in /usr/libexec and not even in your $PATH. (Except that $PATH is frequently how system components find one-another.)

People make a mistake when they think it's a good idea to package their applications as e.g. Ubuntu PPAs following the Debian packaging guidelines. That format, and those guidelines, exist for the authoring of system components; the guidelines are the way they are (i.e. very strict) to enable other system components to rely on your component.

If you're building an application, none of that applies.

Until recently, applications on UNIX shipped as tarballs containing ./install.sh scripts that you'd have to run as root, that would unpack things into /opt but also maybe write some service scripts into /etc/init.d. (Even now, this is how heavily-integrated applications like VMWare ship.)

More recently, Docker has replaced this format when packaging applications that don't require much of the underlying platform (e.g. network servers.)

Flatpak and Snaps are two attempts to do for GUI apps what Docker did for network servers.

Unlike OS packages, Docker, Flatpak, and Snaps are all interchangeable and inter-convertible as long as you have an application that only requires their least-common-denominator subset of capabilities. A Docker "package" can be repackaged as a Flatpak or Snap losslessly. There is no reason that the Flatpak and Snap ecosystems can't proxy through to Docker Hub and let you pull any Docker image and run it under their daemon. There's no real war here.


The reason applications spew files all over the place is because that's how you hook into other systems. Want a man-page? There's a folder for that. Want to autostart? Folder for that. Want your program to be runnable without specifying the full path? Put it in path.

The reason configuration sucks is because we use filesystems, which are necessarily hierarchical, when we should use a more general database.

If we had a table where the first column was "thing to be configured" and the second column was "program" things would be much better. We could query by the first column to get e.g. all programs that want to automatically start, or all programs in path. Or we could query the second column to get all configuration for a given program.

Database people have done a lot of work on how to prevent inconsistent state, and we are stupid for not leveraging that.
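
A toy version of that table, just to make the idea concrete (the schema is entirely hypothetical):

    sqlite3 config.db <<'SQL'
    CREATE TABLE config (facility TEXT, program TEXT, value TEXT);
    INSERT INTO config VALUES ('autostart', 'syncthing', 'true');
    INSERT INTO config VALUES ('path',      'ripgrep',   '/opt/ripgrep/bin/rg');
    -- "all programs that want to autostart", without crawling directories
    SELECT program FROM config WHERE facility = 'autostart';
    SQL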


> Database people have done a lot of work on how to prevent inconsistent state, and we are stupid for not leveraging that.

It's kind of amusing that on the Windows side, you have a vertical integration between the filesystem (NTFS) and the update mechanism, where any MSI package will actually be installed in a filesystem transaction, such that if you e.g. cut power in the middle of package installation, then your disk will actually have nothing of the package's installed data on it. The transaction failed, and was rolled back.

And yet, even with that fancy tech in place, uninstallers still are manually-written apps that blow away files by explicitly naming them, rather than also taking advantage of these filesystem-level transactions together with some sort of hypothetical transaction WAL log, to reverse what the install did.


Unfortunately, NTFS transactions have been deprecated.

https://docs.microsoft.com/en-us/windows/desktop/fileio/depr...


> We'll see in a decade or two what software is even remotely in a usable state still.

Most distributions need non-free explicitly enabled, so almost all packages are open source and can always be repackaged - which makes this irrelevant to the distro packaging format.


Open source doesn't ensure it'll even build against new compilers, libs, language runtimes, and all the other stuff we're reinventing all the time to keep the hamster wheel spinning. What open source desktop apps are you using that suddenly need sandboxing, anyway? GIMP, Inkscape, Audacity? Come on.


>GIMP, Inkscape, Audacity

Yep - all of those. GIMP has scripting capabilities and exposure to vulnerabilities via image codecs, same as Inkscape, and Audacity could be linked to ffmpeg, which is a huge attack surface.


> We'll see in a decade or two what software is even remotely in a usable state still.

Fucking none of it, that's your answer.


The article is really over the top hostile. 'Flatkill', 'Fakepak', 'Red Hat does not care about security'. What intentions could the author possibly have?


Cross-posting a comment from Reddit, because it nails one of the points mentioned:

---

The list on the page is

    Gimp, VSCode, PyCharm, Octave, Inkscape, Steam, Audacity, VLC, ...
With the exception of Steam, all of those programs are used to open random files anywhere on the system. One could implement a permission prompt for accessing a file, but that would lead to a Vista-like situation where basically every action causes a prompt.

Now, that's not to say this is good as it is, but for most listed programs it's probably the way to go.


Ways macOS handles this problem:

1. the File Open dialog is itself the permission prompt.

2. documents consisting of many files are structured into project bundles; you open—and thus grant access to—the entire bundle at once.

3. the GUI is structured to orient activity around documents rather than around applications. You hardly ever launch an app and then open a file in it. Instead, you open the document and the app is its default handler; or you right click the file and Open With the app; or you drop the file onto the app's launcher shortcut; or you drop the file onto the running app window. All of these actions implicitly grant the app permission to open the file, in the same way the File Open dialog does.

4. Apps can persist the token representing this granted permission in a serialized state file, and it will even survive OS reboots. There are some macOS utilities (e.g. DaisyDisk) that need access to your entire disk—but you only need to grant this access once. (DaisyDisk asks, on first startup, for you to open the Finder and drop the icon representing your primary hard disk onto the running app.)


> Ways macOS handles this problem: 1. the File Open dialog is itself the permission prompt.

And yet, there are quite a few applications that ask you to open the home directory once so that it gets symlinked into their sandbox. It's basically a hack to say 'I need access to your whole home directory'. Of course, it is safer than permitting access by default.

Of course, in many cases there are good reasons (backup software, space usage analyzers). But not everything can be mediated through file dialogs.


Same thing for Win10 UWP apps.


The file picker is privileged and mediates access through a reference monitor. Ditto for drag-and-drop. Problem (largely) solved.


Do you open your files in PyCharm or VSCode through a file picker?


You open the parent project directory, right? That's how the file access controls work in MacOS. Sandboxed apps can't read files outside their sandbox until the user opens the directory/file in the file picker at least once. (Or something similar, I can't seem to find a source on that...)


Ah, ok. Makes more sense this way.


I’d imagine once you open a directory you can open files within it (and child dirs)


Drag and drop project folder. Also, yes, I do open project directories using PyCharm's (poor) file picker.


That’s how macOS sandboxing works too.


This highlights a conceptual dilemma with access rights in file systems: if an image manipulation program wants to open .bashrc, this is potentially suspicious. But for a text editor this is probably ok. Likewise, a text editor that reads a binary executable is probably a little bit fishy. But an unzip program might do that for a valid reason (e.g. a ZIP archive appended at the end of a program, a common thing for installers). How can we distinguish between those cases?


This is pretty much the whole point of SELinux (and probably AppArmor).


How does SELinux or AppArmor distinguish between those cases? More interestingly, how can it tell that VSCode spontaneously editing .bashrc is bad, but doing so in response to user input is good?

(There are capability-based systems that permit distinguishing between these cases, but to my knowledge SELinux and AppArmor don't support this.)


SELinux and AppArmor would allow you to specify that your text editor is allowed to edit .bashrc, but some random other program isn't.

But I agree with you that this is not really a useful security feature -- you'd want something where a program has to be explicitly granted permission rather than some programs being able to do things that others can't (because then any attacker will just spawn "vi -c 'bad payload'" to get around the restriction).


Directory trees and files have a security context (etc_t, user_home_t, and so on), and there are rules governing which application contexts are allowed to access or modify which security contexts. It doesn't cover every edge case, and it can be frustrating to deal with things like local docker development. But the added safety is absolutely worth it to me.


SELinux labels users and domain transitions, so it's "technically" possible to do so, but I see that incredibly rarely.

I don't think AppArmor has such a facility. It wouldn't make sense, given that AppArmor doesn't know these things like SELinux would.


I'll just make a file link to .bashrc and call it /tmp/foo.png, what are you going to do? Not open links? Check if the file links to .bashrc?

If an image manipulation software wants to open .bashrc, allow it. If it has permission to write to one, so be it. If .bashrc is such a security nightmare, then perhaps the issue is that programs can write to it. Remove the write permission. Perhaps a security model where restoring the permission asks for a password.


> Check if the file links to .bashrc?

That would seem to be the obvious right choice. What's the problem with it?
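
Resolving the link before granting access is cheap, which is presumably why it's the obvious choice:

    ln -s ~/.bashrc /tmp/foo.png
    readlink -f /tmp/foo.png            # -> /home/you/.bashrc
    file -bL --mime-type /tmp/foo.png   # -> text/plain, not an image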


Mmm... This is a random idea, but maybe this could be supported through filtering files by mimetype.

Gimp would only get access to image/* (not sure whether it needs anything else; adapt in case it does). Of course, this requires adjusting the current development workflow (maintaining a list of filetypes needed, for example).
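
A tiny sketch of such a gate, keying on detected MIME type rather than file name (the image/* policy and the wrapper itself are hypothetical):

    mime=$(file --brief --mime-type "$1")
    case "$mime" in
        image/*) exec gimp "$1" ;;                       # allowed for Gimp
        *)       echo "denied: $1 is $mime" >&2; exit 1 ;;
    esac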


Most use cases of VLC only involve opening files, so giving it universal privileges to read files but prompting on filesystem writes would be fine. Gimp could have universal read access and the right to create new files, with overwriting files it didn't create itself restricted behind a permission prompt. Same for Inkscape and Audacity.

If you want to push this further, you could imagine a permission system that can distinguish based on file types. Gimp overwriting a PNG is probably intended, Gimp overwriting a bash script probably not.

There's a lot more nuance in existing systems (SELinux) and potential future systems than just "allow everything or nothing".


> but overwriting files it didn't create itself restricted behind a permission prompt

It's more nuanced than that. A user might not even have a .bashrc, but you still don't want to allow any random app to create one.

In general, it feels like the security model for the FS has to distinguish things that can be executed, and things that cannot. Which it already does on Unix with +x, but then you've got all the scripting languages that cheerfully ignore that, and all the apps that use executable configs etc. If you can fix all those such that +x is required for any source of executable code on the system, then you can just prohibit apps from creating +x files. But the cost of doing that in the existing ecosystem is enormous.


Okay, now do emacs! Depending on build-time options, that can open text files, images, pdfs, archives (zip et al).

In fact I'm struggling to think of a single file type that truly won't have a use in emacs.

(It might actually make more sense to forbid editing executable files than going via type)

-----

>There's a lot more nuance in existing systems (SELinux) and potential future systems than just "allow everything or nothing".

The problem isn't that there isn't a lot of "nuance" in these systems, the problem is that there is!

Sure, SELinux will work if you have a static system or an SELinux expert under the desk.

Creating a system that works and remains understandable is much harder.


The sandboxing platform could offer an API so a sandboxed program can spawn a privileged file-open dialog, and then the sandboxed program is only allowed to modify the file/directory that the user picked.

With the current situation, calling the programs sandboxed is completely misleading.


You've described the xdg-desktop-portal. Gtk apps just need to switch to GtkFileChooserNative and they get that for free. It does an FD pass of a FUSE fd and the app never gets file-system access.

It also has the benefit of using the KDE file-chooser on KDE, Windows file-chooser on Windows, mac, etc...
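
For the curious, you can see the portal D-Bus interfaces (FileChooser among them) on a system running xdg-desktop-portal:

    $ gdbus introspect --session \
        --dest org.freedesktop.portal.Desktop \
        --object-path /org/freedesktop/portal/desktop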


There is no reason to make it complicated. Sandboxed programs have their own $HOME. You can drag & drop files into their $HOME. Full stop.

I have been using a directory-per-program sandboxing setup for several years (and still do). It is very convenient, and does not require any additional effort to adapt. In fact, I now have less clutter in my actual $HOME than ever before.

Programmers like to come up with clever ways to solve nonexistent problems. I say — give the user a way to bootstrap a sandboxed environment into a directory of their choice (no, using auto-generated directory names is NOT allowed!), and the "problem" would no longer exist.


> Sandboxed programs have their own $HOME. You can drag & drop files into their $HOME. Full stop.

That is not very good.

Suppose you create an audio file with Audacity and a series of images with ImageMagick and GIMP, then use ffmpeg to combine them into a video and VLC to view it. They're all operating on the same files.

What we need is to add an application list to filesystem ACLs and then have security groups like Video and Finances which contain apps. Because GIMP should be able to access your photos but not your accounting spreadsheet.

It should even be possible to do some of this automatically by file type, e.g. GIMP can access any PNG file the user can but not any spreadsheet file, or read a shared library but not write to it.
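
For contrast, today's POSIX ACLs only know about users and groups, not applications or file types, so this proposal would need a new kind of ACL entry (the app: syntax below is hypothetical):

    # what exists today: per-user / per-group entries
    $ setfacl -m u:alice:rw photo.png
    $ getfacl photo.png

    # what's being proposed would look more like (NOT real syntax):
    #   setfacl -m app:gimp:rw photo.png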


Very much this. Something resembling an arbitrary number of groups per file (essentially a set of tags) and an analogous (arbitrarily numerous) set of tags applied to a process which match the launched binary. Bonus points if file format or other characteristics can somehow be worked in to automate things a bit while still maintaining security. A simple tagged group membership approach like this seems like it would be reasonably easy to use without getting in your way.

Does anything remotely resembling this already exist?

Edit: Before anyone says that SELinux resembles this, as far as I'm aware SELinux policies are anything but simple to set up and use correctly. However, SELinux types are inherited from parent directories and do look an awful lot like this. The main thing missing would seem to be that I can't find how to apply multiple contexts or types to a single file, but perhaps I'm just failing to navigate the manual?


> What we need is to add an application list to filesystem ACLs and then have security groups like Video and Finances which contain apps. Because GIMP should be able to access your photos but not your accounting spreadsheet.

This isn't that hard in a technical sense. But I very much doubt you can get the casual users to actually do that. They'll just end up with one giant group, because it's the easiest and doesn't require understanding the concept of security boundaries.

Basically, it'd be Vista UAC prompts all over again.


If you are interested in trying something similar, this is what Qubes OS [1] does, albeit with a different implementation (separate VMs for separate tasks).

It is more cumbersome to use than the current mainstream paradigm, but not that much!

[1] https://www.qubes-os.org/intro/


It might work if you bind-mounted the directory containing the shared files into all your apps' workspaces (aka $HOMEs) -- directories can't be hardlinked on Linux. I would just be worried about leaking privileges through the shared mount somehow.
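
Something along these lines (paths are examples):

    $ mkdir -p ~/shared ~/apps/gimp/home/shared
    $ sudo mount --bind ~/shared ~/apps/gimp/home/shared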


It also feels a lot like just recreating the original unified $HOME, and requires everything to be sorted by type. If you wanted to organize files by project and each project contained its own images, code, documentation, etc. then if you map the project directory for GIMP it can read all the files even though it should only get access to the images.


I'm sorry, but that solution isn't workable to many (likely a majority of) users. Most will expect that when you open two programs that need to work on the same file, that if you save the file to your $HOME/Desktop that it will appear in $HOME/Desktop in other programs. Sandboxing or not. I'm happy it works for you though.


Well, I have used it for years, and never encountered any issues. If you want to share a file between sandboxes, you can hardlink it. Or use descriptor passing. Or… But it feels like you are just looking for theoretical flaws in my personal workflow for the sake of coming up with flaws.

Of course, it is silly to sandbox your bread-winning software. I don't sandbox Android Studio. Or ffmpeg. Or VLC. Personally, I believe that nobody has the right to decide how to sandbox software on other people's computers. I think that such a decision should be left to the users of that software. Unfortunately, it looks like Flatpak does not make that easy.


I'm not very invested in finding flaws in your workflow, but I am happy it works for you. It does sound interesting and I'd honestly like to know more about how you have it setup.

I stopped using desktop Linux a long time ago and now use a Mac for my desktop work. I take things a bit further and don't let anything write to my $HOME/Desktop -- it's read-only. I don't recommend that as a default for anyone either!

But as far as Flatpak, a few of the "featured" apps are things like Inkscape, Gimp, VLC, or LibreOffice. Not apps that you'd really want to sandbox isolate like you described. (And as you mentioned -- you wouldn't want to do that anyway).

Now a few of the others were Spotify and Slack. Things that you could (should?) definitely sandbox.

I guess I don't see the point in having applications (intended for general-purpose users) that (a) need access to your home directory to read/write files and (b) should only have access to a sandboxed pseudo-home directory. Either sandbox it so it doesn't have $HOME access at all, or don't. I'm not sure I see the benefit of this middle ground, especially for the use case of general desktop applications. For server applications, I could potentially see the benefit (although containers have probably already filled this scenario). What is the use case you're thinking of?

I appreciate your point of view that no one should be able to decide for you how to sandbox software, but no one is forcing you to use Flatpak packaged programs[1]. Perhaps there should be some way to re-build a manifest that limits access, or to more easily switch granted privileges -- that would probably be a good thing. But someone has to set defaults. If a Flatpak packaged program has too-liberal defaults, then maybe that's best treated like a bug, and hopefully there is a mechanism to send a patch.

[1] yet... but it might be coming. I think the only way you'd see commercial applications like Photoshop for desktop Linux would be wrapped in something like Flatpak. I still think that most open-source applications will still be packaged by the main distro repositories, regardless of how well Flatpak does.


> I have been using a directory-per-program sandboxing setup for several years (and still do).

I'm curious how you do this? What's your setup?


I have a heavily-patched version of a less-popular sandboxing program (appjail). When I want to handle some files of questionable origin, I create a directory (~/jailboxes/gregs_avi_files) and use appjail to switch to that directory in a terminal. Unlike firejail, appjail defaults to full $HOME isolation (and has knobs for Xorg support, so X11 apps work out of the box without access to the parent /home). There are command-line switches for X11-based and pure terminal environments. It is also possible to whitelist/blacklist individual files in /dev etc. from the command line.

I don't use Flatpak etc. — all of my jails use system-wide libraries and executables. They are just launched inside sandbox environment.
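
Not the poster's exact setup, but a sketch of the same idea with bubblewrap (bwrap), the tool Flatpak itself builds on; paths and flags are illustrative:

    $ mkdir -p ~/jailboxes/gregs_avi_files
    $ bwrap --unshare-all \
            --ro-bind /usr /usr \
            --symlink usr/bin /bin --symlink usr/lib64 /lib64 \
            --proc /proc --dev /dev --tmpfs /tmp \
            --bind ~/jailboxes/gregs_avi_files ~ \
            bash                   # the shell sees only the jail as $HOME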


Could you link to the comment? Would like to read the discussion on Reddit too.



The developers are employed by Red Hat, but that does not make this a Red Hat endorsed product. It is not included in the current offering of RHEL and likely would not be in a new(er) version, as the technology is too immature/not well enough tested/proven yet for inclusion within EL. I call the usage of 'Red Hat' here clickbait and sensationalism, as there is no indication within the text of this lack of endorsement or inclusion.

Note: the engineers working on Flatpak are both friends and colleagues of mine. I am just concerned that the author misrepresents our employer's viewpoint. We are allowed to work on side projects, but that does not make them default inclusions or endorsed.


1. None of this has anything to do with Flatpak, it has everything to do with Flathub and how particular software is packaged.

2. Your preferred distribution can host their own Flatpak repository and ensure that things like security updates get dealt with properly. Flatpak is not Flathub.

3. This ecosystem is growing, so it's putting some things on the backburner, prioritizing application availability over holding a package to make sure that permissions are perfect. There is no reason that these issues can't be ironed out going forward.


>There is no reason that these issues can't be ironed out going forward.

That's true in principle, but SELinux still doesn't see much adoption outside of the distro-configured policies for typical server use cases. A lot of desktop apps run unconfined. So I think this is where OpenBSD's approach to this kind of thing is more practical. They iterate and wait before rolling out features like pledge or unveil, so that they know 1) it can be made to work with at least 50 apps (I read this in one of their slide decks), and 2) they can tackle a complex enough application like Chromium. Flatpak, SELinux, or any of the other security mechanisms are completely ineffective if users or developers largely ignore them.


> selinux or any of the other security mechanisms are completely ineffective if users or developers are largely ignoring them

SELinux works by default on Fedora, and even has a nice GUI popup that explains what happened when an SELinux policy blocked an action (so that you can reconfigure it). It's pretty neat, and a massive improvement over the SELinux of old -- I would recommend trying it if you haven't recently.
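
A quick way to check the mode and review what was blocked (sealert comes from Fedora's setroubleshoot package):

    $ getenforce
    Enforcing
    $ sudo ausearch -m avc -ts recent          # recent denials
    $ sudo sealert -a /var/log/audit/audit.log # human-readable explanations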


Yes, I think my point still stands that a large number of desktop apps either have lax policies or run unconfined. I don't know if things have changed that much recently. Confinement is the opposite of ease of use. So Fedora/RHEL have SELinux in enforcing mode, but the policies are still most effective for servers. I don't know how far they go with the policies for the desktop.


It'd be nice to see this stuff in (open)SUSE too. wink wink


Thank you! This should be the top comment. Many people don't seem to understand this.


But Flathub is flatpak. Also, does flatpak have the full support of Red Hat?


Flatpak is still a side project worked on by my colleague. Although he is employed by Red Hat, it is not a project led by our employer. AFAIK there hasn't been any work done to get it in RHEL, ...


No. Flathub has nothing to do with the flatpak technology itself. Flathub is just one server hosting some flatpak repos.

It's like saying .deb is the Ubuntu Store. Well no, it is just one PPA among many others you could add to get your apps.


Isn't the whole point of Flatpak to prevent the same app from being packaged multiple times for different distros?


Ideally, Firefox would be downloaded from an official Mozilla Flatpak repo, Blender from a Blender Flatpak repo on Blender's server, etc... And we would have a list of those repos in our distro.

However, because official adoption is slow (very few projects have an official flatpak repo), flathub allowed the community to build packages themselves. But this is clearly not how it should be.

The second (more legitimate) reason for flathub is that a small developer might not want to pay for a server to host their app, and flathub offers to host their repo.


Probably a good idea to get the permissions correct up front.


Spot on... expecting the latest open source software to be perfect is not reasonable.

People who want stable and secure can go with Debian stable... Some day, when Debian starts recommending flatpak, I'm sure flatpak will be solid :)


Ugh. This trade off again. Linux package management means you update a library once, and fix that security problem everywhere for all the apps that use that library .. except for Firefox, Libreoffice, Chrome and others which insist on packaging their own versions of libjpeg, libpng and everything under the sun for stability.

A docker container contains all its own dependencies too. You gain stability .. but you could have a bedbug nest of security problems down there.

I don't get the move to Flatpak and Snap for desktop apps. Desktop apps need to interact with the base OS, the display system (X11 or Wayland), and the shared toolkit libraries (GTK/Qt). I have screenshots of native vs Flatpak vs Snap, and they tend to have different window borders; some get the GTK themes matched up while others don't.

Not to mention the wasted space when every single application has its own version of everything. What, did you think electron apps each packaging their own 80MB web browser was a good idea too?

This just seems like the wrong direction to move in for Linux. We're not MacOS! We don't have hacked together shit like Android apks. We need to be better than that!


Linux needs an in-kernel deduplicating filesystem, though. Something to make containers and docker handle the situation of "lots of mostly identical files" well. Even ZFS, which should be good at this, really isn't.


Take a look at Fedora Silverblue with rpm-ostree; it does basically what you describe.


In Fedora, Firefox, LibreOffice, and Chromium contain no bundled libraries. The main offenders against the "no bundled libs" rule are Go and Rust applications.


In Fedora, Rust applications don't have bundled dependencies, but since Rust doesn't provide a stable ABI, we statically link the libraries into the application for now.


> CVE-2018-11235 reported and fixed more than 4 months ago. Flatpak VSCode, Android Studio and Sublime Text still use unpatched git version 2.9.3.

Wait, what? We explicitly do not ship a Flatpak version of Sublime Text, and no version of Sublime Text comes with git.

After such inaccurate information, I can’t help but question the rest of the article.


After installing the flatpak of Sublime Text from the flathub maintainers, here is the result of looking for "git":

    wbond@u1804:/var/lib/flatpak/app/com.sublimetext.three$ find . -iname 'git' 
    wbond@u1804:/var/lib/flatpak/app/com.sublimetext.three$


I don't know about git integration but as for Flatpak version of Sublime Text, I found this:

https://flathub.org/apps/details/com.sublimetext.three


Yes, it appears that the flathub maintainers have published Sublime Text under flathub. This is not an official distribution channel of ours, and looking at the spec (https://github.com/flathub/com.sublimetext.three/blob/master...) it seems to automatically install Package Control, but in a rather brittle way. Sigh.


If many distros and people want to use a flatpak to install software even with these drawbacks, that would be a good indication that it would be worth doing upstream.


We currently provide a full complement of Linux package manager repositories, along with tarballs: https://www.sublimetext.com/docs/3/linux_repositories.html.


I try to respect a self-imposed policy of not installing proprietary software that's not properly sandboxed, as I have little control over it (think about the Remote Code Execution hive that Steam games must be).

I do not use Sublime Text personally, but if I ever want to try it, I'd do it through a flatpak. Yes, the sandboxing permissions might not be perfect yet, but a little sandboxing is always better than none...


Have a look into flatpak too; otherwise, in the not-so-distant future your users could have a problem.

For Fedora, they are planning to switch to Silverblue around Fedora 30 (an atomic system; rpms still supported, but you have to jump through hoops).


It will be interesting to see if Flatpak, Snap or AppImage ends up being the predominant force in the new wave of Linux packaging. Knowing Linux, users will probably expect projects to support all three. :-)


Could someone knowledgeable enough comment on how this compares to Canonical's Snappy https://en.wikipedia.org/wiki/Snappy_(package_manager)?


With the exception of snaps running on Ubuntu and Solus, snap confinement is limited. Snaps currently rely heavily on Ubuntu's specific flavor of AppArmor to offer full confinement. Solus imports these changes into their kernel, though I don't trust the changes much because they haven't undergone formal review or been approved by the upstream kernel developers.

So, for example, on Fedora, Debian, CentOS, or openSUSE, snaps run in "devmode" because of the missing functionality. There's been some work over the last couple of years to upstream some of the work on AppArmor (so openSUSE may partially work in the future). There's a desire to support SELinux properly, but to date, no work has been done. There is an SELinux policy that attempts to confine snapd itself, which was contributed by the guy working on the Fedora/CentOS package for snapd (though it looks like the policy would also work for openSUSE and Debian SELinux setups, too).

Based on conversations I've had with the Snappy team before, it comes down to two things:

* Canonical doesn't know how to work with SELinux at all, and doesn't want to learn how to

* Canonical's customers haven't demanded it of them yet

I find the latter point especially strange given the constant demand for official Snappy support on CentOS and Amazon Linux 2 (which is still not available). Both distributions have SELinux and rely on it for security.

In addition, the majority of snaps are not sandboxed at all anyway, as they operate in "classic" confinement. That is, they're not confined, and have full access to the system, they just use snapd as the delivery system for the software. So even if Snappy confinement actually worked on all Linux distributions, it doesn't matter because most apps delivered through Snappy are entirely unconfined.

Finally, Canonical is the sole arbiter of snaps. You cannot use your own store with it, as snapd is hardwired to Canonical's store. They own all the namespaces, and are the sole publisher. And yet, they have a confusing policy of how they don't consider themselves the distributor of software when they are... It's strange. But because you can't run your own store, you're at their mercy for snap names, available functionality, and so on.

Flatpak, in contrast, is designed to offer the fullest confinement possible with common security features in the mainline Linux kernel that all major distributions offer. Applications register what they need in their manifests, and the Flatpak system handles granting access as appropriate. Flatpak relies on federation through "portals" for interacting with the host environment, and that allows for the applications to have far less direct access than they would normally have. It's basically an Android-like setup, and it seems to work well, though it's still far too coarse for some kinds of applications.
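
You can also inspect and tighten a given app's declared permissions yourself (the app ID below is an example):

    $ flatpak info --show-permissions org.example.App
    $ flatpak override --user --nofilesystem=home org.example.App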

Flatpak lets you run your own repository, so you can implement whatever means you'd like for delivering them, even keyed paywall locations, so that customers who pay get their own access to their own purchases. But most apps probably should be pushed to Flathub, especially if they're free. I think no one has figured out yet how to do paid stuff.

(Disclaimer: I'm a Linux app developer that grudgingly deals with both formats. I'd rather just keep using RPMs myself, as it works well and is reasonably portable.)


> Snaps rely heavily on Ubuntu's specific flavor of AppArmor to be able to offer full confinement,

The AppArmor patches have been largely upstreamed by Canonical, and improvements continue to flow upstream constantly. So claiming it's not being reviewed isn't accurate.

> * Canonical doesn't know how to work with SELinux at all, and doesn't want to learn how to

That's disingenuous. Canonical works with many parties, and has people working on LSM stacking for example precisely to support co-existence of the systems. We also had exchanges in the forum to discuss the implementation of actual backends in snapd to support it, but Canonical indeed won't pay for the cost of implementation until there's a reason to do it. That's business as usual and pretty straightforward.

> In addition, the majority of snaps are not sandboxed at all anyway, as they operate in "classic" confinement.

That's incorrect by a huge margin. I'm curious what you could possibly have based that opinion on? Classic snaps require manual review, which needs to be backed by public justification. You can see every single request floating by in the store category at https://forum.snapcraft.io. That means every snap people push without talking to anyone is not classic -- and that's the vast majority.

> Finally, Canonical is the sole arbiter of snaps.

Well, yes, it has created the project and maintains it actively for years now. You're welcome as a contributor.

> Disclaimer: I'm a Linux app developer that grudgingly deals with both formats. I'd rather just keep using RPMs myself

And I work on snapd (and have also worked on RPM back then, so enjoy :).


>Well, yes, it has created the project and maintains it actively for years now. You're welcome as a contributor.

So, there cannot be a third-party/self-hosted snap store? That seems like a major limitation.


There are self-hosted proxies, and there are publicly hosted stores, but all stores are part of the exact same hierarchy and share some of their knowledge. That's mainly a consequence of implementing the intended user experience as originally designed back then.


> That's disingenuous. Canonical works with many parties, and has people working on LSM stacking for example precisely to support co-existence of the systems.

I'm assuming with "LSM stacking" that you mean having both AppArmor and SELinux operate concurrently on a system, since you can currently have kernels that have both enabled, but only one at a time active.

Are you going to convince Red Hat to enable AppArmor and support stacking SELinux and AppArmor in RHEL? What about helping to maintain AppArmor support in Fedora? Without that piece, that's not a valid or useful solution because you're hoping for something that won't help any of those people (like me!) at all.

I'm pretty sure that everyone will say no to the idea of combining AppArmor with SELinux, since it's basically insane and requires developing and maintaining policies for both that don't conflict with each other. Having written these things for my apps, I wouldn't wish the combination of both on a single system on my worst enemy. That's a lot of security check policies to work through!

> We also had exchanges in the forum to discuss the implementation of actual backends in snapd to support it, but Canonical indeed won't pay for the cost of implementation until there's a reason to do it. That's business as usual and pretty straightforward.

Sure, but if people do keep asking for full support, that implies having SELinux support to enable full confinement. As I said above, unless you intend to actually do the work and convince Red Hat to make the necessary functionality available, you're going to need to support SELinux as a proper backend.

> Well, yes, it has created the project and maintains it actively for years now. You're welcome as a contributor.

I think you missed the point. But sure, maybe. If there wasn't the CLA to get in the way... Why do you have that when you already offer it under a nice copyleft license?


> I'm assuming with "LSM stacking" that you mean

The term "LSM stacking" is public. Search for it and you'll get good material.

> Are you going to convince Red Hat to

That's not how things work. Canonical and RedHat collaborate technically by improving parts of the system as necessary. Things are enabled or not based on market requirements.

> What about helping to maintain AppArmor support in Fedora?

Canonical already does that by working to upstream the patches. That helps Fedora and everybody else too.

> I'm pretty sure that everyone will say no to the idea of combining AppArmor with SELinux

Well, no need to guess.. there are open discussions about it.

> I think you missed the point. But sure, maybe. If there wasn't the CLA to get in the way...

For legal reasons that are not unique to Canonical we do require a pretty straightforward CLA to be signed. I've signed that sort of CLA myself for other large companies, both individually and in the name of Canonical, so the playing field is level here.


> That's not how things work. Canonical and RedHat collaborate technically by improving parts of the system as necessary. Things are enabled or not based on market requirements.

Umm, but your ability to offer useful confinement literally hinges on this since you don't want to do anything else...

So, you'd have to do something to get Red Hat to consider enabling it. Otherwise you're stuck with nothing for Snappy on the most commonly used Linux distribution platform in the commercial space.

>> What about helping to maintain AppArmor support in Fedora?

> Canonical already does that by working to upstream the patches. That helps Fedora and everybody else too.

It doesn't help Fedora at all today, since there is no AppArmor support in the distribution. The user-space tools aren't in there, and the kernels shipped by Fedora do not have AppArmor enabled. So, no, I can categorically say you are wrong there.

> Well, no need to guess.. there are open discussions about it.

Really? Because I searched, and outside of John Johansen's wishful thinking presentations, I've seen no evidence of anyone talking about it seriously. If anything, I've heard people say John is crazy for thinking that this is a reasonable idea.

Care to offer some proof to the contrary? Who knows, you might be right! It seems that the LSM mailing list has no functioning archive, so there could be something there that says otherwise.

> For legal reasons that are not unique to Canonical we do require a pretty straightforward CLA to be signed.

No other major Linux company requires one. Not Red Hat. Not SUSE.


> There is an SELinux policy that attempts to confine snapd itself, which was contributed by the guy working on the Fedora/CentOS package for snapd (though it looks like the policy would also work for openSUSE and Debian SELinux setups, too).

Snappy packager for Fedora here! :)

Yes, it's true there's an SELinux policy that confines snapd, but it does do some limited enforcement of limitations on snaps, too. It's just not as nice as I'd like; that would require snapd to learn how to work with SELinux, which I can't really do myself...

And yeah, I tested the policy on Debian too, it works! It should work on openSUSE too, though it might need a slight tweak.

> In addition, the majority of snaps are not sandboxed at all anyway, as they operate in "classic" confinement.

I don't think it's the majority _per se_ (since Ubuntu Core can't run those), but most of the popular ones likely do.


> I don't think it's the majority _per se_ (since Ubuntu Core can't run those), but most of the popular ones likely do.

No, that's also incorrect if you slice it by popularity. We don't have a public chart easily filtered by these aspects together, but just pick some random samples.

It's also easy to see that based on the low volume of classic snap requests in the forum, vs. the volume of actual snaps published and announced in the open.


My understanding is that we still can't fully confine stuff using Electron (VSCode, Atom, Skype, etc.). Did that change recently?


Not true. The snaps you list are not classic by virtue of being electron. There's plenty of electron apps in the snap store which are not classic, but strictly confined. It's the default for electron apps built with electron-builder.


I don't think that was ever true?

Docs: https://docs.snapcraft.io/build-snaps/electron

Example: snap info electron-quick-start


I thought the seccomp-in-seccomp thing would have messed things up...


It's exactly the same state.


No, not really. Files at ~/.* were never readable or writable for strict snaps, even when they were granted the "home" interface, precisely for that sort of reason. The file permissions and ownership are also strictly checked by the store (no setuid bit issue). Classic snaps depend on manual approval, and need to be acknowledged by the user before being installed for that reason, etc.


Wait a minute, did somebody get so pissed at flatpak that they bought a domain name just to specifically host that single blog post?


Domains are cheap. Often free for one year.


Well they’re “free” if you buy them alongside hosting. Still, even if cheap, that’s quite a commitment: finding a domain, paying for it, writing a custom (albeit simple) website, uploading it, etc...


> that’s quite a commitment: finding a domain, paying it, writing a custom (albeit simple) website, uploading it

Sounds fast to me if you know how. Just write the article (markdown + pandoc is fast) and...

With Zeit you can just type `now` and `now alias (url) mydomain.xyz` - and the website is up and running at your domain for $domain_price + 0.10USD/GB.

With DigitalOcean/Vultr/Some VPS Provider + Ansible you can do something similar.


> Still even if cheap, that’s quite a commitment: finding a domain, paying it, writing a custom (albeit simple) website, uploading it, etc..

I am among the laziest people I know and even I'm raising an eyebrow at this statement.


> Well they’re “free” if you buy them alongside hosting.

Depends on the hosting company. I've "bought" domains for free many times, without hosting, just to run joke sites for a few months.


Ha, interesting! I always assumed a .org was at least a dollar. Mind sharing a link?


I don't have a free ".org" source; maybe there never was one. The free domains I got were under a national TLD.


Guessing some other company involved with packaging formats?


It is far more likely that the website was created by an individual, probably by the guy who submitted the site to Reddit's Linux forum.


Flatpak, Snap, container images are the new iteration of static linking.

Just like with static binaries, they make deployment easier for the developer, but introduce problems with size, duplication and library updates.

Flatpak added runtimes [1] to alleviate the problem. Does this solution look familiar? Yes, it is the same dynamic library concept. We are coming full circle.
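
You can see the sharing on any Flatpak system:

    $ flatpak list --runtime       # runtimes, installed once...
    $ flatpak list --app           # ...and shared by all of these apps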

[1] http://docs.flatpak.org/en/latest/available-runtimes.html


> VLC [...] filesystem=host

See my comment on why it's not easy to fully sandbox software like VLC: https://news.ycombinator.com/item?id=14409234

The author is correct in that flatpak-vlc is not a secure sandbox.


Thanks for sharing the info. I'm just curious - how would splitting VLC into multiple processes solve the permission issue, since the sub-processes will still need access anyway?


Each subprocess would only get one permission, the one that it actually needs. The critical parts (audio decoders, video decoders, parsers) would not get access to $HOME or the network, for example.


Thanks, that makes sense.


By default, flatpaks don't have r/w to your home.

And setuid binaries have been blocked for a while (as the article says). Plus, selinux will have these things locked down on a system that uses selinux.

I think the problem is partly perception. Flatpak, like Docker, is not primarily for security isolation. It's isolation for ease of deployment -- to avoid dependency hell.

Not saying Flatpak's failings are not a problem. Just keep some perspective.


To be honest, this security nightmare also covers other contemporary "container" formats such as docker.

Running docker containers as a non-root user is unfortunately still not a widespread practice. That means that any root process within a docker container has root on the host.


Only if you run with no isolation / user namespace. And even without that, you need to run with `--privileged` to get access to interesting capabilities. It's not as simple as container root == host root.
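
Concretely, the "user namespace" part means remapping container root to an unprivileged host UID; with Docker that's a daemon-wide setting (a sketch of the standard config):

    $ cat /etc/docker/daemon.json
    {
        "userns-remap": "default"
    }
    $ sudo systemctl restart docker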


Are user namespaces enabled by default, or are they something that you have to enable and then spend time dealing with all the containers that weren't written with them in mind?


What a lot of people are missing is that flatpaks make the flatpak author responsible for the security of every package inside the flatpak. If you use a package from an unofficial rpm or deb repo, it is nearly always still dynamically linked, so security updates for things like openssl still apply.


> they're nearly always still dynamically linked, so security updates for things like openssl still apply.

Could be, or might not be. It's easy enough to ship compiled libraries in the same rpm/deb as the software you ship, or to put your outdated versions into the same unofficial repo under a different name and have your application pull from there. In fact, they might not use openssl at all, but some other half-baked library. Of course, that's for languages that are compiled; people can vendor all sorts of stuff into python sources. Don't even get me started on golang.

Installing software from any source involves risk. Distribution repos help mitigate some of that risk. Flatpaks as a technology don't change the risk (significantly) from a bit-rot point of view IMO.


> Distribution repos help mitigate some of that risk. Flatpaks as a technology don't change the risk (significantly) from a bit-rot point of view IMO.

I don't agree. There was a FOSDEM talk by one of my colleagues specifically about this issue, and why Flatpak is walking us backwards in terms of how packaging has worked historically[1]. Distributions are far from perfect (hell, some of them ship my packages and I'm definitely far from perfect) but they do solve real problems and going to a Windows-DLL-dump method of packaging is going in reverse.

If your "package format" makes developers maintain all of their dependencies, it isn't solving the problem that most people actually want you to solve -- to be able to do less work and build on top of other people's work. By using dependencies maintained by your distribution you also get security updates and maintenance for free -- many distributions have paid security and maintenance teams that deal with precisely this. I cannot overstate how much work it is to maintain your own distribution.

[1]: https://www.youtube.com/watch?v=mkXseJLxFkY


I'm with you, I prefer traditional packaging over flatpak. I usually build from source if I need a newer version than what a distro provides (or if the distro doesn't provide it at all).


This is what happens when you try escape "dependency hell" by turning the host into a composition of hosts-per-application.

Now you have a clusterfuck of dependency islands which all need updating and have their own unique sets of CVEs and upstream release schedules and policies.

It's not really flatpak-specific, this is the reality of containers (in the image/rootfs software distribution sense, not namespaces+cgroups) in general.


Flatpak and Flathub are not hiding this, and nowhere does Flathub claim that all its apps are securely sandboxed. Flathub has unofficial packages, and thus it has the same issues as all other unofficial repos.

Flatpak CAN be used to do sandboxing, but that is totally different from saying 'all applications will be securely sandboxed'. I don't know where the author got this idea from.

The simple fact is that sandboxing on a legacy system is difficult, and Flatpak can't magic away many of the security issues in the Linux desktop.

Also, all the 'Red Hat developers are evil' stuff reminds me of the typical systemd-hate rant, and I really hope we don't have to suffer another iteration of this in the open-source community. The person that leads the project works for Red Hat, but it's not a Red Hat project.


I share this writer's concerns with Flatpak. It looks to me like yet another attempt to bring the horribly broken and insecure "download it and drag it to your desktop" model of application distribution, which has long been a source of viruses and malware on Windows and Macs, to Linux.


> bring the horribly broken and insecure "download it and drag it to your desktop" model of application distribution, which has long been a source of viruses and malware on Windows and Macs, to Linux.

Wat?

Windows famously doesn’t have “just drag it to your desktop” to install. There’s an entire segment of the industry around building installers and managers for installation of windows programs.

And I can’t recall a single mac-affecting malware that spread the way you describe except maliciously modified versions of pirated commercial software (eg adobe stuff) which doesn’t actually install via drag and drop anyway - it has an “installer” if its own.


I think the reference is to the fact that on Windows you download some random .exe installer from some place on the internet and trust it, rather than selecting a signed package from a trusted repository that gets automatically updated. Should have been "download it, drag it to your desktop, and install it".


Yes, this is what I was referring to. Sorry for my unclear phrasing.


What? No, it's exactly the opposite! I think you're confusing it with AppImage, which is indeed download-from-the-browser-and-run.

The (long-term) goal of flatpak is that the user would never download and execute things from the browser; everything is updated through the flatpak repos (like the PPAs for .deb), with the addition that the apps are sandboxed and follow a runtime model for dependencies instead of packaging everything or depending on other packages.

Basically the goal is to have something like Android or iOS -- so, exactly the opposite of "download from the browser and run an untrusted executable".


> The (long) goal of flatpak is that the user would never download and execute from the browser

Just to be clear; the "download" model I was describing is not "download and execute the actual app from the browser", it is "download and execute an installer from the browser". Then either clicking on the installer or dragging it somewhere (on Macs it used to be dragging to the desktop, but I haven't used Macs for several OS X versions now) starts the installer.

> everything is updated through the flatpak repos (like the PPAs for .deb)

This I would have no problem with; I would be able to judge whether I trust their PPA the same way I judge any other third-party PPA (or the distro itself, for that matter). And the update would come through the normal mechanism I use to update everything on my system, which has well-tested security measures built in.

> but with the addition that the apps are sandboxed and follow a runtime model for dependency instead of packaging everything or depending on other packages.

I understand the benefits of this as far as fixing dependency hell. But it doesn't seem like the sandboxing part works as advertised.

> Basically the goal is to have something like on Android or IOS

I'm not sure this is a good way to phrase the comparison since it implies not just sandboxing/packaging, but an app store curated by a large corporation whose interests don't align with mine, various broken permissions models, etc.


This isn't any less "broken" than painstakingly adding third-party repositories when your package happens to not be maintained.

In other words, Linux is secure because nobody can ship software on it without going through massive hurdles and because everybody who is smart enough to install software on Linux does some diligence.


> This isn't any less "broken" than painstakingly adding third-party repositories when your package happens to not be maintained.

True, it isn't any less broken than that; it's more broken.

First, adding a third-party repository, and then using your distro's GUI package manager to install an app from that repository, is a lot more work for the average user than clicking on a download link and then dragging the downloaded file to your desktop (or clicking on it to open it and start an install process). That's by design: it should take some work on the user's part to download and install software that hasn't been vetted by their distro. Greatly reducing that work, as Flatpak does, is a bug, not a feature. (See further comments below.)

Second, third party repositories don't promise that their apps are sandboxed; a binary from a third-party repo has the same privileges as any other binary from the distro. Users aren't being told that the third party apps are "more secure". Promising that your apps are sandboxed means they need to actually be sandboxed; disabling the sandbox with default privilege settings breaks that promise. So users get less security than they think they are getting with this model.

> Linux is secure because nobody can ship software on it without going through massive hurdles

Really? Then why are there thousands of open source applications in my distro's package manager? (And that's without installing any third party repositories.)

> everybody who is smart enough to install software on Linux does some diligence.

Nothing can protect a user who is not smart enough to do some due diligence before installing software. So setting up the system to require some due diligence seems like a better idea than removing the due diligence just because users will find that easier, and then claiming that you can still provide security.


> is a lot more work for the average user than clicking on a download link and then dragging the downloaded file to your desktop (or clicking on it to open it and start an install process).

You can totally download binaries from the internet and execute them, as long as any libraries they need are present (or none are needed at all, i.e. statically compiled).

You can also download a .sh installer and execute it to install software; it can even create an icon on your desktop (if you even still have one of those that has icons ;) ). Unfortunately, there's a ton of software that installs like this on Linux.

Edit: Grammar


I agree there's a ton of software out there that wants you to install it this way, not just on Linux but on any OS. My point is simply that I, as a user, am never going to use software that wants me to install it this way. The extra work involved in setting up secure distribution is a feature, not a bug.


> My point is simply that I, as a user, am never going to use software that wants me to install it this way.

I, as a developer, am not sure I care. It's tough for me to care about Linux in the first place (you guys are picky!), but let's say I went through the trouble of maintaining multiple third-party repositories for major distributions, how exactly is that more secure from your perspective? You still have to trust that I don't ship malicious binaries, just as if you just had downloaded the package from my website. Worse yet, you also trust that I maintain all these repositories securely, which means a bigger attack surface for you.

> The extra work involved in setting up secure distribution is a feature, not a bug.

Except it isn't really secure from a technical perspective, it's literally just more work.


> I, as a developer, am not sure I care.

I'm not saying you have to care. If your software is so good that I need to have it, then either my distro will have it, or you'll have set up some kind of distribution infrastructure that I can use securely, or, if I have to, I'll download your source code and build it myself. OTOH, if I don't need your software, and it's not easily available to me securely through my distro, then I just won't use it.

> It's tough for me to care about Linux in the first place (you guys are picky!),

Yep, I sure am. I have to be picky to keep my information secure. Most people don't seem to care about that, which is why they're not as picky as I am. Sooner or later it will bite them.

> let's say I went through the trouble of maintaining multiple third-party repositories for major distributions, how exactly is that more secure from your perspective? You still have to trust that I don't ship malicious binaries, just as if you just had downloaded the package from my website.

If I'm getting binaries from you directly (instead of from my distro's maintainers, who are building binaries from your open source code), then yes, I have to trust them. If downloading them from your website is the only way you'll give them to me, and your software is so good that I need to have it, then I'll end up downloading them from your website. So far, the set of software that is so good I'm willing to do that, and which forces me to do that by giving me no other alternative, is empty.

Also, even supposing downloading from your website is the only alternative you give me, to do that securely, you'll have to use HTTPS, you'll have to sign your binaries with a public key I trust, you'll have to provide signed hashes so I can verify the download, etc.--in other words, all the stuff you'd have to do if you maintained a third-party PPA. The software that is so good that I'd be willing to download it from your website without all those precautions is not only empty, it is inconceivable to me that it will ever be anything other than empty (whereas I can at least conceive it being possible that somebody, sometime, will write software that's so good that I'll go to their website to download, with all of those precautions, if given no other option).

And also again, if you don't supply a third-party PPA that my distro's package manager can pull updates from automatically, how are you going to ship me updates? Are you going to ask me to go to your website every time? Or are you going to reinvent, poorly, the packaging and updating infrastructure that has already been field tested for years by distros?


"Also, even supposing downloading from your website is the only alternative you give me, to do that securely, you'll have to use HTTPS, you'll have to sign your binaries with a public key I trust, you'll have to provide signed hashes so I can verify the download, etc.--in other words, all the stuff you'd have to do if you maintained a third-party PPA."

This is how most professional Windows desktop software is distributed today. Also, you don't need a signed hash if the binaries are code-signed - you can verify that they haven't been tampered with by simply right-clicking on the binary and looking at the cert/SHA-1/SHA-2 signatures.


> ...then I just won't use it.

So far, that seems like a very reasonable compromise for both of us.

> Yep, I sure am. I have to be picky to keep my information secure. Most people don't seem to care about that, which is why they're not as picky as I am. Sooner or later it will bite them.

I don't see your point. If it's about Microsoft's data collection, that's orthogonal to how software distribution works. Otherwise, there's no reason to trust the competence of Canonical or RedHat employees (or even volunteers for other distros) over those of Apple or Microsoft. Either one can mess up, either one can expose your system.

> Also, even supposing downloading from your website is the only alternative you give me, to do that securely, you'll have to use HTTPS, you'll have to sign your binaries with a public key I trust, you'll have to provide signed hashes so I can verify the download, etc.--in other words, all the stuff you'd have to do if you maintained a third-party PPA.

It doesn't stop at PPA, to really support all the other picky Linux guys with their distributions I need to provide dozens of packages built against the dependencies of whichever versions of those distributions are currently in use. That's the actual problem Flatpak is solving. If there was one package format that worked everywhere, it would be a different story. You can trivially download and install (compatible) deb or rpm files as well, why aren't you lamenting that being a security issue?

> And also again, if you don't supply a third-party PPA that my distro's package manager can pull updates from automatically, how are you going to ship me updates?

Your distribution could integrate Flatpak updates into its update mechanism, or you can run them manually or as a cron job.

> Or are you going to reinvent, poorly, the packaging and updating infrastructure that has already been field tested for years by distros?

Personally, the number of times that this "packaging and updating infrastructure" has broken working applications or whole Linux installations leads me to believe that no amount of testing will ever make it work reliably. On the other hand, software that has all its dependencies in one place, where an update consists of overwriting or replacing the installation directory, has rarely failed. On Windows, this is called "portable"; on macOS, this is simply a regular application.


> Personally, the amount of times that this "packaging and updating infrastructure" has broken working applications or whole Linux installations leads me to believe that no amount of testing will ever make it work reliably.

What distributions have you been using? I rarely have a problem with Debian or Fedora in this manner.

> It doesn't stop at PPA, to really support all the other picky Linux guys with their distributions I need to provide dozens of packages built against the dependencies of whichever versions of those distributions are currently in use.

Get your package into Debian and Fedora, other distros might pick it up. If your software is popular enough, someone might volunteer to do the packaging for you. If it's something I care about and not available, I'll compile it (if it's a compiled language). If it's something I care about and it needs to go into production, I'd build and maintain my own rpms or debs internally.


> What distributions have you been using? I rarely have a problem with Debian or Fedora in this manner.

Fedora and especially Arch are big offenders. Debian is so "stable" that I can't install newer software through the provided packages anyway, so that's trading off one failure over another.

> Get your package into Debian and Fedora...

If you stay inside the FOSS bubble, of course maybe some maintainer will eventually spend their precious time packaging some version of your application in some (sometimes broken) fashion. I don't think that's a good solution even for FOSS, but for non-FOSS it's not even on the table.


> Debian is so "stable" that I can't install newer software through the provided packages anyway, so that's trading off one failure over another.

I've already addressed this, it's pretty trivial to recompile most major software packages. You can also pull those packages from testing or unstable.

> but for non-FOSS it's not even on the table.

Sucks to be a proprietary software vendor. You have to do all this hard work for people to not buy your product anyway.


> I've already addressed this, it's pretty trivial to recompile most major software packages.

Is it not obvious that, outside of the Linux bubble, people are not eager to invest their precious time in such things?

> Sucks to be a proprietary software vendor. You have to do all this hard work for people to not buy your product anyway.

Of course the alternative is to just ignore Linux users like most proprietary software vendors do.


> there's no reason to trust the competence of Canonical or RedHat employees (or even volunteers for other distros) over those of Apple or Microsoft.

Yes, there is: Apple and Microsoft have broken people's systems, and leaked their data, multiple times. Microsoft has even shipped virus infected CD-ROMs to customers. RedHat and Canonical have not done those things. So their track record is much better.

> It doesn't stop at PPA, to really support all the other picky Linux guys with their distributions I need to...

You only need to do all that stuff if you insist on providing your own binaries. But the whole point of each distro having its own packaging system is that the distro builds the binaries and packages them. You, the upstream developer, just provide your open source code.

> Personally, the amount of times that this "packaging and updating infrastructure" has broken working applications or whole Linux installations leads me to believe that no amount of testing will ever make it work reliably.

I've never had this problem, so we apparently have had very different experiences.


The sandbox is, in a sense, working; the problem is that the folder the app accesses is more critical than the user thought. We should make sure nothing in home will get executed: no bashrc, no scripts, no executables. In the "ideal" world you would never download & run any script or executable (like on Android or iOS). The only thing the user should be able to do is install and run flatpak apps.

(Of course, in practice, as soon as you are programming a bit you'll want to open a terminal, run scripts, and do stuff outside flatpak.)

As for the second point, well, it's just that updates are not frequent enough? That has nothing to do with the flatpak technology itself, right?


Maybe it's about time we break backwards compatibility and get rid of all dotfiles in ~ and move them to say .local/share/software_name and then start restricting access to those folders?
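
The XDG Base Directory spec already points that way; it's the access restriction that's missing:

    $ echo "${XDG_CONFIG_HOME:-$HOME/.config}"
    $ echo "${XDG_DATA_HOME:-$HOME/.local/share}"
    $ echo "${XDG_CACHE_HOME:-$HOME/.cache}"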


But I still want to be able to use my Flatpak VSCode to edit config files in .local or .config, and I still want to use my Flatpak Gimp to edit images in .local/share/icons.


The chances of those things happening are rather low for the average user; a prompt would cover those cases and would be a way better solution than just allowing full access to ~ and all dotfiles.


There are WIP portals that, instead of giving access to the whole filesystem or the whole device list, would show the user a prompt to allow access to a specific device or a specific system feature. Take a music player, for example: you can give it access to the `xdg-music` folder only. But then users will start complaining that their music is stored somewhere else, and they will want full access to their home folder, or to the device list to play music from an external hard drive or whatever.

Things are not perfect, yeah, but many of those apps were not made with a sandboxed environment in mind. There are a bunch of new apps that were created with that in mind and use those portal features. Things are getting better -- slowly maybe, but surely! The Flatpak packages will improve with time, and we will get a better way of distributing apps safely and easily on Linux.


I don't see any real benefits to using flatpak in its current form. You get worse integration with the rest of the system, poor tools that are inferior to your system's package manager, and currently no real security benefits. What's the point of launching all these rubbish proprietary apps on the store with no real sandboxing? It creates a false sense of security, which is worse. All the proprietary apps will do what they have always done; they've just been ported to flatpak as thin wrappers. If you get the dropbox flatpak, it will continue to litter your home directory with hidden files.


I use flatpak on my system because it has packages I can't easily and officially get through my package manager. That's the only selling point!


Yes, that's kind of true. You can get newer versions of things slightly more easily. It can be a major point if your distro is getting quite old (like RHEL or even xenial), but if you keep up with a two-year cycle of distro updates like Ubuntu LTS, you rarely hit that scenario.


That's definitely true! It's also good for smaller distros with smaller package managers. It's easier for a dev team to support just Flatpak than to bundle/petition for pacman, apt, yum, eopkg, and so on.

I'd greatly prefer 'native' packages though.


That's a pretty huge selling point - I think it was the main reason for creating them.

The flaw this post seems to highlight is that they shouldn't claim that they are sandboxed when they aren't. But even without sandboxing they're still a good thing.


flatpak is still early.. most apps are still installed from package managers, and few apps are written with flatpak in mind.

The current situation is probably not much worse than installing from various third party package archives.

I suspect things will get better as adoption catches on; it's no surprise that early-stage open source software has rough edges.


Trust me, nobody cares. The regular user doesn't go past "I can install Spotify in one click". It has always been like that and it always will be. With more "normies" coming to Linux, there will be more stuff like this. That is exactly why people were saying that Linux has no viruses only thanks to its lower popularity.

And in all honesty - how many of us actually read the script files when installing something from the AUR?
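
For anyone who does want to look before leaping, the AUR at least makes it easy; something like this (package name illustrative):

    git clone https://aur.archlinux.org/some-package.git
    cd some-package
    less PKGBUILD *.install   # arbitrary shell that runs on your machine
    makepkg -si               # build and install only after reading it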

But hey, nobody forces you to use it! You can always stick to your distro's repos or go the DIY way.


Downvoted without replies... Does anybody find the statement that most users don't spend much time ensuring the security of their machine offensive or incorrect? I know only myself and a few other fairly geeky people who do that...


Random Mint 18.04 user here.

I just noticed about a week ago that flatpak was downloading huge amounts of data (like hundreds of MBs). I had all system updates set to download and install manually, and hadn't installed anything new for at least a month. All this data was getting sent to /var/tmp as flatpak-cache folders.

I don't know what it was doing but I've since turned it off in the startup list. Any ideas? What are the chances this was malicious?


I'm not a flatpaker but maybe it was updating the runtimes? http://docs.flatpak.org/en/latest/available-runtimes.html
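
If you want to check that theory, something like this should show what it was fetching (the du glob matches the cache folders you mentioned):

    flatpak list --runtime            # which runtimes are installed
    flatpak remote-ls --updates       # what updates are pending
    du -sh /var/tmp/flatpak-cache-*   # what the cache is holding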

Honestly, I would be more worried about running Mint (a topic not suited for this thread; a quick search will probably turn up what I've read about it in the past).


What's the cross-platform experience like for packagers/builders? It seems you can't do cross-compilation, so expecting upstream to provide flatpaks for users means expecting upstream to manage virtual or real machines for every [desired] CPU architecture, with all the annoying host-distro installation/management, networking, etc.?

At least with the current model, upstream developers can offload most of that to distribution packagers.


In reality every desired CPU architecture is x86-64.


I have GNU/Linux desktops on armv7 and aarch64, and an i686 laptop. So with that attitude, flatpak is mostly worthless to me.


That's not reality at all. Many laptops run ARM CPUs now. Especially on Linux.


> A high severity CVE-2017-9780 (CVSS Score 7.2) has indeed been assigned to this vulnerability. Flatpak developers consider this a minor security issue.

The flatpak release notes speak of a "minor security update", where "minor" surely means "of small size", not "of little importance" as the OP would have it. Though the text could stand to convey a bit more of that importance.


Disclaimer: I work for Red Hat

I strongly oppose this kind of attack by people hiding their identity. No matter how valid the criticism might be, it goes against all the ethics of Open Source.

Just like the Devuan folks who vigorously attacked systemd and Lennart.

This must not be tolerated.

Yes, I’m very upset.


Why is it problematic when people hide their identity? I mean, if the criticism is valid, what does it matter who said it?

I think hiding one's identity is not the real problem here. To me, it looks more problematic that the critique is not very constructive, is one-sided, and is loaded with imputations.


> I think hiding one's identity is not the real problem here. To me, it looks more problematic that the critique is not very constructive, is one-sided, and is loaded with imputations.

And I think there is a strong correlation between those two things.

It definitely makes it impossible to enter into a productive discussion about how flatpak could work better. As it stands, this rant is useless, aimed at being destructive, and simply unacceptable. IMHO.


> And I think there is a strong correlation between those two things.

There might be a correlation between those in this case.

But just because it - perhaps (I haven't actually read the "article") - applies in this case, that's a far cry from being a general rule.

Think of it in terms of - for example - a Muslim speaking up against oppression in their home country, or a Tibetan speaking up against China. Should they not be allowed to do so anonymously?

It is possible in most cases to judge merit on content/argument alone.

It's very difficult to infer someone's motive, whether you know who they are or not. In most cases, you would be mistaken.

I find it helpful to remind myself that most people do what they do out of love, even if their actions are/seem utterly insane, or are/seem destructive.


Everything has some valid criticism, but if one side pushes their particular criticism a lot, people might think it is worse than the alternatives.

For example if this site is by the Snap developers...


That is not like the Devuan folks, who do not hide their identities.

* https://devuan.org/os/team/


Is the sandboxing of flatpak more or less secure than docker?


They use a lot of similar techniques. One big difference is that docker uses user namespaces and flatpak does not. I'm not sure about the reasoning, but it's probably a combo of "not trusting user namespaces" (disagree) and user namespaces requiring privileges to use.

It sounds like the bigger issue isn't that the underlying technologies are fundamentally better or worse, but that the de facto configurations are worse. In particular, the median docker container can not write to my home directory. The median flatpak can.

Despite the ordering, "no updates" seems like a way worse issue than "most of the sandboxing is ineffective". It seems pretty clear to me that a lot of apps need wide access, and the first person who does a great job at that will do us all a big security favor, but we're not there yet in terms of UX. Sometimes I really want my text editor to edit my bashrc. Maybe that should require a privilege escalation; that's fine.


> docker uses user namespaces

Docker has support for user namespaces but it's off by default, and I've never actually seen someone use them (I'm sure people do, but the way the support was implemented is fairly half-baked in a variety of ways, for a variety of understandable but still disappointing reasons).

LXC/LXD's user namespace implementation actually privilege-separates different containers from each other (while also being able to "punch out" parts of the mapping so that you can share stuff between containers without needing to share the entire uid_map).

> user namespaces requiring privileges to use

Not always. See https://github.com/rootlesscontainers (a project I work on -- currently you can run Kubernetes as an unprivileged user with some caveats about multi-node setups but we're working on it) or LXC's unprivileged containers.

And in cases where you need to have multi-user mappings (which isn't necessary for most user applications because they wouldn't be able to setuid anyway!) you can just use "newuidmap" and "newgidmap".

In fact, bubblewrap has supported precisely this usecase and the use of user namespaces for a while. Of course, user namespaces wouldn't really help with protecting against home directory attacks -- if you're running as the same user (but in a user namespace) and you bind-mount the home directory then it can obviously write to said home directory.
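
To make that concrete, here's a minimal sketch (assuming util-linux's unshare and a kernel with unprivileged user namespaces enabled; usernames, uid ranges, and output are illustrative):

    # a single-uid mapping needs no privileges at all
    $ unshare --user --map-root-user id
    uid=0(root) gid=0(root) groups=0(root)

    # multi-uid mappings go through the setuid helpers, driven by /etc/subuid
    $ grep "$USER" /etc/subuid
    alice:100000:65536

And note that the "root" above is only root inside the namespace; on the host it is still your own unprivileged uid, which is exactly why it doesn't stop writes to a bind-mounted home.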


Yes. The sandbox tool used by Flatpak is "bubblewrap", which has an overview here: https://github.com/projectatomic/bubblewrap/blob/master/READ...

There is nothing stopping Flatpak from using user namespaces once the developers feel a bit more comfortable with them, though.


Bubblewrap supports user namespaces and has for a while -- grep through the source for CLONE_NEWUSER. I talk about the security concerns a bit in [1].

[1]: https://news.ycombinator.com/item?id=18181034


Are you one of the developers/speaking for them? That warning is pretty old.


Creation of user namespaces has still caused security vulnerabilities in very recent history. But with seccomp you can disable it inside a container (which is what Docker and LXC do by default, for instance), and it doesn't make sense to worry about that as a container runtime, because you are using it to increase the security of your sandbox.
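
A quick way to see that in action (image choice illustrative, and the exact error text varies by unshare implementation):

    # Docker's default seccomp profile blocks the unshare()/clone() flags
    # needed to create a user namespace from inside the container
    $ docker run --rm alpine unshare --user id
    unshare: unshare(0x10000000): Operation not permitted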


> I'm not sure about the reasoning, but it's probably a combo of "not trusting user namespaces" (disagree) and user namespaces requiring privileges to use.

binctr looks like an interesting solution to tackle this issue.

[1] https://blog.jessfraz.com/post/getting-towards-real-sandbox-...

[2] https://github.com/genuinetools/binctr

[3] https://news.ycombinator.com/item?id=18180276


Or https://rootlesscontaine.rs/ [1]. runc has had upstream support for this for quite a while (binctr predates it by a bit, but the LXC support for it predates all of this by several years). If you want to run this in production, please use this -- or LXC -- rather than the PoC that Jess wrote a few years ago. umoci[2] also has rootless support (though it doesn't use user namespaces) for image manipulation (extraction and diff generation).

I worked quite a bit on getting this userspace stuff together (though of course the kernel work was done by much more clever people than myself :P).

[1]: https://github.com/rootlesscontainers

[2]: https://github.com/openSUSE/umoci


What's better for holding water, a colander or a sieve?

Which is to say: once you hand out the privileges laid out in the article, it really doesn't matter which piece of software you used to whitelist "everything".


As I understand it from reading this page, it doesn't actually matter much how secure the sandbox is, since many applications effectively disable it.


And many docker users run privileged containers, because then they don't need to troubleshoot permissions. It doesn't mean the underlying system is flawed just because people take the lazy way around it.

I'm thinking of all the blogs from a few years ago about setting things up on CentOS. Step 1: disable SELinux. That was never recommended, but the blog writers didn't want to go into the details of managing SELinux, or couldn't understand it.


You're right! It's not the fault of the underlying system, it's the fault of the lazy people who work around it trivially.

With that said, some people might consider a system that is much easier to trivially work around than to use properly to be one possessed of a wonderful, glorious, bountiful collection of opportunities to improve its design. Such systems are not bad! Not by any means! They just could, perhaps, be somewhat better.

All of that said, I do think a sandbox-based system probably shouldn't allow things inside the sandbox to say "Don't sandbox me bro". That seems less than maximally wise, even if it does also seem super convenient.


I've almost never had to run a privileged container, and I avoid it whenever possible.

As far as I have seen, privileged container use is rare. What led you to the assumption that it isn't?


It should also be noted that bind-mounting docker.sock is equivalent to (or much worse than -- it's easier to exploit, at least) using privileged containers, and an exceptionally large number of people do this (you see it in many blog posts and project installation scripts).
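
To see why it's worse, a sketch of the escalation (image names illustrative): anything inside the container can talk to the host daemon through that socket and ask it for a fully privileged container with the host's root filesystem mounted:

    docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker \
        docker run --rm --privileged -v /:/host alpine chroot /host id

No exploit required; it's just using the Docker API as designed.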


> Almost all popular applications on flathub come with filesystem=host, filesystem=home or device=all permissions

Aren't other sandboxes (like Ubuntu's Snap) the same?
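
(For what it's worth, you can at least inspect what a given snap has been granted; snap's "home" and "removable-media" interfaces are rough analogues of flatpak's filesystem=home and device=all, though "home" notably excludes hidden files like ~/.bashrc. The snap name below is just an example:)

    snap interfaces spotify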


Not on macOS.


Coming out with yet another deployment format and then expecting maintainers to include you is wishful thinking.


"Almost all popular applications on flathub come with filesystem=host, filesystem=home or device=all permissions, that is, write permissions to the user home directory (and more), this effectively means that all it takes to "escape the sandbox" is echo download_and_execute_evil >> ~/.bashrc. That's it."

No shit, installed applications can write to the filesystem. What an exceptional security hole that only affects flatpak and literally every other form of installing those same programs outside of a sandbox.


If we could get applications to switch to APIs backed by xdg-desktop-portal, they wouldn't even get access to the host/home filesystems. They talk to a service which sends a FUSE FD to the app, and that overwrites the original file when close()d (the app never gets direct filesystem access).

Gtk added GtkFileChooserNative for just this purpose. It implements the same file-chooser interface as the other dialogs, so in many cases it's a couple-line change for apps. Sadly, it can't be done automatically, for various API/ABI reasons.
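
You can see the plumbing on a system with the portals running: files handed to a sandboxed app surface through the document portal's FUSE mount rather than under their real path (uid and mount options illustrative, output abbreviated):

    $ mount | grep portal
    portal on /run/user/1000/doc type fuse.portal (rw,nosuid,nodev,...)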


A document editor will want to write to the user's documents directory.

Is the problem here that the user's home dir both contains the typical locations for user data and also spots where a file with the right name, such as ~/.bashrc, will automatically get executed?

Wouldn't it be better if the app only had permission to write to an actual user documents/data directory, rather than the home root? Which raises the next question: is there a standardized user document/data directory for desktop Linux that isn't just "~"?
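
(There is, sort of: xdg-user-dirs. Assuming that package is installed, it resolves the well-known per-user directories; paths and username below are illustrative:)

    $ xdg-user-dir DOCUMENTS
    /home/alice/Documents
    $ cat ~/.config/user-dirs.dirs    # the definitions behind it
    XDG_DOCUMENTS_DIR="$HOME/Documents"
    ...

But nothing forces apps to honor it, which is rather the point of this whole thread.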


> Wouldn't it be better if the app only had permission to write to an actual user documents/data directory...

Well, would it? If the app went rogue, it could still encrypt all your documents.

Either way, dealing with that is not the job of package managers or software installers. They cannot solve the fundamental design issues of software written without a finer-grained permission model in mind.


Yes, outside of a sandbox that's expected. But you don't sell something as secure and sandboxed when it's not. A flatpak that shares any part of the host filesystem with the app should be marked as insecure at install and/or launch time.


> Yes, outside of a sandbox that's expected. But you don't sell something as secure and sandboxed when it's not.

It isn't being sold as "secure". It's sandboxed in the same way that a Python virtual environment is sandboxed, i.e. you're not messing with the system software installation. Real security sandboxing is a completely orthogonal feature that package managers don't deliver either.


Honestly, that's not what I would expect "sandboxed" to mean. By that definition, installation into /opt/$vendor/$software/$version would also be sandboxed.

