But Musl is only available on Linux, isn't it? Cosmopolitan (https://github.com/jart/cosmopolitan) goes further and is also available on Mac and Windows, and it uses e.g. SIMD and other performance-related improvements. Unfortunately, one has to cut through the marketing "magic" to find the main engineering value; stripping away the "polyglot" shell-script hacks and the "Actually Portable Executable" container (which are undoubtedly innovative), the core value proposition of Cosmopolitan is a platform-agnostic, statically linked C standard (plus some POSIX) library that performs runtime system call translation: so to speak, "the Musl we have been waiting for".
Really, what would the world look like if this problem had been properly solved? Would the centralization and monetization of the Internet have followed the same path? Would Windows be so dominant? Would social media have evolved to the current status? Would we have had a chance to fight against the technofeudalism we're headed for?
The trick? It's not statically linked, but dynamically linked. And it doesn't link with anything other than glibc, X11 ... and bdb.
At this point I think people just do not know how binary compatibility works at all. Or they refer to a different problem that I am not familiar with.
You wouldn't believe how many old binaries broke. Lots of ABI bumps: libpng, ncurses, heck, even stuff like readline and libtiff all changed just enough for linker errors to occur.
Ironically, all the statically compiled stuff was fine. Small things that, as you mention, only link to glibc and X11 were fine too. Funnily enough, grabbing some old .so files from the RHEL 7 install and dumping them into LD_LIBRARY_PATH also worked better than expected.
But yeah, now that I'm writing this out, glibc was never the problem in terms of forwards compatibility. Now running stuff compiled on modern Ubuntu or RHEL 10 on the older OS, now that's a whole different story...
Why "better than expected"? I can run the entire userspace from Debian Etch on a kernel built two days ago... some kernel settings need to be changed (because of the old glibc! but it's not glibc's fault: it's the kernel who broke things), but it works.
> Now running stuff compiled on modern Ubuntu or RHEL 10 on the older OS, now that's a whole different story...
But this is a different problem, and no one makes promises here (not the kernel, not musl). So all the talk of statically linking with musl to get that kind of compatibility is bullshit (at some point, you're going to hit a syscall/instruction/whatever that the newer musl uses and that the older kernel/hardware does not support).
I remember this heated LKML exchange from 13 years ago; look how the tables have turned:
>
> Are you saying that pulseaudio is entering on some weird loop if the
> returned value is not -EINVAL? That seems a bug at pulseaudio.
Mauro, SHUT THE FUCK UP!
It's a bug alright - in the kernel. How long have you been a maintainer? And you still haven't learnt the first rule of kernel maintenance?
If a change results in user programs breaking, it's a bug in the kernel. We never EVER blame the user programs. How hard can this be to understand?
To make matters worse, commit f0ed2ce840b3 is clearly total and utter CRAP even if it didn't break applications. ENOENT is not a valid error return from an ioctl. Never has been, never will be. ENOENT means "No such file and directory", and is for path operations. ioctl's are done on files that have already been opened, there's no way in hell that ENOENT would ever be valid.
> So, on a first glance, this doesn't sound like a regression,
> but, instead, it looks tha pulseaudio/tumbleweed has some serious
> bugs and/or regressions.
Shut up, Mauro. And I don't _ever_ want to hear that kind of obvious garbage and idiocy from a kernel maintainer again. Seriously.
I'd wait for Rafael's patch to go through you, but I have another error report in my mailbox of all KDE media applications being broken by v3.8-rc1, and I bet it's the same kernel bug. And you've shown yourself to not be competent in this issue, so I'll apply it directly and immediately myself.
WE DO NOT BREAK USERSPACE!
Seriously. How hard is this rule to understand? We particularly don't break user space with TOTAL CRAP. I'm angry, because your whole email was so _horribly_ wrong, and the patch that broke things was so obviously crap. The whole patch is incredibly broken shit. It adds an insane error code (ENOENT), and then because it's so insane, it adds a few places to fix it up ("ret == -ENOENT ? -EINVAL : ret").
The fact that you then try to make excuses for breaking user space, and blaming some external program that used to work, is just shameful. It's not how we work.
Fix your f*cking "compliance tool", because it is obviously broken. And fix your approach to kernel programming.
Linus
And that doesn't require using newer functionality.
And this has nothing to do with 1996, or 2004 glibc at all. In fact, glibc makes this otherwise impossible task actually possible: you can force linking against older symbol versions, but that solves only a fraction of the problem of what you're trying to achieve. Statically linking / musl does not solve this either. At some point musl is going to use a newer syscall, or some other newer feature, and you're broken again.
Also, what is so hard about building your software on your "security updates only" server? Or a chroot of it at least? As I was saying below, I have a Debian 2006-ish chroot for this purpose....
In my experience, that's not quite accurate. I'm working on a GUI program that targets Windows NT 4.0, built using a Win11 toolchain. With a few tweaks here and there, it works flawlessly. Microsoft goes to great lengths to keep system DLLs and the CRT forward- and backward-compatible. It's even possible to get libc++ working: https://building.enlyze.com/posts/targeting-25-years-of-wind...
The problem is with the Linux dynamic linking, and the idea that you must not statically link the glibc code. And you can circumvent it by freezing your glibc abstraction interface, so that if you need to add another function, you do so by making another library entirely. But I don't know if musl does that.
If you want to go to that level, ELF is also forward compatible in that sense.
This is completely irrelevant, because what the developer is going to see is that the binaries he builds on XP SP3 no longer work on XP SP2 because of a link error: the _statically linked_ runtime is going to call symbols that are not in XP SP2 DLLs (e.g. the DecodePointer debacle).
> If you use the Linux kernel directly, it is forward compatible in that sense.
Or not, because there will be a note in the ELF headers with the minimum kernel version required, which is going to be set to a recent version even if you do not use any newer feature (unless you play with the toolchain). (PE has a similar field too, leading to the "not a valid win32 executable" messages.)
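You can inspect that note yourself; a rough sketch (the binary name is a placeholder, and the exact output layout varies by binutils version):

    # show the ELF notes; the GNU ABI tag records the minimum kernel version
    readelf -n ./myprog
    # look for a line like:  OS: Linux, ABI: 3.2.0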
> And, of course, there is no issue at all with statically linked code.
I would say statically linked code is precisely the root of all these problems.
In addition to bringing more problems of its own. E.g. games that dynamically link against SDL can be patched to use any other SDL version, including one with bugfixes for X support, audio, etc. Games that statically link SDL? Sorry..
> And you can circumvent it by freezing your glibc abstraction interface, so that if you need to add another function, you do so by making another library entirely. But I don't know if musl does that.
Funnily, I think that is exactly the same as the solution I'm proposing for this conundrum: just (dynamically) link with the older glibc ! Voila: your binary now works with glibc from 1996 and glibc from 2026.
Frankly, glibc is already the project with the best binary compatibility of the entire Linux desktop, if not the only one with a binary compatibility story at all. The kernel is _not_ better in this regard (e.g. /dev/dsp).
I know, I've done that.
> just (dynamically) link with the older glibc!
Except that the older glibc is unmaintained and very hard to get a hold of and use. If you solve that, yeah, it's the same.
No, you can't. When you use a Windows 7-era toolchain (e.g. VS 2012), it sets the minimum client version in the PE header to Vista, not XP, much less 2k.
If you use VC++6 in 7, then yes, you can; but that's not really that different from me using a Debian Etch chroot to build.
Even within XP era this happens, since there are VS versions that target XP _SP2_ and produce binaries that are not compatible with XP _SP1_. That's the "DecodePointer" debacle I was mentioning. _Even_ if you do not use any "SP2" feature (as few as they are), the runtime (the statically linked part; not MSVCRT) is going to call DecodePointer, so even the smallest hello world will catastrophically fail in older win32 version.
Just Google around for hundreds of confused developers.
> Except that the older glibc is unmaintained and very hard to get a hold of and use.
"unmaintained" is another way of saying "frozen" or "security updates only" I guess. But ... hard to get a hold of ? You are literally running it on your the "security updates only" server that you wanted to target in the first place!
Yes, you can! There are even multiple Windows 10 era toolchains that officially support XP. VS 2017 was the last release that could build XP binaries.
How would that work given that glibc has gone through a soname change since then? If it's from 1996 are you sure the secret isn't that it uses non-g libc?
That suggests someone went to significantly more effort than "just dynamically link it".
If you are talking about _any_ other library, yes, that is a problem. My point is that glibc is the only one who even has a compatibility story.
Lots of tech companies and organizations have created artificial barriers to entry.
For example, most people own a computer (their phone) that they cannot control. It will play media under the control of other organizations.
The whole top-to-bottom infrastructure of DRM was put into place by hollywood, and then is used by every other program to control/restrict what people do.
tl;dw Google recognizes the need for a statically linked, modular, latency-sensitive, portable POSIX runtime, and they are building it.
I don't want Lua. Using Lua is crazy clever, but it's not what I want.
I should just vibe code the dang thing.
I have a devcontainer running the Cosmopolitan toolchain and stuck the cosmocc README.md in a file referenced from my AGENTS.md.
Claude does a decent job. You have to stay on top of it when it's writing C; it's easy for it to turn into spaghetti.
Also, the fat binary concept trips up agents - just have it read the actual cosmocc file itself to figure out any issues.
The things I know of and can think of off the top of my head are:
1. appimage https://appimage.org/
2. nix-bundle https://github.com/nix-community/nix-bundle
3. guix via guix pack
4. A small collection of random small projects hardly anyone uses for docker to do this (i.e. https://github.com/NilsIrl/dockerc )
5. A docker image (a package that runs everywhere, assuming a docker runtime is available)
6. https://en.wikipedia.org/wiki/Snap_(software)
AppImage is the closest to what you want I think.
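For reference, the basic AppImage workflow is just packing an AppDir into a single file; a sketch assuming you already have a conventional MyApp.AppDir laid out (names are placeholders):

    # appimagetool turns an AppDir into a single self-mounting executable
    ./appimagetool-x86_64.AppImage MyApp.AppDir MyApp-x86_64.AppImage
    ./MyApp-x86_64.AppImage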
A "works in most cases" build should also be available for that that it would benefit. And if you can, why not provide specialized packages for the edge cases?
Of course, don't take my advice as-is, you should always thoroughly benchmark your software on real systems and choose the tradeoffs you're willing to make.
'Noticeably slower' at what? I've run, e.g., xemu (the original Xbox emulator) both manually built from source and via an AppImage-based release, and I never noticed any difference in performance. Same with other AppImage-based apps I've been using.
Do you refer to launching the app or something like that? TBH I cannot think of any other way an AppImage would be "slower".
Also, from my experience, applications released as AppImages have been by far the most consistent at "just working" on my distro.
Been doing it this way for years now, so it's well battle tested.
I wonder though, if I package say a .so file from nVidia, is that allowed by the license?
https://docs.appimage.org/reference/best-practices.html#bina...
There are several automation tools to make AppImages, but they won't magically allow you to compile on the latest Fedora and have your executable work on Debian Stable. It still requires quite a lot of manual labor.
It won't work: drivers usually require an exact (or more-or-less the same) kernel module version. That's why you need to explicitly exclude graphics libraries from being packaged into the AppImage. This also makes it non-runnable on musl if it was built against glibc.
https://github.com/Zaraka/pkg2appimage/blob/master/excludeli...
It makes me wonder, does the OS still take its job of hardware abstraction seriously these days?
Any .so from nvidia is supposed to be one of those things. Because it also depends on the drivers etc.. provided by nvidia.
Also, on a side note, a lot of .so files also depend on other files in /usr/share, /etc, etc...
I recommend using an AppImage only for the happy-path application frameworks they support (e.g. Qt, Electron, etc.). Otherwise you'd have to manually verify that all the libraries you're bundling will work on your users' distros.
You generally still have to abide by license obligations for OSS too, e.g. the GPL.
To be specific for the example, Nvidia has historically been quite restrictive here (only with approval). Firmware has only recently been opened up a bit, and drivers continue to be an issue, iirc.
You can change the rpath though, which is sort of like an LD_LIBRARY_PATH baked into the object, which makes it relatively easy to bundle everything but libc with your binary.
edit: Mild correction, there is this: https://sourceforge.net/projects/statifier/ But the way this works is that it has the dynamic linker load everything (without ASLR / in a compact layout, presumably) and then dumps an image of the process. Everything else is just increasingly fancy ways of copying shared objects around and making ld.so prefer the bundled libraries.
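As for the rpath route, a minimal sketch (names and layout are hypothetical; $ORIGIN expands to the executable's own directory at load time):

    # bake a relative library search path into the binary at link time
    gcc -o myapp main.o -L./lib -lfoo -Wl,-rpath,'$ORIGIN/lib'
    # or patch an already-built binary (requires patchelf)
    patchelf --set-rpath '$ORIGIN/lib' myapp
    # then ship myapp together with lib/libfoo.so.* in one directory tree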
It works surprisingly well but their pricing is hidden and last time I contacted them as a student it was upwards of $350/year
https://appimage.github.io/appimagetool/
Myself, I've committed to using Lua for all my cross-platform development needs, and in that regard I find luastatic very, very useful ..
But you can't take .so files and make one "static" binary out of them.
Yes you can!
This is more-or-less what unexec does
- https://news.ycombinator.com/item?id=21394916
For some reason nobody seems to like this sorcery, probably because it combines the worst of all worlds.
But there's almost[1] nothing special about what the dynamic linker is doing to get those .so files into memory that it can't arrange them in one big file ahead of time!
[1]: ASLR would be one of those things...
There is no universal, working way to do it. Only some hacks which work in some special cases.
Nonsense. xemacs could absolutely call dlopen.
> There is no universal, working way to do it. Only some hacks which work in some special cases.
So you say, but I remember not too long ago you weren't even aware it was possible, and you clearly didn't check one of the most prominent users of this technique, so maybe you should also explain why I or anyone else should give a fuck about what you think is a "hack"?
But I'm always a bit sceptical about such approaches. They are not universal. You still need glibc/musl to be the same on the target system. Also, if you compile against a new glibc version but try to run on an old glibc version, it might not work.
These are just strange and confusing from the end users' perspective.
Why would you include most of your dynamic libraries but not your libc?
You could still run into problems if you (or your libraries) want to use syscalls that weren't available on older kernels or whatever.
- either you use chroot, proot or similar to make /lib path contain your executable’s loader
- or you hardcode different loader path into your executable
Both are difficult for an end user.
Bonus points if you add compression or encryption and manage to trip a virus scanner or three. [1]
[0] https://grugq.github.io/docs/ul_exec.txt
[1] https://blackhat.com/presentations/bh-usa-07/Yason/Whitepape...
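A sketch of the second option (explicitly invoking a bundled loader); paths are hypothetical and architecture-specific:

    # run the app through the bundled glibc loader, using only the bundled libraries
    ./bundle/lib/ld-linux-x86-64.so.2 --library-path ./bundle/lib ./bundle/myapp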
# ${executable} is the absolute path of the program you want to bundle
mkdir chroot
cd chroot
# copy every shared library the dynamic linker would load (ldd resolves recursively)
for lib in $(ldd "${executable}" | grep -oE '/\S+'); do
    tgt="$(dirname "${lib}")"
    mkdir -p ".${tgt}"
    cp "${lib}" ".${tgt}"
done
mkdir -p ".$(dirname "${executable}")"
cp "${executable}" ".${executable}"
tar czf ../chroot-run-anywhere.tgz .
E.g. your app might just depend on libqt5gui.so, but that libqt5gui.so might depend on some libxml etc...
Not to mention all the files from /usr/share etc... That your application might indirectly depend on.
ldd works recursively.
> Not to mention all the files from /usr/share
Well yeah, there obviously cannot be a generic way to enumerate all the files a program might open...
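The closest rough approximation I know of is tracing what the program actually opens while you exercise it; a sketch (it only catches the code paths you hit, and the binary name is a placeholder):

    # log every file the program (and its children) opens during a run
    strace -f -e trace=open,openat -o opened.log ./myapp
    # list the successfully opened paths
    grep -v ENOENT opened.log | grep -oE '"/[^"]+"' | sort -u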
https://github.com/sigurd-dev/mkblob https://github.com/tweag/clodl
made hooking into game code much easier than before
[1] https://devblogs.microsoft.com/oldnewthing/20110921-00/?p=95...
[2] https://devblogs.microsoft.com/oldnewthing/20221109-00/?p=10...
Even worse are containers, which have the disadvantages of both.
In practice, a statically linked system is often smaller than a meticulously dynamically linked one - while there are many copies of common routines, programs only contain tightly packed, specifically optimized and sometimes inlined versions of the symbols they use. The space and performance gain per program is quite significant.
Modern apps and containers are another issue entirely - linking doesn't help if your issue is gigabytes of graphical assets or using a container base image that includes the entire world.
When dynamically linking against shared OS libraries, updates are far quicker and easier.
And as for the size advantage, just look at a typical Golang or Haskell program. Statically linked, two-digit megabytes, larger than my libc...
In decades of using and managing many kinds of computers, I have seen only a handful of dynamic libraries for which security updates have been useful, e.g. OpenSSL.
On the other hand, I have seen countless problems caused by updates of dynamic libraries that have broken various applications, not only on Linux, but even on Windows and even for Microsoft products, such as Visual Studio.
I have also seen a lot of space and time wasted by the need to have a great number of versions of the same dynamic library installed on the same system, through various hacks, in order to satisfy the conflicting requirements of various applications. I have also seen systems bricked by a faulty update of glibc when they did not have any statically-linked rescue programs.
On Windows such problems are much less frequent only because a great number of applications bundle, in their own directory, the desired versions of various dynamic libraries, and Windows is happy to load those libraries. On UNIX derivatives this usually does not work, as the dynamic linker searches only standard places for libraries.
Therefore, in my opinion static linking should always be the default, especially for something like the standard C library. Dynamic linking shall be reserved for some very special libraries, where there are strong arguments that this should be beneficial, i.e. that there really exists a need to upgrade the library without upgrading the main executable.
Golang is probably an anomaly. C-based programs are rarely much bigger when statically linked than when dynamically linked. "printf" alone is typically implemented in such a way that it pulls a lot of code into any statically-linked program, which is why C standard libraries intended for embedded computers typically provide special lightweight "printf" versions to avoid this overhead.
> On the other hands, I have seen countless problems caused by updates of dynamic libraries that have broken various applications,
OpenSSL is a good example of both useful and problematic updates. The number of updates that fixed a critical security problem but needed application changes to work was pretty high.
In the most security-forward roles I've worked in, the vast, vast majority of vulnerabilities identified in static binaries, Docker images, Flatpaks, Snaps, and VM appliance images fell into these categories:
1. The vendor of a given piece of software based their container image on an outdated version of e.g. Debian, and the vulnerabilities were coming from that, not the software I cared about. This seems like it supports your point, but consider: the overwhelming majority of these required a distro upgrade, rather than a point dependency upgrade of e.g. libcurl or whatnot, to patch the vulnerabilities. Countless times, I took a normal long-lived Debian test VM and tried to upgrade it to the patched version and then install whatever piece of software I was running in a docker image, and had the upgrade fail in some way (everything from the less-common "doesn't boot" to the very-common "software I wanted didn't have a distribution on its website for the very latest Debian yet, so I was back to hand-building it with all of the dependencies and accumulated cruft that entails").
2. Vulnerabilities that were unpatched or barely patched upstream (as in: a patch had merged but hadn't been baked into released artifacts yet--this applied equally to vulns in things I used directly, and vulns in their underlying OSes).
3. Massive quantities of vulnerabilities reported in "static" languages' standard libraries. Golang is particularly bad here, both because they habitually over-weight the severity of their CVEs and because most of the stdlib is packaged with each Golang binary (at least as far as SBOM scanners are concerned).
That puts me somewhat between a rock and a hard place. A dynamic-link-everything world with e.g. a "libgolang" versioned separately from apps would address the 3rd item in that list, but would make the 1st item worse. "Updates are far quicker and easier" is something of a fantasy in the realm of mainstream Linux distros (or copies of the userlands of those distros packaged into container images); it's certainly easier to mechanically perform an update of dependency components of a distro, but whether or not it actually works is another question.
And I'm not coming at this from a pro-container-all-the-things background. I was a Linux sysadmin long before all this stuff got popular, and it used to be a little easier to do patch cycles and point updates before container/immutable-image-of-userland systems established the convention of depending on extremely specific characteristics of a specific revision of a distro. But it was never truly easy, and isn't easy today.
This was indeed common for Unix. The only way to tune the system (or even change the timezone) was to edit the few relevant source files and run make, which compiled those files and then linked them into a new binary.
Linking-only is (or was) much faster than recompiling.
If you're an indie developer wanting your application to run on various Debian-based distros but the Debian maintainers won't package your application, that's when you see why it's called DLL hell, how horribly fragmented Linux packaging is, and why even Steam ships its whole runtime.
I lose control of the execution state. I have to follow the calling conventions which let my flags get clobbered.
To forego all of the above including link time optimization for the benefit of what exactly?
Imagine developing a C program where every object file produced during compilation was dynamically linked. It's obvious why that is a stupid idea - why does it become less stupid when dealing with a separate library?
No idea why glibc can't provide API+ABI stability, but on Linux it always comes down to glibc-related "DLL hell" problems (e.g. not being able to run an executable that was created on a more recent Linux system on an older Linux system, even when the program doesn't access any new glibc entry points - the usual advice is to link against an older glibc version, but that's also not trivial, unless you use the Zig toolchain).
TL;DR: It's not static vs dynamic linking, just glibc being an exceptionally shitty solution as an operating system interface.
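For what it's worth, a sketch of what the Zig route looks like (file names and the glibc version are just examples):

    # pick an explicit glibc floor; zig ships compatibility stubs for many versions
    zig cc -target x86_64-linux-gnu.2.28 -o myapp main.c
    # or go fully static against musl instead
    zig cc -target x86_64-linux-musl -static -o myapp main.c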
LTO is really a different thing, where you recompile when you link. You could technically do that as part of the dynamic linker too, but I don't think anyone is doing it.
There is a surprisingly high number of software development houses that don't (or can't) use LTO, either because of secrecy, scalability issues or simply not having good enough build processes to ensure they don't breach the ODR.
In the era of containers, I do not understand why this is "Not trivial". I could do it with even a chroot.
The fact that you need to use a container/chroot on Linux in the first place makes the process non trivial, when all you have to do on Windows is click a button or two.
Chroot _is_ trivial. I actually use it for convenience, as I could also as well install the older toolchains directly on the newer system, but chroot is just plain easier. Maybe VS has a button where you can target whatever version MS fancies today ("for a limited time offer"), but what about _any other_ windows toolchain?
Linux syscalls, MS-DOS 'software interrupts'...
But that's not the issue; operating system interfaces can be exposed via DLLs, those DLL interfaces just have to be guaranteed to stay stable (like on Windows).
Tbh, I'm not sure why I can't simply tell the gcc linker some random old glibc version number from the late 1990s and have the linker check whether I'm using any functions that weren't available in that old version (and error out in that case). That would be the most frictionless solution, and surely it can't be too hard to annotate glibc functions in the gcc system headers with the version in which they first appeared.
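You can at least check the result after the fact; a sketch (the binary name is a placeholder):

    # list the glibc symbol versions the binary requires; the highest one is your effective minimum
    objdump -T myapp | grep -oE 'GLIBC_[0-9.]+' | sort -uV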
Without dlopen (with regular dynamic linking), it's much harder to compile for older distros, and I doubt you can easily implement glibc/musl cross-compatibility at all in general.
Take a look what Valve does in a Steam Runtime:
- https://gitlab.steamos.cloud/steamrt/steam-runtime-tools/-/blob/main/docs/pressure-vessel.md
- https://gitlab.steamos.cloud/steamrt/steam-runtime-tools/-/blob/main/subprojects/libcapsule/doc/Capsules.txt
>Linux
if you configure binfmt_misc
>Windows
if you disable Windows Defender
>OpenBSD
only older versions
For most cases, a single Windows exe that targets the oldest version you want to support plus a single Glibc binary that dynamically links against the oldest version you want to support and so on is still the best option.
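In practice that usually means building in an old userland; a sketch using a container (the image, package set, and file names are just examples):

    # build against an old glibc baseline by compiling inside an older distro image
    docker run --rm -v "$PWD":/src -w /src debian:11 \
        sh -c 'apt-get update && apt-get install -y build-essential && gcc -o myapp main.c'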
> if you configure binfmt_misc
I don't think that's a requirement, it'll just fall back to the shell script bootstrap without it.
It came preconfigured on Ubuntu 20.04 and 22.04, don't know about newer versions.
Gave up on them afterwards. If I need to tweak dependencies, I might as well deal with the package manager of my distro.
If you forego the requirement of a runtime plugin system, is there anything realistically preventing greenfield projects from just being fully statically linked, assuming their dependencies don't rely on dlopen?
E.g. in my experience: command line tools are fine to link statically with MUSL, but as soon as you need a window and 3D rendering it's not worth the hassle.
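For the command-line case the happy path really is short; a sketch assuming musl-gcc is installed (tool and file names are placeholders):

    musl-gcc -static -Os -o mytool mytool.c
    file mytool   # should report "statically linked"
    ldd mytool    # should report "not a dynamic executable"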
Here is an idea: let's go back to pure UNIX distros using static binaries, with OS IPC for any kind of application dynamism. I bet it will work out great; after all, it did for several years.
Got to put that RAM to use.
Even with multiple processes sharing the same DLL, I would be surprised if the alternative of those processes only containing the code they actually need would increase RAM usage dramatically, especially since most processes that run in the background on a typical Linux system wouldn't even need to go through glibc but could talk directly to the syscall interface.
DLLs are fine as operating system interface as long as they are stable (e.g. Windows does it right, glibc doesn't). But apart from operating system interfaces and plugins, overusing dynamic linking just doesn't make a lot of sense (like on most Linux systems with their package managers).
We started there in computing history and, outside Linux where this desire to return to the past prevails, moved on to better ways, including on other UNIX systems.
But you still need the compiler of the library objects to place different functions and data items into different sections of your object, e.g.
gcc -ffunction-sections -fdata-sections
if you want dead-code elimination in the final executable (the linker then drops the unused sections via -Wl,--gc-sections).
How do I do that? Is there a documented configuration of musl's allocator?
I eventually decided to keep the tiny musl app and make a companion app in a secondary process as needed (since the entire point of me compiling musl was cross platform linux compatibility/stability)
Some might appreciate a concrete instance of this advice inline here. For `foo.nim`, you can just add a `foo.nim.cfg`:
@if gcc:
gcc.exe = "musl-gcc"
gcc.linkerexe = "musl-gcc"
passL = "-static -s" @end
There is also a "NimScript" syntax you could use a `foo.nims`: if defined gcc: # nim.cfg runs faster than NimScript
switch "gcc.exe" , "musl-gcc"
switch "gcc.linkerexe", "musl-gcc"
switch "passL" , "-static -s"The documentation to make static binary with GLibc is sparce for a reason, they don't like static binaries.
Honestly, it was the kind of bug that is not fun to fix, because it's really about dependencies, and not some fun code issue. There is no point in making our lives harder with this just to gatekeep proprietary software running on our platform.
Binary compatibility solutions mostly target cases where rebuilding isn't possible, typically closed source software. Freezing and bundling software dependencies ultimately creates dependency hell rather than avoiding it.
Look at the hoops you sometimes have to jump through or hacks you have to apply to make something work on Nix, just because there is no standardization or build processes assume library locations etc. And if you then raise an issue with the software maintainer - the response is often "but we don't support Nix". And if they're not Nix/Nixos users, can you blame them?
If you've ever had to compile a modern/recent software package for an old distro (I've had to do this for old RH distro's on servers which due to regulations could not be upgraded) - you're in a world of pain. And both distro and software maintainers will say "not my problem, we don't support this" - and I fully understand their stance on that, because it is far from straight forward, and only serves a limited audience.
Adobe stuff is of the kind that you'd prefer to not exist at all rather than have it fixed (and today you largely can pretend that it never existed already), and the situation for games has been pretty much fixed by Steam runtimes.
It's fine that some people care about it and some solutions are really clever, but it just doesn't seem to be an actual issue you stumble on in practice much.
Basically the way for the year of the Linux desktop is to become Windows.