The feeling a desktop Linux distro user gets that they've caught God by the... from all that freedom - while the internet actually runs on Linux servers, and Google etc. maintain their own internal distributions - is beyond me.
Dear friends, desktop Linux distributions are neither secure, nor stable, nor even functional.
Desktop Linux Problems and Major Shortcomings
(For those who hate reading long texts, there's a TL;DR version below). So Linux sucks because ...
Hardware support:
Video accelerators/acceleration (also see the X system section).
! NVIDIA Optimus technology and ATI dynamic GPU switching are still very problematic to use out of the box in most major distros (Mint starting from version 17.2 supports Optimus but quite awkwardly - read the WFM section).
! Open source drivers have certain, sometimes very serious problems (Intel-!, NVIDIA and AMD):
! The open source NVIDIA driver is much slower (up to ten times) than its proprietary counterpart due to incomplete power management (primarily it's NVIDIA's fault).
! The open source NVIDIA driver, nouveau, does not properly and fully support power management features and fan speed management (again, it's NVIDIA's fault).
! Proprietary NVIDIA driver has a nasty habit of keeping your GPU at the highest performance level which significantly increases power consumption, and, in case of mobile users, significantly cuts battery life. NVIDIA was made aware of this bug in July 2017 and the issue still persists.
!! According to an anonymous NVIDIA engineer, "Nearly Every Game Ships Broken ... In some cases, we're talking about blatant violations of API rules ... There are lots of optional patches already in the driver that are simply toggled on or off as per-game settings, and then hacks that are more specific to games ... Ever wondered why nearly every major game release is accompanied by a matching driver release from AMD and/or NVIDIA?". The open source community simply doesn't have the resources to implement similar hacks to fix broken games, which means that at least for complex AAA games, proprietary drivers will remain the only option.
! NVIDIA and AMD proprietary graphics drivers don't work reliably for many people (crashes, unsupported new kernel and X server, slow downs, extreme temperatures, a very loud fan, 100% CPU usage, problems resuming after suspend, etc.).
Proprietary NVIDIA graphics drivers don't fully support KMS/VirtualFB and are often late in supporting newer X.org server and kernel releases. Besides, Linux developers do everything to break closed source drivers by changing APIs (to give you an example, each and every kernel from 3.8 to 3.14 inclusive had changes that rendered NVIDIA binary drivers inoperable, i.e. uncompilable) or by making APIs unusable beyond the GPL realm. NVIDIA's blob supports Wayland and Mir.
! A great many users experience severe video and desktop tearing while watching videos and youtube clips - this issue affects both proprietary (NVIDIA confirmed that this issue plagues Kepler and Maxwell GPUs; an NVIDIA specific workaround exists but it causes performance degradation) and open source GPU drivers. Ostensibly it's an X.org "feature".
! Linux drivers are usually much worse (they require a lot of tinkering, i.e. manual configuration) than Windows/Mac OS drivers in regard to support of non-standard display resolutions, very high (a.k.a. HiDPI) display resolutions or custom refresh rates.
! Under Linux, setting multi-monitor configurations especially using multiple GPUs running binary NVIDIA drivers can be a major PITA.
(Not an issue for most users but still) GPU voltage tuning will most likely never be supported both for AMD and NVIDIA GPUs which means there's no proper overclocking, or underclocking to save power.
Audio subsystem:
PulseAudio is unsuitable for multiuser mode - yes, many people share their PCs (an untested solution can be found here).
! No reliable echo cancellation (if you use a normal microphone and speakers in many cases you won't be able to use Skype and other VoIP services normally). Windows, Android and MacOS implement it on a system level. There's a solution for PulseAudio - hopefully it'll be enabled by default in the future or/and there'll be an easier way to use it.
Hardly a dealbreaker, but then audio professionals also want to use Linux: high definition audio support (>=96KHz, >=24bit) is usually impossible to set up without using the console.
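For the record, the console route usually comes down to two lines in PulseAudio's daemon configuration - a minimal sketch, assuming a stock PulseAudio install (the file lives at /etc/pulse/daemon.conf system-wide, or ~/.config/pulse/daemon.conf per user):

```ini
# /etc/pulse/daemon.conf - force 24-bit / 96 kHz defaults
# (restart the daemon afterwards: pulseaudio --kill && pulseaudio --start)
default-sample-format = s24le
default-sample-rate = 96000
```

Both keys are documented PulseAudio daemon.conf options; the point stands that no mainstream DE exposes them in a GUI.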
Printers, scanners and other more or less peripheral devices:
! There are still many printers which are not supported at all or only barely supported - some people argue that the user should research Linux compatibility before buying their hardware. What if the user decides to switch from Windows to Linux when he/she already has some hardware? When people purchase a Windows PC do they research anything? No, they rightly assume everything will work out of the box right from the get-go.
Many printers' features are only implemented in Windows drivers.
! Some models of scanners and (web-)cameras are still inadequately supported (again many features from Windows drivers are missing) or not supported at all.
Incomplete or unstable drivers for some hardware. Problems setting up some hardware (like touchpads in newest laptops, web cameras or Wi-Fi cards, for instance, 802.11ac and USB Wi-Fi adapters are barely supported under Linux and in many cases they are just unusable). Numerous people report that Broadcom and Realtek network adapters are barely usable or outright unusable under Linux.
Laptops, tablets, 2 in 1 devices, etc.:
Incomplete or missing support for certain power-saving features found in modern laptops (e.g. PCIe ASPM, proper video decoding acceleration, deep power-saving states, etc.), thus under Linux you won't get the same battery life as under Windows or MacOS, and your laptop will run a lot hotter. Tools like Jupiter (unfortunately discontinued) tried to address this; see Advanced Power Management for Linux. Edit July 19, 2018: if you're running supported hardware with Fedora 28 and Linux 4.17 or later, power management should be excellent under Linux, aside from watching videos (both online and offline: video decoding acceleration in Linux is still a very sad story).
!! Oftentimes you just cannot use new portable devices in Linux because proper support for certain features gets implemented too late, and distros pick up this support even later.
Laptops/notebooks often have special buttons and features that don't work (e.g. Fn + F1-F12 combination or special power-saving modes).
! Resume after suspend in Linux is unstable and oftentimes doesn't work.
! Regressions are often introduced in the Linux kernel, i.e. some hardware stops working inexplicably in new kernel versions. I have personally reported two serious audio playback regressions, which have been subsequently resolved; however, most users don't know how to file bugs, how to bisect regressions, or how to identify faulty components.
Software support:
X system (current primary video output server in Linux):
X.org is largely outdated, unsuitable and even very much insecure for modern PCs and applications.
No high level, stable, sane (truly forward and backward compatible) and standardized API for developing GUI applications (like core Win32 API - most Windows 95 applications still run fine in Windows 10 - that's 24 years of binary compatibility). Both GTK and Qt (incompatible GTK versions 1, 2, 3, 4 and incompatible Qt versions 2, 3, 4, 5 just for the last decade) don't strive to be backwards compatible.
! Keyboard shortcut handling for people using local keyboard layouts is broken (this bug is now 15 years old).
! X.org doesn't automatically switch between desktop resolutions if you have a full screen application with a custom resolution running - strangely, some Linux developers oppose the whole idea of games on Linux. But since Linux is not a gaming platform and no one is interested in Linux as a gaming platform, this problem's importance is debatable. Valve has released Steam for Linux and they are now porting their games to Linux - but that's a drop in the bucket.
! X.org doesn't restore gamma (which can be perceived as increased brightness) settings on application exit. If you play Valve/Wine games and experience this problem, run `xgamma -1` in a terminal.
! Scrolling in various applications causes artifacts.
! X.org allows applications to exclusively grab keyboard and mouse input. If such applications misbehave you are left with a system you cannot manage, you cannot even switch to text terminals.
! Keyboard handling in X.org is broken by design: when a pop-up or an open menu is shown, global keyboard shortcuts/keybindings stop working (both GTK and Qt applications are affected).
! For VM applications keyboard handling is incomplete and passing keypresses to guest OS'es is outright broken.
! X.org architecture is inherently insecure - even if you run a desktop GUI application under a different user in your desktop session, e.g. using sudo and xhost, then that "foreign" application can grab any input events and also make screenshots of the entire screen.
! X.org server currently has no means of permanently storing and restoring settings changed by the user (xrender settings, Xv settings, etc.). NVIDIA and ATI proprietary drivers both employ custom utilities for this purpose.
!! X.org has no means of providing a tear-free experience, it's only available if you're running a compositing window manager in the OpenGL mode with vsync-to-blank enabled.
!! X.org is not multithreaded. Certain applications running intensive graphical operations can easily freeze your desktop (a simple easily reproducible example: run Adobe Photoshop 7.0 under Wine, open a big enough image and apply a sophisticated filter - see your graphical session die completely until Photoshop finishes its operation).
! There's currently no way to configure mouse scroll speed/acceleration under X.org. Some mice models scroll erratically under X.org.
There's no way to replace/upgrade/downgrade X.org graphics drivers on the fly (simply put - to restart the X server while retaining a user session and running applications).
No true safe mode for the X.org server (likewise for KMS - read below). Misconfiguration and broken drivers can leave you with a non-functional system, where sometimes you cannot access text virtual consoles to rectify the situation (in 2013 it became almost a non-issue since quite often nowadays X.org no longer drives your GPU - the kernel does that via KMS).
Adding custom monitor modelines in Linux is a major PITA.
X.org totally sucks (IOW doesn't work at all in regard to old applications) when it comes to supporting tiled displays, for instance 4K displays (Dell UP3214Q, Dell UP2414Q, ASUS PQ321QE, Seiko TVs and others). This is yet another architectural limitation.
HiDPI support is often a huge issue (many older applications don't scale at all).
! Fast user-switching (and also concurrent users' sessions) under X.org works very badly and is implemented as a dirty hack: for every user a new X.org server is started. It's possible to login twice under the same account while not being able to run many applications due to errors caused by concurrent access to the same files. Fast user switching is best implemented in KDE followed by Gnome.
Related problems:
1) Concurrently logged in users cannot access the same USB flash drive(s).
2) There are reports that problems exist with configuring audio mixer volume levels.
Wayland:
!! Applications (or GUI toolkits) must implement their own font antialiasing - there's no API for setting system-wide font rendering. Most sane and advanced windowing systems - Windows, Android, Mac OS X - provide system-wide font rendering; in Wayland all clients (read: applications) are totally independent.
!! Applications (or GUI toolkits) must implement their own DPI scaling.
The above issues are actually the result of not having one unified graphical toolkit/API (and Wayland developers will not implement one). Alas, no one is currently working towards making existing toolkits share one common configuration for setting font antialiasing, DPI scaling and window shadowing. At least in theory these issues can be easily solved; in practice we already have three independent toolkits for Wayland (GTK3/Qt5/Enlightenment).
!! Wayland works through rasterization of pixels which brings about two very bad critical problems which will never be solved:
Firstly, forget about performance/bandwidth efficient RDP protocol (it's already implemented but it works by sending the updates of large chunks of the screen, i.e. a lot like old highly inefficient VNC), forget about OpenGL pass-through, forget about raw compressed video pass-through. In case you're interested all these features work in Microsoft's RDP.
Secondly, forget about proper output rotation/scaling/ratio change.
!! Screensharing doesn't yet work out of the box.
!! Wayland lacks APIs for global keyboard shortcuts.
!! Wayland lacks APIs for sending remote input which makes Wayland unsuitable for remote desktoping.
!! Applies to the X server/protocol as well: neither X.org, nor Wayland offer a way to extend/modify window's title bars and File Open/Save dialogs. This is a very powerful feature which can be very useful in many situations. Again it's a result of the fact that there's no unified toolkit and no unified window manager (or protocol).
!! Wayland compositors don't have a universal method of storing and configuring screen/session/keyboard/mouse settings.
Wayland doesn't support XModMap.
XWayland refresh rate is locked to 60Hz - that's actually a serious problem since most games for Linux use the X11 protocol.
Wayland applications cannot run without a Wayland compositor and in case it crashes, all the running apps die. Under X.org/Microsoft Windows there's no such issue.
Font rendering (which is implemented via high level GUI libraries) issues:
! ClearType fonts are not properly supported out of the box. Even though the ClearType font rendering technology is now supported, you have no means of properly tuning it thus ClearType fonts from Windows look ugly.
Quite often default fonts look ugly due to the lack of good default fontconfig settings (catered to LCD screens: subpixel RGB rendering with full hinting) - this quite unpopular website alone gets over 20% of its visitors seeking to fix bad font rendering in Linux.
Web fonts under Linux often look horrible in old distros.
Font antialiasing settings cannot be applied on-the-fly under many DEs. This issue is impossible to solve unless we have a high level GUI library which is shared between all toolkits and desktop environments.
The Linux kernel:
! The kernel cannot recover from video, sound and network drivers' crashes (I'm very sorry for drawing a comparison with Windows Vista/7/8 where this feature is implemented and works beautifully in a lot of cases).
KMS exclusively grabs video output and disallows VESA graphics modes (thus it's impossible to switch different versions of graphics drivers on the fly).
KMS video drivers cannot be unloaded or reloaded.
!! KMS has no safe mode: sometimes KMS cannot properly initialize your display and you have a dead system you cannot access at all (a kernel option "nomodeset" can save you, but it prevents KMS drivers from working at all - so either you have 80x25 text console or you have a perfectly dead display).
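When that happens, the usual stopgap is to set that option on the kernel command line - a sketch assuming a stock GRUB2 setup in /etc/default/grub:

```ini
# /etc/default/grub - boot without KMS so at least a basic console comes up
GRUB_CMDLINE_LINux_DEFAULT is the wrong key; the stock key is:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
# then regenerate the config:
#   sudo update-grub                                # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # Fedora/RHEL
```

As the article notes, this only trades a dead display for an unaccelerated one.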
Traditional Linux/Unix (ext4/reiser/xfs/jfs/btrfs/etc.) filesystems can be problematic when used on mass storage media.
File descriptors and network sockets cannot be forcibly closed - it's indeed unsafe to remove USB sticks without unmounting them first as it leads to stale mount points, and in certain cases to oopses and crashes. For the same reason you cannot modify your partitions table and resize/move the root partition on the fly.
In most cases kernel crashes (= panics) are invisible if you are running an X session. Moreover KMS prevents the kernel from switching to plain 640x480 or 80x25 (text) VGA modes to print error messages. As of 2019 there's work underway to implement kernel error logging under KMS.
Very incomplete hardware sensor support: for instance, hwinfo32/64 detects and shows ten hardware sensor sources on my average desktop PC and over fifty sensors, whilst lm-sensors detects and presents just four sources and twenty sensors. This situation is even worse on laptops - sometimes the only readings you get from lm-sensors are the CPU cores' temperatures.
! A number (sometimes up to dozens) of regressions in every kernel release due to the inability of kernel developers to test their changes on all possible software and hardware configurations. Even "stable" x.y.Z kernel updates sometimes have serious regressions.
! The Linux kernel is extremely difficult and cumbersome to debug even for the people who develop it.
Under some circumstances the system or X.org's GUI may become very slow and unresponsive due to various problems with video acceleration (or the lack of it) and also due to the notorious bug 12309 (it's ostensibly fixed but some people still experience it). This bug can be easily reproduced under Android (which employs the Linux kernel) even in 2019: run any disk intensive application (e.g. under any Android terminal 'cat /dev/zero > /sdcard/testfile') and enjoy total UI unresponsiveness.
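The desktop analogue of that Android reproduction is just as simple - a hedged sketch (the small count below keeps the demo cheap; the original bug bites hardest with an unbounded stream of dirty pages):

```shell
# Flood the page cache with dirty data and watch interactive latency climb;
# conv=fsync forces the writeback phase that triggers the stalls.
dd if=/dev/zero of=testfile bs=1M count=64 conv=fsync
rm testfile
```

On a spinning disk with default vm.dirty_ratio settings this is often enough to make the GUI stutter noticeably.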
!! Critical bug reports filed against the Linux kernel often get zero attention and may linger for years before being noticed and resolved. Posts to LKML oftentimes get lost if the respective developer is not attentive or is busy with his own life.
The Linux kernel contains a whole lot of very low quality code and, when coupled with unstable APIs, it makes development for Linux a very difficult, error-prone process.
The Linux kernel forbids writing to CPU MSRs in secure UEFI mode, which makes it impossible to fine-tune your CPU power profile. This is perfectly possible under Windows 10.
Memory management under Linux leaves a lot to be desired: under low memory conditions your system may become completely unresponsive.
Problems stemming from the vast number of Linux distributions:
! No unified configuration system for computer settings, devices and system services. E.g. distro A sets up networking using these utilities, outputting certain settings residing in certain file system locations, distro B sets up everything differently. This drives most users mad.
! No unified installer/package manager/universal packaging format/dependency tracking across all distros (The GNU Guix project, which is meant to solve this problem, is now under development - but we are yet to see whether it will be incorporated by major distros). Consider RPM (which has several incompatible versions, yeah), deb, portage, tar.gz, sources, etc. It adds to the cost of software development.
! Distros' repositories do not contain all available open source software (libraries' conflicts don't even allow that luxury). The user should never be bothered with using ./configure && make && make install (besides, it's insecure, can break things in a major way, and it sometimes simply doesn't work because the user cannot install/configure dependencies properly). It should be possible to install any software by downloading a package and double clicking it (yes, like in Windows, but probably prompting for a user/administrator password). [Image: Linux distros, ©2000 Microsoft Germany ad]
! Applications development is a major PITA. Different distros can use a) different library versions, b) different compiler flags, c) different compilers. This leads to a number of problems raised to the third power. Packaging all dependent libraries is not a solution, because in this case your application may depend on older versions of libraries which contain serious remotely exploitable vulnerabilities.
! The two most popular open source desktops, KDE and Gnome, can configure only a few settings by themselves, thus each distro reinvents the wheel with its own applications/utilities for configuring a boot loader/firewall/users and groups/services, etc.
Linux is a hell for ISP/ISV support personnel. Within the organization you can force a single distro on anyone, but it cannot be accomplished when your clients have the freedom to choose.
! It should be possible to configure pretty much everything via GUI (in the end, Windows and Mac OS allow this), which is still not the case for some situations and operations.
No polish and universally followed conventions. Different applications may have totally different shortcuts for the same actions, UI elements may be placed and look differently.
Problems stemming from low Linux popularity and open source nature:
! Few software titles, inability to run familiar Windows software (some applications which don't work in Wine - look at the lines which contain the word "regression" - have zero Linux equivalents).
! No equivalent of some hardcore Windows software like ArchiCAD/3ds Max/Adobe Premiere/Adobe Photoshop/Corel Draw/DVD authoring applications/etc. Home and enterprise users just won't bother installing Linux until they can get their work done.
! A small number of native games and few native AAA games for the past six years. The number of available Linux games overall is less than 10% of games for Windows. Steam shows a better picture: 25% of games over there have Linux ports (in March 2019: Windows 58382 titles vs. Linux 12216 titles) but over 98% out of them are Indies; i.e. AAA titles, especially recent ones, are a rarity in Linux. Luckily nowadays it's possible to run a large number of Windows games in Wine/DXVK and Steam/Proton.
Questionable patents and legality status. USA Linux users cannot play many popular audio and video formats until they purchase appropriate codecs.
General Linux problems:
!! There's no concept of drivers in Linux aside from proprietary drivers for NVIDIA/AMD GPUs which are separate packages: almost all drivers are already either in the kernel or various complementary packages (like foomatic/sane/etc). It's impossible for the user to understand whether their hardware is indeed supported by the running Linux distro and whether all the required drivers are indeed installed and working properly (e.g. all the firmware files are available and loaded or necessary printer filters are installed).
!! There's no guarantee whatsoever that your system will (re)boot successfully after GRUB (bootloader) or kernel updates - sometimes even minor kernel updates break the boot process (except for Windows 10 - but that's a new paradigm for Microsoft). For instance Microsoft and Apple regularly update ntoskrnl.exe and mach_kernel respectively for security fixes, but it's unheard of that these updates ever compromised the boot process. GRUB updates have broken the boot process on the PCs around me at least ten times. (Also see compatibility issues below).
!! LTS distros are unusable on the desktop because they poorly support or don't support new hardware, specifically GPUs (as well as Wi-Fi adapters, NICs, sound cards, hardware sensors, etc.). Oftentimes you cannot use new software in LTS distros (normally without miscellaneous hacks like backports, PPAs, chroots, etc.), due to outdated libraries. A recent example is Google Chrome on RHEL 6/CentOS 6.
!! Linux developers have a tendency to a) suppress news of security holes b) not notify the public when the said holes have been fixed c) miscategorize arbitrary code execution bugs as "possible denial of service" (thanks to Gullible Jones for reminding me of this practice - I wanted to mention it aeons ago, but I kept forgetting about that).
Here's a full quote by Torvalds himself: "So I personally consider security bugs to be just "normal bugs". I don't cover them up, but I also don't have any reason what-so-ever to think it's a good idea to track them and announce them as something special."
Year 2014 was the most damning in regard to Linux security: critical remotely-exploitable vulnerabilities were found in many basic Open Source projects, like bash (shellshock), OpenSSL (heartbleed), kernel and others. So much for "everyone can read the code thus it's invulnerable". In the beginning of 2015 a new critical remotely exploitable vulnerability was found, called GHOST.
Year 2015 welcomed us with 134 vulnerabilities in one package alone: WebKitGTK+ WSA-2015-0002. I'm not implying that Linux is worse than Windows/MacOS proprietary/closed software - I'm just saying that the mantra that open source is more secure by definition because everyone can read the code is apparently totally wrong.
Year 2016 pleased us with several local root Linux kernel vulnerabilities as well as countless other critical vulnerabilities. In 2016 Linux turned out to be significantly more insecure than often-ridiculed and laughed-at Microsoft Windows.
The Linux kernel consistently remains one of the most vulnerable pieces of software in the entire world. In 2017 it had 453 vulnerabilities vs. 268 in the entire Windows 10 OS. No wonder Google intends to replace Linux with its own kernel.
Many Linux developers are concerned with the state of security in Linux because it is simply lacking.
Linux servers might be a lot less secure than ... Windows servers, "The vast majority of webmasters and system administrators have to update their software manually and test that their infrastructure works correctly".
Seems like there are lots of uniquely gifted people out there thinking I'm an idiot to write about this. Let me clarify this issue: whereas in Windows security updates are mandatory and they are usually installed automatically, Linux is usually administered via SSH and there's no indication of any updates at all. In Windows most server applications can be updated seamlessly without breaking services configuration. In Linux in a lot of cases new software releases require manual reconfiguration (here are a few examples: nginx, apache, exim, postfix). The above two causes lead to a situation when hundreds of thousands of Linux installations never receive any updates, because their respective administrators don't bother to update anything since they're afraid that something will break.
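For what it's worth, on Debian/Ubuntu part of that treadmill can be escaped with the unattended-upgrades package - a config sketch (the file name below is the conventional one, not mandated):

```ini
# /etc/apt/apt.conf.d/20auto-upgrades - "1" means run daily
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

This still doesn't solve the reconfiguration problem the paragraph above describes; it only automates the fetch-and-install step.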
August 2016 report from Kaspersky corroborates my thesis: in the first seven months of 2016 the number of infected Linux servers increased by 70%.
Ubuntu, starting with version 16.04 LTS, applies security updates automatically except for the Linux kernel updates which require reboot (it can be eliminated as well but it's tricky). Hopefully other distros will follow. As much as Ubuntu might be commended they still distribute their downloaded ISO images via HTTP - this is a major security threat because most users won't verify their ISO images using GPG.
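Verifying an ISO manually isn't hard, which makes the HTTP distribution even less excusable - a sketch in a scratch directory (distro.iso and SHA256SUMS are stand-in names; SHA256SUMS/SHA256SUMS.gpg happen to be what Ubuntu publishes, other distros vary):

```shell
# Demo: generate a checksum list the way distros do, then verify against it.
dir=$(mktemp -d); cd "$dir"
printf 'pretend this is an ISO' > distro.iso
sha256sum distro.iso > SHA256SUMS
# --ignore-missing: only verify the listed files actually present locally
sha256sum -c SHA256SUMS --ignore-missing   # prints: distro.iso: OK
# with a real image you'd additionally check the list's signature:
#   gpg --verify SHA256SUMS.gpg SHA256SUMS
```

The checksum alone only detects corruption; only the GPG step defeats a tampered mirror, and that's exactly what most users skip.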
! Fixed application versions during a distro life-cycle (except Firefox/Thunderbird/Chromium). Say, you use DistroX v18.10 which comes with certain software. Before DistroX 20.10 gets released some applications get updated and get new exciting features, but you cannot officially install or use them.
! Let's expand on the previous point. Most Linux distros are made such a way you cannot upgrade their individual core components (like kernel, glibc, Xorg, Xorg video drivers, Mesa drivers, etc.) without upgrading your whole system. Also if you have brand new hardware oftentimes you cannot install current Linux distros because almost all of them (aside from rare exceptions) don't incorporate the newest kernel release, so either you have to use alpha/development versions of your distro or you have to employ various hacks in order to install the said kernel.
Some people argue that one of the problems that severely hampers the progress and expansion of Linux is that Linux doesn't have a clear separation between the core system and user-space applications. In other words (mentioned throughout the article) third-party developers cannot rely on a fixed set of libraries and programming interfaces (API/ABI) - in most other OSes you can expect your application to work for years without recompilation and extra fixes - it's often not possible in Linux.
No native and/or simple solutions for really simple encrypted file sharing on the local network with password authentication (Samba is not native - it's a reverse engineered SMB implementation - and it's very difficult for the average Joe to manage and set up. Samba 4 reimplements so many Linux network services/daemons that it looks like a Swiss Army knife solution from outer space).
Glibc by design "leaks" memory (due to heap fragmentation). Firefox for Linux now uses its own memory allocator. KDE Konsole application uses its own memory allocation routines. Neil Skrypuch posted an excellent explanation of this issue here.
! Just not enough manpower (Gnome, KDE, X.org) - three major Open Source projects are seriously understaffed.
! It's a major problem in the tech industry at large but I'll mention it anyways because it's serious: Linux/open source developers are often not interested in fixing bugs if they cannot easily reproduce them (for instance when your environment substantially differs from the developer's environment). This problem plagues virtually all Open Source projects and it's more serious in regard to Linux because Linux has fewer users and fewer developers. Open Source developers often don't get paid to solve bugs so there's little incentive for them to try to replicate and squash difficult to reproduce bugs.
! A galore of software bugs across all applications. Just look into the KDE or Gnome bugzillas - some bugs are now over ten years old, with several dozen duplicates, and no one is working on them. KDE/Gnome/etc. developers are busy adding new features and breaking old APIs. Fixing bugs is of course a tedious and difficult chore.
! Steep learning curve (even today you oftentimes need to use a CLI to complete some trivial or non-trivial tasks, e.g. when installing third party software).
! Incomplete or sometimes missing regression testing in the Linux kernel (and, alas, in other Open Source software too) leading to a situation when new kernels may become totally unusable for some hardware configurations (software suspend doesn't work, crashes, unable to boot, networking problems, video tearing, etc.)
The GUI network manager in Linux has serious problems, e.g. you cannot establish PPPoE connections over Wi-Fi.
Poor interoperability between the kernel and user-space applications. E.g. many kernel features get a decent user-space implementation years after introduction.
! Linux security/permissions management is a bloody mess: PAM, SELinux, udev, HAL (replaced with udisks/upower/libudev), PolicyKit, ConsoleKit and the usual Unix permissions (/etc/passwd, /etc/group) all have their separate, incompatible permissions management systems spread all over the file system. Quite often people cannot use their digital devices unless they switch to the super user.
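A typical fix for the "can't touch my own device" case is a one-line udev rule - a sketch with a made-up vendor/product ID (1a2b/3c4d) and a hypothetical file name; the real IDs come from lsusb, and the plugdev group is merely a common convention:

```ini
# /etc/udev/rules.d/50-usb-widget.rules - hypothetical device
SUBSYSTEM=="usb", ATTR{idVendor}=="1a2b", ATTR{idProduct}=="3c4d", MODE="0660", GROUP="plugdev"
```

That a desktop user must hand-write rules like this as root rather neatly illustrates the point above.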
No sandbox with easy to use GUI (like Sandboxie in Windows).
! User applications report errors only on the command line (stderr). All GUI applications should have a visible error representation.
! Certain Linux components have very poor documentation and lack good manuals.
! No unified widely used system for packages signing and verification (thus it becomes increasingly problematic to verify packages which are not included by your distro). No central body to issue certificates and to sign packages.
There are no native antivirus solutions or similar software for Linux (the existing ones are made for finding Windows viruses and analyzing Windows executables - i.e. they are more or less useless for Linux). Say, you want to install new software which is not included by your distro - currently there's no way to check if it's malicious or not.
!! Most Linux distributions do not audit included packages which means a rogue evil application or a rogue evil patch can easily make it into most distros, thus endangering the end user (it has happened several times already).
! Very bad backwards and forward compatibility.
! Due to unstable and constantly changing kernel APIs/ABIs, Linux is a hell for companies which cannot push their drivers upstream into the kernel for various reasons, such as closed source code (NVIDIA, ATI, Broadcom, etc.), inability to control development or co-develop (VirtualBox/Oracle, VMware Workstation, etc.), or licensing issues (4Front Technologies/OSS).
Old applications rarely work in new Linux distros (glibc incompatibilities (double-free errors, memory corruption, etc.), missing libraries, wrong/new libraries versions). Abandoned Linux GUI software generally doesn't work in newer Linux distros. Most well written GUI applications for Windows 95 will work in Windows 10 (24 years of binary level compatibility).
New applications linked only against libc will refuse to work in old distros (even though they are 100% source compatible with those distros).
New library versions bugs, regressions and incompatibilities.
Distro upgrade can render your system unusable (kernel might not boot, some features may stop working).
There's a myth that backwards compatibility is a non-issue in Linux because all the software has sources. However a lot of software just cannot be compiled on newer Linux distros due to 1) outdated, conflicting or no longer available libraries and dependencies, 2) every GCC release becoming much stricter about C/C++ syntax, 3) users just won't bother compiling old software because they don't know how to 'compile' - nor should they need to know how.
DE developers (KDE/Gnome) routinely and radically change UI elements, configuration, behaviour, etc.
Open Source developers usually don't care about application behaviour beyond their own usage scenarios. E.g. coreutils developers, for no good reason, broke head/tail functionality which is used by the Loki installer.
Quite often you cannot run new applications in LTS distros. Recent examples: GTK3 based software (there's no official way to use it in RHEL6), and Google Chrome (Google decided to abandon LTS distros).
Linux has a 255-byte limitation for file names (this translates to just 63 four-byte characters in UTF-8) - not a great deal, but copying or using files or directories with long names from your Windows PC can become a serious challenge.
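The limit is trivial to demonstrate - a sketch; NAME_MAX is 255 bytes per path component on ext4 and friends, counted after UTF-8 encoding:

```shell
dir=$(mktemp -d)
ok=$(printf 'a%.0s' $(seq 1 255))     # 255 bytes: the exact ceiling
bad=$(printf 'a%.0s' $(seq 1 256))    # 256 bytes: ENAMETOOLONG
touch "$dir/$ok"                      # succeeds
touch "$dir/$bad" 2>/dev/null || echo "256 bytes: File name too long"
rm -rf "$dir"
```

Since the limit is in bytes, a name of 64 four-byte UTF-8 characters (256 bytes) fails just the same, which is exactly what bites when copying long non-Latin names from Windows.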
Certain applications that exist both for Windows and Linux start up faster in Windows than in Linux, sometimes several times faster. It's worth noting though that SSD disk users are mostly unaffected.
All native Linux filesystems (except ext4) are case sensitive about filenames which utterly confuses most users. This wonderful peculiarity doesn't have any sensible rationale. Less than 0.01% of users in the Linux world depend on this feature.
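A quick illustration of the difference - on ext4/xfs/btrfs the two touches below create two distinct files, whereas on Windows or a default macOS volume the second would hit the first:

```shell
dir=$(mktemp -d)
touch "$dir/Report.txt" "$dir/report.txt"
ls "$dir" | wc -l    # prints 2 on a case-sensitive filesystem
rm -rf "$dir"
```

This is precisely the behaviour that confuses users coming from Windows, where Report.txt and report.txt are the same file.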
!! Most Linux filesystems cannot be fully defragmented unless you compact and expand your partition, which is very dangerous. Ext4 supports defragmentation, but only for individual files: you cannot consolidate data and turn free space into one continuous area. XFS supports full defragmentation, but by default most distros offer ext4, and there's no official safe way to convert ext4 to XFS.
Linux preserves file creation time only for certain filesystems (ext4, NTFS, FAT). Another issue is that user-space utilities largely cannot view or modify this time (recent GNU coreutils' stat can at least display it; ext4's `debugfs` works only under root).
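For completeness, newer GNU coreutils (8.31+) can at least display the birth time where the kernel reports one via statx - a sketch; a "-" in the output means the filesystem or kernel doesn't expose it to user space:

```shell
tmp=$(mktemp)
stat -c 'created: %w' "$tmp"   # %w = human-readable birth time, or "-"
rm -f "$tmp"
```

Modifying the birth time from user space remains impossible either way.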
A lot of UNIX problems (PDF, 3MB) apply to Linux/GNU as well.
There's a lot of hostility in the open source community.
This is so freaking "amazing", you absolutely have to read it - the developer behind XScreenSaver fought with Debian developers.