“I hate systemd” and other Ill-conceived Diatribes

It’s a popular statement in a world where many distributions have standardized on systemd. “I hate systemd” comes the quip–a statement designed to evoke emotion rather than contemplation. A mere mention of Lennart Poettering provokes near-Pavlovian salivation in some afflicted by this malady, and they often haven’t the foggiest notion why, beyond “it’s different.”

This post isn’t intended to be a deliberate defense of systemd, although it can certainly be construed as such. There are valid reasons to eschew its use, just as there are equally valid reasons to praise it for its novelty–and its willingness to upend established convention surrounding sysvinit. What I hope to accomplish herein is to defuse the emotionally charged nature of the anti-systemd response and to convince its detractors that disagreeing with them isn’t antithetical to strongly held convictions of traditionalism in UNIX design and philosophy. Whether they come away from this in continued opposition to systemd (or not) is largely uninteresting to me; I’d much rather someone walk away with a better understanding of their own views and opinions than be convinced otherwise.

It has been my experience that systemd opponents typically fall into three camps: the first, people who have a limited understanding of systemd but have read volumes of opinion pieces explaining how bad it is, accepting opinions as truth; the second, people who feel it violates the traditional mores of the UNIX world; and the third (and smallest), people who disagree with its adoption strictly on technical grounds. The last group is unlikely to glean anything useful from this post, as their opinions are likely founded on reason and experience (though I may disagree). The first two find themselves in opposition mostly by force of ignorance. Fortunately, ignorance is curable (albeit powerful), and in the worst case we can lift those readers into the objectivity of the third category: informed but still in opposition.

Readers who are mostly indifferent to systemd–because they’re uninterested in learning it or are satisfied with the sysvinit (or sysvinit-alike, e.g. OpenRC) that was installed with their distribution–are not the target audience. Though they may collect useful arguments, either for or against, I don’t expect them to find much else of interest. They may opt to skip this post entirely.

systemd Violates UNIX Principles

The claim that systemd grossly violates UNIX principles usually rests on a few key points (erroneously): 1) systemd is monolithic; 2) systemd violates the principle that “everything should be text;” and 3) systemd is unnecessarily complex. On occasion, these coincide with a fourth point that casts systemd as an unnecessary replacement for established subsystems, or claims it forcibly requires the use of its own internal replacements (such as for syslog or DHCP).

Of these, the third and fourth (surrogate) arguments bear the most weight. As such, I will address them last.

Is systemd monolithic? Yes and no. Yes, the PID 1 replacement that fundamentally exists as “systemd” in its binary form comprises a lot of moving parts, but it’s helpful to understand that systemd as a service runner exposes a significant number of kernel facilities to service authors–cgroups, capabilities(7), namespaces, read-only file system views–alongside complex service dependency management and more. systemd is complex, but it’s not necessarily monolithic.
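To make that concrete, here’s a minimal sketch of the kind of per-service hardening those facilities enable. The unit and binary names are hypothetical; the directives themselves are documented in systemd.exec(5) and systemd.resource-control(5).

    [Unit]
    Description=Hypothetical hardened daemon

    [Service]
    ExecStart=/usr/local/bin/exampled
    ProtectSystem=strict          # mount nearly the entire file system read-only for this service
    ProtectHome=yes
    PrivateTmp=yes                # private mount namespace for /tmp
    CapabilityBoundingSet=CAP_NET_BIND_SERVICE   # drop everything except binding low ports
    MemoryMax=256M                # cgroup-enforced memory ceiling

    [Install]
    WantedBy=multi-user.target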

Indeed, browsing through the binaries in a typical systemd installation will expose a wide assortment of services that each perform one specific task. systemd-networkd, for instance, manages network interfaces. Its companion, systemd-resolved, handles resolver configuration via resolv.conf (and honestly does a much better job of it than dhcpcd hooks or resolvconf).
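If you’d like to try systemd-resolved, the usual dance is short–a sketch, assuming your distribution hasn’t already wired this up for you:

    systemctl enable --now systemd-resolved
    # hand resolv.conf over to resolved's stub resolver
    ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf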

Does systemd violate the principle that everything should be text? Not really. Whenever this gripe surfaces, it’s usually framed in the context of the systemd journal, which does store its output in a binary format. (It can also be configured to forward its data to syslog, but I don’t think this argument matters.) journalctl can transparently read the binary form just fine, thank you very much, and offers filtering options that are arguably more powerful than your typical less/grep inspection can muster. In fact, to head off the argument that it doesn’t use “standard tooling,” I might argue that syslog doesn’t either–you have to use other user space tools to open and search through the logs; tools that have become a de facto standard through longevity. Nay, the difference exists mostly in the reality that systemd-journald’s output can’t be read by a tool the system administrator might author independently. Leastwise, not without some work.
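A few illustrative invocations–the unit name and the downstream tool are placeholders, the flags are stock journalctl, and the last line shows that “some work” is mostly just asking for JSON:

    journalctl -b -p err                              # only errors, current boot
    journalctl -u nginx.service --since "1 hour ago"  # one unit's recent output
    journalctl -o json | your-own-tool                # structured export for home-grown tooling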

There is a strength in what systemd does that isn’t easily replicated via syslog. As an example, it’s possible to configure remote hosts to pack up their binary logs and ship them to another location for centralized logging. Yes, you can do this with one or more syslog distributions, but it’s not easy. Compare this with systemd-journal-remote(8), systemd-journal-gatewayd(8) and journal-remote.conf(5), and you’ll learn it’s a simplified process that does little more than upload binary blobs to a remote HTTP API. Bonus: Because it’s a binary format log, you can selectively extract entries from a remote journald instance. Yes, really.
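Here’s a sketch of that setup, assuming a collector at logs.example.com (a name I’ve invented; the ports are the documented defaults, if memory serves):

    # on the collector: accept uploads (systemd-journal-remote listens on 19532)
    systemctl enable --now systemd-journal-remote.socket

    # on each sender: /etc/systemd/journal-upload.conf
    [Upload]
    URL=http://logs.example.com:19532

    systemctl enable --now systemd-journal-upload

    # selective extraction over systemd-journal-gatewayd's HTTP API (port 19531)
    curl -s 'http://logs.example.com:19531/entries?_SYSTEMD_UNIT=sshd.service'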

Aside: I recognize some astute readers will find it cleverly ironic that I’d reference three separate manual pages in the context of the claim that remote logging is easier in systemd, whereas their favorite solution requires reading one or two (and ancillary works; web searches; or more). The facts aren’t quite so malleable: In systemd it’s a matter of enabling the correct services and applying the appropriate configuration changes. There isn’t much need to do anything else.

Returning to our original train of thought: the third dispute, “systemd is too complex,” is a matter of whether all this complexity is useful. Most arguments against systemd’s complexity tend to focus on everything it can do rather than on what it does, and I think this disconnect stems from a misunderstanding of what systemd actually is. In particular, the complaint isn’t really about the complexity of the entire corpus of what systemd can do; it follows from the belief that it does all of this by default. That isn’t true: many of the systemd services (systemd-networkd, systemd-resolved, and the more recently notorious systemd-homed) are entirely opt-in. They aren’t necessary, and they aren’t typically activated by default (though some distributions make that choice).
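Don’t take my word for it–on a stock install you can check (and opt in) yourself:

    systemctl status systemd-networkd systemd-resolved  # typically "inactive (dead)" unless your distro enabled them
    systemctl enable --now systemd-networkd             # opting in is a deliberate act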

I would argue that this complaint ought to focus instead on systemd’s reliance on dbus for its internal messaging apparatus. dbus itself is fairly complex (it’s a message bus…), but it also allows systemd to do a lot of its work under the hood using an existing message passing system rather than reinventing its own (a surprise to some!). Perhaps it could be argued that repurposing a desktop bus was something of an ambitious choice on Poettering’s part, but again, it’s not a violation of UNIX principles. If anything, repurposing existing tools should be praised as an example of avoiding the systemic problems common among Not-Invented-Here adherents!
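You can poke at this yourself with busctl, which ships with systemd; PID 1 sits on the bus under a well-known name:

    busctl list | grep systemd1               # PID 1's bus name
    busctl tree org.freedesktop.systemd1      # the object hierarchy it exposes
    busctl introspect org.freedesktop.systemd1 /org/freedesktop/systemd1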

At this point, I would presume this section has implicitly answered, by proxy, whether systemd’s replacement of conventional tools is strictly necessary or desirable. If not, then I would posit that more competition is good. As an example, systemd-networkd is far easier to configure and start than dhcpcd or dhclient. systemd-networkd has supported DUIDs out of the box for quite some time, and if you examine the contents of /run/systemd/netif/leases/*, you can copy the DUID between installations to retain static IPv4 assignments leased via DHCP. I’ve done this. (Yes, you have to chop the identifier up a bit, but that’s beyond the scope of this essay.)
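For comparison, a complete networkd configuration for a DHCP-managed wired interface is one small file. The interface name and DUID bytes below are placeholders, and–if I’m remembering the section correctly–the DUID knobs live under [DHCPv4]; see systemd.network(5):

    # /etc/systemd/network/20-wired.network
    [Match]
    Name=enp3s0

    [Network]
    DHCP=ipv4

    [DHCPv4]
    # pin the DUID so "static" DHCP assignments survive a reinstall
    DUIDType=link-layer
    DUIDRawData=00:01:02:03:04:05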

systemd Integrates Itself too Deeply

systemd is an init (PID 1) replacement process. Deep integration is its job.

Okay, I get it: systemd as a whole replaces “too many” established packages. As an example, it contains replacements for one or more of the following: dhcpcd/dhclient, resolv.conf manipulation tools, ntpd, timezone management (more on this in a minute), syslog (we’ve touched on this), hostname management, cron, and probably a dozen other things that I haven’t thought about while writing this post.

I would argue this isn’t strictly a bad thing. Is competition against current DHCP clients particularly egregious? I’d think not–you can still use them if you like. systemd-resolved takes much of the guesswork out of configuring a DHCP client and its helpers to properly update resolv.conf. NTP clients are in the same ballpark–systemd-timesyncd is entirely opt-in. syslogging, well, we’ve touched on that. And so on goes the list.

Of course, cron replacement seems to be a particularly touchy point. I don’t really know why, because many of the DIY distributions like Arch and Gentoo don’t actually ship cron by default. You have to install one yourself, then configure it. Although I’ve never made much use of systemd timers, I will say that when shipping software targeting systemd, if I also ship a systemd timer, I no longer have to worry about whether someone has cron correctly set up and configured on their system at all. This means that if they deploy a bare container image (say, LXD) containing a recent Debian or Ubuntu image, they can install my software and expect it to perform periodic tasks as required without any further intervention. systemd timers also do a bit more than cron; a minimal pair might look like the sketch below. Suggested reading: systemd.timer(5).
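As promised, a minimal service/timer pair–every name and path here is hypothetical:

    # /etc/systemd/system/mytask.service
    [Unit]
    Description=Hypothetical periodic task

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/mytask

    # /etc/systemd/system/mytask.timer
    [Unit]
    Description=Run mytask daily

    [Timer]
    OnCalendar=daily
    Persistent=true    # catch up after downtime, something cron needs anacron for

    [Install]
    WantedBy=timers.target

Enable it with systemctl enable --now mytask.timer, and the service itself never needs to know it’s scheduled.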

And, well, let’s be honest. The crontab syntax is just a little eccentric. Sure, it has its charm, and it’s very compact and easy to read (ahem–once you learn it), but are we defending cron on merit or by way of inertia? Contemplate that question for a while before you continue.

So: Timezones. What exactly does systemd need to manipulate the system timezone for? The short answer is that it doesn’t. The long answer is that the traditional way to configure the system timezone was either to let the installer do it or to manually place a symlink at /etc/localtime pointing to the appropriate zoneinfo file under /usr/share/zoneinfo. systemd itself doesn’t actually manipulate this file, but it does include a tool (timedatectl) that does this for you. Admittedly, this tool does many other things, but it also configures the local timezone for you. Is it worth replacing manual invocation of ln(1)? Probably not, but it’s not exactly doing anything new or particularly troublesome either.
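The two approaches, side by side (substitute your own zone):

    # the traditional way
    ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime
    # the systemd way
    timedatectl list-timezones | grep -i chicago
    timedatectl set-timezone America/Chicago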

systemd is Pushing Toward a Monoculture

I won’t deny this is a real risk. As of this writing, 8 of the top 10 Linux distributions according to DistroWatch.com use systemd as their sysvinit replacement.

(Ignoring FreeBSD, because I know someone will say “but there’s 11 on that list!”–even though it’s not Linux.)

This is both good and bad. I’ll start with the bad.

Any time there’s a software monoculture, particularly one controlled by a comparatively small number of people, there is a real danger of locking ourselves into a specific mindset. The irony is not lost on me that this is also the reason for one of the chief complaints against systemd: traditionalists who cling to their archaic script-based sysvinit lock themselves into the mindset that init should only be done with scripts. systemd, by virtue of its market penetration, presents a quandary: we may eventually conclude that systemd is the only way to do things. This is bad, but the fallout would be comparatively minor versus, say, macOS, which has no other system besides launchd (from which systemd drew substantial inspiration).

For one, I imagine there will always be distributions using alternatives to systemd, even if only as a token gesture to appease the naysayers. Gentoo, while it supports systemd as an alternative init system, uses OpenRC by default. Slackware has options to use a traditional sysvinit or OpenRC. Devuan formed as a consequence of Debian switching to systemd (although I believe Debian users can still choose among other inits). Void Linux, in what is probably one of the most novel approaches of a recent distribution, thumbs its nose at convention and elects to use runit instead. While it’s true that most distributions–arguably serving the plurality of users–have standardized on systemd, I don’t believe there’s any real risk that systemd will become a true monoculture.

As a developer, I think systemd is a good thing for a couple of reasons. First, I can distribute unit files with my software that will initialize it exactly as I intended, and they’ll work across any system running systemd. Provided a fairly recent kernel is in place, I have access to each of the features (see above) that can be used to harden running processes, and I don’t need to write distribution-specific initscripts to ensure the process(es) start up as I’d expect. Maintainers can be happier too, because they need only copy out whatever unit files were included with the upstream distribution, patch them as they see fit (if necessary–usually not), and go about their business. Second, if I target a system with systemd, I don’t have to worry about specialized packages like supervisord. Instead, I can be reasonably assured that the process supervisor for the entire system will do exactly what I want. Process fails? It’ll restart. Complex dependency chain when distributing microservices? No big deal; systemd will manage that for me. The sketch below covers both.
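Here’s what I mean–automatic restart and dependency ordering in one unit. The unit and binary names are invented, and the postgresql.service dependency is purely illustrative:

    [Unit]
    Description=Hypothetical microservice
    Wants=network-online.target
    After=network-online.target postgresql.service
    Requires=postgresql.service

    [Service]
    ExecStart=/usr/local/bin/mysvc
    Restart=on-failure    # the supervisor half: crashed processes come back
    RestartSec=2

    [Install]
    WantedBy=multi-user.target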

The best part? All of that comes for free.

Does this mean I don’t think we can do better? Of course not. I’ve heard it stated before that, paraphrasing, “systemd is a disaster, but whatever comes after systemd will be better than what we have now.”

There may be some truth to such a statement, but I don’t think the statement itself is necessarily a reflective (or is that reflexive?) truth either. systemd can be improved upon, sure, but it exposes a large feature set that once required special tooling (C wrappers, anyone?). More importantly, systemd can be used now. Not next year. Not in 5 years. Not in a decade. Now.

I think the push-back against systemd is potentially dangerous, because it risks frightening off people who might come into the field with new ideas. Seeing how Poettering has been attacked personally (he’s even received death threats–yes, really), they may decide it’s not worth the trouble. That would be criminal.

If any time a new idea surfaced, we immediately bowed to pressure and confessed the naysayers were right before so much as beginning the journey, we’d still be a tribalistic people wandering the plains.

systemd was Written by Lennart Poettering–Just like PulseAudio

You got me. I have nothing else to say.

No, really. I’ll be honest–I’ve seen this argument before, and I’ve seen it presented as a straight-faced counter to explain why systemd is so awful. This argument is anything but objective; it seeks to paint systemd based on the personality (or personalities) behind the project and on Poettering’s previous work. I’m not sure this is a particularly good argument, because PulseAudio does actually resolve some long-standing issues with Linux audio.

If you don’t know why, then it’s plausible you’ve never dug too deeply into PulseAudio before. But, to humor you: if I have multiple audio cards outputting to multiple devices, I can easily switch between them (think speakers and headsets) by changing the sink output from the mixer. That’s something you can’t easily do in Windows without opening the sound options and fiddling with default output devices. In Pulse, it’s literally three clicks: open the mixer, right-click the application, and select the output device. (And if three clicks is two too many, it’s scriptable; see below.)
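The sink-input index and sink name below are placeholders–pactl list will show you yours:

    pactl list short sinks                                   # enumerate outputs
    pactl list short sink-inputs                             # enumerate playing streams
    pactl move-sink-input 42 alsa_output.usb-headset-stereo  # move stream 42 to the headset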

Let’s be honest: “I hate PulseAudio” is absolutely not a valid argument against systemd. It’s intellectually lazy. It’s strawmanning.

Conclusion

systemd’s criticisms are certainly not without their merits, and I think it’s worth looking at its deficiencies in the context of what it does right as well as what it does wrong. systemd isn’t perfect–no software is–but I think there’s an argument to be made that sysvinit and sysvinit-compatible init systems are long in the tooth. It’s good to see that there are distributions exploring alternatives (again, Void Linux) and that many others have standardized on an init system that solves long-standing issues with process supervision and dependency resolution.

Once upon a time, I relied on supervisord to manage multiple processes and to provide some guarantees that if an application failed, it would be restarted. Before that, I relied on DJB’s daemontools (hello qmail!). Each of these solved deficiencies that existed in traditional sysvinits–and did a darn good job of it. Having said that, I think it’s time PID 1 finally took control over process life cycle management. Windows has had this for a long time. It’s time Linux did too.

***

Linux on the Lenovo Y740 Legion (2019)

It’s easy.

No, really, it is. There are a couple of gotchas and some minor inconveniences (probably self-induced in my case), but provided you didn’t do anything particularly unusual with the system configuration at purchase, it should work.

First, I want to preface this with a brief overview of my configuration. I selected one with an NVMe SSD for its primary drive and a mechanical SATA drive for the secondary. I did not select one with Optane, and for good reason–but I’ll get to that in a moment. All things considered, it’s a fairly banal, boring configuration, with the exception of some features new to the 2019 lineup (notably the Corsair RGB keyboard and configurable fan LEDs). Interestingly, the behavior of this system’s EFI isn’t especially novel or noteworthy. It just works.

Caveat emptor: The configuration I discuss in this post may not work for you. I made decisions specific to how I wanted to use this system and performed some tasks manually to avoid overwriting defaults that shipped with the laptop. I’m also using Arch; other distros may present challenges specific to the software they ship or the utilities they package. Always back up any important files, and do not perform these steps if you’re unwilling to lose data. I didn’t, but I’d only just gotten the laptop a couple of days prior; had I been using it for some months, I might have performed these steps differently. Some may also question why I didn’t perform a clean install of Windows; I considered it, but I didn’t feel the need to do so.

Now to the meat of the process: before I started, I made certain to have two bootable USB sticks available: one with the Arch ISO image, and the other with a bootable Windows 10 installation created via the Media Creation Tool. The latter was in my pile of tools in the event things went south and I needed to reinstall completely.

When booting to Linux on the Y740, you’ll note that the NVMe drive is not visible from Linux. This appears to be due to the storage configuration in BIOS. By default, it’s set to Intel’s Rapid Storage Technology; switching it to AHCI resolves the issue. One of the curious things to note when changing this configuration is the ominous warning that applying a different setting will delete all data on attached disks. I found this isn’t the case, but this is also why I’d recommend against selecting a model with Optane installed. I’m not certain on this point–you should certainly do your own research–but I believe with Optane installed, BIOS transparently configures it as a drive cache. Changing this on such a system may cause data corruption which is possibly what the warning implies. (The help text for this setting also mentions it’s specifically for Optane-equipped systems, hence my speculation.)

Once the drives were configured to use AHCI, the NVMe disk was accessible from Arch, and I proceeded to image it to the mechanical storage drive (using dd, of course; a sketch follows). This image was then moved to permanent storage on my other systems in the event I did something incredibly stupid.
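Device names here are from my system–check lsblk first, and be very sure about if= and of= before pressing enter:

    lsblk                                    # confirm which device is which
    mount /dev/sda2 /mnt                     # the mechanical storage drive (yours may differ)
    dd if=/dev/nvme0n1 of=/mnt/y740-nvme.img bs=4M status=progress conv=fsync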

I look at it this way: If I didn’t have it available, I can almost guarantee I probably would have done something stupid. Always keep a backup!

(Speaking of stupid: This section is somewhat intentionally out of order but necessary for the story to flow; read below for potential video issues, because you will encounter them.)

Now, at this point, I had two choices. My initial thought was to partition the drives accordingly and reinstall Windows completely. This would have been the ideal situation, but I wanted to save some time (and use the system in its stock state, minus some annoying cruft like McAfee). So the next choice was to shrink the volumes on the SSD and the storage drive to roughly half. I gave Windows somewhat more storage, because it’s Windows, and because I plan to use this as a casual gaming system in addition to doing work. Doing so is easy enough: to resize the NTFS volumes, go to Computer Management -> Storage -> Disk Management. Then you’re only a reboot away from getting started.

This is the easy part.

With the partitions resized and a USB stick in hand, I feverishly pressed F12 to bring up the boot device selection screen–and promptly noticed that the USB stick wasn’t anywhere to be found. After unplugging it and plugging it back in, the BIOS appeared satisfied enough to present it, and off I went. I didn’t notice this with another stick I was using, and I had some difficulty replicating the problem. I’m not sure if this is a fault with the USB flash drive I had the Arch ISO written to or whether it’s just an idiosyncrasy of this BIOS. Either way, things appeared to work great…

Until the stick booted, that is.

Apparently the nouveau drivers that ship with Arch don’t work particularly well with the GeForce RTX 2060. I was greeted with what looked like vague video corruption and a line roughly one pixel high that appeared to be the bottom strip of whatever text was printed to the screen. Bummer. Rebooting, pressing F2, and getting into BIOS to examine whatever other configuration might help seemed to be my only salvation. Without much clue what else to pick, I noticed the graphics configuration had two states: discrete graphics (that is, the NVIDIA card) and “switchable graphics.” I knew from helping my girlfriend with her now-crippled Acer that the “switchable graphics” setting likely allowed the system to select between the integrated Intel UHD graphics on the CPU die and the discrete (NVIDIA) card; my theory was that choosing it would allow Linux to boot using the Intel chipset, hopefully avoiding the video corruption immediately after the kernel loaded.

It worked, and from here we could progress.

The Arch installation was fairly pedestrian from this point: set up the free space with partitions (I went with /boot and root on the SSD, although I should have merged them–more on this in a moment–and the mechanical drive got a /storage mount point and swap), format as appropriate, install, configure, and… done!

Just kidding. That last part should read: cry while you figure out how you want to set up your boot loader. You see, I’ve rarely used EFI systems outside virtualization. All of my systems are pre-2014-ish, and the one or two I’ve had the (dis)pleasure of poking at were all Windows machines. So, what are we to do?

(Aside: Wiping my girlfriend’s system and installing Linux would probably end with my face on a milk carton.)

First things first: we need to figure out the time. No, I don’t mean how long the whole process had taken up to this point! I quite literally mean the time. One of the pain points of dual booting Windows and Linux is how to handle the system clock. Once upon a time (sorry), Linux would begrudgingly chug along with the system clock configured to local time. This is a terrible idea, and I still have no idea why Windows insists on it, but fortunately, you can change Windows’ behavior easily enough (see below). However, doing so requires the foresight to change this setting before getting started–something I didn’t have, because I’m stupid and it only occurred to me when I got to the point of configuring the clock. Perhaps you’ll be luckier. You’re reading this, after all!
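For reference, this is the well-known registry tweak, run from an elevated command prompt–the usual registry-editing caution applies:

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f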

Now, where were we? Oh, right! The bootloader. This is one of the deficiencies of using Linux on newer systems. Generally speaking, EFI, UEFI, or whatever your motherboard manufacturer has decided to call it requires special attention that we Linux users haven’t had to give to boot loading since the 2000s. No, I wouldn’t use grub either–it does apparently have EFI support, but I have painful memories of getting it working under VirtualBox with EFI enabled. Perhaps this is a VirtualBox-specific issue, but I’m inclined to believe we’re better off using tools designed for a specific purpose. In this case, rEFInd.

I won’t pretend to be a subject matter expert; I’ve never used rEFInd before. The Arch Linux wiki has fantastic resources that can help you get started, but the thing I noticed with my particular configuration is that special attention had to be paid to configuring the OS declarations in esp/EFI/refind/refind.conf. If you’re following along at home, you should at least read the wiki’s section on UEFI and its entry on rEFInd.

For my system, I did not follow the automatic installation with refind-install, because I didn’t want to overwrite the default EFI boot entry. Instead, I followed the manual installation process by copying rEFInd into the EFI system partition. Note that this alone is not enough to get rEFInd to work with the Y740’s BIOS. I’m not certain whether this is due to a step I skipped earlier in the installation or whether it’s an artifact of the manual install, but the only way I could get the Lenovo BIOS to see rEFInd, even though it was sitting in the EFI system partition, was to configure the boot order via efibootmgr --bootorder 0004,0009,2001,2002,2003. I assume this should just work, but nothing else I did would force the BIOS to recognize rEFInd (entry 0009 on my system). Changing the boot order did work, however: with Windows (0004) as the primary–temporarily, at least–and rEFInd (0009) as the secondary, the BIOS finally recognized rEFInd, allowing me to change the UEFI boot order accordingly.
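A sketch of the efibootmgr incantations involved; the entry numbers are from my machine and will differ on yours, and the --create line is only needed if the firmware never made an entry at all:

    efibootmgr                      # list existing entries and the current order
    efibootmgr --create --disk /dev/nvme0n1 --part 1 \
        --loader '\EFI\refind\refind_x64.efi' --label rEFInd
    efibootmgr --bootorder 0004,0009,2001,2002,2003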

I also discovered that I could not get rEFInd to recognize the /boot/refind_linux.conf configuration. I’ve not investigated why, but I suspect it’s due either to my choice of partitioning (remember, I’m using a separate /boot and root!) or to my misunderstanding of rEFInd. However, configuring Arch via esp/EFI/refind/refind.conf has worked just fine (a sketch follows). I should also note that by esp, here and in the paragraphs above, I mean the EFI System Partition. I mounted this (/dev/nvme0n1p1 on my system) to /boot/efi for ease of access. I’d suggest doing the same if you plan on going down this route.
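The stanza I mean looks roughly like this–the volume label, PARTUUID, and kernel file names are placeholders; consult the rEFInd documentation for the exact semantics of volume:

    # appended to esp/EFI/refind/refind.conf
    menuentry "Arch Linux" {
        volume  "ARCH_BOOT"
        loader  /vmlinuz-linux
        initrd  /initramfs-linux.img
        options "root=PARTUUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee rw"
    }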

Finalizing the Linux installation was relatively painless once the bootloader struggle was resolved, and I had NFS plus KDE working more or less out of the box. The system does quite well under Linux thus far, although I’ve yet to configure wireless networking. I suspect the wireless NIC will work fine: I can see the card in ip link, and the ridiculously named “Killer 1550i” cards are, as far as I know, simply rebranded Intel 9560NGW chips.

There is some unfinished business, and I’ve encountered at least one or two teething problems. First, this configuration doesn’t address secure boot; I was primarily focused on getting Linux working rather than wrangling secure boot. I’m hopeful this won’t be especially difficult, and from what I’ve read the process appears relatively straightforward. I’m planning on going the machine owner key route with regard to kernel signing, with further plans to automate the process in a pacman hook (rough sketch below). I’ve also noticed the wired network didn’t come up automatically in spite of being enabled via systemd (I prefer systemd network configurations under /etc/systemd/network over NetworkManager), and there’s a large amount of i2c_hid spam in the journal. I suspect this may have something to do with some of the peripherals in the system (touchpad? wireless mouse?).
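I haven’t built this yet, so treat the following as an untested sketch–the hook name and key paths are placeholders:

    # /etc/pacman.d/hooks/90-sbsign.hook
    [Trigger]
    Operation = Install
    Operation = Upgrade
    Type = Package
    Target = linux

    [Action]
    Description = Signing kernel for Secure Boot
    When = PostTransaction
    Exec = /usr/bin/sbsign --key /etc/efi-keys/db.key --cert /etc/efi-keys/db.crt --output /boot/vmlinuz-linux /boot/vmlinuz-linux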

I’ll eventually write a part two once I get some of the remaining issues resolved, along with documenting whatever else I may encounter. If you’re using Linux and just bought one of these systems, please don’t feel overwhelmed by the installation process. Just be cautious, think things through, and have a plan of attack. Don’t forget to back things up, too. Oh, and as much as I don’t like Windows 10’s Microsoft account option, I would recommend logging in with one at least once, because it ties that machine’s software license to your account. If you decide to reinstall Windows, this is a good thing, in my opinion!

***

Remediation Service: Windows 10’s Dirty Secret

I don’t use Windows often. Much of my time is spent in Arch Linux, except on the rare occasion I have an interest in doing something that requires Windows (typically gaming or Reason). Imagine my surprise when I booted into Windows a week or two ago and noticed a series of processes consuming a significant amount of disk bandwidth and appearing to scan the entirety of a) installed applications and b) everything in my user profile directory.

It turns out that sometime late last year (November 2018, possibly earlier), Microsoft released a series of patches for “reliability improvements” that include the “remediation service,” a component that performs a few interesting tasks. Notably, this includes a service that “may compress files in your user profile directory to help free up enough disk space to install important updates.” If you’ve seen sedlauncher.exe in Windows Resource Monitor, it belongs to the remediation service and is the tool designed to scan your user profile directory, presumably for files that may be candidates for compression.

sedlauncher.exe’s malware-like behavior stems from the fact that a) it isn’t strictly launched when Windows Update requires additional space, and b) it performs a thorough scan of everything in the user profile directories (pidgin chat logs, pictures, media, desktop files–everything). I assume it is collating a list of files it would compress in the event Windows Update runs out of space, based on some heuristic, but what perplexes me is that it is impossible to tell precisely how well a file will compress until the file is actually compressed. Yes, there are a few heuristics you could apply (is it a file type known to compress well?), but these don’t always hold true: imagine a virtual machine image that contains a large number of compressed archives. VM images generally do compress well, but only because the contents of the image aren’t typically compressed. This also raises the question: why scan for compression targets when there’s already plenty of disk space available to Windows Update? What exactly is this tool doing?

Most guides online direct visitors to one of two solutions: remove the applicable updates or disable the Windows Remediation Service. The former isn’t sustainable, because the updates will eventually be applied, and because Windows’ stellar history of absolutely no security flaws (sarcasm) strongly suggests skipping updates isn’t wise. Curiously, the latter option–disabling the culprit service–appears to be a foolhardy solution as well, because sedlauncher.exe returns, diligently, to its previous state of scanning everything it can access. It’s likely the Windows Remediation Service scanners are launched via the task scheduler, but I’ve yet to find exactly where or how.

There is one particular solution that might work. Unlike most other core Windows tools, sedlauncher.exe is not contained in the Windows root. Instead, it resides under C:\Program Files\rempl. This rather bizarre choice suggests Microsoft has a keen interest in packaging this tool separately from the operating system proper, or wishes to disguise it as an installed application to keep it from prying eyes. You decide.

I’ve found renaming sedlauncher.exe to something else appears to work as a temporary solution (but only temporary), with the appropriate caveats applied (exercise caution, as this may break things; a sketch follows). I expect it to be reinstalled with a future update, but for now it won’t be scanning my profile directory for files to assault. Whether this works in your case (or not) is left as an exercise for the reader, but be aware it may break other parts of Windows Update. I have no idea how deep the tendrils of this telemetry run into the dark recesses of Windows 10.
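For completeness, the rename itself, from an elevated command prompt; Windows protects the file, so you’ll likely need to take ownership first (and, again, this may break parts of Windows Update):

    takeown /f "C:\Program Files\rempl\sedlauncher.exe"
    icacls "C:\Program Files\rempl\sedlauncher.exe" /grant Administrators:F
    ren "C:\Program Files\rempl\sedlauncher.exe" sedlauncher.exe.bak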

***