https://learn.microsoft.com/en-us/windows-hardware/drivers/k...
In this way, NT is similar to Unix in that many things are just entries in one global VFS-like layout (the object manager namespace).
Paths that start with drive letters are called a "DOSPath" because they only exist for DOS compatibility. But unfortunately, even in kernel mode, different subsystems might still refer to a DOSPath.
PowerShell also exposes various things as "drives"; I'm pretty sure you could create your own custom drive as well for your custom app. For example, by default there is the 'hklm:\' drive:
https://learn.microsoft.com/en-us/powershell/scripting/sampl...
Get-PSDrive/New-PSDrive
You can't access certificates in linux/bash as a file path for example, but you can in powershell/windows.
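If you want to try the custom-drive idea, here's a minimal sketch using New-PSDrive (the drive name "Proj" and the root folder are made up for illustration):
New-PSDrive -Name Proj -PSProvider FileSystem -Root C:\Projects   # expose C:\Projects as Proj: (session-scoped by default)
Get-ChildItem Proj:\                                              # browse it like any other drive
Get-PSDrive                                                       # list all drives, including HKLM:, Env:, Cert:, ...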
I highly recommend getting the NtObjectManager PowerShell module and exploring around:
https://github.com/googleprojectzero/sandbox-attacksurface-a...
ls NtObject:\
I think you could make this same statement about *nix, except it's 10 years _worse_ (1970s). I strongly prefer the FHS over whatever MS thinks it's doing, but let's not pretend that the FHS isn't a pile of cruft (/usr/bin vs /bin, /etc for config, /media vs /mnt, etc.).
Why get upset over /media vs /mnt? You do you, I know I do.
For example, the Step CA docs encourage using /etc/step-ca/ (https://smallstep.com/docs/step-ca/certificate-authority-ser...) for configuration of their product. Normally I would agree, but as I am manually installing this thing myself and not following any of the usual docs, I've gone for /srv/step-ca.
I think we get enough direction from the ... "standards" ... for Unix file system layouts that any reasonably incompetent admin can find out which one is being mildly abused today and get a job done. On Windows ... good luck. I've been a sysadmin for both platforms for roughly 30 years and Windows is even odder than Unix.
Why is the root of one of my drives `/` while the roots of my other drives are subdirectories of that first drive?
The mechanism is generic and pretty. The specifics of how it's often used are legacy-driven. Nothing in unix really depends on the specifics.
The point is that any filesystem can be chosen as the OS’s root.
The root of each other filesystem (there can be multiple per drive) is wherever you tell that filesystem to be mounted, or in your automounter's special directory, usually /run/media, where it gets a unique path based on the serial or device path.
* clarity
And anyway, there has to be a naming scheme; the naming scheme is abstracted from the storage scheme.
It's not necessarily the case that your /var and /usr are different drives, though they can be in a given installation.
Maybe some Windows wizards could get around the mandatory restrictions, but an average Linux user can get around the optional ones.
Of course there are alternatives but the resource-as-stream metaphor is so ubiquitous in Unix, it’s hard to avoid.
But anyway, ignoring the sarcasm my question was implying: if this is totally customizable in Windows, why does Microsoft still ship C: (or whatever other letter) as the default name for the first user partition? Show it to legacy programs with hardcoded values to maintain compatibility, but at least in Explorer and MS-controlled software, use some more modern/legible name.
Zip disks presented themselves with drive letters higher than B (usually D: assuming you had a single hard disk). However, some (all?) Zip drives could also accept legacy 3.5" floppies, and those would show up as B.
Zip drives were never compatible with 3.5" floppies, and always were enumerated using the first available external storage letter (ie, D: in typical machines).
“The file system itself is 128 bit, allowing for 256 quadrillion zettabytes of storage. All metadata is allocated dynamically, so no need exists to preallocate inodes or otherwise limit the scalability of the file system when it is first created. All the algorithms have been written with scalability in mind. Directories can have up to 248 (256 trillion) entries, and no limit exists on the number of file systems or the number of files that can be contained within a file system.”
https://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6qth/inde...
Don’t want to hit the quadrillion zettabyte limit..
It took me a minute to figure out that this was supposed to be 2^48, but even then that's ~281 trillion. What a weird time for the tera/tebi binary prefix confusion to show up, when there aren't even any units being used.
> I've completely missed this and would like to know more, may I be so bold as to request a link?
"A way out for a.out" https://lwn.net/Articles/888741/
"Linux 6.1 Finishes Gutting Out The Old a.out Code" https://www.phoronix.com/news/Linux-6.1-Gutting-Out-a.out (with links to two earlier articles)
https://lwn.net/ml/linux-kernel/202203161523.857B469@keescoo...
While I understand the appeal of software longevity, and I think it's a noble and worthy pursuit, I also think there is an under-appreciated benefit in having unmaintained software less likely to function on modern operating systems. Especially right now, where the concept of serious personal computer security for normal consumers is arguably less than two decades old.
But Gary Kildall didn't come up with the idea of drive letters in CP/M all on his own, he was likely influenced by TOPS-10[1] and CP/CMS[2], both from the late 60s.
[0] https://en.wikipedia.org/wiki/86-DOS
(And mostly, I'm talking about using drive letters rather than something like what unix does. C being the first fixed media device, may seem more arbitrary now, but it was pretty arbitrary even in the floppy era.)
Or Wine, which is less reliable but funnier.
Wine itself doesn't run on Windows AFAIK.
It does, if you use an old enough version of windows that SUA is available :). I never managed to get fontconfig working so text overlapped its dialogue boxes and the like, but it was good enough to run what I needed.
Mind you, Wine might lose that too ...
You'd expect Microsoft to drop support for things when they don't make money for them anymore, or for some other calculated cost reason, but Microsoft is supporting old things few people use even when it costs them a performance/security edge.
Though personally, while I care a lot about using old software on new hardware, my desire to use new software on old hardware only goes so far back and 32 bit mainstream CPUs are out of that range.
Open source isn't where I'd expect abandonware to happen.
Depends on how much power it's wasting, when we're looking at 20 year old desktops/laptops.
> 32 bit is still valid for many applications and environments that don't need >3GB~ ram.
Well my understanding is that if you have 1GB of RAM or less you have nothing to worry about. The major unresolved issue with 32 bit is that it needs complicated memory mapping and can't have one big mapping of all of physical memory into the kernel address space. I'm not aware of a plan to remove the entire architecture.
It's annoying for that set of systems that fit into 32 bits but not 30 bits, but any new design over a gigabyte should be fine getting a slightly different core.
> For example, routers shouldn't use 64bit processors unless they're handling that much load, die size matter there
I don't think that's right, but correct me if I missed something. A basic 64 bit core is extremely tiny and almost the same size as a 32 bit core. If you're heavy enough to run Linux, 64 bit shouldn't be a burden.
Linux's goal is only code compatibility - which makes complete sense given the libre/open source origins. If the culture is one where you expect to have access to the source code for the software you depend on, why should the OS developers make the compromises needed to ensure you can still run a binary compiled decades ago?
NTVDM leverages virtual 8086 mode which is unavailable while in long mode.
NTVDM would need to be rewritten. With alternatives like DOSBox, I can see why MSFT may not have wanted to dive into that level of backwards compat.
NTVDM as it existed in Windows NT (3.1 through 10) for i386 leveraged V86 mode. NTVDM on Windows NT (e.g. 4.0) for MIPS, PowerPC, and Alpha, on the other hand, already had[1] a 16-bit x86 emulator, which was merely ifdefed out of the i386 version (making the latter much leaner).
Is it fair of Microsoft to not care to resurrect that nearly decade-old code (as of Windows XP x64 when it first became relevant)? Yes. Is it also fair to say that they would not, in fact, need to write a complete emulator from scratch to preserve their commitment to backwards compatibility, because they had already done that? Also yes.
[1] https://devblogs.microsoft.com/oldnewthing/20060525-04/?p=31...
> Lucovsky was more fastidious than Wood, but otherwise they had much in common: tremendous concentration, the ability to produce a lot of code fast, a distaste for excessive documentation and self-confidence bordering on megalomania. Within two weeks, they wrote an eighty-page paper describing proposed NT versions of hundreds of Windows APIs.
and chapter 6 mentions the NTFS spec being initially written in two weeks by Miller and one other person on Miller’s sailboat.
> Maritz decided that Miller could write a spec for NTFS, but he reserved the right to kill the file system before the actual coding of it began.
> Miller gathered some pens and pads, two weeks’ worth of provisions and prepared for a lengthy trip on his twenty-eight-foot sailboat. Miller felt that spec writing benefited from solitude, and the ocean offered plenty of it. [...] Rather than sail alone, Miller arranged with Perazzoli, who officially took care of the file team, to fly in a programmer Miller knew well. He lived in Switzerland.
> In August, Miller and his sidekick set sail for two weeks. The routine was easy: Work in the morning, talking and scratching out notes on a pad, then sail somewhere, then talk and scratch out more notes, then anchor by evening and relax.
(I’m still relatively confident that the Win32 spec was written in 1990; at the very least, Showstopper! mentions it being shown to a group of app writers on December 17 of that year.)
I was inspired by the Dr. Seuss book, "On Beyond Zebra."
For example, Windows 11 has no backwards-compatibility guarantees for DOS, but the operating systems that they do have backwards-compatibility guarantees for do.
Enterprises need Microsoft to maintain these for as long as possible.
It is AMAZING how much inertia software has that hardware doesn’t, given how difficult each are to create.
Windows 10 no longer plays the first Crysis without binary patches for instance.
Windows earns money mainly in the enterprise sector, so that's where the backwards-compatibility effort is. Not gaming. That's just a side effect.
Anecdotally, you can run 16-bit games (Swing; 1997) on Windows, but only if you patch 2-3 DirectX-related files.
And with win11, Microsoft stopped shipping 32bit versions of the OS, and since they don't support 16bit mode on 64bit OSes, you actually can't run any 16bit games at all.
As a counter-anecdata, last week I ran Galapagos: Mendel's Escape with zero compat patches or settings, that's a 1997 3D game just working.
But that's a pretty low bar - previously Windows went to great lengths to preserve backwards compatibility even for programs that are out of spec.
If you just care about keeping things working if they were done "correctly" then the average Linux desktop can do that too - both for native Linux programs (glibc and a small list of other base system libraries have strong backwards compatibility) as well as for Windows programs via Wine.
Also, some programming languages have a setting to export code compatible with just Baudot characters: http://t3x.org/nmhbasic/index.html
So, you could feed it from paper tape and maybe Morse too.
Wait what? There were devices called teletypes in the Victorian era (ending in 1901)? What were they doing?
Of more interest, to myself at least, teleprinters have a long history:
* Early developments (1835–1846)
* Early teleprinters (1849–1897)
Of course software developers are still stuck with 80 column conventions even though we have 16x9 4K displays now… Didn’t that come from punchcards ???
80 characters per line is an odd convention in the sense that it originated from a technical limitation, but is in fact a rule of thumb perfectly familiar to any typesetting professional from long before personal computing became widespread.
Remember newspapers? Laying the text out in columns[0] is not a random quirk or result of yet another technology limitation. It is the same reason a good blog layout sets a conservative maximum width for when it is read on a landscape oriented screen.
The reason is that when each line is shorter, the entire thing becomes easier to read. Indeed, even accounting for legibility hit caused by hyphenation.
Up to a point, of course. That point may differ depending on the medium and the nature of the material: newspapers, given they deal with solid plain text and have other layout concerns, limit a line to around 50 characters; a book may go up to 80 characters. Given a program is not a relaxed fireside reading, I would place it closer to the former, but there are also factors and conventions that could bring acceptable line length up. For example, indentation and syntax highlighting, or typical identifier length (I’m looking at you, CNLabelContactRelationYoungerCousinMothersSiblingsDaughterOrFathersSistersDaughter), or editor capability to wrap lines nicely[1].
Finally, since the actual technical limitation is gone, it is actually not such a big deal to violate the line length rule on occasion.
[0] Relatedly, codebases roughly following the 80 character line length limitation unlock more interesting columnar layouts in editors and multiplexers.
[1] Isn’t the auto-wrap capability in today’s editors good enough that restricting line length is pointless at the authoring stage? Not really, and (arguably) especially not in case of any language that relies on indentation. Not that it could not be good enough, but considering code becomes increasingly write-only it seems unlikely we will see editors with perfect, context-sensitive, auto-wrap any time soon.
Code isn’t prose. Code doesn’t always go to the line length limit then wrap, and prose doesn’t need a new line after every sentence. (Don’t nitpick this; you know what I’m saying)
The rules about how code and prose are formatted are different, so how the human brain finds the readability of each is necessarily different.
No code readability studies specifically looking for optimal line length have been done, to my knowledge. It may turn out to be the same as prose, but I doubt it. I think it will be different depending on the language and the size of the keywords in the language and the size of the given codebase. Longer keywords and method/function names will naturally lead to longer comfortable line lengths.
Line length is more about concepts per line, or words per line, than it is characters per line.
The 80-column limit was originally a technical one only. It has remained because of backwards compatibility and tradition.
of typography and not be overly wide, lest my saccadic
motion leads my immersion and comprehension astray.
However when I read code I do not want to scan downwards to complete the semantics of a given expression because that will also break my comprehension and so when a line of code is long I'd prefer for it to remain long unless there are actually multiple clauses
and other conditionally chained
semantic elements
that are more easily read alone
Except 99.9% of the time it becomes 50 characters with a 32pt font which occupies ~25% of the horizontal space on a 43".
"Good" my ass.
Sometimes I would visually separate a short bit of code from its surroundings (and usually add a comment on top) to make it clear that it is a controversial bit that needs attention of the reader. The same mechanism applies in less extreme cases, lifting baseline legibility.
Speak for yourself, all my projects use at least 100 if not 120 column lines (soft limit only).
Trying to keep lines at a readable length is still a valid goal though, even without the original technical limitations - although the bigger win there is to keep expressions short, not to just wrap them into shorter lines.
Linting and autoformats help here... just allowing any length of line in code is just asking to get pwned at some point.
> even though we have 16x9 4K displays now
Pretty much no normal person uses those at 100% scaling though, so unless you're thinking of the fellas who use a TV for a monitor, that doesn't actually help so much:
- 100% scaling: 6 panels of 80 columns fit, no px go to waste
- 125% scaling: 4 panels of 80 columns fit, 64 px go to waste (8 cols)
- 150% scaling: 4 panels of 80 columns fit, no px go to waste
- 175% scaling: 3 panels of 80 columns fit, 274 px go to waste (34 cols)
- 200% scaling: 3 panels of 80 columns fit, no px go to waste
This sounds good until you need any additional side panels. Think line numbers, scrollbars, breakpoint indicators, or worse: minimaps, and a directory browser. A minimap is usually 20 cols/panel, a directory browser is usually 40 cols. Scrollbar and bp-indicator together 2 cols/panel. Line numbers, probably safe to say, no more than 6 cols/panel.
With 2 panels, this works out to an entire additional panel in overhead, so out of 3 panels only 2 remain usable. That's the fate of the 175% and 200% options. So what is the "appropriate" scaling to use?
Well PPI-wise, if you're rocking a 32" model, then 150%. If a 27" model, then 175%. And of course, given a 22"-23"-24" unit, then 200%. People of course get sold on these for the "additional screen real estate" though, so they'll instead sacrifice seeing the entire screen at once and will put on their glasses. Maybe you prefer to drop down by 25% for each of these.
All of this is to say, it's not all that unreasonable. I personally feel a bit more comfortable with a 100 col margin, but I do definitely appreciate when various files nicely keep to the 80 col mark, they're a lot nicer to work with side-by-side.
"That obviously means Users, so that's where the home directories are, right?"
"Well, no. And it actually means Unix System Resources"
(but historically it was in fact "user", just not in that sense)
I'm sure we'll eventually bacronym C: as well.
This will generally work with everything using the Win32 C api.
You will however run into weird issues when using .Net, with sudden invalid paths etc.
The abstraction of putting a display into a two-dimensional array of primitive cells is also not limited to teletypes. Using characters instead of picture elements (commonly shortened to pixels) is not a bad choice when all you want to do is render text, and it means that your rendering code can be much simpler. That's the case independently of the earlier technology forcing things this way.
Teletype emulators also typically have a way of using pixels as the primitive (framebuffers). GUI teletype emulators now don't, because there is a fine alternative for using pixels (the display server).
As for baffling, I mean, I type in things like 'grep' everyday which is a goofy word. I'm not even going to go into all the legacy stuff linux presents and how linux, like windows, tries hard not to break userland software.
Some apps (in this case Steam) don't ask "what is the free space at the current path" (despite, say, GetDiskFreeSpaceExW accepting a full path just fine); they cut it down to the drive letter, which causes them to display the free space of the root drive, not of the actual directory they are using, which in my case was mounted as a different partition.
What do you find weird about the directory naming structure?
It works under Windows too.
Proof:
https://winclassic.net/thread/1852/reactos-registry-ntobject...
After [copying over .dll and importing .reg files], you will already be able to open these shell locations with the following commands:
NT Object Namespace: explorer.exe shell:::{845b0fb2-66e0-416b-8f91-314e23f7c12d}
System Registry: explorer.exe shell:::{1c6d6e08-2332-4a7b-a94d-6432db2b5ae6}
If you want to add these folders in My Computer, just like in ReactOS, add these 2 CLSIDs to the following location:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace
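If you prefer doing that step from PowerShell instead of regedit, a rough sketch (run elevated; these are the same two CLSIDs quoted above, and this still assumes you've already copied the DLL and imported the .reg files):
$ns = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace'
New-Item -Path "$ns\{845b0fb2-66e0-416b-8f91-314e23f7c12d}"   # NT Object Namespace
New-Item -Path "$ns\{1c6d6e08-2332-4a7b-a94d-6432db2b5ae6}"   # System Registry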
Seems ReactOS holds some goodies even for long-time Windows users!
[0] https://pnp.github.io/powershell/cmdlets/Connect-PnPOnline.h...
I don't understand what you mean by this. I can access them "as a file" because they are in fact just files
$ ls /etc/ca-certificates/extracted/cadir | tail -n 5
UCA_Global_G2_Root.pem
USERTrust_ECC_Certification_Authority.pem
USERTrust_RSA_Certification_Authority.pem
vTrus_ECC_Root_CA.pem
vTrus_Root_CA.pem
The difference is similar to being able to do 'ls /usr/bin/ls' vs 'ls /proc/12345/...': the first is a literal file listing, the second is a way to access/manipulate the ls process (supposedly pid 12345). In Windows, certificates are not just files but parsed/processed/validated usage-specific objects. The same applies on Linux, but it is up to openssl, gnutls, etc. to make sense of that information. If openssl/gnutls had a VFS mount for their view of the certificates on the system (and GPG!!) that would be similar to cert:\ in PowerShell.
A Linux equivalent of listing certificates through the Windows virtual file system would be something like listing /proc/self/tls/certificates (which doesn't actually exist, of course, because Linux has decided that stuff like that is the user's problem to set up and not an OS API).
However, GNU/Linux does lack such an API. There is no standard API for listing certificates and private keys. All GNU/Linux provides is a list of files that may or may not contain one or more certificates and/or private keys that may or may not be related to each other. You have to go beyond the basic GNU parts to get to things like keychains.
PS Cert:\LocalMachine\Root\> ls
PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\Root
Thumbprint Subject EnhancedKeyUsageList
---------- ------- --------------------
CDD4EEAE6000AC7F40C3802C171E30148030C072 CN=Microsoft Root C…
BE36A4562FB2EE05DBB3D32323ADF445084ED656 CN=Thawte Timestamp…
A43489159A520F0D93D032CCAF37E7FE20A8B419 CN=Microsoft Root A…
92B46C76E13054E104F230517E6E504D43AB10B5 CN=Symantec Enterpr…
8F43288AD272F3103B6FB1428485EA3014C0BCFE CN=Microsoft Root C…
7F88CD7223F3C813818C994614A89C99FA3B5247 CN=Microsoft Authen…
245C97DF7514E7CF2DF8BE72AE957B9E04741E85 OU=Copyright (c) 19…
18F7C1FCC3090203FD5BAA2F861A754976C8DD25 OU="NO LIABILITY AC…
E12DFB4B41D7D9C32B30514BAC1D81D8385E2D46 CN=UTN-USERFirst-Ob… {Code Signing, Time Stamping, Encrypting File System}
DF717EAA4AD94EC9558499602D48DE5FBCF03A25 CN=IdenTrust Commer…
DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 CN=DigiCert Global …
Now do the same without a convoluted hodge-podge of a one-liner involving grep, python and cutting exact text pieces with regex.
I always love how Linux fans like to talk without any experience nor the will to get said experience.
For the Certificate provider specifically: when I think certificates and hierarchy, I think of the signing hierarchy of issuing certs. But this is not what is exposed here, just the structure of the OS cert store without context, and moving items has many more implications than inside a normal data folder. Thus I prefer certlm/certmgr.msc, as they provide some more of it.
Sometimes it feels as if they crammed too much into that idea; it's a forced concept. https://superuser.com/q/1065812/what-is-psprovider-in-powers...
Not for certs specifically (that I know of), but Plan 9 and its derivatives go very hard on making everything VFS-abstracted. Of course /proc, /sys and others are awesome, but there are still things that need their own FS view yet are relegated to just 'files'. Like ~/.cache, ~/.config and all the xdg standards. I get it, it's a standardized path and all, but what's being abstracted here is not "data in a file" but "cache" and "configuration" (more specific); it should still be in a VFS path, but what's exposed shouldn't be a file but an abstraction of "configuration settings" or "cache entries" backed by whatever thing you want (e.g. redis, sqlite, s3, etc.). The Windows registry (the configuration manager is its real name, btw) does a good job of abstracting configuration, but obviously you can't pick and choose the back-end implementation like you potentially could in Linux.
In theory, this is what dbus is doing, but through APIs rather than arbitrary path-key-value triplets. You can run your secret manager of choice and as long as it responds to the DBUS API calls correctly, the calling application doesn't know who's managing the secrets for you. Same goes for sound, display config, and the Bluetooth API, although some are "branded" so they're not quite interchangeable as they might change on a whim.
Gnome's dconf system looks a lot like the Windows registry and thanks to the capability to add documentation directly to keys, it's also a lot easier to actually use if you're trying to configure a system.
Fuse and p9 exist... If anyone wants certs by id in the filesystem, it will exist.
sure you can, /usr/share/ca-certificates tho you do need to run 'update-ca-certificates' (in debian derivatives) to update some files, like hashed symlinks in /etc/ssl/certs
there is also of course /sys|/proc for system stuff, but yes, nowhere near as integrated as windows registry
You can mount partitions under directories just like you can in Linux/Unix.
PowerShell has Add-PartitionAccessPath for this:
> mkdir C:\Disk
> Add-PartitionAccessPath -DiskNumber 1 -PartitionNumber 2 -AccessPath "C:\Disk"
> ls C:\Disk
It will persist through reboots too.
For permanently mounted drives, I'd pick symbolic links over mount points because this lets you do file system maintenance and such much more easily on a per-drive level. You can still keep everything under C:\ and treat it like a weird / on Unix, but if you need to defragment your backup hard drive you won't need to beat the partition manager into submission to make the defragment button show up for your mounted path.
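A quick sketch of that symlink approach (the paths are just examples; this needs an elevated prompt or Developer Mode):
New-Item -ItemType SymbolicLink -Path C:\Backup -Target D:\   # make C:\Backup point at the root of the D: volume
Get-ChildItem C:\Backup                                       # lists the contents of D:\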
When you create/format the partition in the GUI tools it'll actually ask if you want to assign a drive letter or mount as a path as well.
Used to be able to use these with SQL Server.... 2000.
Yea, over the years someone thought of something they wanted to do and then did it without a systematic consideration of what that level of power meant, especially as multi-user network connectivity and untrusted data became the norm.
Explorer, not so much ...
As long as your code page doesn't have gaps, that should be doable. It'll definitely confuse the hell out of anyone who doesn't know about this setup, though!
Well there goes my plan to replace all my drive letters with emojis :(
For everything else, the best advice I can offer is that you can put your own autorun config file on the root of a drive to point the drive icon to a different resource. Though the path will stay boring, the GUI will show emoji everywhere, especially if you also enter emoji in the drive label.
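A sketch of that autorun config trick (the icon file name is made up, and how far Explorer honors non-ASCII text there is something you'd have to test):
Set-Content -Path X:\autorun.inf -Value @"
[AutoRun]
icon=MyIcon.ico
label=Archive
"@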
For some reason I remember that the original xbox 360 had "drive letters" which were entire strings. Unfortunately I no longer have access to the developer docs and now I wonder if my mind completely made this up. I think it was something like "Game:\foo" and "Hdd0:\foo".
The Xenia emulator handles them with symbolic links in its virtual-file-system: https://github.com/xenia-canary/xenia-canary/blob/70e44ab6ec...
> Drives with a drive-letter other than A-Z do not appear in File Explorer, and cannot be navigated to in File Explorer.
Reminds me of the old-school ALT + 255 trick on Win9x machines where adding this "illegal trailing character" made the directory inaccessible from the regular file explorer.
I am working on a game where every player has system resources on a Linux computer. The basic idea is that some resources need to be shared or protected in some ways, such as files, but the core communication of the game client itself needs to be preserved without getting in the way of the real system environment.
I am using these abstract Unix domain sockets because they sidestep most other permissions in Linux. If you have the magic numbers to find the socket, you get access.
or find it in /proc/net/unix
It would likely break a lot of analysis tools and just generally make things very difficult.
AFAIK you need admin priviledges to play with drives in Windows.
https://www.crowdstrike.com/en-us/blog/anatomy-of-alpha-spid...
There are a few other places where they also show up, but the MotW is the most prevalent one I've found. Most antivirus programs will warn you for unusual alternate data streams regardless of what they contain.
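You can poke at those streams from PowerShell; a quick sketch (the file name is just an example):
Get-Item .\setup.exe -Stream *                    # list all streams, e.g. :$DATA plus Zone.Identifier
Get-Content .\setup.exe -Stream Zone.Identifier   # the Mark-of-the-Web; ZoneId=3 means the Internet zone
Unblock-File .\setup.exe                          # removes the Zone.Identifier stream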
ADS was originally designed to support the HFS resource fork.
CMD also has the concept of a current drive, and of a per-drive current directory. (While “X:\” references the root directory of drive X, “X:” references whatever the current directory of drive X is. And the current directory, i.e. “.”, is the current directory of the current drive.) I wonder how those mesh with non-standard drive letters.
C:\> cd /D λ:\
λ:\> cd bar
λ:\bar> cd /D C:\
C:\> echo %=Λ:%
λ:\bar
C:\> cd /D Λ:
λ:\bar>
And indeed, it looks like using `=` as a drive letter breaks things in an interesting way:
=:\> cd bar
Not enough memory resources are available to process this command.
=:\bar>
`cd` exits with error code 1, but the directory change still goes through.
With a program that dumps the NULL terminated <key>=<value> lines of the environment block, it looks like it does still modify the environment, but in an unexpected way:
Before `cd /D =:\`, I had a line that looked like this (i.e. the per-drive CWD for C:\ was C:\foo):
=C:=C:\foo
After `cd /D =:\`, that was unexpectedly modified to:
=C:==:\
Funnily enough, that line means that the "working directory" of the C drive is `=:\`, and that actually is acted upon:
=:\foo> cd /D C:
=:\>
---
You might also be interested to know that '= in the name of an environment variable' is a more general edge case that is handled inconsistently on more than just Windows: https://github.com/ziglang/zig/issues/23331
But for some reason, drive letters starting with C feel completely natural, too. Maybe it's because C is also the first note in the most widely known musical scale. We can totally afford to waste two drive letters at the start, right?
Our first home computer (1989 or 1990?) was a 386SX with a 40MB hard disk (so maybe we were bourgeois). My dad had to partition it into a 32MB C drive and an 8MB D drive, because the DOS version (3.3?) had a 32MB maximum filesystem size. It had two separate 5.25 inch floppy drives, a 1.2MB and a 360KB - although the 1.2MB drives could read 360KB disks, they couldn’t write them in a form readable by 360KB drives, or something like that. And later (circa 1991) we got a 3.5 inch floppy drive too, which became drive A, the 1.2MB became drive B, and the 360KB was relegated to drive E. The FDC that came with the computer (back then they were ISA cards, hadn’t been integrated with the motherboard yet) only supported two drives, so he had to buy a new one that supported four.
the linked source checks out. diskcopy will also do this for you if you give it source = dest.
My first contact with PCs was in 1988 and they all had HDDs and were definitely not "IBM PC" but clones. That said, that's just my experience so YMMV.
MIT, where I was at school then, had some IBM PC XTs with 10 MB hard drives, but most of their computer resources were time-sharing DEC VAX machines. You could go to one of several computer labs to get on a terminal, or even dial into them--I did the latter from my PC (the one above) using a 2400 baud modem, which was fast for the time.
We had a dumb "computer literacy" class taught in an computer lab full of PS/2 Model 25s with no hard drives, and were each issued a bootable floppy disk containing both Microsoft Works and our assignment files (word processing documents, spreadsheets, etc.), which we turned in at the end of class for grading.
We started Works in the usual way, by typing "works" at the MS-DOS prompt.
One day, out of boredom, I added "PROMPT Password:" to AUTOEXEC.BAT on my disk, changing the DOS prompt from "A:\>" to "Password:" when booted from my disk.
Two days later, I got called into the dean's office, where the instructor demanded to know how I used my disk to "hack the network" — a network that, up until this point, I didn't even know existed, as the lab computers weren't connected to anything but power — and "lock me out of my computer", and threatened suspension unless and until I revealed the password.
After a few minutes trying to explain that no password existed to a "computer literacy" instructor who clearly had no idea what either AUTOEXEC.BAT or the DOS prompt was, nor why booting a networked computer from a potentially untrustworthy floppy disk was a terrible idea, I finally gave in.
"Fine. The password is works. Can I go now?"
The irony: it was actually faster DoubleSpaced/Stacked.
In Russia, we had class full of IBM PCs without hard drives in school - you had to juggle floppies - and that was early 90s. And that was a fancy school.
c:\> subst d: c:
On my laptop, D is the SD card slot. On my desktop, it's the 2nd SSD.
We used to set our machines so the CD-ROM was always drive L. This way we always had 'room' to add HDs so there was no gap in the alphabetical sequence. Drive D - data drive, E - swapfile, etc.
Test and external drives (being temporary) were assigned letters further down than L. Sticking reasonably rigidly to this nomenclature avoided stuff-up such as cloning an empty drive onto one with data on it (cloning was a frequent activity).
Incidentally, this rule applied to all machines, a laptop with HD would have C drive and L as the CD-ROM. Machines with multiple CD-ROMs would be assigned L, M and so on.
I mainly did it so that CD installs wouldn’t lose their install drive since even Windows tracked it by the absolute path. Not as important with everything installed by download and Windows copying the install media to the hard drive anyway.
Between CD/DVD drives, writers, Zip Drives, and extra hard drives, it wasn't unusual for a workstation to naturally end up with G: or H:, before mapped network storage became common.
As another commenter mentioned, when you didn't have a second floppy drive, A: and B: mapped to two floppy disks in the same floppy drive, with DOS pausing and asking you to insert the other floppy disk when necessary. Which explains why, even on single-floppy computers, the hard disk was at C: and not B: (and since so much software ended up expecting it, the convention continued even on computers without any floppy disk drive).
I also use the drive letter assignment feature, so my external USB drive is always drive X.
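If you want to pin a letter from PowerShell rather than Disk Management, something like this works (the disk and partition numbers are placeholders):
Get-Partition -DiskNumber 2 -PartitionNumber 1 | Set-Partition -NewDriveLetter X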
Oh, so that is how terminal servers are able to mount different network shares (e.g. the user's home directory always being H:\) for each user's session on the same drive letter.
That may have been DOS 3.3, not later. IDK when it changed.
When you do that on windows, everybody loses their mind.
I never tried, but I wonder if you could use direct registry editing to create some really strange drive letters.
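For reference, the persisted letter assignments live under HKLM\SYSTEM\MountedDevices; a read-only sketch of peeking at them (actually editing them to invent strange letters is the part I haven't tried):
# Value names look like \DosDevices\C: or \??\Volume{...}; the data identifies the volume
(Get-Item 'HKLM:\SYSTEM\MountedDevices').GetValueNames() | Where-Object { $_ -like '\DosDevices\*' }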
I wonder, does `subst I: .` create i: or ı: under the Turkish locale?
A path like "f:myfile.txt" actually means "f:\path\to\whatever\myfile.txt", where \path\to\whatever is the current working directory of the f drive.
This is one of the details which makes the replacement DLL more of a "native" run-time library, whose behavior is less surprising to Windows users of the application based on it.
PS C:\Users\jtm> & 'C:\Program Files\Windows Defender\MpCmdRun.exe' -Scan -ScanType 3 -File '\\?\Volume{91ada2dc-bb55-4d7d-aee5-df40f3cfa155}\'
Scan starting...
Scan finished.
Scanning \\?\Volume{91ada2dc-bb55-4d7d-aee5-df40f3cfa155}\ found 1 threats.
Cleaning started...
Cleaning finished.
[1] https://www.eicar.org/download-anti-malware-testfile/
If anyone adds this behaviour as a bet on a market about a future CVE or severity, can they add a link to the bet here?
Draw your drive letter today!
So it’s fixed. What’s windows’ excuse? :-)
\\.\Volume{3558506b-6ae4-11eb-8698-806e6f6e6963}\
Windows NT and UNIX are much more similar than many people realize; Windows NT just has a giant pile of DOS/Win9x compatibility baked on top hiding how great the core kernel design actually is.
I think this article demonstrates that very well.
Only if the machine's BIOS is configured to give bootable USB devices boot-order priority. So it's not about Linux -- in fact, the same thing would happen on a Windows machine.
Remember that in a properly configured Linux install, the boot partition is identified by UUID, not hardware identifier (in /etc/fstab). Consequently if you change a drive's hardware connection point, the system still boots.
Fixed that for you. It used to be normal to use the device path (/dev/hd* or /dev/sd*) to reference the filesystem partitions. Using the UUID or the by-id symlink instead is a novelty, introduced precisely to fix these device enumeration order issues.
Two other people were able to concisely explain the problem instead of being rude and condescending.
I think the concept of drive letters is flawed.
Otherwise, the drive letter is allocated statically and won't be used by another volume.
I regularly have this conversation with my end-user neighbor -- I explain that he has once again written his backup archive onto his original because he plugged in his Windows USB drives in the wrong sequence. His reply is, more or less, "Are computers still that backward?" "No," I reply, "Windows is still that backward."
The good news is that Linux is more sophisticated. The bad news is that Linux users must be more sophisticated as well. But this won't always be true.
Edit: Also /dev/sdX paths in Linux are not stable. They can and do vary across boot, since Linux 5.6.
Not better at all, which is why Linux uses partition UUIDs to identify specific storage partitions, regardless of hardware identifiers. This isn't automatic, the user must make it happen, which explains why Linux users need to know more than Windows users (and why Linux adoption is stalled).
> Edit: Also /dev/sdX paths in Linux are not stable. They can and do vary across boot, since Linux 5.6.
Yes, true, another reason to use partition UUIDs.
> Plan 9 takes the everything is a file concept to its logical conclusion and is much better designed.
It's a shame that Plan 9 didn't get traction -- too far ahead of its time I guess.
One vision is "medium-centric". You might want paths to always be consistently relative to a specific floppy disc regardless of what drive it's in, or a specific Seagate Barracuda no matter which SATA socket it was wired to.
Conversely it might make more sense to think about things in a "slot-centric" manner. The left hand floppy is drive A no matter what's in it. The third SATA socket is /dev/sdc regardless of how many drives you connected and in what order.
Either works as long as it's consistent. Every so often my secondary SSD swaps between /dev/nvme0 and /dev/nvme1 and it's annoying.
> Conversely it might make more sense to think about things in a "slot-centric" manner. The left hand floppy is drive A no matter what's in it. The third SATA socket is /dev/sdc regardless of how many drives you connected and in what order.
A third way, which I believe is what most users actually want, is a "controller-centric" view, with the caveat that most "removable media" we have nowadays has its own built-in controller. The left hand floppy is drive A no matter what's in it, the top CD-ROM drive is drive D no matter what's in it, but the removable Seagate Expansion USB drive containing all your porn is drive X no matter which USB port you plugged it in, because the controller resides together with the media in the same portable plastic enclosure. That's also the case for SCSI, SATA, or even old-school IDE HDDs; you'd have to go back to pre-IDE drives to find one where the controller is separate from the media. With tape, CD/DVD/BD, and floppy, the controller is always separate from the media.
You could even reference media that was not loaded at the time (e.g. GAMEDISK2:) and the OS would ask you to insert it into any drive. And there were "virtual" devices (assigns) that could point to a specific directory on a specific device, like LIBRARIES:
You can use mountvol command to see the mount-letter/GUID mapping.
https://news.ycombinator.com/item?id=17652502
VMS expects to be run as a cluster of machines with a single drive system. How that actually happens is “hidden” from user view, and what you see are “logicals”, which can be stacked on top of each other and otherwise manipulated by a user/process without affecting the underlying file system. The results can be insane in the hands of inexperienced folks. But that is where NT came from.
OTOH on Linux out of the box they'd get /media/usb0, /media/usb1 etc. Which has the same exact problem. And the same exact solution - if you need stable names, mount them as such (except on Windows you can do it with a few clicks with a mouse).
Linux can exploit the UUIDs of USB drives to avoid confusion, and Linux users know how to do this. Windows has a way to do this also, but Windows users often don't know it.
> ... (except on Windows you can do it with a few clicks with a mouse)
Yes, clicks that are not in the average Windows user's skill set. This is more about technical knowledge than it is about a choice of OS, but overall, Linux rewards knowledge, while Windows punishes it.