Has reddit not already been scraped? With all of that information exposed bare on the public Internet for decades, and apparently so valuable, I find it hard to believe that everybody's just been sitting there twiddling their thumbs, saying "boy I sure hope they decide to sell us that data one day so that we don't have to force an intern to scrape it for us".
In a certain light, you could argue that Linus doesn't really have any control at all. He doesn't write any code for Linux (hasn't in many years), doesn't do any real planning or commanding or managing. "All" he does is coordinate merges and maintain his own personal git branch. (And he's not alone in that: a lot of people maintain their own Linux branches). He has literally no formal authority at all in Linux development.
It just so happens that, by a very large margin, his own personal git branch is the most popular and trusted in the world. People trust his judgment for what goes in and doesn't go in.
It's not like Linux development is stopped because Linus goes offline (or goes on vacation or whatever). People keep writing code and discussing and testing and whatnot. It's just that without Linus's discerning eye casting judgment on their work, it doesn't enter the mainstream.
Nothing will really get slowed down. Whether something officially gets labelled by Linus as "6.8" or "6.whatever" doesn't really matter in the big picture of Linux development.
The principled "old" way of adding fancy features to your filesystem was through block-level technologies, like LVM and LUKS. Both of those are filesystem-agnostic, meaning you can use them with any filesystem. They just act as block devices, and you can put any filesystem on top of them.
You want to be able to dynamically grow and shrink partitions without moving them around? LVM has you covered! You want to do RAID? mdadm has you covered! You want to do encryption? LUKS has you covered! You want snapshotting? Uh, well... technically LVM can do that... it's kind of awkward to manage, though.
Anyway, the point is, all of them can be mixed and matched in any configuration you want. You want a RAID6 where one device is encrypted, split up into an ext4 partition and two XFS partitions, where one of the XFS partitions is in a RAID10 with another drive for some stupid reason? Do it up, man. Nothing's stopping you.
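To make that concrete, here's a rough sketch of the kind of stacking the layered approach allows (device names, volume group names, and sizes are all made up; it's a much tamer stack than the silly example above, but the same idea: each layer only ever sees a block device):

    # RAID at the bottom: mirror two disks with mdadm
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

    # encryption in the middle: LUKS doesn't care that /dev/md0 is a RAID
    cryptsetup luksFormat /dev/md0
    cryptsetup open /dev/md0 securedata

    # flexible partitioning on top: LVM doesn't care that it's encrypted
    pvcreate /dev/mapper/securedata
    vgcreate vg0 /dev/mapper/securedata
    lvcreate --name home --size 100G vg0
    lvcreate --name scratch --size 50G vg0

    # and any filesystem goes on top of that
    mkfs.ext4 /dev/vg0/home
    mkfs.xfs /dev/vg0/scratch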
For some reason (I'm actually not sure of the reason), this stagnated. Red Hat's Stratis project has tried to continue pushing in this direction, kind of, but in general, I guess developers just didn't find this kind of work that sexy. I mentioned LVM can do snapshotting "kind of awkward"ly. Nobody has made it as sexy and easy to use as the cool new COW filesystems.
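For reference, here's roughly what the awkward LVM version looks like (a sketch with made-up volume names): you have to pre-allocate a fixed-size area to hold the snapshot's copy-on-write data, and if changes overflow that area, the snapshot is invalidated.

    # snapshot of vg0/home, with 5G reserved to hold copied-on-write blocks
    lvcreate --snapshot --name home-snap --size 5G /dev/vg0/home

    # keep an eye on it: if Data% hits 100, the snapshot is dead
    lvs vg0

    # roll back by merging the snapshot into the origin
    lvconvert --merge /dev/vg0/home-snap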
So, ZFS was an absolute bombshell when it landed in the mid 2000s. It did everything LVM did, but way way way better. It did everything mdadm did, but way way way better. It did everything XFS did, but way way way better. Okay, it didn't do the LUKS stuff (yet), but that was promised to be coming. It was copy-on-write, with checksums on everything. It did everything that (almost) every other block-level tool and filesystem had ever done, but better. It was just... the best. And it shit all over that block-layer stuff.
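To give a sense of how much it collapsed the old stack (a rough sketch; the pool and dataset names are made up):

    # one command replaces the mdadm + pvcreate/vgcreate dance:
    # a double-parity pool across four disks
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # "partitions" are just datasets carved out of the shared pool,
    # grown and shrunk by changing properties rather than moving data
    zfs create -o quota=100G tank/home

    # snapshots are instant and don't need a pre-allocated COW area
    zfs snapshot tank/home@before-upgrade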
But... well... it needed a lot of RAM, and it was licensed in a way that meant Linux couldn't get it right away, and when Linux did get ZFS support, it wasn't the native, in-the-kernel kind of thing people were used to.
But it was so good that it inspired other people to copy it. They looked at ZFS and said "hey, why don't we throw away all this block-level layered stuff? Why don't we just do every possible thing in one filesystem?"
And so BtrFS was born. (I don't know why it's pronounced "butter" either).
And now we have bcachefs, too.
What's the difference between them all? Honestly, mostly licensing, developer energy, and maturity. ZFS has been around for ages and is the most mature. bcachefs is brand spanking new. BtrFS is in the middle. Technically speaking, each of them either has the others' features or has them on its TODO list. LUKS in particular is still very commonly used, partly because native encryption arrived late or hasn't arrived at all (ZFS and bcachefs have it now; BtrFS still doesn't, though it's on the TODO list).
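Which is why, on BtrFS in particular, you still see LUKS slid underneath, same as in the old days (a hedged sketch; the device name is made up):

    # encrypt at the block layer, filesystem-agnostic as always
    cryptsetup luksFormat /dev/sdb
    cryptsetup open /dev/sdb cryptpool

    # then put the fancy COW filesystem on the decrypted mapping
    mkfs.btrfs /dev/mapper/cryptpool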
I love the arrogant, confidently incorrect bit at the end of the blog.
- The comments in the code are wrong
- The official documentation is wrong
- The manpage is wrong
- Every blog article ever written is wrong
- Linus Torvalds is wrong
- Everyone who knows what they're talking about is wrong
- No, I don't know how to read kernel code. Why do you ask? You're wrong
- Shut up. You're wrong
And not all GNU is Linux! Beyond the world-famous GNU Hurd, there's also Debian GNU/kFreeBSD and Nexenta (GNU/Illumos, Illumos being the continuation of the OpenSolaris kernel).
I think the most esoteric of them, though, is GNU Darwin (GNU/XNU). Darwin is the open-source part of OS X, including its kernel, XNU. There used to be an OpenDarwin project that tried to turn Darwin into an actual independent operating system, but it failed and was superseded by PureDarwin, which took a harder line against letting anything from OS X into the system. GNU Darwin took it one step further and removed just about all of Darwin (except XNU), replacing it with GNU instead.
If go is "round chess", I feel like chess should be "pointy chess".
When Elon Musk wants to see your top 10 most salient lines of code.
Find can actually run the sed itself if you don't want to use a subshell and a shell loop:
find . -type f -iname '*.json' -exec sed -i 's/"verified":"10"/"verified":"11"/' '{}' ';'
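With GNU find and GNU sed you can also batch many files into each sed invocation by ending -exec with + instead of ; (same effect, fewer processes spawned):

find . -type f -iname '*.json' -exec sed -i 's/"verified":"10"/"verified":"11"/' '{}' +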
This is what I don't get. Rewriting COBOL code into Java code is dead easy. You could teach a junior dev COBOL (assuming this hasn't been banned under the Geneva Convention yet) and have them spitting out Java code in weeks for a lot cheaper.
The problem isn't converting COBOL code to Java code. The problem is converting COBOL code to Java code so that it cannot ever possibly have even the most minute difference or bug under any possible circumstances ever. Even the tiniest tiniest little "oh well that's just a silly little thing" bug could cost billions of dollars in the financial world. That's why you need to pay COBOL experts millions of dollars to manage your COBOL code.
I don't understand what person looked at this problem and said, "You know what never does anything wrong or makes any mistakes ever? Generative AI."
No, that would be "too egotistical" (in Linus' own words). But he can have his friend who ran the FTP server completely ignore his wish to have it named "Freax" and name the directory "linux" instead.
When you power on a computer, before any software (any operating system) has a chance to run, there's "firmware" (kind of similar to software, except stored directly on the motherboard) that has to get things going; this stage is called "Platform Initialization". Generally, the Platform Initialization firmware has two jobs: (1) detect (and maybe initialize) some hardware; and (2) find the operating system and boot it.
We have a standard interface for #2, which is called UEFI. But #1 has always been sort of a mysterious black box. It necessarily has to be different for every chipset and every motherboard, and manufacturers never really saw much reason to open source it. The major community-driven open source project tackling #1 is called "coreboot". Because it requires a new implementation for every chipset/motherboard, and because those are generally not documented (and may require some reverse-engineering of the hardware), coreboot's hardware support is very, very limited.
So what AMD is open sourcing here is a collection of three C libraries that they will be using in all of their firmware going forward. These libraries are not chipset/motherboard-specific (you still need custom code for each motherboard) and do not implement UEFI (you would still need to implement UEFI/a bootloader on top of them), but they're helper functions that do a lot of what's needed to implement firmware. I just took a cursory look through the source code, but I saw a lot of code in there for detecting RAM DIMMs (how much RAM, what kind of RAM, etc.), which is useful code. (Edit: I just read through the Wikipedia article on coreboot, and it says "The most difficult hardware that coreboot initializes is the DRAM controllers and DRAM. In some cases, technical documentation on this subject is NDA restricted or unavailable." So if coreboot can make use of AMD openSIL's DRAM code, that could be a very big win!!)
The fact that AMD is going to use this in their own firmware, and also make it available for coreboot under an MIT licence, means that coreboot may* have a much easier time in the future supporting AMD motherboards.
* we will see
I was saying Boo-urns.