It’s such a shame that Rust developers are made to feel unwelcome, especially when it stems from a complete misunderstanding of implementation details.
Even more worrying, this is kernel developers saying they prioritise their own convenience over end-user safety.
Google has been on a Rust adoption scheme, and it seems to have done wonders on Android: https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html?m=1
But there is also a bit of a problem with adopting Rust. I think the memory model may prove challenging to some, but in this case I worry that even if it were super simple, the existing C kernel devs would still reject the code simply because it is not C, out of unwillingness to adopt a new language.
Perhaps the fact that Google is keen on Rust internally is part of what Ted Ts’o does not like about it (he works for Google).
Many outside the Rust community see the enthusiasm for Rust as overblown. Perhaps they think that pushing back on Rust to create a brake on this momentum is restoring the balance or something.
One thing I have noticed: when devs push back on inferior languages, they can cite all kinds of technical reasons for doing so. When they cannot come up with such reasons, perhaps that is evidence that the language is pretty good.
Ted’s rant basically says “we have more code so we matter more and that will be true for a long time”. I agree with the assessment that this kind of blatant tribalism is “non-technical nonsense”.
The thing I don’t get in these discussions is that there are people who have convinced themselves that a language we came up with in the first 20 years or so of the industry’s existence is the pinnacle of programming language development, and that all those newer languages are completely equivalent in terms of outcome once you add up their upsides and downsides.
There’s a long thread on Mastodon by the main Arm Mac graphics dev for Asahi Linux. It is perhaps one of the fastest-developed and most stable graphics drivers ever made, thanks to a couple of amazing developers but also very, very much thanks to Rust. And one of the kernel devs flippantly calls it an “unmerged toy project”, as if it’s not kernel devs’ fault that useful stuff, and even small non-breaking improvements to existing systems, are so incredibly hard to get merged. Not to mention that writing the entire M1 graphics driver in Rust ended up thoroughly documenting the DRM subsystem’s API for the first time as a side effect, because pretty much everything the Rust code interacts with gets strictly defined within Rust’s type system and lifetimes.
The LARPing levels in moronix comments are higher than usual, but the comedic value is still not lost.
What exactly is the “nontechnical nonsense” he’s complaining about?
In summary: a bunch of 60-year-old C developers with social deficits hijacking the conversation when he gives a talk or tries to get anything done. E.g. the link shows people interrupting a Q&A session to complain “I don’t want to learn Rust”.
There is a video linked in the article for context:
https://youtu.be/WiPp9YEBV0Q?t=1529
If I try to interpret the context, it could be C programmers simply being negative toward Rust because it is not C, a perception that Rust programmers are trying to force Rust on others, or a fear that Rust programmers will break things.
Behind all the negative tone there is a valid concern though.
If you don’t know Rust, and you want to change internal interfaces on the C side, then you have a problem. If you only change the C code, the Rust code will no longer build.
This now brings an interesting challenge to maintainers: How should they handle such merge requests? Should they accept breakage of the Rust code? If yes, who is then responsible for fixing it?
I personally would just decline such merge requests, but I can see how this might be perceived as a barrier, quite a big one once you add the learning cliff of Rust.
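To make that breakage concrete, here is a minimal sketch (the struct and its fields are invented for illustration, not actual kernel types): Rust code that mirrors a C struct’s layout has to be updated in the same patch as the C side, or the kernel’s Rust build fails.

```rust
use std::mem::size_of;

// Hypothetical mirror of a C-side struct. If the C definition gains,
// drops, or reorders a field, this Rust mirror (and every Rust user
// of it) must change in the same patch, or the build breaks loudly.
#[repr(C)]
struct InodeInfoMirror {
    ino: u64,   // assumed to match a `u64 ino;` on the C side
    nlink: u32, // assumed to match a `u32 nlink;`
    flags: u32, // assumed to match a `u32 flags;`
}

fn main() {
    // A layout check like this is one way the mismatch surfaces early:
    // 8 + 4 + 4 bytes, no padding at 8-byte alignment.
    assert_eq!(size_of::<InodeInfoMirror>(), 16);
    println!("mirror layout ok");
}
```

The flip side is the point made elsewhere in the thread: this failure is loud and happens at build time, rather than silently at runtime.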
That seems based on the same misconception as the whole “fighting the compiler” view of Rust: that other languages are better because they let you get away with not thinking through the problems in your code up front. I am not surprised that this view is common in the C world, which sits pretty far toward the end of the spectrum that believes in the “sufficiently disciplined programmer” (as opposed to the end that advocates static checks to avoid human mistakes).
The problem you mention is fundamentally no different from, e.g., changing some C internals in the subsystem you know well and thereby breaking code in some other C subsystem you don’t know at all. The only real difference is that in C that code will, more likely than not, break silently, without the compiler telling you about it. That the bit you know well or don’t know well is the language, rather than some domain knowledge about the code, is really not that special in practical terms.
That’s a very good point. I hadn’t considered potential lack of domain knowledge at all. In that case Rust might even help, because it’s easier to write interfaces that can’t be used incorrectly, so even someone without the needed domain knowledge might be able to fix compile issues without breaking anything.
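As a hedged illustration of “interfaces that can’t be used wrong” (all names here are made up, nothing kernel-specific): the typestate pattern puts the protocol into the type system, so a misuse like writing before locking simply does not compile.

```rust
// Sketch of the typestate pattern: "unlocked" and "locked" are
// separate types, so calling write() on an unlocked handle is a
// compile error rather than a runtime bug a reviewer has to catch.
struct Unlocked {
    data: String,
}

struct Locked {
    data: String,
}

impl Unlocked {
    fn new(data: &str) -> Self {
        Unlocked { data: data.to_string() }
    }

    // Consumes the unlocked handle; the only way to obtain a Locked one.
    fn lock(self) -> Locked {
        Locked { data: self.data }
    }
}

impl Locked {
    // write() exists only on Locked, so "write before lock" cannot compile.
    fn write(&mut self, s: &str) {
        self.data.push_str(s);
    }

    fn unlock(self) -> Unlocked {
        Unlocked { data: self.data }
    }
}

fn main() {
    let h = Unlocked::new("a");
    let mut h = h.lock(); // write() only becomes available here
    h.write("b");
    assert_eq!(h.unlock().data, "ab");
}
```

Someone without domain knowledge who changes a caller gets steered by the compiler: the only sequences that build are the valid ones.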
See also Asahi Lina’s thread on this, which explicitly says that Rust is one reason why their drivers cause fewer kernel panics than others: https://vt.social/@lina/113045456734886438
Ask the Rust maintainers to fix it, presumably? The antagonist in the video above claimed there are 50 filesystems in Linux. Do the maintainers really fix all 50 filesystems themselves when they change the semantics of the filesystem API? I would be very surprised. I suspect what actually happens currently is one of the following:
- They actually don’t change the semantics very often at all. It should surely be stable by now?
- They change the semantics and silently break a load of niche filesystems.
I mean, the best answer is “just learn Rust”. If you are incapable of learning Rust you shouldn’t be writing filesystems in C, because that is way harder. And if you don’t want to learn Rust because you can’t be bothered to keep up with the state of the art then you should probably find a different hobby.
These “ooo they’re trying to force us to learn Rust” people are like my mum complaining you have to do everything online these days “they’re going to take away our cheque books!” 🙄
I would be very surprised if they wouldn’t fix all 50 filesystems.
In all projects I have worked on (which does not include the Linux kernel), submitting a merge request with changes that don’t compile is an absolute no-go. What happens there is that the CI pipeline runs and fails, and instead of a code review, the person submitting the MR gets a note that their CI run failed and that they should fix it before re-opening the MR.
“submitting a merge request with changes that don’t compile is an absolute no-go.”
Right, but unless the tests for all 50 filesystems are excellent (I’d be surprised; does Linux even have a CI system?), the fact that you’ve broken some of them isn’t going to cause a compile error. That’s what the linked presentation was about! Rust encodes more semantic information into the type system, so it can detect breakages at compile time. With C you’re relying entirely on tests.
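A tiny example of what “encoding semantic information into the type system” can look like (invented types, not actual kernel code): with an exhaustive match, extending the API breaks every stale call site at compile time instead of silently at runtime.

```rust
// Hypothetical operation type modeled as an enum. Every match over it
// must handle all variants, so growing the API surfaces every place
// that needs updating as a compile error.
enum FsOp {
    Read,
    Write,
    Sync,
}

fn describe(op: &FsOp) -> &'static str {
    // No catch-all arm on purpose: if FsOp gains a variant (say,
    // FsOp::Trim), this function stops compiling until it is updated.
    match op {
        FsOp::Read => "read",
        FsOp::Write => "write",
        FsOp::Sync => "sync",
    }
}

fn main() {
    for op in [FsOp::Read, FsOp::Write, FsOp::Sync] {
        println!("{}", describe(&op));
    }
}
```

In C the analogous change to an enum plus `switch` is at best a warning, and a missed function-pointer table entry is just a NULL dereference waiting for a test to find it.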
Breaking that kind of API probably breaks userspace, and Linus is very vocal about not breaking userspace.
At some point, reading kernel code is easier than speculating. The answer is actually a third option: there are multiple semantics for filesystems in the VFS layer of the kernel. For example, XFS is the most prominent user of the “async” semantics; all transactions in XFS are fundamentally asynchronous. By comparison, something like ext4 uses the “standard” semantics, where actions are synchronous. These correspond to filling out different parts of the VFS structs and registering different handlers for different actions; they might as well be two distinct APIs. It is generally suspected that all filesystem semantics are broken in different ways.
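A hedged sketch of that split, assuming nothing about the real VFS structs (these traits and types are invented): the two semantics could be modeled as distinct traits, so a filesystem registers handlers for one style or the other and cannot accidentally mix them.

```rust
// "Standard" semantics: the write completes before the call returns.
trait SyncFsOps {
    fn write(&mut self, data: &[u8]) -> usize;
}

// "Async" semantics: the call only submits a transaction and returns
// an id; completion is reported later. A deliberately separate trait,
// i.e. effectively a second API.
trait AsyncFsOps {
    fn submit_write(&mut self, data: &[u8]) -> u64;
}

// An ext4-like filesystem implements only the synchronous trait.
struct Ext4Like {
    bytes_written: usize,
}

impl SyncFsOps for Ext4Like {
    fn write(&mut self, data: &[u8]) -> usize {
        self.bytes_written += data.len();
        data.len()
    }
}

// An XFS-like filesystem implements only the asynchronous trait.
struct XfsLike {
    next_txn: u64,
}

impl AsyncFsOps for XfsLike {
    fn submit_write(&mut self, data: &[u8]) -> u64 {
        let _ = data; // queued, not completed here
        let id = self.next_txn;
        self.next_txn += 1;
        id
    }
}

fn main() {
    let mut e = Ext4Like { bytes_written: 0 };
    assert_eq!(e.write(b"hello"), 5);

    let mut x = XfsLike { next_txn: 1 };
    let t1 = x.submit_write(b"hello");
    let t2 = x.submit_write(b"world");
    assert!(t2 > t1);
}
```

Contrast this with a single C struct of optional function pointers, where “which handlers go together” lives only in reviewers’ heads.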
Also, “hobby” is the wrong word; the lieutenant doing the yelling is paid to work on Linux and Debian. There are financial aspects to the situation; it’s not solely politics or machismo, although those are both on display.
Well that just sounds insane. Isn’t the whole point of an abstracted API that you can write code for it once and it works with all of the implementations?
From what I understood, the Rust devs weren’t asking to change the interface, only to properly document it, and asked the kernel devs to cooperate with them so that Rust for Linux doesn’t break without warning.
The Rust devs were trying to say they were fine with the Rust code ending up broken, and that they would take care of fixing it. But they got talked over by the kernel dev.
IMO, if a developer finds Rust too difficult to learn, they probably shouldn’t be writing kernel code in the first place.
Pretty much expected. The Linux Foundation doesn’t spend nearly enough money on the Linux kernel to bring in new blood willing to contribute what is necessary (in this case, Rust).
This is basically what you said here, and it’s still wrong: social dynamics, not money, are the main reason why young hackers do (or don’t) work on Linux. I’m starting to suspect that you’ve never hacked on a kernel before.
Isn’t your objection there basically “LF doesn’t pay enough for people to put up with negative social dynamics”? In which case, wouldn’t paying more help a lot?
How much more? When it comes to whether I’d write GPU drivers for money, I can tell you that LF doesn’t pay enough, Collabora doesn’t pay enough, Red Hat doesn’t pay enough, and Google illegally depressed my salary. Due to their shitty reputation, Apple, Meta, or nVidia cannot pay enough. And AMD only hires one open-source developer every few years. Special shoutout to Intel, who is too incompetent to consider as an employer.
I don’t know, but I also don’t know how much you think is “enough” to deal with project cultural issues. It sounds like it must be quite a bit?
Developers in general get treated like crap by the community too.
I used to be passionate about open source and released a project. But I dropped it because a few vocal trolls in the community tore into me.
And there are so many developers who probably have a similar story.
The biggest enemy of the success of Linux isn’t actually Microsoft or Apple, but potentially the Linux community itself.
I’m all in for new takes that start with a clean slate, if that’s what happens in the near future (e.g. Redox OS grows bigger than GNU/Linux), yet it saddens me that there are personal health costs for these developers who just wanted to contribute. After all, the year of the GNU/Linux desktop has already passed :P

Oh, it’s already happening. Discussion here: https://programming.dev/post/19211993