Thursday, February 20, 2025

Rust in the Linux kernel

I have been seeing a lot of news about drama and conflict related to Rust in the Linux kernel recently.

My first experience with Rust was some years ago, when I was debugging and enhancing Anki. Rust had been introduced in the back end, and some of the bugs I was working to understand and resolve were in the Rust code, so I had to learn enough Rust to set up a development environment and to read, understand and modify the Anki Rust code. It was an unpleasant experience. I found the Rust documentation inadequate, leading to many days of research and experimentation to learn what the Rust toolchain was doing. I was able to hack at the Anki Rust code sufficiently to find and fix several bugs, but it did not leave me enamoured with Rust. The experience made me sympathetic to the Linux kernel maintainers who are concerned about the complexity and supportability of the kernel if Rust is included.

The Linux kernel documentation has a section on Rust. It begins with an overview that includes:

If you are an end user, please note that there are currently no in-tree drivers/modules suitable or intended for production use, and that the Rust support is still in development/experimental, especially for certain kernel configurations.

Given the intensity of the disputes I had been hearing about, I wondered if this was obsolete and in fact Rust was now essential to the kernel.

So I downloaded the current mainline kernel and built it without installing any of the Rust toolchain. No problem, so evidently Rust is not yet required to build a kernel.

Then I wanted to see what it was like building a kernel that included the Rust support.

I was doing this on a system running Debian 12 (Bookworm).

I installed the required Rust packages from the Debian repositories but they were too old:

$ make LLVM=1 rustavailable
***
*** Rust compiler 'rustc' is too old.
***   Your version:    1.63.0
***   Minimum version: 1.78.0
***
***
*** Please see Documentation/rust/quick-start.rst for details
*** on how to set up the Rust support.
***

I found a page that described how to install Rust on Debian 12:

https://idroot.us/install-rust-debian-12/

I downloaded https://sh.rustup.rs, reviewed it then ran it, proceeding with standard installation.

Through a sequence of trial and error making the rustavailable target, I installed various other prerequisites:

$ cargo install bindgen-cli

$ sudo apt install libclang-dev clang

$ rustup component add rust-src

After these, the rustavailable target reported 'Rust is available!'

But also:

*** libclang (used by the Rust bindings generator 'bindgen')
*** version does not match Clang's. This may be a problem.
***   libclang version: 15.0.6
***   Clang version:    14.0.6

While I had installed the Debian libclang-dev package, which is version 14, somehow libclang 15 was also installed and found. I guess it came along with the installation of Rust, but I didn't investigate where version 15 was found or how it was installed.

So, the rustavailable target reports Rust is available but when I make the menuconfig target, there is no option under General setup for Rust support.

Fortunately, I found a page that provided the configuration requirement missing from the Rust Quick Start Guide in the Linux kernel documentation:

'Module versioning support' in section 'Enable loadable module support' must not be enabled. Somehow my configuration had it enabled: I had copied the configuration from /boot, so either it is enabled there or it is a new configuration option enabled by default. I didn't determine where the setting came from but I disabled it and then 'Rust support' appeared in 'General setup'.
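For reference, the two menuconfig entries correspond to the kernel config symbols CONFIG_MODVERSIONS and CONFIG_RUST, so the relevant fragment of the resulting .config should end up looking something like:

```
# CONFIG_MODVERSIONS is not set
CONFIG_RUST=y
```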

So I enabled Rust support and re-ran make and saw it building various targets in the 'rust' directory.

There were no obvious errors during the build, but I haven't tried running the built kernel and wouldn't know how to test the bits written in Rust if I did. I haven't learned much about what is written in Rust. But, superficially, it seems I now have the Rust build tools set up well enough that I can build a kernel including components written in Rust.

That I could not use the Rust packages from Debian 12 (Bookworm) reflects how young and fast-moving the Rust language and build tools are. This makes me feel that Rust is not yet mature enough for writing essential components of the Linux kernel, but that isn't happening yet: Rust in the Linux kernel is, it seems to me, still experimental. If the experiment succeeds, perhaps by then the Rust language and build tools will have stabilized sufficiently that the version of Rust in the stable version of Debian will be sufficient.

In summary, Rust is not yet an essential part of the Linux kernel. The integration of Rust is still experimental.

Despite being experimental or, at least, still optional, according to The Road to Rustification: Lessons from the Linux Kernel Development Process, the Rust code is already 15% of the kernel codebase, with an expectation that it will grow to 25% in 2025.

I understand that Rust offers memory safety that is not guaranteed by the existing C code and build tools, and that errors in memory management account for a significant percentage of all errors in the Linux kernel. But I haven't come across an explanation of why, of all the new languages that guarantee memory safety, Rust was selected for inclusion in the Linux kernel.
 
Evidently, Rust has advocates with resources to push it forward into the kernel. But where is the record of comparison of Rust with the other options for ensuring memory safety and avoiding or eliminating a common type of error from the kernel? Where is the evidence that Rust is the best choice?

Maybe Rust, like C, isn't the best choice. Maybe it is just a choice with enough support that it is being implemented, despite concerns and objections from some quarters. But if there is some other language that would be better, more effective, easier and less contentious, it is irrelevant if no one is advocating it and willing to do the work to make it available and use it in the kernel.

I have great respect for the people who build and maintain the Linux kernel. Their achievements to date are amazing. While past success is no guarantee of future performance, as they say in the financial industry, I think there is good reason to trust and respect the decisions of the kernel maintainers.

And, ultimately, Linux is open source. Rather than complaining about the team at kernel.org, the Rust advocates can always fork the kernel and do as they wish with it and, benefiting from the superiority of Rust, they may so outperform the current development effort in speed and quality that soon everyone follows them. There is no good reason for dispute and disrespect. They are free to do better.


 

Friday, June 21, 2024

ECMAScript module equivalent of CommonJS module usage: const x = require('myModule')(options);

A common, concise idiom of CommonJS modules that export factory functions is:

const instance = require('myModule')(myOptions);

This is concise and minimizes consumption of the namespace.

With ECMAScript modules and the import declaration, the same is not possible, because an import declaration is not an expression that returns a value, and it must appear at the top level of the module: import declarations cannot appear within a block or function, including an immediately invoked function expression (IIFE).

Assuming the module in question exports a factory function as its default export, then the nearest equivalent would be something like:

import myFactory from 'myModule';

const instance = myFactory(myOptions);

If the factory is a named export, then something like:

import { exportName as myFactory } from 'myModule';

const instance = myFactory(myOptions);

This is a little less concise: two lines of code instead of one, and an additional name consumed in the module namespace. But it otherwise gives the same result.

Something more equivalent is possible with the dynamic import() expression. This is function-like and can occur within a function, including an IIFE.

const instance = (await import('myModule')).default(myOptions);

This is only a little less concise than the CommonJS syntax. A similar construct is possible with named exports: simply replace 'default' with the name of the named export.
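For example, here is a minimal sketch with a hypothetical named export createClient, inlined as a data: URL so the snippet is self-contained and runnable under Node:

```javascript
// 'createClient' is a hypothetical named export; the module source is
// inlined via a data: URL only so this example runs without a real package.
const src = "export const createClient = (opts) => ({ opts });";
const url = 'data:text/javascript,' + encodeURIComponent(src);

// Equivalent of the CommonJS: const instance = require('myModule').createClient(myOptions);
const instance = (await import(url)).createClient({ retries: 3 });
console.log(instance.opts.retries); // 3
```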

Using the dynamic import() expression instead of the import declaration, one gives up the static analysis made possible by the import declaration, and there may be other benefits of the declaration over the expression.

Sunday, June 16, 2024

JavaScript testing with Tape

I mostly use tape for testing my JavaScript packages. Today I was updating dependencies, and an update of tape was among them.

The link from npmjs.com to GitHub was broken, yielding a 404 page. So I logged an issue and within a few minutes the links were fixed.

So, the GitHub repository for tape has moved to https://github.com/tape-testing/tape but all is well. Jordan is still actively maintaining the package.

It remains my favourite test package for JavaScript.

First release from the current repository was Nov 26, 2012. Well over a decade ago. It is good to see that it is still maintained and with attention to backwards compatibility.

Thus far, all my packages are CommonJS.

For linting, I use eslint with configuration @ig3/eslint-config-entrain

For coverage, I use c8.

As a test runner, I use multi-tape. Pending an update of multi-tape's dependencies, I have published @ig3/multi-tape.

Typical test script is: "eslint . && multi-tape test/*.js"

@ig3/eslint-config-entrain is based on neostandard and eslint version 9. Typical eslint.config.js is:

 

'use strict';

const eslintConfigEntrain = require('@ig3/eslint-config-entrain');

module.exports = [
  ...eslintConfigEntrain,
];

Or sometimes something like:

'use strict';

const eslintConfigEntrain = require('@ig3/eslint-config-entrain');

module.exports = [
  ...eslintConfigEntrain,
  {
    ignores: ['public/js/**'],
  },
];

Setup is:

$ npm install -D @ig3/eslint-config-entrain c8 tape multi-tape

 Change the test script to:

"eslint . && c8 multi-tape test/*.js"

 And create eslint.config.js as above.

 

Tuesday, March 26, 2024

Thunar sort order

For many years I have run Debian with xfce4 desktop.

I have always hated the way Thunar sorts files by name. The developers would say it is not a Thunar issue but rather a Gtk issue, but I really don't care: I hate it whatever the root cause. It is made worse by the fact that there is absolutely no configuration possible other than the collation options (LC_ALL and LC_COLLATE), and these don't fix the problem.

So, today I dug in to fix it.

I tried several other file browsers, but they all have slight variations of the same nonsense and I didn't find any that sort file names sensibly (e.g. like ls does).

There are many bug reports against Thunar, Nautilus and several others, reporting that available sorting options are not satisfying.

There are so many that it makes me wonder why there isn't a simple plug-in option, separating sorting from other aspects of the file browser and allowing people to easily develop the sorting algorithm they need. Gtk could provide this, but so could the various file browsers. It seems all the browsers I investigated in any detail delegate sorting to Gtk and refuse to add any features to work around its limitations and stupidity (i.e. 'natural' sorting).

I read many bug reports and posts, with the conclusion that there is no configuration option to fix it.

I tried building Thunar from source, thinking I would hack on it to fix the sorting or add an interface to an external sort implementation, but there were too many dependencies and I got tired of installing them.

But in reading through all the bug reports, I came across GlibSortFileNameHackLibrary so I gave it a try. I cloned the repo and tried to build it.

On Debian Bookworm, despite having built various other packages, I still had to install libglib2.0-dev:

$ sudo apt install libglib2.0-dev

Then:

$ make all

This terminates with an error compiling test.c, but that's only a test program. The library builds OK.

Then, from a command window:

$ thunar -q; LD_PRELOAD=./glibSortFileNameHack.so thunar

This launched a new Thunar instance and files were sorted sensibly (as opposed to the Gtk idea of 'naturally'). It was wonderful! Finally, I can find files in Thunar without wasting time scrolling up and down to try to figure out where they have been misplaced.

I couldn't find trace of a Thunar background process running after I logged in (i.e. ps -ef showed nothing with 'thunar' in the command line). So I guessed there isn't one and didn't worry about why.

I use Whisker menu but don't really know how to configure it. When I tried prefixing the command for the file browser launcher with LD_PRELOAD=/path/to/glibSortFileNameHack.so, I got an error that LD_PRELOAD isn't executable. 

So, I made a bash script:

#!/bin/bash
LD_PRELOAD=/home/ian/lib/glibSortFileNameHack.so thunar

And I changed the launcher to run the script.

This seems to work fine. At least, I haven't noticed any problems yet.

Now every Thunar instance sorts files sensibly.

Kudos to Alexandre Richonnier for publishing GlibSortFileNameHackLibrary. It hasn't been updated in 9 years, but it still works a treat!

Sunday, April 2, 2023

@ig3/srf - Spaced Repetition Flashcards

It is 2 years since I gave up on Anki and wrote @ig3/srf.

I have used it to study almost every day, for two years now. I have changed the scheduling algorithm several times: sometimes little tweaks of parameters, sometimes fundamental changes to the algorithm.

The most recent big change was to eliminate the percentage of correct answers as a factor in calculating the new interval. Instead, on each review of a card whose interval is longer than the learning threshold, the interval and due date of every card with an interval longer than the learning threshold are adjusted, according to the difference between the percent correct and the target percent correct. This decreases the delay of feedback. It's early days, but it seems to be working.
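A minimal sketch of that adjustment, with invented names and an assumed linear scaling (this is an illustration, not the actual @ig3/srf code):

```javascript
// Hypothetical sketch of the global interval adjustment described above.
// The function name, card shape and exact formula are all assumptions.
function adjustIntervals (cards, percentCorrect, targetPercent, learningThreshold) {
  // Scale intervals up when performance is above target, down when below.
  const factor = 1 + (percentCorrect - targetPercent) / 100;
  return cards.map((card) => {
    if (card.interval <= learningThreshold) return card; // learning cards untouched
    const interval = Math.round(card.interval * factor);
    return { ...card, interval, due: card.lastReview + interval };
  });
}

// 95% correct against a 90% target stretches review intervals by 5%.
const adjusted = adjustIntervals(
  [{ interval: 20, lastReview: 1000 }, { interval: 1, lastReview: 1000 }],
  95, 90, 2
);
console.log(adjusted[0].interval); // 21
```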

Overall I am very pleased with @ig3/srf. I am limited by my poor memory but not overwhelmed and not feeling like a failure. At least, not most days. There are some days, usually after a few days without sleep, that it truly seems hopeless. But then I get some rest and get back on track.

I have put a lot of time into developing @ig3/srf over the past two years, but I am confident that it was less effort for a better outcome than trying to maintain my Anki plugin to improve Anki's algorithm. I don't regret my decision at all.

Thursday, March 23, 2023

apt update failing with connection failed

Today, apt update was failing:

Hit:1 http://deb.debian.org/debian bullseye InRelease
Err:2 http://security.debian.org/debian-security bullseye-security InRelease
  Connection failed [IP: 151.101.166.132 80]
Err:3 http://deb.debian.org/debian bullseye-updates InRelease
  Connection failed [IP: 151.101.166.132 80]
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.
W: Failed to fetch http://deb.debian.org/debian/dists/bullseye-updates/InRelease  Connection failed [IP: 151.101.166.132 80]
W: Failed to fetch http://security.debian.org/debian-security/dists/bullseye-security/InRelease  Connection failed [IP: 151.101.166.132 80]
W: Some index files failed to download. They have been ignored, or old ones used instead.

I was able to access the resources in my browser, which was redirected to https.

So I updated /etc/apt/sources.list, changing http to https throughout and then apt update completed without errors.

Hit:1 https://deb.debian.org/debian bullseye InRelease
Get:2 https://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
Get:3 https://security.debian.org/debian-security bullseye-security InRelease [48.4 kB]
Get:4 https://security.debian.org/debian-security bullseye-security/main Sources [192 kB]
Get:5 https://security.debian.org/debian-security bullseye-security/main amd64 Packages [236 kB]
Get:6 https://security.debian.org/debian-security bullseye-security/main Translation-en [154 kB]
Fetched 675 kB in 5s (138 kB/s)                            
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
68 packages can be upgraded. Run 'apt list --upgradable' to see them.

The original sources had been working for well over a year. It seems http is no longer supported.
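The resulting sources.list entries would look something like this (a sketch: the suites are taken from the apt output above, but the components are assumed):

```
deb https://deb.debian.org/debian bullseye main
deb https://deb.debian.org/debian bullseye-updates main
deb https://security.debian.org/debian-security bullseye-security main
```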




Monday, March 13, 2023

@ig3/couchapp is deprecated

@ig3/couchapp is deprecated.

It works with CouchDB 3.x but vhost and rewrite rules are deprecated and planned to be removed from CouchDB 4. Without them, couchapps will not work.

The primary feature of a couchapp is that it is served from a CouchDB server. No other server is required.

The concept of a CouchApp arose with CouchDB. But, as the CouchDB docs indicate, CouchApps are deprecated:

Note: Previously, the functionality provided by CouchDB’s design documents, in combination with document attachments, was referred to as “CouchApps.” The general principle was that entire web applications could be hosted in CouchDB, without need for an additional application server.

Use of CouchDB as a combined standalone database and application server is no longer recommended. There are significant limitations to a pure CouchDB web server application stack, including but not limited to: fully-fledged fine-grained security, robust templating and scaffolding, complete developer tooling, and most importantly, a thriving ecosystem of developers, modules and frameworks to choose from.

The developers of CouchDB believe that web developers should pick “the right tool for the right job”. Use CouchDB as your database layer, in conjunction with any number of other server-side web application frameworks, such as the entire Node.JS ecosystem, Python’s Django and Flask, PHP’s Drupal, Java’s Apache Struts, and more.

Several tools have been written to automate building and deploying CouchApps. The earliest I know about were developed by Chris Anderson. Some history is available in the post: What is Couchapp?.

I had been using a node based implementation of the couchapp tool, but when I set up a new CouchApp in early 2022 I found it no longer worked. I forked it and released @ig3/couchapp, but then learned that CouchDB had deprecated the features that make CouchApps possible.

So I rebuilt my CouchApp / couchapp based apps using a combination of nginx and CouchDB.

It isn't hard.

Everything comes from nginx:

 * static content is served by nginx directly

 * CouchDB access is proxied by nginx

 * Rewrite rules are written in nginx

 A very simple app might be configured in nginx as:

server {
  server_name mycouchapp.example.com;
  listen 443 ssl http2;

  client_max_body_size 8M;

  location / {
    root /usr/local/data/mycouchapp/attachments;
    try_files $uri /index.html;
  }

  location /couchdb {
    proxy_pass http://localhost:5984/;
  }
  location /couchdb/ {
    proxy_pass http://localhost:5984/;
  }
}
 
Note the trailing slashes on the URLs given to proxy_pass. They are significant. Without them, the entire request path is passed to CouchDB. With them, the part of the path matching the location (/couchdb or /couchdb/) is replaced by /. For example, a request for /couchdb/mydb is proxied to http://localhost:5984/mydb.


