GHC WebAssembly Weekly Update, 2023-02-08

Fewer bugfixes this time, but still a busy week.

  • Reported a bug that affects the wasm native codegen (#22896). It only affects source location info in IPE builds, but it turns out to be a bit trickier than I thought; I’m still working on it.
  • Fixed a bug in the testsuite driver (!9919). The driver creates a Python thread for each run of a test case, and uses semaphores/locks explicitly for rate limiting & synchronization. This is bad practice and can result in live-lock situations, which frequently occurred when I tested the wasm backend. I didn’t bother to pin down the exact place or cause of the live-lock; instead I rewrote a small part of the driver to delegate test case running to a thread pool, which is simpler, more robust, and sufficient to unblock my testing work.
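The thread-pool approach can be sketched in Python. This is an illustrative sketch, not the actual testsuite driver code, and `run_test` is a hypothetical stand-in for running one test case; the point is that a fixed-size pool bounds concurrency by construction, so no explicit semaphores or locks are needed for rate limiting:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_test(name):
    # Hypothetical stand-in for actually running one test case.
    return name, "pass"

def run_all(tests, jobs=4):
    # A pool of `jobs` workers rate-limits test runs by construction:
    # at most `jobs` tests run concurrently, with no manual locking.
    results = {}
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        futures = [pool.submit(run_test, t) for t in tests]
        for fut in as_completed(futures):
            name, outcome = fut.result()
            results[name] = outcome
    return results
```

Because the pool owns all scheduling, there is no hand-written wait/notify protocol left to live-lock.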
  • Partially fixed an RTS bug on 32-bit targets (!9924). The RTS has a lot of tunable magic numbers, most notably the storage manager block/mblock sizes. It turns out that experimenting with alternative mblock sizes results in a lot of crashes. This is bad and violates the single-source-of-truth principle: other places in the RTS implicitly depend on the old numbers, and these numbers ought to affect only performance, not correctness. I dug into the RTS and identified two such places. This is a work in progress; there’s still a nonmoving GC crash I need to look into here. It may seem a bit odd to spend time on something that already works by default, but I dislike it when things work but I don’t know why.
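The single-source-of-truth idea can be illustrated with a toy Python model. The constant names below only mirror the flavour of the RTS storage manager; they are not the real GHC definitions. Every size-dependent value is derived from one shift constant, so retuning it cannot leave a stale hard-coded copy behind:

```python
MBLOCK_SHIFT = 20                 # the one tunable knob: 2^20 = 1 MiB mblocks
MBLOCK_SIZE = 1 << MBLOCK_SHIFT   # derived, never written out by hand
MBLOCK_MASK = MBLOCK_SIZE - 1     # derived low-bit mask

def mblock_round_down(addr):
    # Start address of the mblock containing addr.
    return addr & ~MBLOCK_MASK

def mblock_round_up(addr):
    # First mblock boundary at or above addr.
    return (addr + MBLOCK_MASK) & ~MBLOCK_MASK
```

If some other module instead hard-coded `0xFFFFF` as the mask, changing `MBLOCK_SHIFT` would silently break it, which is exactly the class of bug described above.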
  • In a recent discourse thread, I mentioned the possibility of running some Haskell computation to initialize some global state, then serializing the entire memory into a new module that can be deployed without any initialization overhead. I proceeded to actually experiment with the idea, and it works. @amesgen helped put up an ormolu-live prototype. There’s one issue though: when snapshotting a heap that has run some Haskell computation, a lot of garbage bytes are captured as well, which bloats the wasm size. I’ve put up a GHC patch to fix this (!9931). After that lands, the pre-initialization idea will work in a production app.
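The pre-initialization idea is roughly analogous to the following Python sketch. This is a loose analogy, not the actual wasm memory-snapshot mechanism: pay the initialization cost once at build time, serialize the resulting state, and reload it instantly afterwards instead of recomputing:

```python
import pickle

def expensive_init():
    # Stand-in for the Haskell computation that builds global state.
    return {n: n * n for n in range(1000)}

# "Build time": run the computation once and snapshot the resulting
# state, analogous to serializing a pre-initialized wasm memory
# into a new module.
snapshot = pickle.dumps(expensive_init())

# "Run time": deserialize the snapshot instead of re-running
# expensive_init, so startup pays no initialization cost.
state = pickle.loads(snapshot)
```

The garbage-bytes issue mentioned above corresponds to dead data that happens to sit in the heap at snapshot time getting serialized along with the live state.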
  • I’ve written more tutorial content (ghc-wasm-meta!13): a more detailed explanation of what a WASI command/reactor is, an example of using wizer to pre-initialize a module, and how to add custom imports that can be called from C/Haskell.
  • Looked into fixing -dtag-inference-checks and -falignment-sanitisation for all backends, as part of my wasm backend testing work. I know how to do it now.
  • Opened a nofib ticket (nofib#29) to record missing features for testing cross backends. Once I’m done with the GHC testsuite for the wasm backend, it’ll be time to look into nofib, and I’ll implement those features if they’re still absent by then.
  • Working on copying some existing ghc-wasm-meta documentation into the GHC user guide.
  • I experimented a bit with wasm32-wasi-threads, given it’s already supported in wasi-sdk and wasmtime. libc pthreads, locks, atomics and shared-memory concurrency…the old nightmare is new again! Jokes aside, my work on the GHC wasm backend will stick to single-threaded wasm for a long time, but it doesn’t hurt to take a glimpse at what is yet to come in the wasm world.

Previous update: