I find this model more appealing than services where I have to build a full system container and worry about provisioning (virtual) servers.
So naturally the next question is: Can I run Haskell on these?
With GHC gaining both JS and WebAssembly targets, there may well be multiple ways to compile Haskell to something that can run on these platforms, probably with just a small shim to be written. It remains to be seen whether the output is small enough (Deno Deploy has a limit of 20 MB), and whether the startup time and memory consumption are acceptable.
Regarding code size: one real-world example I know of is ormolu-live; https://ormolu-live.tweag.io/ormolu.f1c4a8ef.wasm is 18.32 MB uncompressed. But it should be easy to upload a compressed bundle and add extra decompression logic, so I wouldn’t worry about code size here.
Regarding startup time and memory consumption: the wasm backend uses the cross-compiled vanilla GHC RTS. You can tune the RTS flags, use different GC algorithms (including the nonmoving GC), and produce profiles just as you would for native programs.
It’s even possible to link a wasm program first, run some Haskell computation to initialize heap state, then snapshot the entire program state into a new wasm program, allowing it to hit the ground running with zero initialization overhead on the servers. I shall write a tutorial about this in ghc-wasm-meta next week.
Cool! I guess the next question is how to provide the interface expected by these service providers, i.e. what is the Haskell equivalent of this TypeScript code:
For this particular use case you may export a `Text -> IO Text` Haskell function. The JavaScript glue takes care of marshaling the request/response.
EDIT: the type signature is a conceptual one; the FFI mechanism currently doesn’t allow using Text directly as an argument or return value. But you get the point: you can move blobs across the JavaScript/Haskell boundary.
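To make the shape concrete, here is a minimal sketch of what such a handler could look like (plain GHC, nothing wasm-specific; `handleRequest` and its echo logic are assumptions for illustration, not the actual glue):

```haskell
-- A hypothetical edge handler: request body in, response body out.
-- In a real wasm deployment, a wrapper around this would be exposed via
-- `foreign export`, marshaling the blobs through pointers/CString, since
-- Text can't cross the FFI boundary directly yet.
module Main where

handleRequest :: String -> IO String
handleRequest body = pure ("Hello from Haskell, you sent: " ++ body)

main :: IO ()
main = handleRequest "ping" >>= putStrLn
```

The JavaScript side would then read the incoming request body, pass it across the boundary, and wrap the returned blob in an HTTP response.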
I think ormolu is huge because it embeds the information about Hackage operator fixities as code, via TH. Turning that into runtime data would likely both shorten the compile time and shrink the output.
Actually, we don’t/can’t do that via TH as the WASM backend does not support TH yet. Instead, we initialize the fixity DB (1.2 MB) at runtime, but it is not part of the 18.32 MB mentioned above.
Ah, thanks for filing the bug; the assembly source example I gave was actually wrong! It confuses wasm-ld, since in hello.o the unresolved symbol http_resp_new is always treated as an env import, given how the current C toolchain works.
Here is a fastly-sys.c instead, which should unblock your example:
Note that the http_resp_new function called by Haskell is merely a wrapper that calls the actual wasm import fastly_http_resp.new. See “Attributes in Clang” (Clang 17.0.0git documentation) for more explanation of these clang-specific attributes for accessing wasm imports from C. I’ll make sure to update the documentation later this week.
Not yet. It’s surely doable, but it will involve some changes to GHC internals, to make it aware of the distinction between a raw wasm import and an actual wasm function defined in another object.
I chose Fastly because I was at a WebAssembly event and some Fastly people were there, so I was inspired to see if I could get it to work during some of the talks.
Depending on the workload, I might still place more trust in a static executable uploaded to AWS or similar. I didn’t do any measurements, though.