DevOps Weekly Log, 2024-03-20

It’s the return of the weekly log!

The last month has been a whirlwind for me personally, so let’s see how much of my work I can accurately recall.

Curiously, the very next task I recorded in my journal after completing the last weekly log was something that reached a big milestone just yesterday. Namely, the main body of the data migration process for the Stackage handover has finished! The process was mostly passive from my perspective, so it was actually a good time for me to be offline. I did have to iron out some issues with the migration tool at first, but the migration itself took a week and a half. Unfortunately, it’s still not quite done. A number of failures need investigation. The failure rate was less than one in a million, and we may end up accepting it as the price of dealing with state in the real world. I haven’t given up on those 161 objects yet, however.

Since the main migration process completed, I was able to address an issue that came to light last week. Unbeknownst to me, stack used an s3.amazonaws.com URL by default to fetch snapshot versions. I had stopped updating the old bucket right before starting the data migration, which meant the old URL had gone stale. Today, I wrote a little script to sync the snapshot data back to the old bucket, so nobody should be affected anymore. Stack will use an updated URL by default in the next release, and if you can’t upgrade, you can change the default. See snapshots for details. The old bucket will eventually disappear, so everybody will need to use the new URL at some point. I don’t yet know when that will be; I’ll add it to the list of followup tasks for the handover.
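The script itself is nothing fancy. A minimal sketch of the idea, with hypothetical bucket names (not the real Stackage buckets) and a guard so it only performs the sync when explicitly asked:

```shell
#!/usr/bin/env bash
# Sketch: copy snapshot metadata from the new bucket back to the old one.
# Bucket names are hypothetical placeholders, not the real Stackage buckets.
set -euo pipefail

SRC="s3://stackage-new/snapshots"
DST="s3://stackage-old/snapshots"

# --delete removes objects in DST that no longer exist in SRC, so the old
# bucket stays an exact mirror of the snapshot data.
CMD=(aws s3 sync "$SRC" "$DST" --delete)

if [ "${DO_SYNC:-0}" = "1" ]; then
  "${CMD[@]}"
else
  # Safe default: just show what would run.
  echo "would run: ${CMD[*]}"
fi
```

Running this on a schedule (cron or similar) until the old bucket is retired keeps the stale URL serving fresh data without any manual intervention.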

Another thing I worked on was a version of hackage-mirror-tool that uses amazonka rather than relying on a custom implementation of the AWS API. While I actually appreciate writing custom API clients in general because it reduces your exposure to churn you don’t care about, hackage-mirror-tool was missing an implementation of AWS’s SignatureV4 algorithm, which is the default for S3 and the only version supported by R2. I felt the world did not need another implementation of that algorithm. Plus, adapting the tool to use amazonka (which already does implement SignatureV4, of course) took less work overall.
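For the curious, the heart of SignatureV4 is a chained HMAC-SHA256 key derivation; most of the remaining complexity is canonicalizing the request. Here is a sketch of just the key-derivation step using openssl, with the example credentials and scope from AWS’s own SigV4 documentation (nothing here comes from hackage-mirror-tool itself):

```shell
#!/usr/bin/env bash
# SigV4 signing-key derivation: a chain of HMAC-SHA256 operations.
# The secret key, date, region, and service below are the worked example
# from AWS's SigV4 documentation, not real credentials.
set -euo pipefail

secret="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# hmac <macopt-spec> <message>: print the hex HMAC-SHA256 digest.
hmac() { printf '%s' "$2" | openssl dgst -sha256 -mac HMAC -macopt "$1" | sed 's/^.* //'; }

kDate=$(hmac "key:AWS4$secret" "20150830")          # HMAC("AWS4" + secret, date)
kRegion=$(hmac "hexkey:$kDate" "us-east-1")         # HMAC(kDate, region)
kService=$(hmac "hexkey:$kRegion" "iam")            # HMAC(kRegion, service)
kSigning=$(hmac "hexkey:$kService" "aws4_request")  # final signing key

echo "$kSigning"
```

The final key signs a hash of the canonical request; getting that canonicalization right for every edge case is exactly the kind of work I was happy to leave to amazonka.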

So far I have only implemented and tested this new version of the tool — the code is not yet pushed anywhere, and it is not yet deployed. I will be doing both soon.

Besides these main tasks, there was the usual assortment of GHC CI issues, mostly having to do with unreliable runner platforms. I hope to address those more substantially after the Stackage handover is complete.

There were also a few Cabal CI issues and discussions I participated in, though most of the hard work is being done by the Cabal maintainers and release managers. I’m just giving my opinion where it is requested. 🙂

And yes, I have a summary of GHC issues triaged this week!

GHC issues triaged this week