Fork `basement`? As `baseplate`?

My own desire is simple: the Stack project’s stated aims include “to depend on well-known packages”, and I would like to eliminate (or, at least, reduce) its (direct or indirect) dependency on unmaintained packages (even well-known ones).

I can’t speak for Kazu Yamamoto (the maintainer of crypton) but I hope he would be receptive to pull requests that reduced its dependency on unmaintained basement.

Not to dab on @ApothecaLabs’ work, but this package has a different goal: it’s just a drop-in replacement, so that the community can transition to something better designed later on.
I do accept contributions, and I would love if people would help maintaining it.

Claude ripped out all the basement/foundation parts. Should work for other packages too…
I’ll see if I can finish porting over the crypton changes tomorrow.


We’re doing this, guys: Rip out basement by jappeace · Pull Request #67 · kazu-yamamoto/crypton · GitHub

I’ll go over the other ones in the graph in the coming days with my Claude hammer of justice, and see how much we need basement. (We don’t.)


@mpilgrem That graph is looking much nicer than the first one :)

It is always good to have more hands helping keep the boat afloat. Y’all maintaining a drop-in replacement lets me focus on making my work a significant and meaningful improvement over the previous memory interface; if I were worried about maintaining a fork of memory itself too, I don’t think I’d have the energy!

Keep slaying those legacy dependencies!


I hate to bring this up again, but ram depends on base64, which will make downstream packages crash on 32-bit systems (yes, I know, there are still some out there to this day).

TBH GHC barely works on 32-bit ARM, so I’m not sure if the crash is a fault of base64. The way to check would be to try on i386 or wasm.

I was gonna say “what about MicroHs?”, but ram/memory depend on ghc-prim. Although I don’t know if there’s any reason MicroHs doesn’t support that.

How about the fact that ghc-prim literally only consists of GHC internals and primops? Why would you expect MicroHs (or any Haskell compiler that isn’t GHC) to support it?

For what it’s worth, packages shouldn’t depend on ghc-prim in the first place, and the majority of those that do don’t need to: they could use modules like GHC.Exts instead.
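To illustrate (a hypothetical sketch, not code from any of the packages discussed): code that reaches for unboxed primitives can usually import them from GHC.Exts, which base exposes as a stable re-export surface, instead of depending on ghc-prim directly:

```haskell
{-# LANGUAGE MagicHash #-}

-- Instead of `import GHC.Prim ((+#))` plus a build-depends on ghc-prim,
-- the same primops and unboxed types are available from base:
import GHC.Exts (Int (I#), (+#))

-- Hypothetical example: addition on the unboxed Int# inside Int.
addUnboxed :: Int -> Int -> Int
addUnboxed (I# x) (I# y) = I# (x +# y)

main :: IO ()
main = print (addUnboxed 2 3) -- prints 5
```

The design point is that GHC.Exts is the documented, GHC-sanctioned entry point for these internals, while ghc-prim is an implementation detail that other compilers have no reason to provide.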

As for MicroHs support of 32-bit platforms: the MicroHs CI also runs on 32-bit architectures, so everything should work.


I can revert the commit undoing the dependencies on base16, base32 and base64. I just thought it’d be nicer to not have a custom implementation of that. I think the maintained packages would be faster and more reliable.

I dropped the base16, base32 and base64 dependencies for now; we can do this kind of cleanup in a separate step after we’ve gotten rid of basement.
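For context, delegating to a maintained encoder is tiny. A hypothetical sketch using the widely used base64-bytestring package (not necessarily the package the crypton work would settle on):

```haskell
-- Hypothetical sketch: using the maintained base64-bytestring package
-- instead of a hand-rolled Base64 implementation.
import qualified Data.ByteString.Base64 as B64
import qualified Data.ByteString.Char8 as BS8

main :: IO ()
main = do
  let encoded = B64.encode (BS8.pack "hello")
  BS8.putStrLn encoded       -- prints aGVsbG8=
  print (B64.decode encoded) -- prints Right "hello"
```

This is the trade-off mentioned above: a dedicated package gets exercised (and optimized) far more than a private copy inside a crypto library would be.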


I personally would never apply for global/public name in that case.

Why would I trust someone else to know better than me who is a good maintainer for the package/library I wrote?

This policy is in very stark contrast to e.g. what the CLC does for core libraries: although it can appoint new maintainers, it generally only does so when the current maintainer is not contactable or has quit.

I don’t think you’ll find a whole lot of passionate package authors who would agree to your proposed deal.

You could do it another way and say global/public names are just symlinks maintained by a committee or something. But at that point, I’d rather not upload my packages to hackage anymore. If they decide to put one of my packages into global namespace with my name on it, then change it in a couple of years to something worse, this could indeed damage my own reputation as a maintainer.


Absolutely fair point. Actually, my proposal would be much better if we:

  • keep the same technical solution
  • still let maintainers apply for a global name
  • let the CLC (or a separate, similar body) decide if/when it is necessary to reassign a global name
  • adopt the current CLC policy for these decisions

Yes, that’s pretty much my proposal.

The implicit perspective here is, I believe, that of a responsible maintainer. Responsible maintainers are not the reason for such measures, and also shouldn’t ever have their packages reassigned. The reason for such a policy is maintainers who lose interest, or who otherwise become a liability from the perspective of the Haskell ecosystem. So a responsible maintainer should see negligible disadvantages from applying for a global name.

The advantage of a global name is higher visibility: newcomers to Haskell especially, and experienced devs new to a particular part of the ecosystem, can be recommended to pick a library with a global name.

Of course, no one needs to apply for a global name. But I believe many would. It could be a requirement for core libraries. I would definitely apply.

Hackage admins can already choose to do this. Right now they say “we won’t do this”. If they instead said “we’ll only do this in circumstances X, Y or Z”, where you’re confident you won’t make those circumstances arise, would you trust them less?

I think that we fundamentally have to choose one of two options:

  1. Maintainers can say “I don’t want to maintain this any more and no one else is allowed to either”, and everyone who was using their package needs to update.
  2. Someone gets to override maintainers on that matter, and then maintainers and users need to trust that person (or group) to make good decisions on when to override them.

I think we can find ways to make (1) less of a problem, like “this package is actually provided by this other package now”. But “everyone who uses a package depending on basement needs to put ‘basement is provided by baseplate’ in their cabal.project/stack.yaml until the whole ecosystem has caught up” is still not great.

(2) feels to me like a reasonable amount of trust to extend to the people who already maintain a bunch of critical infrastructure.

That said, with my proposal from above where I get both @philh/acme-missiles and a global acme-missiles pointing at it (unless that already exists), then I’d be fine with maintainers saying “actually I don’t want the global one”, and then users have to point directly at the namespaced one.

(What if I do that, and then someone forks my package and grabs the global name acme-missiles pointing at @hilph/acme-missiles? Then he could choose to do nothing with his fork except keep it up to date with mine, and we’re back in basically the same situation. Seems fine to me! Just like it would be fine if someone had forked basement as committee-basement years ago, and done nothing except keep the fork up to date with basement, right up until Vincent had stepped away.)

No, I don’t think we do.

Core libraries is a third option. To become a core library…

  • the current maintainer has to explicitly apply or agree
  • the CLC itself has to agree as well

So this is all opt-in and has nothing to do with hackage. No one is being overridden here. If basement was a core library, then it would still have a maintainer today. But it isn’t.

That just means the authors of all those packages depending on basement sadly made a poor decision to rely on this ecosystem, because there was no sustainability guarantee around it. I’m sorry, but it’s your responsibility as a package author to also look at the maintainers of your dependencies and their policies. I do that, and I drop packages that I find unsustainable.

To me personally it was very clear a long time ago that the entire Vincent ecosystem is not sustainable, and it was on my personal blacklist of things not to use (for more than one reason). Please, let it rest in peace and let people salvage what remains useful of it.

I don’t think we need to let this exceptionally eccentric example of maintenance push us towards sketchy Hackage policies. There have been other attempts besides core libraries to make sustainability guarantees around packages, and I would suggest that this is a better course of action.

I could very well imagine a project that goes through Hackage packages and makes an opinionated list featuring a sustainability score. Lots of ways to tackle this problem. But tbh, I don’t think it’s a frequent problem (as in: a bitrotted package with the maintainer blocking takeover).


On another note, I think it should be possible to get a name that doesn’t pollute the global namespace. Haskellers put projects on hackage for others to view, even if they aren’t finished or good, e.g. Proton, Units List, and others. I think this would also clarify which packages are seriously maintained and which are just pet projects with no guarantees (this is sometimes hard to find out). My suggestion isn’t really related to basement, just to the idea of namespace pollution.

Core libraries seem like (2) to me. If basement was a core package, then CLC would pick a new maintainer for it regardless of Vincent’s wishes.

(It’s true that we don’t need to pick between the two options globally, we can make different choices for different packages. So right now we have (1) by default and (2) for core packages, but it would also be possible to have (2) for everything, or (2) by default but (1) if someone explicitly wants it.)

If I want to write a package that makes an https call, do I have a realistic choice that doesn’t depend on a Vincent package? Even if someone looks at every single package in their dependency graph (and looks again every time the graph changes) and asks “does this seem sustainably maintained to me”, I don’t think it’s obviously a poor decision to go with the library that does the thing you want.

(If you expose yourself to risk correlated with everyone else, then when things go wrong, they go wrong for everyone; and there’s a good chance that when the problem gets solved for everyone else it gets solved for you too.)

I agree we shouldn’t make sketchy hackage policies, but I don’t consider anything I’m suggesting to be sketchy.

Curl bindings, or shelling out to curl, have both been superior for some time.

The problem is more if one wants to serve HTTPS, but there the superior option, imho, has been to run behind an nginx reverse proxy or the like.


nice state of the ecosystem “just shell out to curl” lol


ah so here’s an interesting question:

hackage revisions currently are done to adjust bounds. if a drop-in, strictly more compatible fork under a different name is created… does it merit downstream hackage revisions?

hackage revisions are usually because the package author made a mistake. too strict of bounds.

in this case, they made the mistake of coupling to vincent.

in both cases, no harm is done to the end-user. they get a strictly more compatible thing where before they got a cabal-install error.

like if i make basement2, why can’t we do revisions to basement-depending libraries? we would if i did basement 2.0.0.0. absent versioning, all hackage versions could be implemented as global identifiers on hackage by concatenating name and version. they are functionally the same thing

we obviously don’t want to step on vincent’s rights to the basement name. but does he have a right to freeze everyone downstream of him too? i assume they don’t care about him but rather just want their packages to work for anyone trying to use them.

I guess this is semi-on-topic.

This is exactly what cabal-install and GHCup do. It’s not a hack or a “we don’t know better”.

Curl is the de facto standard when it comes to fetching things. It works behind complicated proxy configs and whatnot. Why would you try to rediscover all that knowledge from scratch?

Now, you could say: well, just use the libcurl bindings. Programmatically that’s indeed nicer, but now you run into distribution issues:

  • if you link dynamically to libcurl, the end user system might not have the required library (or SONAME)
  • if you link statically, now you’re suddenly in charge of following CVEs across the whole curl stack (enjoy) and there might still be some portability issues (e.g. where to find certificates etc. is complicated and might now differ, because the system where you statically linked curl is not the same as the system where it’s run)

So yeah: shelling out to curl is in fact a smart thing to do.
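As a sketch of what that looks like in practice (hypothetical code, not cabal-install’s or GHCup’s actual implementation), shelling out amounts to a few lines with the process package; the function name and flag selection here are illustrative:

```haskell
-- Hypothetical sketch of shelling out to the system curl, in the spirit of
-- what cabal-install and GHCup do. Requires the `process` package.
import System.Exit (ExitCode (..))
import System.Process (readProcessWithExitCode)

-- Download `url` to `dest`, delegating proxy, TLS, certificate and
-- redirect handling to the user's own curl installation.
fetchWithCurl :: String -> FilePath -> IO (Either String ())
fetchWithCurl url dest = do
  (code, _out, err) <- readProcessWithExitCode "curl"
    [ "--fail"                 -- treat HTTP errors as a non-zero exit code
    , "--location"             -- follow redirects
    , "--silent", "--show-error"
    , "--output", dest
    , url
    ] ""
  pure $ case code of
    ExitSuccess   -> Right ()
    ExitFailure n -> Left ("curl exited with code " ++ show n ++ ": " ++ err)
```

The point upthread is exactly this: all the proxy, certificate and redirect knowledge lives in the curl binary the end user’s system already has and trusts, rather than being re-solved (and re-patched for CVEs) inside the Haskell program.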
