How to give libraries optional dependencies?

I have to apologize: I have severely misremembered how this worked.
Indeed, in Rust too, enabling features is a manual process.

I’m sorry.

A difference from Cabal is that in Rust/Cargo, a package can define feature flags that enable feature flags in its dependencies, and that optional dependencies automatically become feature flags of the package they are used in (by default; this can be overridden).

In essence, this means that in Rust today it is fine for a library to enable few or no features by default, since anything that uses the library can easily enable the ones it needs.
Conversely, in Haskell, features can only be enabled at the top level by the ‘end user’ and not by intermediate libraries, so libraries pretty much have to enable all features by default, and the onus of figuring out which features can be disabled for which (transitive) dependencies falls squarely on the end user.

Your suggestion may well be the next step in the evolution of Cabal, and it will be a good thing if it happens. The step after that might be adding a CLI and an API to query which flags are available for tweaking in this way.

One can dream about an alternative history, where GHC itself reported which dependency arrows were used exclusively for instances, as well as which instances were actually used. Combined with some finer-grained link-time control, this would let us eliminate not only the Cabal flags but also the #ifdefs that guard the instances.
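To illustrate what would be eliminated, here is a minimal sketch of such a flag-guarded instance. The length package, its aeson flag, and the HAS_AESON macro are all hypothetical; the package’s .cabal file would define the macro when the flag is enabled, e.g. with cpp-options: -DHAS_AESON.

    {-# LANGUAGE CPP #-}
    -- Length/Type.hs in the hypothetical length package.
    module Length.Type where

    #ifdef HAS_AESON
    import Data.Aeson (ToJSON (..))
    #endif

    newtype Length = Length Int

    #ifdef HAS_AESON
    -- This instance exists only in builds where the flag (and hence the
    -- aeson dependency) is enabled; downstream code cannot rely on it.
    instance ToJSON Length where
      toJSON (Length n) = toJSON n
    #endif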

I’m confused. Are you saying the serialization typeclasses are useless for writing good programs? That doesn’t at all agree with my experience from industry.

More generally, my impression is that the abuse heaped on lawless type classes and orphan instances originates as a defensive mechanism to cover up deficiencies in the language and its package system. It’s far better to acknowledge that the problems exist; otherwise they will never be fixed.

No worries. I think your idea, which I would call “weak” dependencies because of the similarity to weak pointers, sounds even more useful than Rust/Gentoo-style optional dependencies.

By the way, there is a discussion on the cabal issue tracker about this same topic with mentions of Rust: Support for adding flag constraints in Cabal files · Issue #2821 · haskell/cabal · GitHub

Oh, and by the way, as “mmhat” mentions in that thread, public sublibraries can cover this use case: you can specify which public sublibraries to depend on. Kowainik wrote about it here: Insane in the Membrain :: Kowainik

No, I’m saying:

  • An instance that does what is already possible, except worse, is most probably useful only for collecting dust and confusing newcomers (documentation conciseness is a concern too).

    Just because you can have a Semigroup (Map k v) where (<>) = union doesn’t mean the instance deserves a spot on the documentation list (see the first sketch after this list). Similarly, Factorial Text may make sense for length, but the folds in that instance are directly worse than the datatype’s own.

  • For serialization in either direction, unambiguous conversions are quite rare.

    The general expectation, it would seem, is that having a separate function for each conversion would be a mess, but bytestring#Builder is a great example of how that is not the case (see the second sketch below).

Violating either of these points doesn’t make an instance outright useless; it just makes it bad.
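To make both points concrete, two small sketches of mine. First, Data.Map’s actual Semigroup instance is the left-biased union, so it silently drops values, exactly as Map.union already does; the named function states the intent and lets you pick a combining policy:

    import Data.Map (Map)
    import qualified Data.Map as Map

    m1, m2 :: Map String Int
    m1 = Map.fromList [("a", 1)]
    m2 = Map.fromList [("a", 2)]

    main :: IO ()
    main = do
      -- (<>) is left-biased union: the 2 is silently dropped.
      print (m1 <> m2)                 -- fromList [("a",1)]
      -- unionWith makes the combining policy explicit.
      print (Map.unionWith (+) m1 m2)  -- fromList [("a",3)]

Second, the Builder style: one named function per conversion, composed via the Monoid instance, with no catch-all serialization class in sight:

    import Data.ByteString.Builder (Builder, char7, doubleDec, hPutBuilder, intDec)
    import System.IO (stdout)

    -- One named function per conversion; (<>) composes them.
    record :: Int -> Double -> Builder
    record n x = intDec n <> char7 ',' <> doubleDec x

    main :: IO ()
    main = hPutBuilder stdout (record 42 3.14 <> char7 '\n')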

Perhaps, but out of all the problems the language has, this is a relatively minor one, and solving it nicely is in no way obvious: I assume any proper solution would require changes to GHC (with syntax additions), Hackage, and Cabal.

No.

It is very bad to make Cabal flags that control the exposed API, because a library foo depending on length cannot specify that it requires particular flags of length. So you end up with compile errors that depend on which flags happen to be set.
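To make the failure mode concrete, continuing the hypothetical length package sketched earlier in the thread: downstream code has no way to demand the flag in its build-depends, so whether it compiles depends on how the flag was resolved elsewhere in the build plan.

    -- A module in package foo, which depends on the hypothetical
    -- length package from the earlier sketch.
    module Foo (serialize) where

    import Data.Aeson (encode)
    import Data.ByteString.Lazy (ByteString)
    import Length.Type (Length)

    -- Typechecks only if length was built with its aeson flag enabled;
    -- foo cannot express that requirement, so this compiles or fails
    -- depending on flag resolution it does not control.
    serialize :: Length -> ByteString
    serialize = encode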

Public sublibraries are irrelevant to this topic, so I’m not sure why people are bringing them up.

Yes, and I urge everyone to take a moment to consider the amount of packager coordination and end-user config fiddling this concept causes. It is nothing but bad UX. And Hackage isn’t a managed repo like the Portage tree; we’d be steering into a disaster.


For the specific issue, I suggest embracing orphan instances.
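For instance, a minimal sketch of what that can look like (the module name is made up): the instance lives in its own module, typically shipped as a tiny dedicated package, so neither the type’s package nor aeson needs any flags. ByteString is a real example here: aeson deliberately ships no ToJSON ByteString because the text encoding is ambiguous, which also illustrates the earlier point about unambiguous conversions being rare.

    {-# OPTIONS_GHC -Wno-orphans #-}
    -- Neither ToJSON nor ByteString is defined in this module, which is
    -- what makes the instance an orphan. A real package would pick an
    -- encoding policy and document it; decodeUtf8 throws on invalid UTF-8.
    module Data.ByteString.Aeson.Orphans () where

    import Data.Aeson (ToJSON (..))
    import Data.ByteString (ByteString)
    import Data.Text.Encoding (decodeUtf8)

    instance ToJSON ByteString where
      toJSON = toJSON . decodeUtf8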

I, too, suspect this is the approach that will lead to, by far, the least pain in practice.

If public sublibraries are irrelevant to this topic, then I feel like they’re irrelevant to any topic: indeed, you can always just replace them with multiple proper packages. However, this seems like a prime example of where public sublibraries could facilitate maintenance and discoverability.

There is an argument to be made that public sublibraries introduce more complexity in the tooling than they are worth, but it is not very productive to make that argument in passing when a potential use case for public sublibraries comes up.

Or have I misunderstood what you’re saying?

The only use case for them is:

  • you have a huge number of packages
  • all these packages should share the same PVP version

This is a bit awkward to achieve with multiple packages. And yet the only use case I’ve seen so far is HLS and its plugins, and I’m not convinced that the versioning currently used (plugins share the same version as HLS core itself) is proper.

It is a theoretical use case and I’ve never seen it in the wild. We’re solving a problem that is not really a problem.

It’s more likely that this approach will lead to bad PVP versioning: maintainers just won’t care that a sublibrary’s API didn’t change, but since it’s a sublibrary rather than a proper package, it will follow along with every major version bump.

This can lead to a degradation of the quality of Hackage.

It’s questionable whether they’re appropriate here. It would be sound only if the length package consisted of just the class definition and nothing else. Otherwise you’re coupling major version bumps of the instance sublibraries to every API change in length, which is nonsensical.

I’m pretty sure most users of sublibraries will not consider this.

That’s merely a convention of the Cabal file format; it doesn’t have to be this way.

Are you proposing that one cabal file can describe multiple versions? That sounds horrible and will break even more tooling.

Your suggestion is unclear; what are you trying to say?

That at the end of the day each public library has its own version, so the Cabal file format could allow each library entry to specify it explicitly instead of inheriting the “file version”. Sharing versions via common stanzas makes total sense in my mind.

I have an idea for a setup I’ve expressed in a different topic that would benefit from this.

Having multiple libraries share the same version is no different from having multiple modules share the same version; it just allows you to split those modules up in a convenient way.

Fine-grained versioning can be nice but can also be a lot of work for marginal gain. That’s why we don’t have exclusively one-module packages. There’s nothing wrong with sometimes having coarser-grained versioning.

So I don’t see how this would lead to a degradation of hackage.

It gives you one more way to get PVP wrong in the name of convenience.

The way it’s aggressively advertised all over Discourse, without any mention that you should still think about whether lock-step versioning actually makes sense, doesn’t make the status quo better.

People already fail to follow the PVP with respect to internals. Explaining that you can use a separate package to get better PVP compliance is already an uphill battle. Public sublibraries are going to make it worse, precisely because of said convenience.

Well, you’re free to have your own cabal binary that reads and interprets such cabal files, but I strongly doubt we on the Cabal team will implement such a thing. And I don’t think any reasonable package repository out there would either.

Well, if they [the Cabal team] solve the related issues in any other way or determine that the current approach is the lesser of all evils, they’re free to communicate that to the broader public. In the absence of such a statement I consider the status quo lackluster (I don’t like importing .Internal modules).

I hope one day to provide Amazonka as a single package with public sublibraries, instead of 300+ (and rising) standalone libraries.

I sympathize, since I can imagine the amount of work this must require.

But, conceptually, if one data type in the core changes, do all 300 packages need a major version bump? 🙂

Technically no, but realistically yes: if we’re sending something to Hackage, in an ideal world we’d always regenerate the service against the most recent AWS-provided definitions just before shipping. (We can’t do that right now because of reasons.)
