Towards a prevalent alternative prelude?

When I do suggest learning Haskell, I don’t find many, if any, of these 30 years of materials to be helpful. I tried for years to learn Haskell with little success, despite the number of these tutorials and blog posts I read. Elm and others have done a far better job of teaching similar material despite being 20 years behind. I don’t think those three decades of material are that vital to Haskell being successful in the future.

18 Likes

We should support and move the entire community to a better base and Prelude. That’s clear and unifying progress.

An alternative prelude divides us.

4 Likes

@bgamari wrote:

Of course, there is a somewhat major hole in this plan: typeclasses. The basic typeclasses that we all rely on (e.g. Functor, Enum, Generic) would all need to live in ghc-base (in part because they have some special support in GHC and in part because otherwise you end up with instance-hell). This poses a challenge for proposals that want to change typeclass methods (e.g. removing a typeclass method, à la Monad of No return) or the typeclass hierarchy (e.g. adding a superclass à la Applicative-Monad Proposal) since these changes also require source-incompatible changes in downstream users.

This is a major problem that was discussed numerous times in the Foldable-Traversable, Applicative-Monad, and Semigroup-Monoid proposals. In short, there are a few ideas that may help mitigate much of the damage, but I don’t recall any approach that eliminates this source of breakage entirely.

I don’t know how GHC handles the above cases, so I have a basic question: Is it possible to define the hierarchies referenced above in (possibly augmented) Haskell, so that GHC doesn’t need special support for them? (Or put another way: Is it possible to add this magic to the language and GHC declaratively, so these aren’t special cases any more?) This would help reduce (or even eliminate) the coupling between GHC versions and typeclasses.

The iconic example is head. ‘head :: f a -> Maybe a’ is foundational for understanding what we do in fp - natural transformations ftw amiright? Calcification of some comonadic hell-child that is ‘head :: [a] -> a’ into standards is against any principled approach.
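A total head in the spirit of this post can be sketched over any Foldable; the name headMay below is illustrative (borrowed from the naming convention of the safe package), not a standard API:

```haskell
-- Hypothetical total head: a natural-transformation-shaped
-- 'f a -> Maybe a' rather than the partial '[a] -> a'.
headMay :: Foldable f => f a -> Maybe a
headMay = foldr (\x _ -> Just x) Nothing
```

headMay [] is Nothing and headMay [1,2] is Just 1, and the same definition works unchanged for Maybe, Seq, and other Foldable containers.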

To be clear, these are all “normal” typeclasses. They are defined in standard Haskell. The only special treatment they get is that GHC knows how to derive instances for them (e.g. using DeriveFunctor) and desugars some syntax in terms of them (e.g. Monad and Applicative in the case of do syntax). The latter can essentially already be disabled (by way of the RebindableSyntax extension) but decoupling the former is a bit trickier.
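A minimal sketch of the RebindableSyntax point: with the extension enabled, do-notation desugars to whatever (>>=), (>>), and return are in scope, rather than the Monad class methods. The hand-rolled Maybe definitions below are purely for illustration:

```haskell
{-# LANGUAGE RebindableSyntax #-}

-- RebindableSyntax implies NoImplicitPrelude, so import Prelude back,
-- hiding the names we want do-notation to pick up locally.
import Prelude hiding (return, (>>), (>>=))

(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Nothing >>= _ = Nothing
Just x  >>= f = f x

(>>) :: Maybe a -> Maybe b -> Maybe b
m >> k = m >>= \_ -> k

return :: a -> Maybe a
return = Just

-- This do-block desugars to the local (>>=) and return above:
example :: Maybe Int
example = do
  x <- Just 1
  y <- Just 2
  return (x + y)
```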

Regardless, I don’t think decoupling these from GHC really solves the root of the problem: we simply have no good tools for changing typeclasses without breaking downstream users, regardless of whether the typeclass is defined in base or elsewhere. Typeclasses are fairly unique in this regard: types can be “shimmed” using pattern synonyms, and functions can easily be given a new name; typeclasses have no comparable escape hatch.
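For contrast, here is what the type-shimming escape hatch looks like; the Result/Ok names are made up for illustration. A pattern synonym lets old code keep matching on a constructor that was renamed in a newer release; nothing analogous exists for a changed typeclass:

```haskell
{-# LANGUAGE PatternSynonyms #-}

-- Hypothetical library type whose constructor was renamed
-- from Ok to Success in a new release:
data Result a = Success a | Err String

-- Compatibility shim: old code using Ok keeps compiling.
pattern Ok :: a -> Result a
pattern Ok x = Success x

{-# COMPLETE Ok, Err #-}

-- "Old" client code, still written against Ok:
describe :: Result Int -> String
describe (Ok n)  = "got " ++ show n
describe (Err e) = e
```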

@viktor writes:

I guess the underlying motivation for a major break is that otherwise it is hard to see how a new UTF8 lazy text becomes the real String, so that filenames are UTF8 strings, show returns UTF8 strings, …, lazy I/O is deëmphasised and [] is a lazy iterator, that isn’t nearly as often abused as a catch-all container.

Yes, shades of Python3, but Haskell is not an interpreted language and module dependencies are versioned, whether compiled statically or loaded as shared libraries (the shared objects have hashed names). So a major compatibility break would not introduce nearly as much deadlock, but it would take O(decade) for the install base to switch to the new way.

I think we are trying to solve different problems. In short, my suggestion was intended to:

  • ease on-boarding of new users by giving the libraries that nearly every Haskell program uses a place in a “standard” library
  • reduce the burden of compiler upgrades on users by beginning the process of decoupling base from GHC
  • ease future library evolution and perhaps begin to make some small clean ups within base
1 Like

This is a great suggestion. base coupled to GHC versions clearly causes pain, so a “standard library” with a cadence separate from the compiler makes sense.

However, to be useful, it will need to be available by default, just like standard libraries of Python, Rust and, well, virtually all other real-world languages. If this expanded base requires explicit opt-in to use, what makes it any different from third-party “standard libraries” we already have? I’m not entirely sure if this was part of @bgamari’s proposal or what everyone else is talking about, so let’s be extra clear about it.

Having the library available by default is crucial for two groups: library authors and beginners.

The most important types are the ones used in interfaces: primitive values, containers, error-handling, streaming, effects, etc. Core types need to be sufficiently standard that library authors can use them without thinking about it. Today text and containers are quasi-standards—I’ve never seen a serious Haskell project that didn’t import containers and/or unordered-containers—but library authors are manifestly reluctant to use them because they are treated as explicit dependencies. No matter how hard you try to make your codebase use Text everywhere, you will still have a lot of conversion to/from String, because that’s the only standard type for strings. A new standard library could help fix this, but only if it is actually standard.
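A small sketch of that conversion churn, assuming the text package is available; shout here stands in for any hypothetical String-only API:

```haskell
import Data.Char (toUpper)
import qualified Data.Text as T

-- A hypothetical legacy API that only speaks String:
shout :: String -> String
shout = map toUpper

-- A Text-first codebase still has to pack/unpack at every
-- such boundary:
announce :: T.Text -> T.Text
announce = T.pack . shout . T.unpack
```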

Having a solid standard library massively improves the Haskell beginner experience. Once you’re used to Cabal and cabal files it might seem trivial to throw in vector, containers, text, bytestring and aeson as dependencies, but dependency management is a major barrier to beginners! I remember that dealing with Cabal was one of the most frustrating aspects of learning Haskell when I started, and I’ve seen the same pattern with people I’ve taught Haskell so I know this hasn’t changed. It’s an even bigger hassle with Python—which has an even more painful and confusing packaging situation than Haskell!—but Python’s standard library means that you can get invested in the language before you have to deal with the pain. This also makes Python far more feasible for quick one-off scripts which, again, really lowers the initial barrier to using the language.

While a default standard library is less crucial for industry users—internal projects can easily move to a different “standard” library—I believe a stronger standard library will actively help industrial adoption. I’ve seen how language decisions are made on corporate projects, and a weak standard library is absolutely a consideration people bring up. Sure, we know that using an alternate standard library for an internal project is easy, but it’s an extra hoop to jump through, and Haskell already has far more obstacles to adoption than it should. The problem of existing libraries using different interface types will never go away, so your codebase ends up needing a lot of ceremony to convert between String and Text, wrap library functions into your internal error-handling mechanism, etc.

I’ve actually written a fair amount of OCaml which, back when I used it, was in a situation similar to Haskell’s: the OCaml equivalent of base shipped with the language was small, inconsistently designed, and warty, so the OCaml community came up with the batteries library (think extended-base) and Jane Street released an alternative called core. The result was not great. core and batteries don’t always play well together, and libraries are often hesitant to use one or the other. As a beginner, the situation was simply painful and confusing. I would absolutely not want Haskell to end up in a similar state, especially when so many other languages have demonstrated just how powerful a good standard library available by default can be. At this point, people simply expect to be able to write useful code without pulling in a half-dozen packages first. If we really have to give people a way to avoid this, opt-out is a much better pattern than opt-in.

18 Likes

One minor point I want to make: we overall tend to treat base as a deeply special library, where backwards compatibility is absolutely required in almost all cases. Breaking changes are viewed with huge skepticism, and for very good reason. That’s what leads to a situation where base follows practices today that most people agree should be fixed, many of which have been mentioned here: partial functions, lazy I/O, type FilePath = String, etc.

I’m in favor of this backwards compatibility. However, it’s completely out of sync with GHC itself, which seems to break huge swaths of existing code with every single release. So we’ve ended up with the worst of both worlds: our code breaks every six months, and we’re stuck with a base library nobody likes.

Trying to decouple the base release cycle from the GHC release cycle, among many other approaches, has been discussed plenty of times in the past, and again for very good reason. But IMO, it presupposes something that isn’t true: that GHC itself doesn’t tend to break code.

13 Likes

As someone else who’s written an alternative prelude/alternative standard library, let me weigh in on why I think we should be taking a greenfield approach instead of using something that already exists. I know when working on rio (and others) I often made subpar decisions in the interests of minimizing friction with other libraries working directly with base. There are many decisions I would have made differently if I didn’t have to keep compatibility with a separately evolving base/bytestring/text/vector/containers/etc ecosystem.

IMO, any new base library should not take the same tack we’ve been taking up until now, of an “alternative prelude” that simply reexports better things than Prelude. We should be looking at fixing many of the flaws you’ve already pointed out, like inconsistencies between different “core” libraries.

That’s not to say relude, or rio, or others, should be ignored. On the contrary: I think the lessons learned from alternative preludes should feed into the design process here. But I also think that if we limit ourselves to libraries that already exist, we’re going to be selling ourselves short on what a great standard library should look like for Haskell.

21 Likes

@bgamari decoupling base from GHC sounds like an extremely worthy cause that can unlock a lot of action. If some foundational modules and typeclasses can’t be decoupled, that is fine. I hate to see perfect be the enemy of good, especially when remaining problems can be iterated on later.

@Tikhon’s point about availability is something worth considering. How can we decouple, but remain available by default?

4 Likes

In my opinion a completely new standard library with a new API would break too many existing codebases. A lot of people have come to terms with the existing functions in base. Personally I don’t have a problem with the String type, as I don’t do that much text processing. head is also OK for me, because I prefer to have a simple list as input; if I want more safety I can implement my own safeHead.
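The hand-rolled safeHead mentioned here is a three-liner, which is exactly why many people are content to define it locally:

```haskell
-- Total variant of head, returning Nothing instead of crashing:
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x
```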

I think an evolutionary approach would be the better option. A starting point might be to pick some widely adopted core libraries, e.g. containers, bytestring, text, random, vector, process, time, and put them all together in one single core package. As a guideline for the improvement process, the core library maintainers should strive to make each package depend only on base. The content of base would define the Haskell language standard. Functions commonly wanted by the core library maintainers should go into base. Likewise, if language extensions are needed for the core libraries, they should be in the new Haskell language standard. The bundled core package should be only loosely coupled, to make it easier for newbies.
As a consequence of this approach I would expect shorter build times because of shorter dependency lists, and more low-level code in the core libraries, which tends to be more performant. Another benefit would be that Haskell compiler developers would know that their implementation also has to build the core libraries.

Surely this would take several years to implement, but I think there is an urgent need to establish guidelines before the ecosystem becomes too frayed. The growing dependency lists of the core libraries are a growing concern for me.

3 Likes

I find this a bit problematic. The learning resources I used pointed out the warts of Haskell, but ended up saying “it is what it is”, because nothing can be done about the current standard. Many people choose to accept the quirks, work around them, or just drop Haskell as an option entirely, because all the little annoyances compound into a bad experience, which matters a lot in the context of increasing Haskell’s industry adoption. The simplest example I know of is String vs Text: a large share of the resources I read recommended using Text over String. So I definitely found it strange that, if this is such a common recommendation, it is not the standard.

But I do understand why it’s a difficult decision to make: if the new standard library is not as great as we all have envisioned, it breaks existing projects without convincing potential adopters to use Haskell.

4 Likes

@sekun: I don’t recommend using String over Text in general. Having options is beneficial, and there are cases where Text is surely the better pick. But I declined to use it for my project because I simply don’t know whether the library maintainers will at some point decide to take on additional dependencies that blow up the dependency tree. If there were an officially maintained core package with guidelines that I can trust, I would be more confident.

It’s true that it can be confusing for newcomers which package to pick. I had the same experience two years ago. But over many years I’ve learned not to jump on every new bandwagon. I remember that the lens package was very often proposed as the solution for easier record access; now there is RecordDotSyntax. The more high-level a technology is, the fewer maintainers it tends to have, and the more likely it is to become obsolete (exceptions prove the rule).

2 Likes

I resonate mostly with @snoyberg and @viktor. In my opinion the prelude is just a small part of the issues of base that need to be fixed. Just piling more stuff onto base will make the situation worse, not better.

As others said before me, base needs to be decoupled from GHC, but also still be available by default. A standard library that is not always readily available is useless.
This would mean:

  • Put all the primops in their own package that is coupled with GHC (AFAIK ghc-prim is already a thing)
  • Create a base-5 that uses PVP bounds for the ghc-prim library, so it can be used with more than one GHC version as long as there are no breaking changes to the primops.
  • Also have the typeclass hierarchy in base, and have the compiler support a PVP range of base-5. The only incompatibilities that could arise here are when changes to the typeclasses are not reflected by GHC’s deriving mechanism, in which case the compiler could simply report an error if base is used at an incompatible version. I don’t expect this to happen very often, as the barrier for breaking changes should still be high.
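A hypothetical .cabal sketch of the second and third bullets; all names and version bounds are illustrative, not a real plan:

```cabal
name:          base
version:       5.0.0.0
-- base-5 pins ghc-prim only by a PVP range, not by GHC version,
-- so one base release can serve several compiler releases:
build-depends: ghc-prim >=0.9 && <0.10
```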

For this new and decoupled base, we should fix the warts that are already there: no partial functions, no lazy IO, a builtin UTF-8 string type, a builtin (mutable) array type, a builtin simple stream, and the rest that others have already mentioned.

At the moment, AFAICT, the main argument against folding text/vector/containers into base is that the coupling makes it hard to release a new version of base with bug fixes, performance improvements, etc. This would no longer be an issue.

TL;DR: We need a big breaking change that includes decoupling base and GHC, and even after that we should not avoid breaking changes at all costs (again, just like GHC).

P.S.: My wishlist for the new base (in addition to the stuff that was already said):

  • PartialOrd superclass of Ord
  • Lawless Num class for overloading literals, and a more fine-grained numeric hierarchy
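A sketch of what the wished-for PartialOrd might look like; the class and method names here are hypothetical. Double is the motivating instance, since NaN makes IEEE floats only partially ordered:

```haskell
-- Hypothetical class: comparison that can fail for
-- incomparable elements.
class PartialOrd a where
  pcompare :: a -> a -> Maybe Ordering

-- NaN is reported as incomparable, instead of the misleading
-- answers the total Ord instance gives today:
instance PartialOrd Double where
  pcompare x y
    | isNaN x || isNaN y = Nothing
    | otherwise          = Just (compare x y)
```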
7 Likes

I wrote in my reply to the original thread that the only way to fix base’s issues is to break it up. We need finer-grained libraries to separate the uncontroversial/stable parts from the controversial/unstable ones, avoid version churn, and give the ecosystem a greater ability to experiment while also not breaking old code.

We can then make all the “batteries included” preludes/bases/package-collections for end applications we want, purely via re-exports, without the friction that arises today and without coarsening the underlying library ecosystem, which would be disastrous.

2 Likes

In my opinion a completely new standard library with a new API would break too many existing codebases. A lot of people have come to terms with the existing functions in base.

Yeah, and in a language like Haskell, which implements lots of functionality at the library level, a standard library built from scratch could have as much effect as a new language.

But honestly, I think that may be a good thing, as long as:

  • the community provides an official, well-written, authoritative resource about its usage (as mentioned above)
  • developers provide some incremental path for switching to the new standard library (e.g. migration tools)
  • developers give existing users enough time to do so

An incremental approach sounds appealing, but honestly, there are so many things that could potentially be changed that such a transition could take years, frustrating users by regularly breaking stuff along the way.

Python 2 is a scary example of an old version of an ecosystem staying alive for years, though the situation feels slightly different from Haskell’s: even though Haskell is an old language, a big part of its history consisted of mostly academic use, which, while interesting and important in its own way, isn’t usually about long-term support and probably won’t be strongly affected by such a change. When it comes to learning materials, as long as we have the official resources mentioned above, we can invest energy in pointing newcomers towards them; there are a few strategic places where we can put links to new materials.

In this light, it may be a good idea to pair the release of the new standard library with a corresponding GHCXXXX extension set as the new default, so that newcomers hop straight into a modern setup without needing to collect the extensions they want/need manually.

4 Likes

Trying to decouple the base release cycle from the GHC release cycle, among many other approaches, has been discussed plenty of times in the past, and again for very good reason. But IMO, it presupposes something that isn’t true: that GHC itself doesn’t tend to break code.

Yes, but I think part of the benefit of doing that is that the pain of breakage grows superlinearly with the amount done at once:

  • Small breakages in base and GHC breakage land together.
  • Libraries are more likely to be impacted by the combination than by either alone, and if they just cut a new release from master rather than backporting fixes, they might release breaking changes of their own.
  • The process cascades when downstream libraries are also forced to upgrade to support the new GHC.
  • Doing all our breaking changes in one big wave forces a bunch of busywork all at once.

(We are getting this right now in 9.2 with bytestring-11, for example.)

Conversely, if we decouple things and amortize the cost of breakage, we get huge benefits even without doing less breakage:

  • Pure amortization: less busywork happens at once, and there is less “ecosystem adapting” latency.
  • Less cascading: little ripples of breakage are less likely to amplify themselves into downstream breakage. It might even work like lowering the transmission rate of an infectious disease, something we are all now more familiar with.
  • Spreading the work out also frees up time to backport fixes to prior incompatible releases, further avoiding breakage cascades.
2 Likes

IMO, any new base library should not take the same tack we’ve been taking up until now, of an “alternative prelude” that simply reexports better things than Prelude. We should be looking at fixing many of the flaws you’ve already pointed out, like inconsistencies between different “core” libraries.

That’s not to say relude, or rio, or others, should be ignored. On the contrary: I think the lessons learned from alternative preludes should feed into the design process here. But I also think that if we limit ourselves to libraries that already exist, we’re going to be selling ourselves short on what a great standard library should look like for Haskell.

I agree, but to me this also means it’s a bit premature to be making the grand new library at all. I don’t subscribe to the “design by committee” meme that collaborative design doesn’t work, but I also think it is hard to sit down and design a bunch of radical changes that are a big imaginative leap from what we have without making mistakes. And yet making multiple whole-cloth new batteries-included standard libraries would be immense churn.

I am focused on breaking up base, making orphans good, etc., because I want to lay the foundation for more fine-grained experimentation, allowing many ideas to be explored in parallel with minimal breakage (thanks to the nice properties of a good fine-grained dependency partial order). If we do that, we can then start independently curating these into radical alternative preludes / Haskell platforms, and thereafter, in a year or more, come back to the table to finally deliberate on the one true HF-endorsed curation with actual evidence.

6 Likes

Another issue that we might be able to address is to include strict versions of standard data types, e.g. a strict Maybe and Either. But we should go further than that: it would be nice to have strict versions of MVar and TVar, whose use should be encouraged (other than for writing MonadFix instances, lazy mutable variables should not be that common). There is an advantage to strict data types over a strict interface (i.e. making putMVar strict). We did this at IOHK, and it has worked quite well for us (the library is not yet published on Hackage, but it will be released soon). In conjunction with the nothunks library, finding memory leaks was a matter of writing QuickCheck tests.
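A minimal sketch of such a strict Maybe (names hypothetical; the strict package on Hackage provides a full version). The bang on the field means the payload is forced to WHNF when the constructor is applied, so no thunk can hide inside:

```haskell
-- Strict counterpart of Maybe: SJust's payload is evaluated
-- when the value is constructed.
data SMaybe a = SNothing | SJust !a
  deriving (Eq, Show)

fromSMaybe :: a -> SMaybe a -> a
fromSMaybe d SNothing  = d
fromSMaybe _ (SJust x) = x
```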

2 Likes

This is from a very practical perspective: I don’t fully understand all the Haskell concepts. But I can use a lot of them and have written quite a few popular and fully functional apps using it.

I enjoyed working with ClassyPrelude: it gave me access to almost all the common types I need, put Text in charge, and its use of type classes meant not needing to worry about which version of a function to import – which seemed like the ideal usage (close to interfaces in OOP).

I’ve since moved on to using rio because (I think?) that’s what the ClassyPrelude folks moved on to next. I’m not such a fan of always having to import the type-specific versions of functions – it seems to miss what I thought type classes were for. But the conventions and such are all helpful.

2 Likes