Vector type proposal discussion, blog

Starting off a discussion thread for my soon-to-be-published blog post, which will live at https://www.snoyman.com/blog/2021/03/haskell-base-proposal-2/

5 Likes

Exciting! I'm curious whether there's been any discussion of replacing lazy Text and ByteString with a common streaming abstraction over the strict variants?

I know I've mentioned it in a few places before, but there's been no active discussion of next steps. I think a great move in parallel would be to start on a PoC library for something like that. If it were me, I'd start with stream fusion. I even wrote a blog post years ago about a draft library called Vegito for this.

1 Like

Vegito seems extremely similar to streaming. There's even already streaming-bytestring. Maybe the path of least resistance could be to write a streaming-text to complement the existing streaming ecosystem.

The point of stream fusion is to provide a mechanism which GHC is almost always capable of optimizing away down to a tight inner loop. I don't believe streaming offers that, nor do any other general purpose streaming libraries.
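
Concretely, the representation I mean is the Step/state shape used by vector (a minimal sketch; the names here are illustrative and not taken from vector or Vegito):

```haskell
{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE ExistentialQuantification #-}

-- Every combinator is a non-recursive wrapper around a step function over
-- explicit state, so GHC can inline the whole pipeline and specialize
-- away the Step constructors.
data Step s a = Yield a s | Skip s | Done

data Stream a = forall s. Stream (s -> Step s a) s

mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream step s0) = Stream step' s0
  where
    step' s = case step s of
      Yield a s' -> Yield (f a) s'
      Skip s'    -> Skip s'
      Done       -> Done
{-# INLINE mapS #-}

sumS :: Num a => Stream a -> a
sumS (Stream step s0) = go 0 s0
  where
    go !acc s = case step s of
      Yield a s' -> go (acc + a) s'
      Skip s'    -> go acc s'
      Done       -> acc
{-# INLINE sumS #-}

-- With everything inlined, 'sumS (mapS f xs)' can compile down to a single
-- accumulator loop with no intermediate structure allocated.
```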

1 Like

Are you sure that's true for streamly? AFAIR they put a great deal of work into performance and into how GHC optimizes it.

@harendra

streamly already seems to satisfy the high level goals being proposed here, and much more. It has a stream type that is similar to vector. It provides an Array type supporting pinned memory - bytestring is just a special case of Array; it's just "Array Word8". There is no need for text or bytestring (and the myriad strict/lazy/short variants of these): we deal with byte level streams directly and serialize streams to Array rather than having many specialized types for such purposes. Stream and array are the only types needed, removing all the complexity of having many different abstractions patched together. It interoperates with the existing bytestring types, though.
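
To make the claim concrete, here is a purely conceptual sketch (these are not streamly's actual module or type names) of what "bytestring is just Array Word8" means: once a library exposes a general unboxed Array and a fusible Stream, the specialized packed types reduce to aliases over those two building blocks.

```haskell
import Data.Word (Word8)

-- Abstract stand-ins; the real types carry more structure
-- (monadic streams, mutable arrays, folds, etc.).
data Array a         -- contiguous unboxed storage
data Stream m a      -- fusible stream of 'a' values in monad 'm'

type Bytes        = Array Word8             -- the role of strict ByteString
type ByteStream m = Stream m Word8          -- byte-level streaming
type Chunked m    = Stream m (Array Word8)  -- the role of lazy ByteString
```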

Our goal from the beginning has been to have a better library for idiomatic yet high performance Haskell that can supplement, or be included in, the base package at some point. At this point we can claim that there is no other library with better stream fusion or better performance than streamly.

I think @snoyberg is well aware of streamly. When I started writing streamly back in 2017, the first two people I asked for feedback were him and Gabriel Gonzalez. I even discussed streamly with @snoyberg in person at Functional Conf 2019, and he sat through the presentation as well. I am a bit surprised that streamly was never mentioned in these discussions/proposals even once.

6 Likes

Why don't we need a lazy variant? Is it because explicit streaming replaces the use-cases?

I shared many of my concerns about streamly with you in person, and didn't bring up the library as a result of those. The relevant ones here:

  • As you mention, streamly is based around pinned memory. The general consensus for a while has been that we need to move more deeply into unpinned memory, not pinned memory (see the sketch after this list). The blog posts I wrote discuss that explicitly.
  • I'm concerned about the "doing too much" concept: specifically, handling both general purpose streaming and asynchronous streaming may be too much for this proposal. I'm talking about including something minimalistic which optimizes well.
  • And on the optimization front, my understanding from the talk was that, in order to optimize fully, compiler plugins were still required. That wouldn't be an option for this kind of proposal.
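
For concreteness, the pinned/unpinned distinction I mean (a minimal sketch using the primitive package; nothing here is specific to any of the libraries under discussion):

```haskell
import Control.Monad.Primitive (PrimMonad, PrimState)
import Data.Primitive.ByteArray
  (MutableByteArray, newByteArray, newPinnedByteArray)

-- Unpinned: the GC is free to move (and compact) the buffer, so it does
-- not contribute to heap fragmentation.
allocUnpinned :: PrimMonad m => Int -> m (MutableByteArray (PrimState m))
allocUnpinned = newByteArray

-- Pinned: the buffer never moves, so its address can be handed to C code
-- (the historical reason bytestring wants it), at the cost of possible
-- fragmentation.
allocPinned :: PrimMonad m => Int -> m (MutableByteArray (PrimState m))
allocPinned = newPinnedByteArray
```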

In any event, the streaming aspect of this proposal is secondary and not my focus right now. The real focus is on a packed data representation unification which promotes unpinned memory, and which can be used by the current core libraries of bytestring, text, and vector.
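
Roughly, the kind of shared representation I have in mind looks like a slice over an unpinned ByteArray that bytestring, text, and vector could all wrap (a hypothetical sketch only, not an agreed design):

```haskell
import Data.Primitive.ByteArray (ByteArray)

-- Hypothetical shared packed representation: one unpinned, GC-managed
-- buffer plus an offset/length slice. The existing packed types would
-- become newtypes over something like this.
data Slice = Slice
  { sliceBuffer :: !ByteArray
  , sliceOffset :: !Int  -- offset in bytes
  , sliceLength :: !Int  -- length in bytes
  }
```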

If you have some kind of a write-up explaining why pinned memory is the right choice, and how it avoids leading to fragmentation (which is a real motivating concern), please point it out, I'd be interested in reading it. But I'll admit that, personally, I'm far from an expert on these topics, and have essentially deferred on pinned-vs-unpinned to others who have been working with the runtime system and garbage collector more closely than I have.

2 Likes

Right, you always use explicit streaming instead of lazy bytestring.
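
For example (a sketch using only strict bytestring; the function name is illustrative), instead of lazily reading a whole file you loop over strict chunks explicitly:

```haskell
{-# LANGUAGE BangPatterns #-}

import qualified Data.ByteString as B
import System.IO (IOMode (ReadMode), withBinaryFile)

-- Count the bytes in a file by streaming strict 32 KiB chunks through an
-- explicit loop, instead of going via Data.ByteString.Lazy.readFile.
countBytes :: FilePath -> IO Int
countBytes path = withBinaryFile path ReadMode (go 0)
  where
    go !n h = do
      chunk <- B.hGetSome h 32768
      if B.null chunk
        then pure n
        else go (n + B.length chunk) h
```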

All three are a bit of a non-concern; let me explain why:

  • Pinned vs unpinned is more of an implementation question than a question of what the high level abstractions should look like. The Array in streamly uses pinned memory to be compatible with bytestring, but there is no reason why it cannot be changed to use unpinned memory.
  • About "doing too much": streamly is pretty modular and different parts can be taken out as separate packages, including the serial streaming functionality; in fact we have plans/an issue about that. BTW, the concurrent streaming in streamly has exactly the same API as serial streaming, so there is no "too much" for the user - it is only an implementation detail. So I am not sure why that should not be desirable.
  • The stream fusion optimization plugin is not specific to streamly; it addresses a GHC issue which will be faced by any library that uses stream fusion. It is unfair to say that only streamly requires it, and I am not so sure that the proposed library would not require something like that and still fuse everything. And it is only a matter of time before those fixes land in GHC - these are bugs in GHC.

If the proposal is only about tweaking the existing bytestring/text/vector packages then the point is moot anyway. streamly is more about better unified abstractions than about small incremental changes to existing packages.

4 Likes

I don't quite get why we need another stream type or fusion system. A quote from my colleague:

The lazy variations of text and bytestring should never have existed; they only cause endless confusion.

In Z.Haskell, we only provide Bytes and Text, with pack/unpack working with base's build/foldr fusion, period.
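
For example, an unpack written via GHC's build participates in base's foldr/build fusion (a sketch using strict ByteString as a stand-in for our Bytes type; the name unpackFB is illustrative):

```haskell
import qualified Data.ByteString as B
import Data.Word (Word8)
import GHC.Exts (build)

-- Writing unpack via 'build' lets base's foldr/build rule eliminate the
-- intermediate list whenever the result is consumed by a good consumer
-- (foldr, sum, a list comprehension, ...).
unpackFB :: B.ByteString -> [Word8]
unpackFB bs = build (\cons nil -> B.foldr cons nil bs)
{-# INLINE unpackFB #-}
```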

Rule-based fusion is fragile and often breaks due to GHC changes; I can't think of a better team to maintain this other than the GHC team themselves. As for streaming IO, that's a completely different problem, and lazy chunks are definitely not the answer.

4 Likes