Polysemy's performance?

Calamity is currently built on Polysemy, but over the past few years Polysemy has been shown to be a massive performance sink.

Has Polysemy been updated with newer GHC primitives to improve performance, or is the simpler Discord-Haskell library (single-threaded, no effect system) more performant?

I have not seen clear evidence that polysemy is “a massive performance sink”. It’s not a zero-cost abstraction, sure, but that is not a blocker for Discord integration.

4 Likes

Mostly Alexis King’s takedown; i.e., effectful / cleff / eff (if Alexis King ever puts it on Hackage) should be the standard these days.

Effectful is about 20-33% slower than a pure implementation in the static benchmarks, but Polysemy is 35-50x slower. That seems like a questionable choice for effect-heavy code.

The reason I brought this up is that I was hoping Sandy Maguire had fixed it between the effectful benchmarks and now.

The link you quoted says that the performance difference is much less pronounced for real-world code operating in IO. Would polysemy be my first choice for a new application? Unlikely. Is it that bad that one should avoid polysemy-based libraries such as calamity? I don’t see any clear evidence for this.

3 Likes

Yeah, with effects benchmarks it’s important to actually read the code of the benchmark and figure out what is being measured.

For instance, the effectful benchmarks are actually what made me choose cleff over effectful for my 60 fps 2D games. effectful won by maybe 10-20 micros tops (and the pure implementation won by 30 micros), but that was micros per 1k effect dispatches. Given that I had a budget of 16,000 micros per frame, I called that a rounding error (or even a good use of my budget!) and chose cleff based on other impressions.
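To make that concrete, here’s the back-of-envelope in Haskell (the 10,000 dispatches per frame is an assumed workload for illustration, not a measurement):

```haskell
-- Back-of-envelope using the numbers above; the 10,000 dispatches/frame
-- workload is an assumption, not a measurement.
overheadMicros :: Double
overheadMicros = dispatchesPerFrame / 1000 * 20   -- ~20 us per 1k dispatches
  where dispatchesPerFrame = 10000                -- hypothetical workload

budgetFraction :: Double
budgetFraction = overheadMicros / 16000           -- ~16,000 us frame budget at 60 fps
-- overheadMicros = 200 us, budgetFraction = 0.0125, i.e. roughly 1% of the frame
```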

I highly doubt your web service is gonna be effect-system-overhead-bound. People write webapps in Ruby after all. So if polysemy seems cool, use it.

3 Likes

The problem is more that Discord-Haskell is easier to use and more accessible. Calamity is interesting for its use of Polysemy, but as I understand it, unless Sandy et al. can make it more performant, it’s a dead end.

1 Like

If what you’re doing is not explicitly HPC (high-performance computing) or similar, and you’re going over the network… the CPU performance of your code (or of the libraries you use) isn’t going to be the bottleneck 99% of the time. You’d have to do something really strange for it to be significant relative to the 20+ ms round-trip time (and likely 100+ ms server response time).

4 Likes

Do you have any references that say this? I understand that you may have a vague feeling that this is the case from reading misc. bits & bobs around the interwebs, but “dead end” is severely overstating it if so.

1 Like

If Polysemy in fact delivers a 30-50x slowdown compared to a pure or ST implementation, or a 20-30x slowdown compared to cleff / effectful, what is the point? You get an expressivity improvement that can look trivial next to the fact that you’re now operating in the performance range of Python, Ruby, and Smalltalk, legendary slouches.

My fundamental Haskell values are for a reasonably performant, high-expressivity language. Deliberately hobbling myself with Polysemy means I now have to work harder to get performance out of other parts of my program to compensate for the loss Polysemy introduces, or be careful to structure my program so that little code executes in the Polysemy layer.
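For what it’s worth, that structuring usually looks something like this minimal sketch (a made-up Log effect, nothing to do with Calamity’s actual API): keep the hot loop as a plain pure function and only dispatch effects at the boundary.

```haskell
{-# LANGUAGE DataKinds, FlexibleContexts, GADTs, LambdaCase, PolyKinds,
             RankNTypes, ScopedTypeVariables, TemplateHaskell,
             TypeApplications, TypeFamilies, TypeOperators #-}
module Main where

import Polysemy

-- Hypothetical logging effect, purely for illustration.
data Log m a where
  LogMsg :: String -> Log m ()

makeSem ''Log

-- The hot loop is a plain pure function: zero effect dispatch inside it.
sumSquares :: Int -> Int
sumSquares n = sum [i * i | i <- [1 .. n]]

-- Effects only appear at the boundary, so interpreter overhead is paid
-- a handful of times per call, not once per loop iteration.
program :: Member Log r => Sem r Int
program = do
  let result = sumSquares 1000000
  logMsg ("result = " ++ show result)
  pure result

runLogIO :: Member (Embed IO) r => Sem (Log ': r) a -> Sem r a
runLogIO = interpret $ \case
  LogMsg msg -> embed (putStrLn msg)

main :: IO ()
main = print =<< runM (runLogIO program)
```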

It’s a bit Julian on my part (Julia users want it all, but will never have it), but it’d be cool if Polysemy were sped up by delimited continuations or other recent optimizations, so that people who like Polysemy as a provider of the free / freer monad pattern could use it in more versatile ways.

(Iirc Discord-Haskell vs Calamity comes down to Calamity being more powerful and multithreaded. If that performance gain is eaten by Polysemy, then Calamity has much less of an edge over Discord-Haskell.)

1 Like

You’re misinterpreting the benchmark. The benchmark is solely for effect dispatch afaiu. So unless your program is constant effect dispatch in a tight loop, your entire program isn’t sliding into Ruby range.

EDIT: Just checked. Polysemy is like 200 micros per 1k effect dispatches. There are some absolute, useful numbers instead of “30-50x.” It actually outperformed mtl in one case!

So saying polysemy will hobble you isn’t based in facts, imo. In no way is polysemy ever going to be an appreciable part of your performance budget in a web app. I used freer-simple back in the day, which is equally “slow”, and it literally didn’t matter.
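For scale, here’s a hypothetical per-request calculation using that 200 micros / 1k dispatches figure and the 100+ ms server response times mentioned earlier (the 10,000 dispatches per request is invented for illustration):

```haskell
-- Assumed per-request workload; 0.2 ms per 1k dispatches is the figure above.
perRequestOverheadMs :: Double
perRequestOverheadMs = dispatches / 1000 * 0.2
  where dispatches = 10000          -- hypothetical, not measured

-- Against a ~100 ms server response time that's roughly 2%: noise next to
-- network latency and I/O.
```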

4 Likes