There is no such thing as “declarative all the way down”: at some point something has to tell the computer what to do step by step. However, if a language claims to be “declarative” then I expect to solve problems in it (mostly) declaratively. That is the case in relational paradigms such as MiniZinc, SQL, Datalog, and Prolog. Every language has an “and then we break the paradigm and do X instead” cut-off; I suppose I’m saying that here the cut-off arrived very quickly, for a very simple problem.
Yes, exactly. Additionally though, note that almost all the solutions, except for mine and perhaps one other, are by experts. Meanwhile, the naive solutions literally don’t work because of the way thunks accumulate under lazy evaluation: “only evaluate what you need to” ends up producing no answer at all, because the program exhausts memory before anything is actually evaluated.
When they eventually do work, they are exceptionally slow. I’ve coded Haskell for 4–5 weeks and I’ve run into this same problem in 3 different data-processing use cases (all on this forum).
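For anyone who hasn’t hit this failure mode: a minimal sketch of it (the textbook lazy-fold example, not code from any of the threads) looks like this:

```haskell
import Data.List (foldl')

-- Lazy left fold: without optimisations, this builds a chain of
-- a hundred million unevaluated (+) thunks before forcing any of
-- them, and typically dies with a memory/stack overflow.
leaky :: Integer
leaky = foldl (+) 0 [1 .. 100000000]

-- Strict left fold: forces the accumulator at every step and runs
-- in constant space.
fine :: Integer
fine = foldl' (+) 0 [1 .. 100000000]

main :: IO ()
main = print fine  -- swap in `leaky` (at -O0) to watch it fall over
```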
For example, check out the “Optimiser performance problems” thread, where I had both a memory leak and terrible performance. To resolve the memory leak I was advised by the experts either to use force from deepseq or to adopt a specialised strict container structure (one that many were unaware of), and I was told that the ad package is basically doomed to be slow because of the way it deals with some aspect of thunk management (I had noticed that it was 2x slower than an analytically calculated gradient on a two-parameter problem!). Meanwhile, we are talking about <15 lines of actual Haskell, and a noob like me obviously wonders what that means at scale or with more complex problems.
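To make the deepseq suggestion concrete, the fix amounts to something like the sketch below. The step function here is a made-up stand-in (the real code is in that thread); the point is forcing each iteration’s result to normal form so thunks can’t pile up between steps:

```haskell
import Control.DeepSeq (NFData, force)

-- Hypothetical stand-in for one optimiser step; not the actual code.
step :: [Double] -> [Double]
step = map (* 0.99)

-- Iterate a function, deeply forcing the result each time. Without
-- `force`, each iteration would just wrap the previous one in more
-- unevaluated thunks, and the heap would grow with every step.
iterateStrict :: NFData a => Int -> (a -> a) -> a -> a
iterateStrict 0 _ x = x
iterateStrict n f x = iterateStrict (n - 1) f $! force (f x)

main :: IO ()
main = print (iterateStrict 100000 step [1.0, 1.0])
```

The strict-container advice is the same idea baked into the data structure: if the elements are stored strictly (or unboxed), there is nothing left to remember to force.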