First, thank you for reading the article carefully and providing objective and well-founded feedback. Thanks to you, I feel I can now see the issue from a broader perspective.
Let me begin by responding to the point I most want to discuss. I apologize that this response has become rather long and may not be well organized, but I would appreciate it if you could look it over and judge whether it withstands objective critique.
These statements are based on my experience when I attempted to improve the performance of my library by forking an existing IO-wrapper implementation. (Incidentally, while I succeeded in making it work correctly, I failed to achieve any speed improvements.)
What I encountered at that time was that even with types, runtime errors and segfaults could still occur. This was something I had experienced in neither my own library nor my previous, type-protected Haskell programming, and it was a harsh reminder of my days working in C.
A typical bug was runtime access to an uninitialized handler in the evidence vector. This occurs, for example, when `runState` and `runReader` are composed in a particular order. In the type-safe version of my library, it is simply impossible to compose them in that order (although the matter is a bit more complicated: it is not merely about the order of `runState` and `runReader`, but about the compatibility between higher-order effects and delimited continuations). Any such operation always triggers a type error and cannot even be written. In other words, the IO-wrapper approach, in its default state, allows operations that are essentially wrong, and to prevent this, one must retroactively guarantee the type safety of the interface by isolating and hiding unsafe modules. (This corresponds to the reverse of making invalid states unrepresentable.)
Thus, there are two contrasting development processes here:
1. Starting with complete safety guarantees that are too strict to be practical and gradually relaxing them (while remaining safe as long as typing is preserved).
2. Starting with an interface that may have safety gaps and filling them in as they are discovered.
I do not deny the possibility that this is an overgeneralization of my own experience, but fundamentally, IO-wrapper libraries tend to fall under category 2. Of course, my library also contains some elements of category 2, such as the open union or certain functions that are not sufficiently generalized, but in terms of the sheer number of such cases and their locality, I would say it is small.
During the development of `bluefin` and `effectful`, I believe there were several occasions when such safety holes in the interface were discovered and then patched. This includes not only issues recorded in issue trackers but also minor fixes applied on the spot when tests uncovered them.
For example, aside from runtime errors, there may have been cases where `IORef` combined with certain interpreter mechanisms or with concurrency exhibited unintelligible behavior. If you have indeed encountered almost none of these issues, I would like to know how you managed to prevent them. I do not know how to prevent such issues in advance; my understanding is that they can only be dealt with reactively, after they occur.
(Moreover, I am particularly interested in looking further back to the period when people were experimenting with whether UnliftIO and delimited continuations could coexist.)
In other words, what I'm ultimately trying to say is that there are two design philosophies here, each holding that:
- If there is a possibility of bugs, you should eliminate that possibility. Indeed, you should demonstrate that there is no possibility of such bugs.
- If you say there is a possibility of bugs, you should show the basis for that claim.
As for this framework itself, I am stepping back and do not know how one should think about it at the moment. However, I believe this is simply a difference in philosophy, not a matter of one side being right and the other wrong. What do you think?
I had a misunderstanding about this. I did not distinguish between issues recorded in the issue tracker and those discovered and fixed on the spot through implementation and testing. In my next article, I will correct this and revisit the outcomes of our discussion here.
I understand this, but I feel it overlooks the distinction between the level of technical elements and the protocols that connect them.
Isn't this specifically a guarantee regarding resource safety, rather than a guarantee of the safety of the entire effect system?
It cannot be denied that the term may be used as a buzzword in common parlance, but there is at least a clear definition that people ought to rely on, and I adhere to it: the one presented in Plotkin's work and in the literature on the `eff` language. I intend to write about it in an article in due course.
That is helpful. Thank you.