Sorry, but I don’t understand where this negativity comes from. It’s not a “terrible hack”.
Even if Hackage outright rejected packages without upper bounds, the ability to adjust those bounds afterwards is what allows us to extend the life of a package.
A newer version of a dependency will eventually come out; and, as I have been told many times, there’s a good chance the package will still be compatible with it. So what do we do?
Release a new minor version, as I think most ecosystems do? Redistributing a single file (the revised `.cabal` metadata) is a much cheaper solution.
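To make that concrete, here is a sketch of what such a revision amounts to; the package and version numbers are hypothetical, with `aeson` standing in for any dependency that turned out to be compatible beyond its declared bound. Only the `build-depends` stanza in the `.cabal` file changes; the released tarball is untouched:

```
-- Hypothetical excerpt from my-package.cabal as originally uploaded:
build-depends:
    base  >=4.14 && <4.18,
    aeson >=2.0  && <2.2

-- The same stanza after a Hackage metadata revision, once
-- aeson-2.2 turns out to be compatible after all:
build-depends:
    base  >=4.14 && <4.18,
    aeson >=2.0  && <2.3
```

A maintainer (or a Hackage trustee, for an abandoned package) can make this edit through the Hackage web interface, and the solver picks up the relaxed bound immediately: no new release, no new version number.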
And who does that? The author or maintainer might be MIA or have just moved on. This is when a group of “repository curators” can step in.
(Needless to say, automated testing could make all this easier and less error-prone. There have been cases of bad revisions, but they can be fixed quickly.)
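For what it’s worth, even the manual version of that check is cheap; a minimal sketch with cabal-install, pinning the hypothetical `aeson` bump from above to verify the package still builds against it before relaxing the bound:

```
$ cabal build --constraint='aeson >=2.2 && <2.3'
```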
Sooner or later an actual code change will be required to keep the package working with newer versions of its dependencies. At that point a new version is necessary, and if the author is unavailable, that might be the end of the story (though there is a process for adopting a package).
I don’t want to claim that this is perfect, only that, at least from some point of view, it makes sense.
E.g. I feel this fits the “I wrote some Haskell packages during my PhD” kind of scenario perfectly: the original author will eventually disappear, the content is valuable, and we still want to use it even if very few people understand it.