ANN: - searchable links database

Hello Haskell friends,

Recently I said this on reddit:

There’s no way these very frequent reddit and chat requests for learning materials can capture all the existing resources. There are too many, and folks who know where they are get burned out reposting them. There have been many attempts to gather them, being the most obvious, but none of them have fully succeeded. We could really do with some more systematic, scalable (crowd-sourced, lightweight) approach.

Then I experimented and set up . This is version 1, providing:

After contemplation and discussion on #haskell, I believe version 2 should allow web editing and use its own database. But I wanted to share this one, as it’s a nice simple setup. I hope you find it useful! Help is welcome; see the README.



Instead of a separate site, would having this resource on the Haskell site, e.g. as (or just ), be a better option?

(Alternatively, it could just be a redirect to the haskell-links site…)


Always a possibility down the road. But it’s often better to at least start with a dedicated domain, so you can move fast and break things. (Also might be problematic.)

[ could work though.]


Minor bug report: \cats — what else? — resists redirection!

Thanks Simon, is great in many ways: not only as a handy way of pointing to resources, but also as a way to browse the collective memory of Haskell users and eventually weed out links that have lost their relevance.


Thanks! Ideally the app wants IDs to be hyphenated words, for greatest ease and flexibility of implementation, but it seems lambdabot is a bit more permissive…

This is awesome, great work!


I.e., I wasn’t sure what to do here, but with apologies to the IRC users of yore, I think haskell-links will stick with simpler IDs (words with hyphens/underscores). I saw only a few problematic ones:

  • \cats (points to a dead link, the correct link is also present as lambdacats)
  • _|_
  • f#
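As a minimal sketch of the stricter ID rule described above (words made of letters, digits, hyphens and underscores) — the function name `isSimpleId` is my own invention for illustration, not haskell-links’ actual code:

```haskell
import Data.Char (isAlphaNum)

-- Hypothetical sketch of the "simpler IDs" rule: non-empty words
-- consisting only of alphanumerics, hyphens and underscores.
isSimpleId :: String -> Bool
isSimpleId s = not (null s) && all ok s
  where ok c = isAlphaNum c || c == '-' || c == '_'

main :: IO ()
main = mapM_ (putStrLn . check) ["lambdacats", "haskell-links", "_|_", "f#", "\\cats"]
  where check i = i ++ ": " ++ show (isSimpleId i)
```

Under this rule `lambdacats` and `haskell-links` pass, while `_|_`, `f#` and `\cats` are rejected.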

Can we weed out links that are no longer useful from lambdabot?

It would greatly enhance the list, in my opinion.

Yes: currently, the best way to do that is to remove them from lambdabot, with @where+ ID. The removal will propagate to in 5-10 minutes. (For those unaware: you can interact with lambdabot in, e.g., the #haskell IRC channel, for public awareness, or in a private chat (/msg lambdabot @help), to reduce noise.)

You can’t just edit links.csv, as I regenerate it regularly, treating lambdabot’s data as the master. Originally I just imported new links from lambdabot while links.csv was the master, but that approach didn’t notice (without further work) when links were removed from lambdabot.
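The regeneration step above (rewriting links.csv wholesale from the master data, so upstream deletions propagate) can be sketched roughly like this; the record shape and file name here are assumptions for illustration, not haskell-links’ actual code:

```haskell
import Data.List (intercalate)

-- Hypothetical sketch: rebuild the CSV from scratch from the master
-- key/URL pairs, so entries deleted upstream disappear from the CSV
-- too. (Field names and format are assumptions, not the real code.)
toCsv :: [(String, String)] -> String
toCsv pairs = unlines (header : map row pairs)
  where
    header       = "id,url"
    row (k, url) = intercalate "," [k, url]

main :: IO ()
main = writeFile "links.csv" (toCsv [("lambdacats", "https://example.com/cats")])
```

The key design point is that the CSV is never edited in place: each sync replaces it entirely, making the upstream data the single source of truth.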

Since the old links have some historical interest, and this would (probably) be the first cleanup, a little coordination with #haskell and lambdabot’s operator int-e is probably wise. (Though they have by now been archived in the haskell-links and lambdabot-where GitHub repos.)

I had in mind being able to also use the web UI for more efficient adding, editing and cleanup; but this raises a bunch of questions and a rather large design space of where and how data is stored and synced, which I’m very much still pondering…


PS: there’s an easter egg that may be useful for link curators: → column filters

(Disabled by default as it doesn’t add much value until we have more tags. It does let you search individual fields though.)

Updates: javascript no longer required, more powerful searching, UI tweaks. Details.