Do you see yourself ever writing Haskell in plaintext again?
I guess types for sure. They are just rigorous prompts - propositions to prove. But terms?
This blog post tickled my brain
It makes me think Curry-Howard is more important than ever in this new AI world.
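A minimal sketch of that Curry-Howard idea: a sufficiently polymorphic type reads as a proposition, and parametricity pins the term (the proof) down almost completely. The function names here are my own illustration, not from the post:

```haskell
-- Parametricity makes these types behave like propositions:
-- for each type below there is essentially one total implementation.

-- "A and B implies B and A" - the term is the proof.
swapPair :: (a, b) -> (b, a)
swapPair (x, y) = (y, x)

-- "(A implies B implies C) implies (B implies A implies C)"
flipArgs :: (a -> b -> c) -> (b -> a -> c)
flipArgs f b a = f a b

main :: IO ()
main = do
  print (swapPair (1 :: Int, "one"))
  print (flipArgs (-) 2 (10 :: Int))  -- (-) applied as 10 - 2
```

In that sense the type really is a "rigorous prompt": write the proposition, and there is very little freedom left in the term.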
I have very mixed feelings on AI use in programming (I detest the thief machine), but I will say this:
One of the few things that LLMs are actually well designed for (as opposed to shoehorning things into one because business majors* have confused the ability to speak with that of intelligence) is syntax. So it makes some amount of sense that a rigorous language like Haskell would benefit quite strongly from machine learning supporting the disambiguation and rectification of syntactic errors.
Indeed, this is my current approach to AI coding assist (aside from reformatting, which admittedly is also helpful); I find that it can be extremely helpful in remembering which particular brand of properties I need (GADTs vs type families vs fundeps, hmmmm), or exactly how many stars deep I am - mechanical, syntactic things.
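For what it's worth, a toy sketch of that "which brand of properties" choice: the same "a container determines its element type" relation written once with a functional dependency and once with an associated type family (class and function names invented for the example):

```haskell
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE TypeFamilies #-}

-- 1. Functional dependency: the container type c uniquely determines e.
class Container c e | c -> e where
  firstElem :: c -> Maybe e

instance Container [a] a where
  firstElem []      = Nothing
  firstElem (x : _) = Just x

-- 2. Associated type family: the element type is a type-level function of c.
class Container2 c where
  type Elem c
  firstElem2 :: c -> Maybe (Elem c)

instance Container2 [a] where
  type Elem [a] = a
  firstElem2 []      = Nothing
  firstElem2 (x : _) = Just x

main :: IO ()
main = do
  print (firstElem  [1, 2, 3 :: Int])
  print (firstElem2 "abc")
```

Exactly the kind of mechanical "which extension spells this relation" recall that the assist is good at jogging.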
The moment it goes from manipulating syntax to generating content, it tends to be less useful, so any actual prompts I write are rare, concise, targeted, and abstracted / stripped of non-essential context. Even then it fucks things up quite often, so I only let it suggest things, and do not let it make any changes itself. Half the time I solve my own problem without submitting.
* I remember the flood of “programmers” in the 2010s who copied other people’s repos, attended a single coding bootcamp, got a job, and immediately dropped out of coding to get promoted to manager; they are now by and large the very managers and executives pushing AI slop today.
Great write-up. I am optimistic about the role of agentic programming in a niche community like Haskell.
If I see anyone opening a PR against my repos with AI generated patches, I close them without comment.
If I can’t see you used AI, then it doesn’t matter. But it’s usually blatantly visible.
This phenomenon has made industry tough. Before, juniors struggled, got stuck, overcame, and learned. The PRs they landed afterwards were easy to review because I trusted the process. And eventually I could trust them with entire services and business domains.
Now, they feed the ticket into Claude, poke at it, and send a PR. I have to review this code way more carefully because it does have bugs. I recently merged a PR that fundamentally misunderstood Nix. The bugs are often subtle. Or not even bugs - hacks that Claude did to satisfy a prompt.
I don’t have the luxury to reject AI PRs at work. The MO from the top is “any engineer who doesn’t lean into AI will be obsolete by 2030”
I’ve been AI-signaling ever since. Gotta make sure my Claude bill is in the peloton to avoid suspicion
I agree; there are many layers to why AI is going to make programming worse.
It turns out they’re not actually good for learning. One reason is that they lie (or are not sophisticated and give pretty boring answers by default). The other is that they obviously facilitate taking shortcuts. Nothing is memorable anymore.
But what’s worse is that they’re in my opinion giant copy-paste/templating machines. LLMs are not an abstraction, unlike a high-level programming language. I’ve read arguments that say “but we’re not writing assembly anymore either”. I can reason in Haskell, it’s an abstraction that gives me tools to not care about the low-level details anymore, because I know the compiler will take care of it. With AI there’s no such abstraction, everything becomes ad-hoc.
I also think we will see a decline in well written libraries, because LLMs can just mush something together anyway, even if it has been solved in a more concise manner already. I stop caring as a vibe coder whether the code is beautiful, concise or robust. I just piece stuff together. In a way that’s beautiful prototyping, but people actually submit such patches.
I’ve noticed in my work monorepo the Claude Haskell commits look a lot like Core. Lots of case and let. Unabstracted. I’ve never seen Claude use the lazy State monad or Semialign, for instance. It will use foldr happily tho. And give the prompter a comment to justify it.
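A hypothetical before/after of what that "looks like Core" style means in practice (the function is invented for illustration): the same computation written with explicit case/let recursion, and then the way a human Haskeller would more likely abstract it.

```haskell
-- The unabstracted, Core-like shape the generated commits tend to have:
sumSquaresCore :: [Int] -> Int
sumSquaresCore xs =
  case xs of
    []     -> 0
    y : ys -> let sq = y * y
              in sq + sumSquaresCore ys

-- The idiomatic version, built from reusable pieces:
sumSquares :: [Int] -> Int
sumSquares = sum . map (^ 2)

main :: IO ()
main = print (sumSquaresCore [1, 2, 3], sumSquares [1, 2, 3])
```

Both are correct, and a single diff of either looks fine; the difference only compounds across a codebase.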
LLMs are making production codebases their bytecode. Our source code is becoming akin to a bindist.
At least LLMs are really good at reverse engineering! That’ll come in handy - this ouroboros will happily eat its own tail if it invoices you appropriately kek.
When running out of tokens
I’ve a big account now though.
Small stuff is fine too. I’m sometimes faster with minor edits than the AI
make production codebases their bytecode
I wish to point out that compiling to bytecode is usually a deterministic process, and not long ago people put a lot of effort into figuring out how to formally verify that the bytecode matches the semantics of the code written.
Comparing LLMs, which are machines whose performance is measured by how plausible their generated text looks to humans, to this is, frankly, ludicrous.
Letting a plausibility-based text generator (because essentially, that is what it is; the industry gave up on the AGI promise and is now just putting in more tokens) brute-force its way towards passing tests is only sustainable for as long as you have a rigid test framework. Is this what our job is going to look like? Writing test cases? I thought it was universally agreed that this was neither the fun nor the fast bit of the software development process.

Also, after everybody has been deskilled to the point of not understanding the code they’re vouching for with their name, who provides new training data? Who is going to be able to review whether code is actually correct?

It’s remarkable how fast well-established “principles” go out of the window as soon as someone waves enough VC money around. And, coming back to Haskell: in a community that talks so much about “beautiful code” and the craft of writing a nice program, so many people immediately stopped talking about these aspects. I ask you, can you justify writing Haskell at all? Why would you not write Javascript? LLMs write it faster and better; it doesn’t matter how many abstractions we are missing, it’s all foldr anyway!
I would also like to point out that it is a bit sad to me that the culture around these things is so hopelessly utilitarian. People start their posts about how they replaced their development process with a slot machine modeled after literal slavery with “ethical concerns aside”, or even worse, don’t consider these at all anymore (as in the above post: it’s “just vibes”, yes, your vibes). Is this really the aspiration we have towards our morality as a community of (human(e?)) developers?
What are your ethical concerns?
I don’t really know how I can view this as a question asked in good faith. I would hope that ethical concerns with LLMs are pretty clear by now – I think the most critical aspects are those related to devaluing human labour, art and craft, enabling of large scale manipulation, increasing power imbalances and concentration as well as overall resource usage, both environmental and societal. The details have been hashed out many, many, many times, ad nauseam.
I asked in good faith
devaluing human labour, art and craft,
I talked about the labor aspects quite a bit in the blogpost. I don’t think it’s devaluing it, on the contrary!
enabling of large scale manipulation
I’m not sure what is meant by this
increasing power imbalances and concentration
Aren’t I the one becoming more powerful because I can just do more things?
Not sure how using a claude code terminal increases power imbalance?
Resource usage environment
Isn’t it an Anthropic problem if it’s expensive to run these?
and societal
Society can make laws; I’m doing legal stuff. If you feel this is wrong, you should convince lawmakers, not me.
I don’t think it’s devaluing it, on the contrary!
Unfortunately, I think you’re pretty alone on this. Please ask artists how they feel about being cut out of the middle. I think this is sadly a result of how we have been conditioned by the Advertising Industrial Complex – we just equate the artwork with the art. I think it’s quite sad. We only consume now; the interpersonal aspect of art is just gone.
I’m not sure what is meant by this
I am not sure what you think is in it for companies like OpenAI or Anthropic, with their relentless data collection and being in the minus in the literal trillions, except more effective ways of shaping politics and public opinion. (Especially since Sam Altman has redefined the term “AGI” to “I can buy an island”.)
increasing power imbalances and concentration
No. Because you, congratulations, are now powerless without the tools. The real power lies with the companies that make you (us) dependent on them. This of course also works on a more drastic scale, e.g. when nation-states are purchasing Palantir contracts.
Isn’t it an Anthropic problem if it’s expensive to run these?
Unfortunately not. The scale at which tech companies have been investing into this technology makes it such that it can be counted as an investment by humanity as a species - they are investing your share of “being able to live on a planet with livable climate”.
I’m doing legal stuff
This is so hilariously myopic, it comes out pretty provocative, I give you that. ^^
Please ask artists about what they feel about being cut out in the middle.
You could also ask the people who can now have art on their crummy money-losing blogs. Or artists who use AI to create new kinds of art they couldn’t make before because they didn’t have the throughput. You still get better art if you just commission someone, but it’s expensive!
I am not sure what you think is in it for companies like OpenAI or Anthropic, with their relentless data collection and being in the minus in the literal trillions, except more effective ways of shaping politics and public opinion.
Facebook (and Google) did the same before the advent of LLMs. Large companies have always been in politics (especially in America).
Because you, congratulations, you are now powerless without the tools.
I can still program. The tool just makes me go faster. Actually, I asked my friends about local models just now and they’re just not there yet. But that will happen too. Give it time.
The scale at which tech companies have been investing into this technology makes it such that it can be counted as an investment by humanity as a species
It’s their own money and they can spend it how they please. If they’re wrong, and AI is in fact a bubble, they’ll lose it.
This is so hilariously myopic, it comes out pretty provocative, I give you that. ^^
I suppose ethics goes beyond legal definitions, but I truly think I’m not doing anything ethically wrong. There are perhaps societal concerns to have a debate about. I’m just not very interested in that.
That’s a fine conclusion. In that case I would like to ask you to disregard what I said – you’re not the intended audience.
The article does address this - the Haskell that the AI slops together is way easier to verify than the JS it can make, specifically because of Haskell’s rigorous underpinnings.
I do think this AI stuff has made it more clear than ever to me that there is a fork in Haskell thinking. There’s the static types and there’s the functional programming. The AI-maxxers are very into the static types.
But before AI, a lot of them were already into that. They were managers or staff engineers, and instead of wrangling AI, they were wrangling org charts of fungible humans. In both cases, the ethos is “how do we get this thing to do a good job building the stuff falling down the waterfall without trusting it?” It is no surprise to me that AI is taking off in industry. Nobody built software well in industry anyways, and now they can convert their money into bits cheaper. With fewer dependencies on humans with agency.
The functional part of Haskell feels like it’s getting lost. The AI code I see is so low-level. It’ll freely just write some direct recursion or a hacky fold. But it’ll write a doc comment! Should we do better? idk. But a single diff of that code looks fine. A couple years of diffs of those and the codebase is soup.
I fear the day when we start linting to optimize code for AI consumption and editing. I’m sure it’s coming. We already format our code to minimize git diff changes due to whitespace at the cost of human readability. There’s a general problem where engineers let their tools drive their code, and it only gets worse as the engineers get more meta and the tools get more powerful.
I think it’s not a particularly strong argument. Curry-Howard only works if you at least understand the proposition you’re trying to prove. Notwithstanding that the propositions you’re talking about when your project is entirely vibecoded are neither reviewed nor particularly meaningful.
yeh that’s fair. I don’t see why you wouldn’t still write types in Haskell. It’s more precise and concise than English. Aka easier lolol.
I find English to be exhausting when coding, myself. If I’m pronouncing code in my head, something has gone deeply wrong.