[ANN] ollama-haskell-0.2.1.0 released

I’m happy to announce the release of ollama-haskell-0.2.1.0, a Haskell client library for interacting with Ollama.

This release includes:

  1. Compatibility with Ollama 0.12.1

  2. Support for the Create Blob option

  3. Standardized field names across modules for consistency

  4. Ability to specify a dimensions option for embeddings (see the sketch below)

:package: Hackage: ollama-haskell
:laptop: GitHub: tusharad/ollama-haskell
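
For anyone who wants a feel for what the new embedding dimensions option maps to under the hood, here is a minimal sketch of the raw Ollama endpoint the library wraps, written with http-conduit and aeson rather than ollama-haskell's own API; the exact "dimensions" field name on /api/embed is my assumption based on the release notes:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (Value, object, (.=))
import Network.HTTP.Simple

-- Sketch only: POST directly to Ollama's /api/embed endpoint.
-- Assumes an Ollama server on the default port 11434 and that the
-- server accepts a "dimensions" field for the embedding size, as
-- suggested by the release notes. ollama-haskell gives you typed
-- request/response values for this instead of raw JSON.
main :: IO ()
main = do
  let body =
        object
          [ "model" .= ("nomic-embed-text" :: String)
          , "input" .= ("Hello from Haskell" :: String)
          , "dimensions" .= (256 :: Int) -- assumed field name
          ]
  request <- parseRequest "POST http://localhost:11434/api/embed"
  response <- httpJSON (setRequestBodyJSON body request) :: IO (Response Value)
  print (getResponseBody response)
```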

As always, feedback, bug reports, and contributions are very welcome!

9 Likes

This is really cool, definitely going to try it out.

It’s been a while since I played around with Ollama. How does it compare now to other models like Claude, Gemini, etc.? And how is it for using on Haskell code bases?

Every week, some new model comes out that exceeds expectations. I personally love using the Qwen3 model. These models are obviously not as smart as Gemini or Claude, but it feels good not to have to worry about exhausting my API quota!

1 Like

The link to examples/OllamaExamples.hs in the README on Hackage gives me a “Page not found”.

The same happens on GitHub: examples/OllamaExamples.hs: “404 - page not found The main branch of ollama-haskell does not contain the path examples/OllamaExamples.hs.”

I guess the link was supposed to go to the folder examples/ollama-example/?

Two thumbs up for having full examples! I’m especially interested in tools/MCP.

2 Likes

Ollama isn’t a model but a model runner/manager :smiley: So asking how it compares with other models is beside the point.

A better question is how it compares to LM Studio, LocalAI, or llama.cpp.

What’s also worth noting: Ollama 0.11 added support for “cloud” models behind a paywall, so you can keep using your own local instance, or, if you need a model with big hardware requirements, pay to run it in the cloud.

I’ve dabbled a modest amount with Ollama and llama.cpp (written up in this diary: LLMs, under the heading 2025-08-19); the notable difference for me personally has been GBNF support. I haven’t explored the others.
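
For readers who haven’t met GBNF: it lets you hand llama.cpp a grammar that constrains what the model is allowed to emit, which is the gap I was pointing at. A rough sketch of what that looks like from Haskell, using the same http-conduit pattern as the embedding example above and assuming a llama-server instance listening locally on port 8080 (the port and the grammar itself are just illustrative):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (Value, object, (.=))
import Network.HTTP.Simple

-- Rough sketch: ask llama.cpp's llama-server (assumed to be running
-- on localhost:8080) for a completion constrained by a tiny GBNF
-- grammar that only permits the strings "yes" or "no".
main :: IO ()
main = do
  let grammar = "root ::= \"yes\" | \"no\"" :: String
      body =
        object
          [ "prompt" .= ("Is Haskell lazily evaluated? Answer: " :: String)
          , "grammar" .= grammar
          , "n_predict" .= (4 :: Int)
          ]
  request <- parseRequest "POST http://localhost:8080/completion"
  response <- httpJSON (setRequestBodyJSON body request) :: IO (Response Value)
  print (getResponseBody response)
```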

1 Like

I read your blog; is this a typo, or is something wrong in your setup? “I’ve experimented with local LLM models on my MacBookPro Max M4, which can run 3b parameter models.”
I ask because on my Mac mini M4 I can run models with 30b parameters.

Yep, I’ve been experimenting with e.g. llama3.2:3b and similar. I found that larger models are slower than I’d like on my MacBook, whereas the 3b ones are snappy.