The timing on this is a little surprising given llama.cpp just finally got a (hopefully) stable vision feature merged into main: https://simonwillison.net/2025/May/10/llama-cpp-vision/
Presumably Ollama had been working on this for quite a while already - it sounds like they've broken their initial dependency on llama.cpp. Being in charge of their own destiny makes a lot of sense.
Do you know what exactly the difference is with either of these projects adding multimodal support? Both have supported LLaVA for a long time. Did that require special casing that is no longer required?
I'd hoped to see this mentioned in TFA, but it kind of acts like multimodal is totally new to Ollama, which it isn't.
There's a pretty clear explanation of the llama.cpp history here: https://github.com/ggml-org/llama.cpp/tree/master/tools/mtmd...
I don't fully understand Ollama's timeline and strategy yet.
It's a turducken of crap from everyone but ngxson and Hugging Face and llama.cpp in this situation.
llama.cpp did have multimodal, I've been maintaining an integration for many moons now. (Feb 2024? Original LLaVa through Gemma 3)
However, this was not for mere mortals. It was not documented and had gotten unwieldy, to say the least.
ngxson (HF employee) did a ton of work to get gemma3 support in, and had to do it in a separate binary. They dove in and landed a refactored backbone that is presumably more maintainable and on track to be in what I think of as the real Ollama, llama.cpp's server binary.
As you well note, Ollama is Ollamaing - I joked, once, that the median llama.cpp contribution from Ollama is a driveby GitHub comment asking when a feature will land in llama-server, so it can be copy-pasted into Ollama.
It's really sort of depressing to me because I'm just one dude and it really wasn't that hard to support it (it's one of a gajillion things I have to do; I'd estimate 2 SWE-weeks at 10 YOE, 1.5 SWE-days for every model release), and it's hard to get attention for detailed work in this space with how much everyone exaggerates and rushes to PR.
EDIT: Coming back after reading the blog post, and I'm 10x as frustrated. "Support thinking / reasoning; Tool calling with streaming responses" --- this is table stakes stuff that was possible eons ago.
I don't see any sign of them doing anything specific in any of the code they link; the whole thing reads like someone carefully worked with an LLM to present a maximalist, technical-sounding version of the llama.cpp stuff and frame it as if they worked with these companies and built their own thing. (Note the very careful wording on this, e.g. in the footer the companies are thanked for releasing the models.)
I think it's great that they have a nice UX that helps people run llama.cpp locally without compiling, but it's hard for me to think of a project I've been more turned off by in my 37 years on this rock.
I worked on the text portion of gemma3 (as well as gemma2) for the Ollama engine, and worked directly with the Gemma team at Google on the implementation. I didn't base the implementation off of the llama.cpp implementation which was done in parallel. We did our implementation in golang, and llama.cpp did theirs in C++. There was no "copy-and-pasting" as you are implying, although I do think collaborating together on these new models would help us get them out the door faster. I am really appreciative of Georgi catching a few things we got wrong in our implementation.
For one, Ollama supports interleaved sliding window attention (iSWA) for Gemma 3 while llama.cpp doesn't.[0] iSWA reduces the KV cache size to roughly 1/6.
Ollama is written in golang so of course they can not meaningfully contribute that back to llama.cpp.
[0] https://github.com/ggml-org/llama.cpp/issues/12637
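(For anyone curious where that 1/6 figure comes from, here is a rough back-of-the-envelope sketch. It assumes Gemma 3's published layout of five sliding-window layers, each with a 1024-token window, per global-attention layer; those numbers come from the Gemma 3 report, not from Ollama's code.)

    package main

    import "fmt"

    func main() {
        const (
            window    = 1024  // sliding-window size for the local layers
            ctxLen    = 32768 // example context length
            localPer  = 5     // local (sliding-window) layers per group
            globalPer = 1     // global layers per group
        )

        // Per layer group: global layers cache keys/values for the full
        // context, local layers only for the window.
        iswa := localPer*window + globalPer*ctxLen
        full := (localPer + globalPer) * ctxLen

        fmt.Printf("iSWA cache is %.2f of the full cache\n", float64(iswa)/float64(full))
        // Prints ~0.19 at a 32k context; as the context grows, the ratio
        // approaches 1/6, matching the figure above.
    }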
It's impossible to meaningfully contribute to the C library you call from Go because you're calling it from Go? :)
We can see the weakness of this argument given it is unlikely any front-end is written in C, and then noting it is unlikely ~0 people contribute to llama.cpp.
They can of course meaningfully contribute new C++ code to llama.cpp, which they then could later use downstream in Go.
What they cannot meaningfully do is write Go code that solves their problems and upstream those changes to llama.cpp.
The former requires they are comfortable writing C++, something perhaps not all Go devs are.
What nonsense is this?
Where do you imagine ggml is from?
> The llama.cpp project is the main playground for developing new features for the ggml library
-> https://github.com/ollama/ollama/tree/27da2cddc514208f4e2353...
(Hint: If you think they only write go in ollama, look at the commit history of that folder)
llama.cpp clearly does not support iSWA: https://github.com/ggml-org/llama.cpp/issues/12637
Ollama does, please try it.
Dude, they literally announced that they stopped using llama.cpp and are now using ggml directly. Whatever gotcha you think there is, exists only in your head.
I'm responding to this assertion:
> Ollama is written in golang so of course they can not meaningfully contribute that back to llama.cpp.
llama.cpp consumes GGML.
ollama consumes GGML.
If they contribute upstream changes, they are contributing to llama.cpp.
The assertions that they:
a) only write golang
b) cannot upstream changes
Are both, categorically, false.
You can argue what 'meaningfully' means if you like. You can also believe whatever you like.
However, both (a) and (b), are false. It is not a matter of dispute.
> Whatever gotcha you think there is, exists only in your head.
There is no 'gotcha'. You're projecting. My only point is that any claim that they are somehow not able to contribute upstream changes only indicates a lack of desire or competence, not a lack of the technical capacity to do so.
FWIW I don't know why you're being downvoted other than a standard from the bleachers "idk what's going on but this guy seems more negative!" -- cheers -- "a [specious argument that shades rather than illuminates] can travel halfway around the world before..."
> As you well note, Ollama is Ollamaing - I joked, once, that their median llama.cpp contribution from Ollama is asking when a feature will land in llama-server so it can be copy-pasted into Ollama.
Other than being a nice wrapper around llama.cpp, are there any meaningful improvements that they came up with that landed in llama.cpp?
I guess in this case, with the introduction of libmtmd (for multimodal support in llama.cpp), Ollama waited, did a git pull, and now multimodal + better vision support was here, and no proper credit was given.
Yes, they had vision support via LLaVa models but it wasn't that great.
There have been no noteworthy contributions; I honestly wouldn't be surprised to hear there are zero contributions.
Well, it's even sillier than that: I didn't realize that the timeline in the llama.cpp link was humble and matched my memory: it was the test binaries that changed. i.e. the API was refactored a bit and such, but it's not anything new under the sun. Also, the llama.cpp they have has tool and thinking support. shrugs
The tooling was called llava but that's just because it was the first model -- multimodal models are/were consistently supported ~instantly, it was just that your calls into llama.cpp needed to manage that, and they still do! - it's just that there's been some cleanup so there isn't one test binary for every model.
It's sillier than that in that it wasn't even "multi-modal + better vision support was here", it was "oh, we should do that for real if llama.cpp is".
On a more positive note, the big contributor I appreciate in that vein is Kobold, which contributed a ton of Vulkan work, IIUC.
And another round of applause for ochafik: idk if this gentleman from Google is doing this in his spare time or full time for Google, but they have done an absolutely stunning amount of work to make tool calls and thinking systematically approachable, even building a header-only Jinja parser implementation and designing a way to systematize "blessed" overrides of the rushed, silly templates that are inserted into models. Really important work IMHO: tool calls are what make AI automated, and having open source able to step up here means you can have legit Sonnet-like agency in Gemma 3 12B, even Phi 4 3.8B to an extent.
They are talking a lot about this new engine - I'd love to see details on how it's actually implemented. Given llama.cpp is a herculean feat, if you are going to claim to have some replacement for it, an example of how you did it would be good!
Based on this part:
> We set out to support a new engine that makes multimodal models first-class citizens, and getting Ollama’s partners to contribute more directly the community - the GGML tensor library.
And from clicking through a github link they had:
https://github.com/ollama/ollama/blob/main/model/models/gemm...
My takeaway is, the GGML library (the thing that is the backbone for llama.cpp) must expose some FFI (foreign function interface) that can be invoked from Go, so in the ollama Go code, they can write their own implementations of model behavior (like Gemma 3) that just call into the GGML magic. I think I have that right? I would have expected a detail like that to be front and center in the blog post.
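For what it's worth, here is a minimal sketch of what "Go calling into GGML" can look like via cgo. The ggml functions used (ggml_init, ggml_new_tensor_2d, ggml_mul_mat, ggml_free) are from ggml's public C header, but the include path and linker flag are placeholders for however ggml is built or vendored, and Ollama's actual bindings are far more elaborate than this.

    package main

    /*
    #cgo CFLAGS: -I/path/to/ggml/include
    #cgo LDFLAGS: -lggml
    #include "ggml.h"
    */
    import "C"

    import "fmt"

    func main() {
        // Allocate a small ggml context (fields follow ggml.h's ggml_init_params).
        params := C.struct_ggml_init_params{
            mem_size:   C.size_t(16 * 1024 * 1024),
            mem_buffer: nil,
            no_alloc:   false,
        }
        ctx := C.ggml_init(params)
        defer C.ggml_free(ctx)

        // Build a tiny graph node: y = matmul(a, b), the kind of primitive a
        // Go-side model definition (e.g. Gemma 3 attention) composes.
        a := C.ggml_new_tensor_2d(ctx, C.GGML_TYPE_F32, 4, 4)
        b := C.ggml_new_tensor_2d(ctx, C.GGML_TYPE_F32, 4, 4)
        y := C.ggml_mul_mat(ctx, a, b)

        // Actually evaluating y needs a compute graph and a backend, which is
        // the part Ollama's engine (and llama.cpp) wrap for you.
        fmt.Println("result dims:", y.ne[0], y.ne[1])
    }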
Ollama are known for their lack of transparency, poor attribution and anti-user decisions.
I was surprised to see the amount of attribution in this post. They've been catching quite a bit of flack for this so they might be adjusting.
I wish multimodal would imply text, image and audio (+potentially video). If a model supports only image generation or image analysis, vision model seems the more appropriate term.
We should aim to distinguish multimodal modals such as Qwen2.5-Omni from Qwen2.5-VL.
In this sense: Ollama's new engine adds vision support.
I'm very interested in working with video inputs, is it possible to do that with Qwen2.5-Omni and Ollama?
I have only tested Qwen2.5-Omni for audio and it was hit and miss for my use case of tagging audio.
What use case are you interested in re: video?
The whole '*llama' naming convention in the LLM world is more confusing to me than it probably should be. So many llamas running around out here.
Unfortunately the speed of AI/ML is so crazy fast. I don't know a better way to keep track other than paying attention all the time. The field also loves memey names. A few years ago everyone was naming models after Sesame Street characters, and there was the YOLO family of models. Conference papers are not immune; in fact, they are the greatest "offenders".
Their example "understanding and translating vertical Chinese spring couplets to English" has a lot of mistakes in it. I'm guessing the person writing the blog post to show off that example doesn't actually know Chinese.
What is actually written:
Top: 家和国盛
Left: 和谐生活人人舒畅迎新春
Right: 平安社会家家欢乐辞旧岁
What Ollama saw:
Top: 盛和家国 (correct characters but wrong order)
Left: It reads "新春" (new spring) as 舒畅 (comfortable)
Right: 家家欢欢乐乐辞旧岁 (duplicates characters and omits the first four)
I'm one of the maintainers who ran that example. I am Chinese.
The English translation, I thought, was pretty spot on. We don't hide the mistakes of the models or fake the demos.
Over time, of course, I hope the models improve much more.
Side tangent: why is Ollama frowned upon by some people? I've never really gotten any explanation other than "you should run llama.cpp yourself".
Here's some discussion here: https://www.reddit.com/r/LocalLLaMA/comments/1jzocoo/finally...
Ollama appears to not properly credit llama.cpp: https://github.com/ollama/ollama/issues/3185 - this is a long-standing issue that hasn't been addressed.
This seems to have leaked into other projects where even when llama.cpp is being used directly, it's being credited to Ollama: https://github.com/ggml-org/llama.cpp/pull/12896
Ollama doesn't contribute upstream (that's fine, they're not obligated to), but it's a bit weird that one of the devs claimed to have and, uh, not really: https://www.reddit.com/r/LocalLLaMA/comments/1k4m3az/here_is... - that being said, they seem to maintain their own fork, so anyone could cherry-pick stuff if they wanted to: https://github.com/ollama/ollama/commits/main/llama/llama.cp...
Thanks for the good explanation!
Besides the "culture"/licensing/FOSS issue already mentioned, I just wanted to be able to reuse model weights across various applications, but Ollama decided to ship their own way of storing things on disk + with their own registry. I'm guessing it's because they want to eventually be able to monetize this somehow, maybe "private" weights hosted on their registry or something. I don't get why they thought splitting up files into "blobs" made sense for LLM weights, seems they wanted to reduce duplication (ala Docker) but instead it just makes things more complicated for no gains.
End result for users like me though, is to have to duplicate +30GB large files just because I wanted to use the weights in Ollama and the rest of the ecosystem. So instead I use everything else that largely just works the same way, and not Ollama.
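For what it's worth, the weights are still plain GGUF files, just hidden behind content-addressed names. Here is a hedged sketch of resolving a model's GGUF blob from Ollama's store so it can be symlinked into other tools instead of copied. It assumes Ollama's current on-disk layout (a JSON manifest under ~/.ollama/models/manifests/... whose "application/vnd.ollama.image.model" layer points at a sha256-named file in ~/.ollama/models/blobs/), which is not a stable, documented interface and may change between versions.

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    type manifest struct {
        Layers []struct {
            MediaType string `json:"mediaType"`
            Digest    string `json:"digest"`
        } `json:"layers"`
    }

    func main() {
        home, _ := os.UserHomeDir()
        // Example manifest path for "llama3.2:latest"; adjust for your model.
        mf := filepath.Join(home, ".ollama", "models", "manifests",
            "registry.ollama.ai", "library", "llama3.2", "latest")

        raw, err := os.ReadFile(mf)
        if err != nil {
            panic(err)
        }
        var m manifest
        if err := json.Unmarshal(raw, &m); err != nil {
            panic(err)
        }

        for _, l := range m.Layers {
            if l.MediaType == "application/vnd.ollama.image.model" {
                // Blob files are named "sha256-<hex>", while the manifest
                // records the digest as "sha256:<hex>".
                blob := filepath.Join(home, ".ollama", "models", "blobs",
                    strings.Replace(l.Digest, ":", "-", 1))
                fmt.Println("GGUF weights:", blob) // symlink this rather than copying
            }
        }
    }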
That is an interesting perspective, did not know about that at all!
To me, Ollama is a bit like the Docker of LLMs. The user experience is inspired by Docker, and the Modelfile syntax is inspired by the Dockerfile syntax. [0]
In the early days of Docker, we had the debate of Docker vs LXC. At the time, Docker was mostly a wrapper over LXC and people were dismissing the great user experience improvements of Docker.
I agree, however, that the long-standing lack of acknowledgement of llama.cpp has been problematic. They acknowledge the project now.
[0]: https://github.com/ollama/ollama/blob/main/docs/modelfile.md
They refuse to work with the community. There's also the open question of how they are going to monetize, given that they are a VC-backed company.
Why shouldn't I go with llama.cpp, LM Studio, or ramalama (containers/RH)? I will at least know what I am getting with each one.
Ramalama actually contributes quite a bit back to llama.cpp/whisper.cpp (more projects, probably), while delivering a solution that works better for me.
https://github.com/ollama/ollama/pull/9650 https://github.com/ollama/ollama/pull/5059
For me it's because ollama is just a front-end for llama.cpp, but the ollama folks rarely acknowledge that.
Here's a recent thread on Ollama hate from r/localLLaMa: https://www.reddit.com/r/LocalLLaMA/comments/1kg20mu/so_why_...
r/localLLaMa is very useful, but also very susceptible to groupthink and more or less astroturfed hype trains and mood swings. This drama needs to be taken in context; there is a lot of emotion and not too much reason.
Anyone who has been around for 10 years can smell the Embrace, Extend, Extinguish model 100 miles away.
They are plainly going to capture the market, and switch to some "enterprise license" that lets them charge $, on the backs of other people's work.
I abandoned Ollama because Ollama does not support Vulkan: https://news.ycombinator.com/item?id=42886680
You have to support Vulkan if you care about consumer hardware. Ollama devs clearly don't.
why would I use a software that doesn't have the features I want, when a far better alternative like llama.cpp exists? ollama does not add any value.
I more often than not add multiple models to my WebUI chats to compare and contrast models.
Ollama makes this trivial compared to llama.cpp, and so for me adds a lot of value due to this.
llama-swap does it better than ollama I think.
llama.cpp was just faster and had more features, that is all
llama.cpp is the thing doing all the heavy lifting; ollama is just a library wrapper.
It'd be like HandBrake pretending that they implemented all the video processing work, when it depends on ffmpeg's libraries for all of that.
> ollama is just a library wrapper.
Was.
This submission is literally about them moving away from being just a wrapper around llama.cpp :)
No, they are not. The submission uses ggml, which is llama.cpp.
I think you misunderstand how these pieces fit together. llama.cpp is a library that ships with a CLI plus some other tools, ggml is a library, and Ollama has "runners" (like an "execution engine"). Previously, Ollama used llama.cpp (which uses ggml) as the only runner. Eventually, Ollama made their own runner (which also uses ggml) for new models (starting with gemma3, maybe?), still using llama.cpp for the rest (last time I checked, at least).
ggml != llama.cpp, but llama.cpp and Ollama are both using ggml as a library.
“The llama.cpp project is the main playground for developing new features for the ggml library” --https://github.com/ggml-org/llama.cpp
“Some of the development is currently happening in the llama.cpp and whisper.cpp repos” --https://github.com/ggml-org/ggml
Yeah, those both make sense. ggml was split out of llama.cpp once they realized it could be useful elsewhere, so while llama.cpp is the "main playground", ggml is also used by other projects. That doesn't suddenly mean llama.cpp is the same as ggml; not sure why you'd believe that.
I am amused that one of the handful of examples they chose to use is wrong:
"The best way to get to Stanford University from the Ferry Building in San Francisco depends on your preferences and budget. Here are a few options:
1. *By Car*: Take US-101 South to CA-85 South, then continue on CA-101 South."
CA 85 is significantly farther down 101 than Palo Alto.
I'll have to try this later but appreciate that the article gets straight to the point with practical examples and then the details.
The strength of Ollama for me was the ease of being able to run a simple Docker command and be up and running locally without any tinkering, but with image and video, Docker is no longer an option as Docker does not use the GPU. I'm curious how Ollama plans to support their Docker integration going forward, or if it is a less important part of the project than I'm giving it credit for.
You can use a GPU with docker - at least on some platforms. There's more setup though, nvidia have some details to help https://docs.nvidia.com/datacenter/cloud-native/container-to...
Thank you. I should have specified: on macOS. I ran into this recently trying to set up stable-diffusion-webui/InvokeAI/Fooocus and found it much more complicated to get working on my personal laptop than the LLMs.
Out of curiosity, before you attempted this, what was your impression of the fitness and performance of Macs for generative AI?
Before I attempted, I had no idea. I hadn't run any AI models locally and I don't follow this stuff too closely, so I wasn't even sure if I could get something usable on my M1 MacBook Air. I went in fairly blind, which is why the Ollama Docker installer was so appealing to me–I got to hold off fighting Python and Homebrew until I had a better sense of what the tool could provide.
After my attempt, I think chat is performant enough on my M1. Code gen was too slow for me. Image generation was 1-2 minutes for small pixel art sprites, which for my use case is fine to let churn for a while, but the image generation results were much worse than what the ChatGPT browser gives me out of the box. I do not know if the poor image quality is due to machine constraints or me not understanding how to configure the checkpoint and models.
I would be interested to hear how an M3 or M4 Mini handles these things, as those are fairly affordable to pick up used.
I have mostly used Ollama to run local models for close to a year, love it, but I have barely touched LLaVA and other multimodal support because all my personal use cases are text-based.
Question: what cool and useful multimodal projects have people here built using local models?
I am looking for personal project ideas.
Does Ollama support the "user context" that higher level LLMs like ChatGPT have?
I'm not clear what they are called (or how implemented) — but perhaps 1) the initial prompt/context (that, for example, Grok has gotten in trouble over recently) and 2) the kind of saved context that allows ChatGPT to know things about your prompt history so it can better answer future queries.
(My use of ollama has been pretty bare-bones and I have not seen anything covering these higher level features in -help.)
My understanding is that ollama is more of an "LLM backend", i.e. it provides a server process on your machine that answers requests relatively statelessly.
I believe it keeps the model loaded across sessions, and possibly keeps the KV cache warm for ongoing sessions (but I doubt it, based on the API shape; I don't see a "session" parameter), but that's about it. Nothing seems to be written to disk.
Features like ChatGPT's "memories" or cross-chat context require a persistence layer that's probably best suited for a "frontend". Ollama's API does support passing in requests with history, for example: https://github.com/ollama/ollama/blob/main/docs/api.md#chat-...
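To make that concrete, here is a minimal sketch of the /api/chat call from that doc: the client, not Ollama, carries the conversation (and any "memory") forward by resending the messages array each turn. The model tag is just an example.

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    type message struct {
        Role    string `json:"role"`
        Content string `json:"content"`
    }

    func main() {
        history := []message{
            {Role: "system", Content: "You are a terse assistant."},
            {Role: "user", Content: "My name is Ada."},
            {Role: "assistant", Content: "Noted."},
            {Role: "user", Content: "What is my name?"}, // answerable only via the resent history
        }

        body, _ := json.Marshal(map[string]any{
            "model":    "llama3.2", // example model tag
            "messages": history,
            "stream":   false,
        })

        resp, err := http.Post("http://localhost:11434/api/chat",
            "application/json", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var out struct {
            Message message `json:"message"`
        }
        json.NewDecoder(resp.Body).Decode(&out)
        fmt.Println(out.Message.Content) // should mention "Ada"
    }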
Is there more to memory than just an entry into the context/messages array passed to the LLM?
There must be some heavy compression/filtering going on, as there's no chance GPT can hold everybody's entire ChatGPT conversation history in its context.
But practically, I believe that Ollama just doesn't have a concept of server-side persistent state at the moment to even do such a thing.
I _think_ the compression used is literally “Chat, compress this array of messages”. This is the technique used in Claude Plays Pokemon.
I’m sure there’s more to the prompt and what to do with this newly generated messages array, but the gist is there.
If this is the case, an Ollama implementation shouldn’t be too difficult.
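A rough sketch of that idea, for what it's worth: once the history passes a threshold, fold the older turns into a single summary message (obtained from one extra model call) and continue with the summary plus the recent turns. The threshold, prompt wording, and model tag are arbitrary choices here; the printed request body would be POSTed to /api/chat exactly as in the earlier example.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type message struct {
        Role    string `json:"role"`
        Content string `json:"content"`
    }

    // compact keeps the last keepRecent turns verbatim and replaces everything
    // older with a single summary message (which a frontend would get by asking
    // the model to "compress this array of messages").
    func compact(history []message, summary string, keepRecent int) []message {
        if len(history) <= keepRecent {
            return history
        }
        out := []message{{Role: "system", Content: "Summary of earlier conversation: " + summary}}
        return append(out, history[len(history)-keepRecent:]...)
    }

    func main() {
        history := []message{
            {Role: "user", Content: "My name is Ada and I like Go."},
            {Role: "assistant", Content: "Nice to meet you, Ada."},
            {Role: "user", Content: "Recommend a book."},
            {Role: "assistant", Content: "Try 'The Go Programming Language'."},
            {Role: "user", Content: "What's my name?"},
        }

        // Stand-in for the model-generated summary.
        summary := "User is Ada, likes Go, asked for a book recommendation."

        body, _ := json.MarshalIndent(map[string]any{
            "model":    "llama3.2",
            "messages": compact(history, summary, 2),
            "stream":   false,
        }, "", "  ")
        fmt.Println(string(body))
    }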
The timing makes sense if you consider the broader trend in the LLM space. We're moving from just text to more integrated, multimodal experiences, and having a tightly controlled engine like this could be a game changer for developers building apps that require real-time, context-rich understanding.
Why does the Ollama engine have to change to support new models? Every time a new model comes out, Ollama has to be upgraded.
Because of things like this: https://github.com/ggml-org/llama.cpp/issues/12637
Where "supporting" a model doesn't mean what you think it means for cpp
Between that and the long saga with vision models having only partial support, with a CLI tool, and no llama-server support (they only fixed all that very recently) the fact of the matter is that ollama is moving faster and implementing what people want before lama.cpp now
And it will finally shut down all the people who kept copy pasting the same criticism of ollama "it's just a llama.cpp wrapper why are you not using cpp instead"
There's also some interpersonal conflict in llama.cpp that's hampering other bug fixes https://github.com/ikawrakow/ik_llama.cpp/pull/400
What the hell is going on there? It’s utterly bizarre to see devs discussing granting each other licences to work on the same code for an open source project. How on earth did they end up there?
There seems to be some bad blood between ikawrakow and ggerganov: https://github.com/ikawrakow/ik_llama.cpp/discussions/316
My guess is that there's money involved. Maybe a spat between an ex-employee and their ex-employer?
Now it’s just a wrapper around hosted APIs.
Went with my own wrapper around llama.cpp and stable-diffusion.cpp, with optional prompting of a hosted model if I don't like the local result so much, but it makes a good start for the hosted model to improve on.
It also obfuscates any requests sent to the hosted model, because why feed them insight into my use case when I just want to double-check the algorithmic choices of the local AI? The ground-truth relationships the function and variable names imply are my little secret.
Wait, what hosted APIs is Ollama wrapping?
If I understood it correctly: this time no, it is actually a new engine built by the Ollama team, independent from llama.cpp.
I doubt it. Llama.cpp just added support for the same models a few weeks ago. Folks at ollama just did a git pull.
It's open source, you could have checked. It does indeed seem like the new engine cuts out llama.cpp, using the GGML library directly.
https://github.com/ollama/ollama/pull/7913
seriously? who do you think develops ggml?
hint: it's llama.cpp
llama.cpp added support for vision 6 days ago.
See SimonW's post here:
https://simonwillison.net/2025/May/10/llama-cpp-vision/
>If I understood it correctly
You understood it exactly like they wanted you to...