I got very briefly excited that this might be a new application layer on top of meshtastic.
Yes! I don't know what LoRA is, but I know what it isn't.
The paper link on that site doesn't work -- here's a working link:
https://arxiv.org/abs/2506.06105
Out of interest, why does it depend on or at least recommend such an old version of Python? (3.10)
from pyproject.toml: requires-python = ">= 3.10"
I still see quite a few people in the ML world using 3.10 as their default...probably just habit, but a closer look at the dependencies might answer your question better.
Ah well, that's not as bad I guess. I saw in their readme that they recommend 3.10, which, when I see it, is often a bit of a red flag that the project in question may not be well maintained. But I agree, I do see quite a few ML repos still noting the use of 3.10 to this day.
Mostly it's whatever the earliest version PyTorch supports. While 3.9 is supported until the end of this year, torch wheels and other wheels in the ecosystem have always been troublesome on 3.9. So 3.10 it is.
3.9 would have been the preferred version if not for those issues, simply because it is the default on macOS.
Yikes, those are both very old. Python pre-3.12 had some serious performance issues. You should be aiming to run the current stable version, which will contain any number of stability and interoperability fixes. The bundled OS Python versions are often far behind and are better suited to running the system's basic tools than to every application or script you run; ideally you'd use a Python version manager and an isolated virtual environment.
Interesting work on adapting LoRA adapters. A similar idea applied to VLMs: https://arxiv.org/abs/2412.16777
An alternative to prefix caching?
LoRA adapters modify the model's internal weights
Yeah, I honestly think some of the language used with LoRA gets in the way of people understanding them. It becomes much easier to understand when looking at an actual implementation, as well as how they can be merged or kept separate.
Not unless they're explicitly merged, which isn't a requirement, just a small speed optimization.
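To make that concrete, here's a minimal sketch (illustrative names, not taken from any particular library) of a LoRA-wrapped linear layer in PyTorch, showing both the kept-separate forward pass and an explicit merge:

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Illustrative LoRA wrapper: the base weights stay frozen; the
        update is the low-rank product B @ A, scaled by alpha / r."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)  # pretrained weights frozen
            self.scale = alpha / r
            # Standard LoRA init: A small random, B zero, so the adapter
            # starts out as a no-op.
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))

        def forward(self, x):
            # Adapter kept separate: two extra small matmuls per call.
            return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

        @torch.no_grad()
        def merge(self) -> nn.Linear:
            # Explicit merge: fold B @ A into the base weight so inference
            # runs at the original speed, with the adapter baked in.
            self.base.weight += (self.B @ self.A) * self.scale
            return self.base

Kept separate, the adapter adds two small matmuls per forward pass but can be swapped out or stacked; merge() folds the low-rank update into the base weight, so you get the original inference speed at the cost of baking the adapter in.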
What is such a thing good for?
Sounds like a good candidate for an mcp tool!