Fun anecdote: a couple of years ago I started writing a Kafka alternative in C++ with a friend. I got pretty far, but abandoned the project.
We called it `tuberculosis`, or `tube` for short; of course, that is what killed Kafka.
"Consumption" works too :)
Assuming topics are consumed in your version, a la Kafka.
Imagine talking to your clients about tech stacks and "we're running tuberculosis" comes up... while people are dying from it.
You just say "well, the alternative was Kafka" and they'd surely get it. Or not. Either way we imagined it to be hilarious.
t10s, pronounced "tíos", or a stuttering "t- tents" depending on your geo. :-D
For Rust-based Kafka alternatives, I like Tansu[1]. It at least provides Kafka API parity, and critically also gives users a pluggable backend (embedded SQLite, S3 for low-cost diskless-type workloads, and Postgres, because just use Postgres).
It’s nice to try to out-innovate Kafka, but I fear the network effect can’t be beaten unless the alternative is 10x better.
Something like WarpStream’s architecture[2] had a shot at dethroning Kafka, but critically even they adopted the Kafka API. Sure enough, Apache Kafka introduced a competing feature[3] within two years of WarpStream’s launch too.
[1] - https://github.com/tansu-io/tansu [2] - https://www.warpstream.com/ [3] - https://topicpartition.io/blog/kip-1150-diskless-topics-in-a...
Walrus isn’t trying to replace Kafka, but it does beat Kafka in a few narrow areas. It’s a lightweight Rust-based distributed log with a fast WAL engine and modern I/O (io_uring), so the operational overhead is much lower than running a full Kafka stack. If you just want a simple, fast log without JVM tuning, controllers, or the entire Kafka ecosystem, Walrus is a lot easier to run. Kafka still wins on ecosystem, connectors, and massive scale, but Walrus is appealing for teams that want the core idea without the complexity. Really impressed by the direction here, great work!
There's also Iggy https://github.com/apache/iggy
Never tried it, but looks promising
Thank you for the mention! BTW, we're currently working on VSR (Viewstamped Replication) to provide the proper clustering :)
Looks like it has a solid number of contributors. Exciting! Some other attempts, like Fluvio, seem to have lost momentum.
iggy is amazing
As someone who worked for ~8 months on a hobby-level Rust-based Kafka alternative that used Raft for metadata coordination: nice work!
It wasn't immediately clear to me whether data-plane replication also happens through Raft or through something home-rolled. Getting consistency and reliability right with a home-rolled protocol is challenging.
Notes:
- Would love to see it in an S3-backed mode, either entirely diskless like WarpStream or as tiered storage.
- Love the simplified API. If possible, adding a Kafka compatible API interface is probably worth it to connect to the broader ecosystem.
Best of luck!
Hi, creator here. I think an S3-backed storage mode is a good idea; it's kinda tricky to do for the 'active' block we're currently writing to, but totally doable for historical data.
Also, about the Kafka API: I tried to implement that earlier with a sort of `translation` layer, but it gets pretty complicated to maintain because Kafka is offset-based while Walrus is message-based.
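For what it's worth, here's a toy sketch of why that translation layer gets fiddly (all names are invented for illustration, not Walrus's actual API): Kafka clients address records by dense, monotonically increasing per-partition offsets, so a shim over a message-ID-based log has to maintain an offset-to-message-ID index and keep it consistent with the real log forever:

```python
# Toy illustration: bridging Kafka's dense per-partition offsets to a
# message-ID-based log. Names are hypothetical, not Walrus's API.

class OffsetShim:
    """Maps Kafka-style offsets (0, 1, 2, ...) to opaque message IDs."""

    def __init__(self):
        self._offset_to_id = []   # list index == Kafka offset
        self._log = {}            # message ID -> payload (stand-in for the real log)

    def append(self, msg_id, payload):
        # The underlying log hands back an opaque ID; the shim must assign
        # the next dense offset and remember the mapping (until retention
        # deletes the prefix).
        self._log[msg_id] = payload
        self._offset_to_id.append(msg_id)
        return len(self._offset_to_id) - 1  # the Kafka-visible offset

    def fetch(self, offset):
        # Kafka consumers seek by offset; the shim must translate.
        msg_id = self._offset_to_id[offset]
        return self._log[msg_id]

shim = OffsetShim()
o0 = shim.append("msg-7f3a", b"hello")  # offset 0
o1 = shim.append("msg-91c2", b"world")  # offset 1
```

Retention, compaction, and replication all have to keep that index in sync with the underlying log, which is presumably where the maintenance burden piles up.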
TBH I don't think anyone can utilise S3 for the active segment. I didn't dig into WarpStream too much, but I vaguely recall they only offloaded to S3 once the segment was rolled.
The Developer Voices interview where Kris Jenkins talks to Ryan Worl is one of the best, and goes into a surprising amount of detail: https://www.youtube.com/watch?v=xgzmxe6cj6A
tl;dr: they write to S3 once every 250ms to save costs. IIRC, they contend that when you keep things organized by writing to different files for each topic, it's the Linux disk cache being clever that turns the tangle of disk-block arrangement into a clean per-file view. They wrote their own version of that, so they can cheaply checkpoint heavily interleaved chunks of data while their in-memory cache provides a clean per-topic view. I think maybe they clean up later asynchronously, but my memory fails me.
I don't know how BufStream works.
The thing that really stuck with me from that interview is the 10x cost reduction you can get if you're willing and able to tolerate higher latency and increased complexity and use S3. Apparently they implemented that inside Datadog ("Labrador" I think?), and then did it again with WarpStream.
I highly recommend the whole episode (and the whole podcast, really).
S3 charges per 1,000 PUT requests; not sure how it's sustainable to do that every 250ms, TBH, especially in multi-tenant mode where you can have thousands of 'active' blocks being written to.
Guess it beats doing it every 250ms for every topic…
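Back-of-the-envelope on that, assuming S3 standard pricing of $0.005 per 1,000 PUT requests (an assumption; check current pricing) and a hypothetical 5,000 active blocks:

```python
# One combined flush every 250ms = 4 PUTs/second per agent.
put_price = 0.005 / 1000          # USD per PUT (assumed S3 standard tier)
puts_per_day = 4 * 60 * 60 * 24   # flushing every 250ms, all day

cost_per_agent_per_day = puts_per_day * put_price
print(round(cost_per_agent_per_day, 3))   # -> 1.728 USD/day per agent

# Versus one PUT per active block every 250ms with, say, 5,000 blocks:
naive_per_day = 5000 * puts_per_day * put_price
print(round(naive_per_day))               # -> 8640 USD/day
```

Which is presumably exactly why batching all topics into one object per interval matters so much for the economics.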
It says so on the GitHub page, so I guess that's a "yes" to Raft?
GP asked about data-plane consensus, not metadata/control plane.
They asked about data plane replication - e.g., leader -> followers. Unless I misunderstood them.
Why. Just why. Rewrite for sake of rewrite. I'm clapping. It's enough.
I never understood the popularity of Kafka. It's just a queue with persistent storage (i.e. not an in-memory queue with RAM-limited capacity), after all.
A queue with persistent storage is like a ledger whose entries don't vanish when you read them, or a git branch whose commits stick around for longer than 24-72 hours.
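The practical consequence of that ledger model (a minimal sketch, not Kafka's actual API): consuming doesn't destroy the record, so any number of readers can replay the same log at their own pace, each tracking only its own cursor:

```python
# Minimal sketch of a persistent log: reads advance a per-consumer
# cursor but never remove data, unlike a classic destructive queue.

log = []        # the durable, append-only record of events
cursors = {}    # consumer group -> next offset to read

def produce(event):
    log.append(event)

def consume(group):
    pos = cursors.get(group, 0)
    batch = log[pos:]
    cursors[group] = len(log)
    return batch

produce("signup:alice")
produce("signup:bob")

first = consume("billing")     # billing sees both events
second = consume("analytics")  # analytics independently sees both too
```

That independence of readers (plus replay from any offset) is most of what a bare queue can't give you.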
It's popular because it didn't have any competition while it built up its ecosystem. And even though there are competitors now, I haven't had time to check them out, and they still brand themselves as "Kafka alternatives".
Most of the other ones at the time: you popped, and the data was gone. You had to jump through some hoops to make them work as persistent. Not 'hard', just more annoying. Kafka has that out of the box. Where Kafka starts to come apart is setup; its configuration is a bit tedious.
For a Kafka alternative written in C++ there's Redpanda [1],[2].
Redpanda claims better performance, but benchmarks showed no clear winner [3].
It will be interesting to test them together on the performance benchmarks.
I've got the feeling it's not down to the implementation language: Scala/Java (Kafka), C++ (Redpanda), or Rust (Walrus).
It's the very architecture of Kafka itself, due to the notorious head-of-line problem (check the topmost comments [4]).
[1] Redpanda – A Kafka-compatible streaming platform for mission-critical workloads (120 comments):
https://news.ycombinator.com/item?id=25075739
[2] Redpanda website:
https://www.redpanda.com/
[3] Kafka vs. Redpanda performance – do the claims add up? (141 comments):
https://news.ycombinator.com/item?id=35949771
[4] What If We Could Rebuild Kafka from Scratch? (220 comments):
https://news.ycombinator.com/item?id=43790420
In the current benchmarks I only have Kafka and the RocksDB WAL; I'll surely try to add Redpanda there as well. Curious how Walrus would hold up against Seastar-based systems.
I don't see any mention of p99 latency in the benchmark results. Pushing gigabytes per second is not that difficult on modern hardware; doing so with reasonable latency is what's challenging. Also, instead of using custom benchmarks it's better to just use OMB (the OpenMessaging Benchmark).
> It's the very architecture of Kafka itself due to the notorious head of line problem
Except a consumer can discard an unprocessable record? I'm not certain I understand how HOL applies to Kafka, but keen to learn more :)
> Except a consumer can discard an unprocessable record?
It's not the unprocessable records that are the problem; it's the records that are very slow to process (for whatever reason).
Or it’s I/O-bound.
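To make the head-of-line point concrete (a simplified model, not real consumer code): a Kafka partition is consumed strictly in order, so one slow record delays everything queued behind it in that partition, even if the later records would be quick:

```python
# Simplified model of per-partition head-of-line blocking: records are
# processed strictly in order, so one slow record delays all later ones.

def completion_times(processing_times):
    """When each record finishes, given strictly in-order processing."""
    finished, t = [], 0.0
    for cost in processing_times:
        t += cost
        finished.append(t)
    return finished

# Record 0 takes 10s (a slow external call, say); the rest take 10ms each.
times = completion_times([10.0, 0.01, 0.01, 0.01])
```

Skipping an offset sidesteps an unprocessable record, as you say, but a slow one still stalls the whole partition unless you parallelize within it, at the cost of ordering.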
coo coo ca choo
We need a Rust alternative not written in Rust.
Nice! How does it compare to Redpanda, NATS, etc?