Show HN: Omnara – Run Claude Code from anywhere
Hey y'all, Ishaan and Kartik here. We're building Omnara (https://omnara.com/), an "agent command center" that lets you launch and control Claude Code from anywhere (terminal, web, or mobile) and easily switch between them.
Run 'pip install omnara && omnara', and you'll have a regular Claude Code session. But you can continue that same session from our web dashboard (https://omnara.com/) or mobile app (https://apps.apple.com/us/app/omnara-ai-command-center/id674...).
Check out a demo here: https://www.loom.com/share/03d30efcf8e44035af03cbfebf840c73.
Before Omnara, we felt stuck watching Claude Code think and write code, waiting 5-10 minutes just to provide input when needed. Now with Omnara, I can start a Claude Code session, and if I need to leave my laptop, I can respond from my phone anywhere. Some places I've coded from include my bed, on a walk, in an Uber, while doing laundry, and even on the toilet.
There are many new Claude Code wrappers (e.g., Crystal, Conductor), but none keep the native Claude Code terminal experience while allowing interaction outside the terminal, especially on mobile. On the other hand, tools like Vibetunnel or Termius replicate the terminal experience but lack push notifications, clean UIs for answering questions or viewing git diffs, and easy setup.
We wanted our integration to fully mirror the native Claude Code experience, including terminal output, permissions, notifications, and mode switching. The Claude Code SDK and hooks don't support all of this, so we made a CLI wrapper that parses the session file at ~/.claude/projects and the terminal output to capture user and agent messages. We send these messages to our platform, where they're displayed in the web and mobile apps in real time via SSE. Our CLI wrapper monitors for input from both the Omnara platform and the Claude Code CLI, continuing execution when the user responds from either location. Our entire backend is open source: https://github.com/omnara-ai/omnara.
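The session-file monitoring described above can be pictured with a small sketch. This is an illustrative approximation, not Omnara's actual code: it assumes the session file is append-only JSONL with a "type" field (an assumption for illustration), and incrementally reads whatever was appended since the last poll.

```python
import json

# Hypothetical sketch: incrementally read new events from a Claude Code
# session file (JSONL, e.g. under ~/.claude/projects). The "type" field
# and values used here are illustrative, not the actual schema.
def read_new_events(session_path, offset=0):
    """Return (events, new_offset) for lines appended since `offset`."""
    events = []
    with open(session_path, "rb") as f:
        f.seek(offset)
        for raw in f:
            if not raw.endswith(b"\n"):
                break  # partial write in progress; re-read on next poll
            offset += len(raw)
            try:
                event = json.loads(raw)
            except json.JSONDecodeError:
                continue  # skip malformed lines
            if event.get("type") in ("user", "assistant"):
                events.append(event)
    return events, offset
```

A wrapper could poll this in a loop and push each new event to the backend, which would then fan it out to web and mobile clients over SSE.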
Omnara isn't just for Claude Code. It's a general framework for any AI agent to send messages and push notifications to humans when they need input. For example, I've been using it as a human-in-the-loop node in n8n workflows for replying to emails. But every Claude Code user we show it to gets excited about that application specifically, so that's why we're launching it first :)
Omnara is free for up to 10 agent sessions per month, then $9/month for unlimited sessions. Looking forward to your feedback and hearing your thoughts and comments!
This is pretty cool and feels like we're heading in the right direction. The whole idea of being able to hop between devices while Claude Code is thinking through problems is neat, but honestly what excites me more is the broader pattern here: we're moving toward a world where coding isn't really about sitting down and grinding out syntax for hours; it's becoming more about organizing tasks and letting AI agents figure out the implementation details.
I can already see how this evolves into something where you're basically managing a team of specialized agents rather than doing the actual coding. You set up some high-level goals, maybe break them down into chunks, and then different agents pick up different pieces and coordinate with each other. The human becomes more like a project manager, making decisions when the agents get stuck or need direction. Imho tools like Omnara are just the first step toward that: right now it's one agent that needs your input occasionally, but eventually it'll probably be orchestrating multiple agents working in parallel. Way better than sitting there watching progress bars for 10 minutes.
Exactly! My ideal vision for the future is that agents will be doing all grunt work/implementation, and we'll just be guiding them.
Can't wait til I'm coding on the beach (by managing a team of agents that notify me when they need me), but it might take a few more model releases before we get there lol
If you think you could do that on the beach, couldn't you do traditional software dev on the beach?
I actually think there's a chance it will shift away from that because it will shift the emphasis to fast feedback loops which means you are spending more of your time interacting with stakeholders, gathering feedback etc. Manual coding is more the sort of task you can do for hours on end without interruption ("at the beach").
> which means you are spending more of your time interacting with stakeholders, gathering feedback etc.
Jesus Christ, I really need to speed up development of my product. If this shifts to more meetings at my wageslave job, I'm going to kill myself.
How nice: you've just hung up with a demanding stakeholder who knows you can deliver a lot "instantly", you switch to your phone, and your "agents" are stuck on some weird stuff they cannot debug.
That must be a nice situation on the beach.
What happens is the status quo changes. Like what happened with Dev/Ops. If you find yourself with the time to lead agents on a beach retreat you might find yourself pulled into more product design / management meetings instead. AI/Dev like DevOps. Wearing more hats as a result. Maybe I'm wrong though.
Someone in leadership is also thinking about how they can lower headcount by removing the agent master.
I did exactly that all this summer at the beach with Claude code. Future is already here!
Seems like your vision is to let AI take over your livelihood. That’s an unusually chipper way to hand over the keys unless you have a lifetime of wealth stashed away.
There is enormous money and effort in making AI that can do that, so if it's possible it is eventually going to happen. The only question is whether you're part of the group making the replacement or the group being replaced.
It depends on what their livelihood is.
If their livelihood is solving difficult problems, and writing code is just the implementation detail they gotta deal with, then this isn't gonna do much to threaten their livelihood. Like, I am not aware of any serious SWE (who actually designs complex systems and implements them) being genuinely worried about their livelihood after trying out AI agents. If anything, that makes them feel more excited about their work.
But if someone’s just purely codemonkeying trivial stuff for their livelihood, then yeah, they should feel threatened. I have a feeling that this isn’t what the grandparent comment user does for a living tho.
What will you have to offer when coding is so easy at that point?
I still think that human taste is important even if agents become really good at implementing everything and everyone's just an idea guy. Counter argument: if agents do become really good at implementation, then I'm not sure if even human taste would matter if agents could brute force every possibility and launch it into the market.
Maybe I'll just call it a day and chill with the fam
> it's becoming more about organizing tasks and letting ai agents figure out the implementation details ... different agents pick up different pieces and coordinate with each other
This is exactly what I have been working on for the past year and a half. A system for managing agents where you get to work at a higher abstraction level, explaining (literally with your voice) the concepts & providing feedback. All the agent-agent-human communication is on a shared markdown tree.
I haven't posted it anywhere yet, but your comment just describes the vision too well; I guess it's time to start sharing it :D See https://voicetree.io for a demo video. I have been using it every day for engineering work, and it really is feeling like how you describe: my job is now more about organizing tasks, explaining them well, and providing critique, but just through talking to the computer. For example, when going through the git diffs of what the agents wrote, I will speak out loud any problems I notice, resulting in voice -> text -> markdown tree updates, and these send hook notifications to Claude Code so it automatically addresses the feedback.
Cool demo! The first thing that sprung to mind after seeing it, was an image of a busy office floor filled with people talking into their headsets, not selling or buying stocks, but actually programming. If it’s a blessed or cursed image I’ll let you decide.
Haha, blursed one might say. In seriousness though, the social avoidance of wanting to talk to a computer around others will likely be the largest bottleneck to adoption for this sort of tech. May need to initially frame it as for work from home engineers.
Luckily the other side of this project doesn't require any user behavioural changes. The idea is to convert chat histories into a tree format with the same core algorithm, and then send only the relevant sub-tree to the LLM, reducing input tokens and context bloat, thereby also improving accuracy. This would then also unlock almost infinite-length LLM chats. I have been running this LLM context retrieval algo against a few benchmarks (GSM-Infinite, NoLiMa, and LongBench-v2); the early results are very promising, ~60-90% reduced tokens and increased accuracy against SOTA, however only on a subset of the full benchmark datasets so far.
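For intuition, here's a toy sketch of the sub-tree idea. This is my own illustrative framing, not VoiceTree's actual algorithm: score each node of the tree by word overlap with the query, then send only the best-matching node plus its ancestors, so the prompt stays small.

```python
# Toy sketch (not VoiceTree's real retrieval algorithm): pick the most
# query-relevant node in a markdown-like tree and return the path from
# the root down to it as the LLM context.
class Node:
    def __init__(self, text, children=None):
        self.text = text
        self.children = children or []

def best_path(node, query_words, path=()):
    """Return (score, path) for the most relevant node in the tree."""
    words = set(node.text.lower().split())
    best = (len(words & query_words), path + (node,))
    for child in node.children:
        cand = best_path(child, query_words, path + (node,))
        if cand[0] > best[0]:
            best = cand
    return best

def relevant_context(root, query):
    _, path = best_path(root, set(query.lower().split()))
    # Ancestors give high-level context; the leaf gives the specifics.
    return "\n".join(n.text for n in path)
```

The appeal is that irrelevant branches (other projects, stale discussions) never enter the prompt at all, which is where the token reduction would come from.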
completed the form
> moving toward a world where coding isn't really about sitting down and grinding out syntax
Love the idea of "coding" while walking/running outside. For me those outside activities help me clear my mind and think about tough problems or higher level stuff. The thought of directing agents to help persist and refine fleeting thoughts/ideas/insights, flesh out design/code, etc is intriguing
I do a bit of that now, I'll mostly use Claude code at home, and set Jules on some tasks from my phone while exercising. Reviewing code is tedious though, and I don't see it getting too much better.
On the code review part, that's also because we are using languages designed for humans. Once we design programming languages for the LLM, we can design them in such a way that code review by both humans and AI is easy.
Same with project organization: if you organize the project for LLM efficiency instead of human efficiency, then you simplify some of the parts that the LLM has issues with.
Yeah exactly, this is awesome, I’ve always wondered while waiting for AI operations to complete why I’m “tied” to my machine and can’t just shut my laptop while it worked and see what it’d done later. This is so cool
But why should it take time at all? Newer developer tooling (especially some of the Rust tools, e.g. uv) is lightning fast.
Wouldn't it be better if you asked for it and rather than having to manage workers it was just... Done
Yes it would be good if we lived in a world where ai magically knew exactly what we wanted even before we did and implemented everything perfectly first time in a way we’d have no issues with or tweaks we’d like it to make ever. I agree.
One big question I have, in the era of Claude Code (and advancements yet to come) — is why should a hacker submit to using tools behind a SaaS offering … when one can just roll their own tools? I may be mistaken, but I don’t think there is any sort of moat here.
Truly — this is an excellent and accessible idea (bravo!), but if I can whittle away at a free and open source version, why should I ever consider paying for this?
This is exactly what I thought when picking customer support software last month. After hiring my first support person and being unable to decide between Intercom/Front/HelpScout/Zendesk, I finally just vibe coded my own helpdesk in a few days with just the features I needed - perfectly integrated into my SaaS, and best of all, free.
Doesn't the vibe coded solution just mean you need to spend time maintaining that code that isn't your core business? Unless a bespoke customer support is crucial to it?
Yes, but the cost of building and maintaining code has gone down so fast that it might actually be worth it. Plus, we get bespoke features that we would never get otherwise. And you have to spend developer time maintaining a good integration with an external product anyway.
I’d love to hear more about how this works. It’s whatever you built, integrated with your email stack? Because I’m super SaaS’d out.
Yes it's just a wrapper on top of Gmail with Inbox Zero philosophy (each email is a support ticket). I only needed 3 features for my helpdesk:
1. an AI email drafter which used my product docs and email templates as context (eventually I plan to add "tools" where the AI can lookup info in our database)
2. a simple email client with a sidebar with customer contextual info (their billing plan, etc.) and a few simple buttons for the operator to take actions on their account
3. A few basic team collaboration features, notes, assigning tickets to operators, escalating tickets...
It took about 2 days to build the initial version, and about 2 weeks to iron out a number of annoying AI slop bugs in the beginning. But after a month of use it's now pretty stable, my customer support hire is using it and she's happy.
Very cool, thanks. Somewhat related I vibed up a simple docs/support chat bot that uses the markdown files for an astro starlight docs (all of these similar chat bot tools are like $50 a month): https://star-support-demo.vercel.app/en/getting-started
repo: https://github.com/agoodway/star-support-demo
Not the op, but I think about that. Here's what I came to, for the moment:
* LLMs are lousy at bugs
* Apps are a bit like making a baby. Fun in the moment, but a lifetime support commitment
* Supporting software isn't fun, even with an LLM. Burnout is common in open source.
* At the end of the day, it is still a lot of work, even guiding an LLM
* Anything hosted is a chore. Uptime, monitoring, patching, backing up, upgrading, security, legal, compliance, vulnerabilities
I think we'll see github littered with buggy, unsupported, vibe coded one-offs for every conceivable purpose. Now, though, you literally have no idea what you're looking at or if it is decent.
Claude made four different message passing implementations in my vibe coded app. I realized this once it was trying to modify the wrong one during a fix. In other words, Claude was falling over trying to support what it made, and only a dev could bail it out. I am perfectly capable of coding this myself, but you have two choices at the moment: invest the labor, or get crap. But then we come to "maybe I should just pay for this instead of burning my time and tokens."
In regards to the duplication of code — yes I’ve found this to be a tremendous problem.
One technique which appears to combat this is to do “red team / blue team Claude”
Red team Claude is hypercritical and tries to find weaknesses in the code. Blue team Claude is your partner, who you collaborate with to set up PRs.
While this has definitely been helpful for me finding “issues” that blue team Claude will lie to you about — hallucinations are still a bit of an issue. I mostly put red team Claude into ultrathink + task mode to improve the veracity of its critiques.
Yeah exactly.
I’ve been using Tailscale ssh to a raspberry pi.
With Termix on iOS.
I can do all the same stuff on my own. Termix is awesome (I’m not affiliated)
Also see solutions which don’t require a central server like Vibetunnel.
+1 for vibetunnel
similar: blink + tailscale + zellij + devcontainers
smithclay is being polite because this is someone else’s thread, but he wrote this (which I’m literally playing with right now): https://clay.fyi/blog/iphone-claude-code-context-coding/
this chain of replies reminds me of the famous HN comment about Dropbox - a good sign for Omnara!
And relying on (another) 3rd party provider that indirectly has access to your code….
I do not know how it is implemented, but if I can press ‘continue’ from my phone, someone else could enter other commands… like exporting the database…
Thanks! I think the main reason to pay right now would be for convenience. A user wouldn't have to worry about hosting their own frontend/backend and building their own mobile app. And eventually, we want to have different agent providers host their agents for use on our platform, but that's further out.
Correct - but if this is such a game changer in development speed, and the market is already validated that this kind of platform is useful, then step 1 is to build enough of a clone of the platform to start iterating with it and then ... TO THE MOON! It's entirely a having-the-best-vision moat, which is a moat, but one that's principally protected by trademark lawsuits.
Because then you don’t have to whittle away, and you’re free to blame someone else if anything goes wrong.
Maybe that is more for a general engineer than a Hacker though - hacker to me implies some sort of joy in doing it yourself rather than optimizing.
I like to be able to tweak things to my liking, and this typically leads me to make my own versions of things.
Probably a bad habit.
The answer here might be: "you're not our market" (which is totally fine! but slightly confusing, because presumably people _using agents like Claude Code_ are ... more advanced than the uninitiated)
Yeah, I would say that most Claude Code users are pretty technical, but I was surprised to see that there's a decent number of non-technical users using Claude Code to completely vibe code applications for their personal use. Those users seem to love tools like Codex (the OpenAI cloud UI one, not the CLI) and things like Omnara, where there's no setup.
Makes sense! Thanks for discussing.
I mean, you could say this about almost literally any software product ever to be honest. Feel free I guess? People like to pay for convenience and support so they don’t have to build everything themselves.
https://news.ycombinator.com/item?id=9224
This doesn't contribute to the conversation ... without further elaboration on what your point is, I'm assuming that you're pointing out that my question is analogous to previous (good to ask!) questions about market and user model for an "eventually very big" application.
Not very enlightening: just because Dropbox became big in one environment, doesn't mean the same questions aren't important in new spaces.
Well, this is a classic here at HN.
So every time someone comes around with a sentence like 'but if I can whittle away at a free and open source version, why should I ever consider paying for this?', the answer will be that Dropbox thread ;-)
Following up on this offtopic thread: I wonder if there was ever another case of the Dropbox-thread effect on HN? I don’t recall any other cases…
When you let Claude run free over changes big enough to have this thing be meaningful, are you really getting good enough code?
When I just set Claude loose for long periods, I get incomprehensible, brittle code.
I don't do maintenance so maybe that's the difference but I have not had good results from big, unsupervised changes.
Subagents have changed this for me. I routinely run half hour to hour long tasks with no human intervention, and actually get good results at the end (most of the time, not all of the time).
The reason isn’t that AI models have gotten better, although they clearly have, but that using subagents (1) keeps context clear of false starts and errors that otherwise poison the AI’s view of the project, and (2) by throwing in directives to run subagents that keep the main agent aligned (e.g. code review agents), it gets nudged back on course a surprisingly high percentage of the time.
Would you elaborate a bit on how you use subagents? I tend to use them sporadically, for example for it to research something or to analyse the code base a bit. But I'm not yet letting it run for long.
Sure. First of all, although I do spend a lot of time interacting with Claude Code in chat format, that is not what I am talking about here. I have set up Claude Code with very specific instructions for use of agents, which I'll get to in a second.
First of all, there's a lot of collections of subagent definitions out there. I rolled my own, then later found others that worked better. I'm currently using this curated collection: https://github.com/VoltAgent/awesome-claude-code-subagents
CLAUDE.md has instructions to list `.agents/agents/**/*.md` to find the available agents, and knows to check the frontmatter yaml for a one-line description of what each does. These agents are really just (1) role definitions that prompt the LLM to bias its thinking in a particular way ("You are a senior Rust engineer with deep expertise in ..." -- this actually works really well), and (2) a bunch of rules and guidelines for that role, e.g. in the Rust case to use the thiserror and strum crates to avoid boilerplate in Error enums, rules for how to satisfy the linter, etc. Basic project guidelines as they relate to Rust dev.
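As a concrete (made-up) illustration of the format described above, an agent definition file might look something like this, with the one-line description in the yaml frontmatter and the role definition plus rules in the body:

```markdown
---
name: rust-engineer
description: Senior Rust engineer for implementation work in this repo.
---

You are a senior Rust engineer with deep expertise in systems programming.

- Use the thiserror and strum crates to avoid boilerplate in Error enums.
- Never use `unsafe` blocks or `#[allow(...)]` directives to silence the linter.
- Fix all linter warnings before reporting completion.
```

The exact fields and rules here are invented for illustration; the real collections linked above have their own conventions.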
Secondly, my CLAUDE.md for the project has very specific instructions about how the top-level agent should operate, with callouts to specific procedure files to follow. These live in `.agent/action/**/*.md`. For example, I have a git-commit.md protocol definition file, and instructions in CLAUDE.md that "when the user prompts with 'commit' or 'git commit', load git-commit action and follow the directions contained within precisely." Within git-commit.md, there is a clear workflow specification in text or pseudocode. The [text] is my in-line comments to you and not in the original file:
""" You are tasked with committing the currently staged changes to the currently active branch of this git repository. You are not authorized to make any changes beyond what has already been staged for commit. You are to follow these procedures exactly.
1. Check that the output of `git diff --staged` is not empty. If it is empty, report to the user that there are no currently staged changes and await further instructions from the user.
2. Stash any unstaged changes, so that the worktree only contains the changes that are to be committed.
3. Run `./check.sh` [a bash script that runs the full CI test suite locally] and verify that no warnings or errors are generated with just the currently staged changes applied.
- If the check script doesn't pass, summarize the errors and ask the user if they wish to launch the rust-engineer agent to fix these issues. Then follow the directions given by the user.
4. Run `git diff --staged | cat` and summarize the changes in a git commit message written in the style of the Linux kernel mailing list [I find this to be much better than Claude's default commit message summaries].
5. Display the output of `git diff --staged --stat` and your suggested git commit message to the user and await feedback. For each response by the user, address any concerns brought up and then generate a new commit message, as needed or instructed, and explicitly ask again for further feedback or confirmation to continue.
6. Only when the user has explicitly given permission to proceed with the commit, without any accompanying actionable feedback, should you proceed to making the commit. Execute `git commit` with the exact text for the commit message that the user approved.
7. Unstash the non-staged changes that were previously stashed in step 2.
8. Report completion to the user.
You are not authorized to deviate from these instructions in any way. """
This one doesn't employ subagents very much, and it is implicitly interactive, but it is smaller and easier to explain. It is, essentially, a call center script for the main agent to follow. In my experience, it does a very good job of following these instructions. This particular one addresses a pet peeve of mine: I hate the auto-commit anti-feature of basically all coding assistants. I'm old-school and want a nice, cleanly curated git history with comprehensible commits that take some refining to get right. It's not just OCD -- my workflow involves being able to git bisect effectively to find bugs, which requires a good git history.
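The mechanical part of that protocol (steps 1-3 and 7) is essentially the "testing partial commits" pattern from the git-stash man page: stash everything except the index, run the checks, then restore. A rough Python sketch of just that part, assuming a local `./check.sh` CI script like the one in the protocol:

```python
import subprocess

def run(*args):
    """Run a command, capturing output instead of raising."""
    return subprocess.run(args, capture_output=True, text=True)

# Sketch of steps 1-3 and 7 of the protocol above, as a deterministic
# script rather than an LLM prompt. ./check.sh is assumed to be a local
# script that runs the CI test suite and exits nonzero on failure.
def staged_check():
    # Step 1: bail out if nothing is staged.
    if run("git", "diff", "--staged", "--quiet").returncode == 0:
        return "nothing staged"
    # Step 2: stash unstaged changes, keeping the index in the worktree.
    run("git", "stash", "push", "--keep-index", "-m", "pre-commit-check")
    try:
        # Step 3: run the full check suite against only the staged state.
        ok = run("./check.sh").returncode == 0
    finally:
        # Step 7: restore the previously stashed unstaged changes.
        run("git", "stash", "pop")
    return "checks passed" if ok else "checks failed"
```

The interactive parts (commit-message drafting, user approval) are exactly what the LLM adds on top of this skeleton.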
...continued in part 2
...
I also have a task.md workflow that I'm actively iterating on, and is the one that I get it working autonomously for a half hour to an hour and am often surprised at finding very good results (but sometimes very terrible results) at the end of it. I'm not going to release this one because, frankly, I'm starting to realize there might be a product around this and I may move on that (although this is already a crowded space). But I don't mind outlining in broad strokes how it works (hand-summarized, very briefly):
""" You are a senior software engineer in a leadership role, directing junior engineers and research specialists (your subagents) to perform the task specified by the user.
1. If PLAN.md exists, read its contents and skip to step 4.
2. Without making any tool calls, consider the task as given and extrapolate the underlying intent of the user. [A bunch of rules and conditions related to this first part -- clarify the intent of the user without polluting the context window too much]
3. Call the software-architect agent with the reformulated user prompt, and with clear instructions to investigate how the request would be implemented on the current code base. The agent is to fill its context window with the portions of the codebase and developer documentation in this repo relevant to its task. It should then generate and report a plan of action. [Elided steps involving iterating on that plan of action with the user, and various subagents to call out to in order to make sure the plan is appropriately sequenced in terms of dependent parts, chunked into small development steps, etc. The plan of action is saved in PLAN.md in the root of the repository.]
4. While there are unfinished todos in the PLAN.md document, repeat the following steps:
a) Call rust-engineer to implement the next todo and/or verify completion of the todo.
b) Call each of the following agents with instructions to focus on the current changes in the workspace. If any actionable items are found in the generated report that are within the scope of the requested task, call rust-engineer to address these items and then repeat:
- rust-nit-checker [checks for things I find Claude gets consistently wrong in Rust code]
- test-completeness-checker [checks for missing edge cases or functionality not tested]
- code-smell-checker [a variant of the software architect agent that reports when things are generally sus]
- [... a handful of other custom agents; I'm constantly adjusting this list]
- dirty-file-checker [reports any test files or other files accidentally left and visible to git]
c) Repeat from step a until you run through the entire list of agents without any actionable, in-scope issues identified in any of the reports & rust-engineer still reports the task as fully implemented.
d) Run git-commit-auto agent [A variation of the earlier git commit script that is non-interactive.]
e) Mark the current todo as done in PLAN.md
5. If there are any unfinished todos in PLAN.md, return to step 4. Otherwise, call the software-architect agent with the original task description as approved by the user, and request it to assess whether the task is complete and, if not, to generate a new PLAN.md document.
6. If a new PLAN.md document is generated, return to step 4. Otherwise, report completion to the user. """
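The inner loop of step 4 is a fixed-point iteration: keep alternating implementation and review until no checker reports an actionable issue. A toy sketch of that control flow (my framing, with the Claude Code subagents stubbed out as plain callables):

```python
# Illustrative sketch of the step-4 review loop above. Each "checker"
# stands in for a review subagent (rust-nit-checker, etc.) and returns
# a list of actionable issues; "implement" stands in for rust-engineer.
def run_todo(implement, checkers, max_rounds=10):
    """Iterate implement -> checkers until every checker comes back clean."""
    implement(None)  # initial implementation pass for the current todo
    for _ in range(max_rounds):
        issues = [i for check in checkers for i in check()]
        if not issues:
            return "done"  # all reports clean: commit and mark todo done
        implement(issues)  # address the reported issues, then re-review
    return "gave up"  # safety valve the real workflow handles by hand
```

The "eventual completeness" assumption mentioned below is exactly the claim that this loop usually reaches the clean state rather than the round limit.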
That's my current task workflow, albeit with a number of items and agent definitions elided. I have lots of ideas for expanding it further, but I'm basically taking an iterative and incremental approach: every time Claude fumbles the ball in an embarrassing way (which does happen!), I add or tweak a rule to avoid that outcome. There are a couple of key points:
1) Using Rust is a superpower. With guidance to the agent about what crates to use, and with very strict linting tools and code checking subagents (e.g. no unsafe code blocks, no #[allow(...)] directives to override the linter, an entire subagent dedicated to finding and calling out string-based typing and error handling, etc.) this process produces good code that largely works and does what it was requested to do. You don't have to load the whole project in context to avoid pointer or use-after-free issues, and other things that cause vibe coded projects to fail at a certain complexity. I don't see this working in a dynamic language, for example, even though LLMs are honestly not as good at Rust as they are at more prominent languages.
2) The key part of the task workflow is the long list of analysts to run against the changes, and the assumption, which works well in practice, that you can just keep iterating and fixing reported issues (with some of the elided secret sauce having to do with subagents that evaluate whether an issue is in scope and needs to be fixed or can be safely ignored, and keeping an eye out for deviations from the requested task). This eventual completeness assumption does work pretty well.
3) At some point the main agent's context window gets poisoned, or it reaches the full context window and compacts. Either way this kills any chance of simply continuing. In the first case (poisoning) it loses track of the task and ends up caught in some yak shaving rabbit hole. Usually it's obvious when you check in that this is going on, and I just nuke it and start over. In the latter case (full context window) the auto-compaction also pretty thoroughly destroys workflow but it usually results in the agent asking a variation on "I see you are in the middle of ... What do you want to do next?" before taking any bad action to the repo itself. Clearing the now poisoned context window with "/reset" and then providing just "task: continue" gets it back on track. I have a todo item to automate this, but the Claude Code API doesn't make it easy.
4) You have to be very explicit about what can and cannot be done by the main agent. It is trained and fine-tuned to be an interactive, helpful assistant. You are using it to delegate autonomous tasks. That requires explicit and repeated instructions. This is made somewhat easier by the fact that subagents are not given access to the user -- they simply run and generate reports for the calling agent. So I try to pack as much as I can in the subagents and make the main agent's role very well defined and clear. It does mean that you have to manage out of band communication between agents (e.g. the PLAN.md document) to conserve context tokens.
If you try this out, please let me know how it goes :)
yeah that’s a fair experience, we’ve seen similar when leaving Claude unsupervised for too long. The way we use Omnara, it’s more about staying in the loop for those moments when Claude needs clarification or a quick decision, so you can keep it on track without babysitting the terminal the whole time.
For the skeptics: using Claude Code from your phone is kind of great. Think this sort of solution is excellent once you've figured out a good workflow.
Open-sourced my own duct-taped way* of doing this with free/open-source stuff a few weeks ago, recommend you give this kind of Claude on the go workflow a try during your next flight delay / train ride / etc.
*https://github.com/smithclay/claudetainer
If you make iOS apps you can also set up an Xcode Cloud pipeline so the result gets pushed to your phone via TestFlight.
Awesome stuff, mobile coding (imo mobile everything) is definitely the future
yeah especially as models get better
This looks awesome!
I can't seem to authenticate because the flow had a localhost URL in the address when I was first authenticating. Now my work laptop has blocked the site completely (can't sign in or do anything at all), apparently flagging the rapid redirects to a site with no certificate from that hand-off or something. Bummer, this computer has some more juice vs mine!
Might be an issue with "Sign In With Apple" if no one else has reported.
This is the link I got
https://omnara.com/cli-auth?callback=http%3A//localhost%3A58...
Same. For all the praise and upvoting, I can’t get it working (yet).
This is neat but I gotta ask - what’s your moat against Anthropic just launching the same thing a week from now?
Codex already works from your phone, I imagine Anthropic is well on its way to ship Claude Code across devices/apps too..
Why do calendar apps and todo list apps make millions even though you have Google Calendar and Apple Reminders?
There are plenty of opportunities for building a good product even if the big platform copies you. For example in this case I can think of an easy differentiator: make it work with other agents and IDEs, not just Claude Code. Plenty of other ways to specialize by adding features not included in vanilla big company products.
Which calendar app makes millions, exactly? If you're talking Calendly, they had a multi-year head start on Google (plus the pandemic boom). Basic calendaring apps don't really make for a VC-backable business.
That said, I don't think that comparison makes sense anyway. The barrier to entry for AI apps is so low these days (and competitors can clone apps with AI so easily) that you can guarantee anything minimally viable will be cloned immediately.
Plus some of these features are quite literally the roadmap of OpenAI/Google/Anthropic. Competing with giants building the exact product they’re actively building rarely works. Anthropic isn’t “copying you” - they’re literally building this.
Doesn’t have to be VC backable. Todoist is the classic bootstrapped todo list app making north of $20 million a year. Fantastical for iOS and a lot of cute calendar apps make very good incomes for lifestyle business.
Sure Anthropic might have this on the roadmap and release next week. But apps like this can literally make hundreds of thousands of dollars in a few weeks — well worth the effort for a few months work I would say.
Todoist is a huge outlier. They had an early mover advantage (it was one of the first mobile todo apps). A few other Todo players also managed to keep an audience, even after Apple and Google rolled their (still half baked in 2025) alternatives - multiple years later.
It’s a very, very different story.
I’m obviously not saying anyone should stop building apps or dismissing this or any other app from being potentially successful. It’s just a fundamentally different scenario than the early days of mobile, particularly for thin LLM wrappers
We're in the early days of LLMs and people are making 6 figures with silly AI wrappers while me and you argue on HN. Go out there and ship :)
That's true. I wish I could make good memes or could sell shock-value crap like "cheat on everything" with a straight face... limiting my potential there
Cool. I'm a vibetunnel user and this looks like a better UI. However, I like that vibetunnel keeps all of my data local. Does this have remote access to my codebase and session? I'm guessing that's hard here because of the notifications? Or do I misunderstand how the data flows?
You're correct; that's one pro for VibeTunnel and mobile SSH clients: they're a direct connection to your machine. For our platform, the messages flow through our server, which enables some use cases like push notifications and easier setup/reliability, but at the cost of the data not being local.
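For the curious, the SSE wire format that real-time delivery like this typically rides on is simple to parse. A minimal sketch, with a hypothetical event name and payload (not Omnara's actual schema):

```python
# Minimal SSE parsing sketch. The wire format (event:/data: lines,
# blank-line delimited) is the SSE standard; the event name and JSON
# payload below are hypothetical, not Omnara's actual schema.
def parse_sse(stream_lines):
    """Yield (event, data) tuples from raw server-sent-event lines."""
    event, data = "message", []
    for raw in stream_lines:
        line = raw.rstrip("\n")
        if line == "":                      # blank line terminates one event
            if data:
                yield event, "\n".join(data)
            event, data = "message", []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())

sample = [
    "event: agent_message\n",
    'data: {"text": "Running tests..."}\n',
    "\n",
]
events = list(parse_sse(sample))
# events == [("agent_message", '{"text": "Running tests..."}')]
```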
In the era of data being gold, it's quite useful to gather CC usage data and even more. What kind of data do you gather and have access to? Are you compliant with any data regulations (GDPR or otherwise)?
This looks super slick! The mobile first coding agent workflow really feels like a fundamental shift in how developers work. It is sort of like Rich Hickey's hammock driven development taken to its ideal form. While you are on the go and have an idea, rather than writing it down in your todo list you can kick off an agent and have a prototype PR waiting for you next time you are at your desk.
Once you start running coding agents async you realize that prototyping becomes much cheaper and it is easier to test out product ideas and land on the right solution to a problem much quicker.
I've been coding like this for the past few months and can't imagine life without being able to invoke a coding agent from anywhere. I got so excited by it we started building https://www.terragonlabs.com so we could do this for any coding agent that crops up.
Cool, but I wonder... is this really a feasible workflow?
The way I use LLMs is, I enter a very specific query, and then I check the output, meticulously reviewing both the visual output and the code before I proceed. Component by component, piece by piece.
Otherwise, if you just "let it rip", I find that errors compound, and what you get isn't reliable, isn't what you intended, increases technical debt, or is just straight up dangerous.
So how can you do this from a smartphone? Don't you need your editor, and a way to run your code to review it carefully, before you can provide further input to Claude Code? Basically, how can you iterate from a phone?
Nice launch. The "pick up the same Claude Code session on your phone" bit resonates. I've wanted that too, but with self-hosting and a few creature comforts.
I hacked a tiny web client around the claude CLI that I use daily:
https://github.com/sunpix/claude-code-web
- works on the go via PWA
- voice input (Whisper), auto-reading of messages via TTS
- drag and drop images with preview/remove
- hotkeys
Ah that's neat, whisper hooked up like that would be handy in the car when I can’t type
> I can start a Claude Code session and if I need to leave my laptop, I can respond from my phone anywhere
I've been looking for this for some time now. This is amazing if it delivers.
Like others (smithclay, sst/opencode) have said about aiming for a similar feature, I had plans to make a mobile app for Talkito[0][1], which primarily adds voice TTS/ASR and WhatsApp/Slack interactions to Claude Code.
This looks like exactly what I was envisioning so congrats on getting out there first! LMK if you want to add voice controls to this.
[0]: https://github.com/robdmac/talkito
[1]: https://talkito.com
This is cool! We've had a bunch of people request voice control. They've used the native keyboard voice control for now, but it's not great at contextual recognition of words (especially technical terms). It's on the roadmap, so I'll reach out when we get that started!
Talkito looks very slick! You added voice, sms, and WhatsApp, which I’ve just been wishing for. I’ll have to give this a shot!
Thank you! The SMS feature is untested, so I don't mention it much. Unlike the others (TTS/ASR/Slack/WhatsApp), I think SMS will require a paid Twilio account; Twilio can't absorb the carriers' network costs for free the way it can with WhatsApp sandboxing. If it's critical for you, I'd be happy to work with you to get that feature working properly!
Also, looking more closely at yours, I notice the Apache 2.0 license. That doesn't prevent a company from taking your work and running it as a SaaS, which seems to be how you yourself want to monetize. For that reason I went with AGPL-3.0, so I recommend looking into it.
Good point, I'll take a look at that
I love the concept (and was going to hack something for myself).
I can’t get through onboarding. The main omnara command just exits complaining of a missing session id and doing “serve” asks you to set up sessions on a page that doesn’t exist?
Nit: why is pipx required if you’re using uv?
Honestly it should be easy to do this kind of thing with a CLI app like Claude Code and yet you have to get pretty involved in interpreting its complex terminal usage. I'm starting to consider the recent trend of very "fancy" CLI utilities that treat the terminal as a kind of GUI as an anti-pattern. I mean, how much easier would it be to write a wrapper around CC if it were just a readline prompt.
Yet these fancy terminal applications have become really trendy. I've been using Crush lately and I quite like it, but it's annoying that I can't copy and paste from the terminal while it's running, and scrolling the buffer doesn't work as expected either; it somehow scrolls some kind of internal "window" instead. Again, making copy & paste annoyingly difficult.
Is anyone making a good agent system that is just "> Output", "$ input" without trying to get crazy with ANSI escape sequences? Some color is nice, but I think that should be the end of it.
I wonder if one can reconstruct the UI on top of the JSON input/output, without losing the agentic smarts that are in Claude Code.
Anyway, that's how software should be written in an ideal world: a protocol to communicate with weakly-connected user interfaces (plural!). For Claude specifically, how exactly does the --ide flag work? That might be useful too.
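On the JSON point: Claude Code's print mode does advertise a machine-readable output format (`claude -p ... --output-format stream-json`), so a thin plain-text client seems feasible. A rough sketch, with the event field names assumed for illustration rather than taken from the docs:

```python
import json

# Sketch of a thin, plain-text client over Claude Code's machine-readable
# mode. The --output-format stream-json flag exists in print mode (check
# `claude --help`), but the event fields below are assumptions.
def render(event):
    """Render one JSON event as a plain '> text' line -- no ANSI escapes."""
    if event.get("type") == "assistant":
        return "> " + event.get("text", "")
    return None

def consume(lines):
    """Parse newline-delimited JSON events and render the assistant ones."""
    out = []
    for line in lines:
        rendered = render(json.loads(line))
        if rendered is not None:
            out.append(rendered)
    return out

# To drive it for real, you'd pipe the CLI in (hypothetical invocation):
#   claude -p "fix the tests" --output-format stream-json | python client.py
sample = ['{"type": "assistant", "text": "Running tests..."}',
          '{"type": "system", "subtype": "init"}']
# consume(sample) -> ['> Running tests...']
```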
Congratulations on the launch!
Main question I have since your backend is open source, is there a way to self host and point the mobile app at our own servers?
Thanks! We're going to make the mobile app and frontend open source too, I just haven't had the time to do it properly yet. Maybe I can email you the source code - if you're interested you can email me at kartik@omnara.com. Otherwise, we'll open source the mobile/frontend in the coming weeks and you can check it out there.
I do really like the idea of going for more walks throughout my work day, and just checking in on Claude with my phone.
Nice one! I thought I needed this and wanted to build something like it a couple of weeks ago, too.
But the more I worked with Claude, the more I felt like *I am the bottleneck*, not the waiting times. Also, waits of more than (really, at most) 5 minutes just don't happen for my features.
I think remote Claude Code is nice if you're starting a completely new app with loads of features that will take a long time, OR for checking pull requests (the remote execution is more important there).
These tools sound like a good idea, but then I try them and they always fall over at the same place: My problem isn't running the agents, I have an SSH terminal that supports tabs on my phone. My problem is QAing and reviewing the code all these agents write, and none of these tools solves that.
Assuming you're using GitHub or similar, make pushing branches and creating PRs part of their prompts, and review on the GitHub app or equivalent? Seem like an orthogonal problem to those LLMs tools.
I can review the code that way, but not the outputs. I could have it write tests and run them, but that's usually fiddly for web apps.
If it's a webapp, have the CI pipeline create a temporary env and deploy the branch to it?
Yeah, that's basically the only solution I can think of, but requires quite a bit of infra work. I guess that's what LLMs are for, huh...
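For what it's worth, the infra can be fairly light if your host supports per-branch previews. A rough GitHub Actions sketch; the build and deploy commands are placeholders for whatever your stack actually uses:

```yaml
# Hypothetical sketch: deploy every PR branch to a throwaway preview env.
name: preview
on:
  pull_request:

jobs:
  deploy-preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder build step -- replace with your stack's commands.
      - run: npm ci && npm run build
      # Placeholder deploy step -- Vercel, Netlify, and Fly all offer
      # one-command per-branch preview deploys via their CLIs.
      - run: npx my-host deploy --branch "$GITHUB_HEAD_REF"
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
```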
Checkout the PRs to your local machine and test it there?
That kind of defeats the purpose of running the agents in the cloud, no?
I'm just using claude-code on termux on my s42 ultra with some mcp tools i built in rust - which thus runs on aarch64-linux-android. Very handy to get rust analyzer, webdriver, github cli etc on your phone, so i can get some small stuff done during commute.
Me too, using Claude Code with Termux. I rented a VPS and now I'm sshing into it. Great experience!
This is awesome. It should have Android, though. This is why I use Termius and SSH; I can be in and out of anything with Claude. It's just a large pain in the ass with input lag and using terminals with the phone keyboard.
Try mosh and see if it helps you with input lag issues. Iirc it processes or buffers the input locally instead of waiting for the server to respond, so it feels faster.
Very cool! Would love an integration with Twilio / phone and text-to-voice and voice-to-text.
Start an agent, receive a call when a response from the user is needed, provide instruction, repeat.
Use case would be to continue the work hands-free while biking or driving.
A fellow posted his project called “talkito” that is close to this.
Thanks for the mention! Yeah I first wanted Claude Code to be more hands free when doing things around the house so added voice.. then I thought it would be great if we could go out and get messages read out by Siri on the AirPods when there is an update, and reply that way, so I added WhatsApp support specifically for this purpose. I'd be down for adding a phone option or Signal alternative but I'd be curious to know if the WhatsApp feature solves your problem? https://github.com/robdmac/talkito
Can it control Codex, too? The ability to switch between Claude, Qwen, Gemini, Codex, and sst/OpenCode *CLIs would be pretty incredible!
* All of these are trivially installable via `npm install -g ...`
at the end he says "not just claude code. any agents"
Neat! My use case is swayed towards the non-wrapper paths, so the telemetry is contained or non-existent.
Basically, tunnelling to my Mac so I can run my local Mistral workflow/git/project builds, yet with a GUI like yours.
This method of development is definitely the future.
I've been having a lot of success with Google's Jules (https://jules.google.com/) which has the added benefit of running the agent on their VMs and being able to execute scripts (such as unit tests, linting, playwright, etc). The website works great on mobile and has notification support.
With the Google AI Pro subscription you get 100 tasks a day(!) included, it's a fantastic deal.
Love the idea*!
Currently trying it, and the output from Claude Code doesn't appear on my phone, though? Sometimes it outputs nothing, sometimes it outputs what appears to be a bunch of XML tags for tool calls that I assume are meant to be parsed. But the notifications are working well, which is nice.
(* though I have some security concerns about this as juicy target vs just rolling my own)
I agree with the general sentiment here that this is the future of coding for a lot of tasks. But in terms of a business case for your product, I'm really struggling to see how this beats Claude Code Actions, which integrates directly with GitHub at no additional cost, and where I can use an OAuth token with my subscription.
Hey, Ishaan here (co-founder). Totally fair point; Claude Code Actions are great for GitHub workflows. We see Omnara fitting in when you want to keep a live session going across devices (terminal ↔ web ↔ mobile), and outside GitHub too.
I see, thanks for explaining and congrats on the launch! After re-reading the description, the ability to use other frameworks might become a USP too.
Just a random remark: what's annoying and a pain point in my workflow is definitely proper development environments for agents. Not just runtimes, but also managing secrets, etc. Maybe an avenue to explore and use in marketing copy.
This sort of thing should be run locally over something like tailscale or ngrok - direct peer to peer communication between phone and laptop. No way I'm sending my code to your central servers.
For now I'll just stick with a VNC solution for my macbook.
Could you say some more about your current solution? I've tried basically every tool out there (VNC, tmux, VibeTunnel, many Claude Code in the browser) solutions and haven't found one that works really well yet. Do you just VNC to your computer with something like Screens 5? I've found it works OK from my iPad but phone is pretty suboptimal.
This is awesome, I was trying to build this as well. I'd love a windows version if that's on the roadmap.
Thanks! There's nothing inherently macOS-specific about this; I just need to get my hands on a Windows machine to test it out in case there are some path issues. I'll try to do that soon and update you.
Looks like someone in our GitHub reported that the termios library we use isn't compatible with Windows (https://github.com/omnara-ai/omnara/issues/72), so it might take a bit longer to find a workaround for this.
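For reference, the usual workaround for termios-on-Windows is a platform branch, falling back to the stdlib msvcrt module. A minimal sketch (not Omnara's actual code):

```python
import sys

# Common cross-platform pattern: termios/tty are POSIX-only, so branch
# on platform and fall back to msvcrt on Windows.
if sys.platform == "win32":
    import msvcrt

    def read_key():
        """Blocking read of a single keypress (Windows)."""
        return msvcrt.getwch()
else:
    import termios
    import tty

    def read_key():
        """Blocking read of a single keypress (POSIX raw mode)."""
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)      # save current terminal settings
        try:
            tty.setraw(fd)               # raw mode: no line buffering/echo
            return sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)
```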
Cool! I'm working on a similar personal tool for GeminiCLI.
This is lovely, I was literally wishing for this two nights ago. I'll give it a try. Good luck!
Can you support GitHub code spaces and GitHub copilot? Easiest workflow for me is code spaces/copilot given the pricing and ease of making new dev environments.
This would be a killer product in this setting as copilot is quite “chatty”
What does the privacy situation look like? Do you get to see what we're working on?
Technically, all the messages are stored in our DB (anything that's visible in our dashboard is stored in our DB), so if we wanted to, we could see the messages flowing through Omnara. If you delete a chat instance, the messages are deleted immediately, no soft deleting.
But yes this is a good point, it's a big reason we open sourced our backend. We've thought about doing client side encryption before sending messages to our servers, but that probably won't be implemented in the near-term.
The landing page is slow; it flashes and refreshes automatically after a few seconds in iOS Brave.
Thanks for mentioning this, we'll look into it
Seems copying text in the app doesn't work? In that case it is useless. A quick way to copy an entire chat and a selection of messages is a requirement for me to be able to use it.
We've noticed that using terminal emulators (e.g. Warp, Alacritty) leads to issues with pasting things into Omnara; do you happen to be using one? If so, we can look into it. But with the native terminal, pasting should work in both the terminal and the web/mobile apps.
I mean copying from the iOS app.
sst/opencode has plans to build a mobile app.
https://github.com/sst/opencode/issues/176
I recall watching a stream where the authors imagined instructing the agent to do a piece of work and then getting notified on your phone when it is done or being able to ask it to iterate on your phone.
Oh interesting, we've gotten some requests for supporting opencode as well. Opencode would be much easier to work with than Claude Code since its backend and frontend are separate and open source. Maybe we can beat them to the mobile support :)
Nice work.
Is anyone working on collaborative Claude Code-ing with coworkers in Slack/Discord?
Pretty cool, but where is the Android app?
Coming very soon, it's been a long process with the Android store. Thank god for react native and expo
Can't wait to try it; I'd love to test this. Can you share an APK?
I run Claude Code on a Hetzner server with a directive that, 5 times per hour, triggers a "continue; if done, read claude.md and continue from there". After the 5th time I get a notification email that it has nothing more to do. Works OK.
Nice setup, clever way to keep it moving without much manual intervention. Curious, do you review the logs in between, or is it more of a yolo “continue” each time?
Yolo, plus it has its own user with sudo access. A watcher looks over each repo, and any file changes get atomically committed to a special GitHub branch, so everything is reversible.
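A loop like the one described could be wired up with cron and a counter file. Everything below is hypothetical plumbing (the `claude -p` and `mail` invocations are assumptions), not the commenter's actual script:

```shell
#!/bin/sh
# Hypothetical sketch of the "nudge Claude 5x/hour, email when done" loop.

update_counter() {
  # $1 = consecutive "done" count so far, $2 = latest agent output
  if printf '%s' "$2" | grep -qi "nothing more to do"; then
    echo $(( $1 + 1 ))
  else
    echo 0          # agent made progress, so reset the streak
  fi
}

run_once() {
  count=$(cat /tmp/claude_nudges 2>/dev/null || echo 0)
  # Assumed headless invocation -- swap in your real one.
  output=$(claude -p "continue; if done, read claude.md and continue from there")
  count=$(update_counter "$count" "$output")
  echo "$count" > /tmp/claude_nudges
  if [ "$count" -ge 5 ]; then
    # Assumes a configured local `mail` command.
    echo "Agent reports nothing left to do." | mail -s "claude idle" you@example.com
  fi
}

# Example cron entry (every 12 minutes = 5 times per hour):
#   */12 * * * *  /home/me/nudge-claude.sh
```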
Ok now this is genius, and how I’ve wanted AI agents to work for a while now. Gonna try this out!
This looks awesome. Will this work when using Claude Code via VSCode?
I know one of our users is able to use it with Claude Code via Cursor, so I assume that it's able to work the same way with VSCode
I need this, and I need it for seats, but I need it on-prem or it will never get approved. You should sell an iOS subscription based version that allows changing the api/auth url so we can self host. Plenty of companies would be willing to do the infra/config schlep in exchange for holding onto their data, and I mean if you charge 100 bucks per month per seat for the app, can I really complain?
this is nice. I was using byobu/screen and an android client w/ ddns. this is a much cleaner solution that I wish I had when I was using the claude-code seriously.
...but as I will now say at the end of every claude-centric post I make until a CSR gets back to me : I'm now approaching a week of zero CSR responses to a very valid question about a $200.00usd/mo account -- so I hope Omnara eventually matures to the point of supporting many different AI provider options; even if claude-code is the soup du jour.
Having fantastic tooling and effort around a company that is disinterested in its userbase is a lot like the mental anguish I feel when I consider all of the tools and systems that were at one point reliant on now-gone Google services. What a waste. I'm sure the people involved learned plenty and that it was a personal growth experience for them, but boy do I hate seeing good code thrown away so routinely.
I have a bit of a feeling like that around claude-code/cursor-specific things right now. It reminds me of the work I put into Google Wave a hundred years ago.
Right on
Been using Cursor's background agents to do this via the mobile app. I was expecting a mobile app from Anthropic by now. Wonder what will happen to this project when that happens?
Really cool, I actually haven't met many people using cursor background agents. I do think Anthropic is working on a web app for Claude Code (saw it in some thread somewhere), so I'm sure they're also working on a mobile app. In that case, the value for Omnara + Claude Code diminishes, but we're able to support any agent, so if every company has their own app, but a user can use all the same agents from a single platform, hopefully they'll choose the single platform.
With Termius and tmux, I don't see the point.
It annoys me that this isn't available on android. I'm looking for an open source replit alternative. The closest I got is to manage github coding agent with preview environments. Bolt.new doesn't work on mobile AFAIK.
we’ve been trying hard to get the Android version out, Google’s been giving us a tough time before approving it for the Play Store. I can send you an internal app link if you’d like; just share your email (I’m at ishaan@omnara.com)
So you've reinvented SSH+screen except slower and much less flexible?
"Don't be snarky."
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
https://news.ycombinator.com/newsguidelines.html
Ask questions out of curiosity. Don't cross-examine.
https://news.ycombinator.com/showhn.html
I'd say it's more flexible! At least we're trying to head in that direction, since the SDK allows you to hook up any agent to our platform, not just CLI agents. And eventually we want people to be able to add their own frontend components for their agents, which would make the experience more custom than a terminal UI. Although yes, if you just want a 1:1 CLI agent on your phone, then mobile SSH clients might make more sense.