vrighter 8 hours ago

So how does it save me from having to read the man page?

How do I know if the command is safe to execute when I can't trust LLM output?

Also, for "safe" commands, how do I know that it is doing what I asked it to do (without reading man pages) when I can't trust LLM output?

Anything your script returns is "untrusted input" to me, which requires careful scrutiny. That means this adds work rather than reducing it. While also making running commands cost real money!

  • nestorD 6 hours ago

    I don't use it to avoid reading man pages. Rather, as is often the case with LLMs, it is a faster way to do things I already know how to do: it looks at the commands I run in various situations and types them for me, faster than I can recall the name of a flag I use weekly with a PDF processing tool, or type 5 consecutive shell commands.

    Money-wise, my full usage so far (including purposely running large inputs/outputs to stress test it) has cost me... 19 cents. And I am not even using the cheapest model available. But you could also run it with a local model.

iamdamian 2 days ago

Tucked at the bottom of the README:

> Privacy note: Shelly sends your requests, including your recent shell history, to Anthropic's API.

  • nestorD 2 days ago

    Yes, it is API based and uses your last unique 100 shell commands as part of its prompt: it seemed important to remind users that this data does leave their machine. A fork using a local model should be fairly easy to set up.
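    For illustration only (this is not Shelly's actual code): grabbing the last N unique commands from a shell history file can be sketched in a one-liner, assuming GNU `tac` and a plain-text history file:

```shell
# Last 100 unique commands, most recent occurrence kept, original order preserved.
# Reverse the history, drop repeats (awk keeps the first, i.e. most recent, copy),
# take 100, then reverse back into chronological order.
tac ~/.bash_history | awk '!seen[$0]++' | head -n 100 | tac
```

    A fork swapping the Anthropic API call for a local model endpoint would then just feed this list into its prompt instead.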