LVB 3 days ago

I'm always curious what folks use for their database for things like this. Even though I like SQLite--a lot--my preference has become that the app is generally separate and mostly stateless. Almost always the data is the most important thing, so I like being able to expand/replace/trash the app infra at will with no worries.

Thought about maybe running a Postgres VPS, but I've enjoyed using neon.tech more than I expected (esp. the GUI and branching). I guess the thing that has crept in: speed/ease is really beating out my ingrained cheapness as I've gotten older and have less time. A SaaS DB has sped things up. Still don't like the monthly bills & variability though.

  • mtlynch 3 days ago

    >Almost always the data is the most important thing, so I like being able to expand/replace/trash the app infra at will with no worries.

    Have you used SQLite with Litestream? That's the beauty of it. You can blow away the app and deploy it somewhere else, and Litestream will just pull down your data and continue along as if nothing happened.

    At the top of this post, I show a demo of attaching Litestream to my app, and then blowing away my Heroku instance and redeploying a clean instance on Fly.io, and Litestream ports all the data along with the new deployment:

    https://mtlynch.io/litestream/

    • LVB 3 days ago

      I'm currently using SQLite + Litestream with one app, though it's strictly Litestream as a backup/safety net and I'd be manually standing the thing back up if it came to building the server anew, as that's not automated.

      If anything, I'd probably end up looking at a dedicated PG VPS. I've started to get used to a few Postgres conveniences over SQLite, especially around datetimes, various extensions, and more sophisticated table alterations (without that infamous SQLite 12-step process), etc. So that's been an evolution, too, compared to my always-SQLite days.

    • Cheer2171 3 days ago

      > No, my app never talks to a remote database server.

      > It’s a simple, open-source tool that replicates a SQLite database to Amazon’s S3 cloud storage.

      That was a very long walk to get to that second quote. And it makes the first quote feel deceptive.

      • mtlynch 3 days ago

        Thanks for the feedback!

        Can you share a bit more about why you feel it's deceptive?

        The point I was trying to make is that database servers are relatively complex and expensive. S3 is still a server, but it's static storage, which is about as simple and cheap as it gets for a remote service.

        Was it that I could have been clearer about the distinction? Or was the distinction clear but feels like not a big difference?

    • rkwz 3 days ago

      This is a well written guide, thanks!

      • mtlynch 3 days ago

        Cool, glad to hear it was useful!

  • j45 3 days ago

    It's trivial to run MySQL (or a variant like Percona or MariaDB) or Postgres, with some minor caching, for simple apps.

    I'm not sure what you are hitting that would go past the capacity of a small vps.

    An independent VPS for the DB makes sense, but if the requests are reasonably cached, you can get away with it (and beef up the backups), especially if it's something non-critical.

    • LVB 3 days ago

      Definitely considering a dedicated Postgres VPS. I've not looked yet, but I'd like to locate a decent cookbook around this. I've installed Postgres on a server before for playing around, and it was easy enough. But there are a lot of settings, considerations around access and backups and updates, etc. I suspect these things aren't overly thorny, but some of the guides/docs can make it feel that way. We'll see, as it's an area of interest, for sure.

      • lucw 2 days ago

        I went through this around a year ago. I wanted Postgres for Django apps, and I didn't want to pay the insane prices required by cloud providers for a replicated setup. I wanted a replicated setup on Hetzner VMs and I wanted full control over the backup process. I wanted the deployment to be done using Ansible, and I wanted my database servers to be stateless. If you vaporize both my Hetzner Postgres VMs simultaneously, I lose one minute of data. (If I just lose the primary, I probably lose less than a second of data due to realtime replication.)

        I'll be honest, it's not documented as well as it could be; some concepts like the archive process and the replication setup took me a while to understand. I also had trouble understanding what roles the various tools played. Initially I thought I could roll my own backup, but later deployed pgBackRest. I deployed and destroyed VMs countless times (my Ansible playbook does everything from VM creation via the Proxmox / Hetzner API to installing Postgres and setting up replication).

        What is critical is testing your backup and recovery. Start writing some data. Blow up your database infra. See if you can recover. You need a high degree of automation in your deployment in order to gain confidence that you won't lose data.

        My deployment looks like this:

        - two Postgres 16 instances, one primary, one replica (realtime replication)

        - both on Debian 12 (most stable platform for Postgres according to my research)

        - ansible playbooks for initial deployment as well as failover

        - archive file backups to rsync.net storage space (with zfs snapshots) every minute

        - full backups using pgBackrest every 24hrs, stored to rsync.net, wasabi, and hetzner storage box.

        As you can guess, it was kind of a massive investment and forced me to become a sysadmin / DBA for a while (though I went the devops route with full ansible automation and automated testing). I gained quite a bit of knowledge which is great. But I'll probably have to re-design and seriously test at the next postgres major release. Sometimes I wonder whether I should have just accepted the cost of cloud postgres deployments.

        • LVB 2 days ago

          I've got a less robust version of this (also Ansible -> Hetzner) that I've toyed with. I'm often tempted to progress it, but I've realized it's a distraction. I say that about me, and not too negatively: I know that I want to get some apps done, and the sysadmin-y stuff is fun and alluring, but it can chew up a lot of time.

          Currently I'm viewing the $19 plan from Neon as acceptable (I just look in my Costco cart for comparison) for me now. Plus, I'm getting something for my money beyond not having to build it myself: branching. This has proved way handier than I'd expected as a solo dev and I use it all the time. A DIY postgres wouldn't have that, at least not as cleanly.

          If charges go much beyond the $19 and it is still just me faffing about, I'll probably look harder at the DIY PG. OTOH if there is good real world usage and/or $ coming in, then it's easier to view Neon as just a cost of business (within reason).

  • jwells89 3 days ago

    Spinning up a VPS for things like this is tempting to me too, but not having done significant backend work in over a decade my worry would be with administering it — namely keeping it up to date, secure, and configured correctly (initial setup is easy). What's the popular way of handling that these days?

    • ggpsv 2 days ago

      Every case is different, but as a baseline: use Ubuntu or Debian with automatic security upgrades via unattended-upgrades[0]; harden SSH by allowing only pubkey authentication; disallow all incoming public connections in the firewall except HTTPS traffic if you're serving a public service; everything else (SSH, etc.) can go over WireGuard (Tailscale makes this easy). Use a webserver like Nginx or Caddy for TLS termination, serving static assets, and proxying requests to an application listening on localhost or the WireGuard interface.

      [0]: https://wiki.debian.org/UnattendedUpgrades

pentagrama 3 days ago

I tested the app and found it awesome that it doesn't require account creation! You just get a private link, share it with the group, and when they open the link, it asks who they are to 'log in' as themselves. Of course, users could game the system by logging in as other members, but I think it's a compromise the developer made, knowing the user base and how frictionless it makes the user experience. Neat.

  • culi 3 days ago

    I've used a really neat website called when2meet[0] heavily for community organizing. You make an event, get a URL, and share it. Users choose their name, and they can even add a completely optional password to prevent impersonation.

    I'm heavily inspired by it and working on an app for book clubs to host "elections" to choose their next book to read using a variety of voting systems (ranked choice, approval, scored, first past the post, etc).

    [0] https://www.when2meet.com/

    • sdenton4 2 days ago

      +1 for when2meet. It is gloriously cruft-free.

  • itsthejb 3 days ago

    kittysplit.com has had this feature set for a decade. I try to promote it whenever I can

mherrmann 3 days ago

Re your question on saving costs: if you run it on a single Linux VPS, I suspect you can get the costs down to $5-10 per month.

One thing I find interesting is the growth chart: It's linear. But given that the app clearly has some traction, and is viral in nature, how come it isn't exponential?

  • binwiederhier 3 days ago

    I was thinking exactly this. I am the maintainer of ntfy.sh, and my costs are $0 at the moment because DigitalOcean is covering it 100% since it is open source. It would be around $100, though I must admit my setup is quite oversized. However, my volume is much, much higher than what is described in the blog.

    I suspect that the architecture can be improved to get the cost down.

  • mrngm 2 days ago

    It's usually never exponential; see e.g. https://longform.asmartbear.com/exponential-growth/, which shows examples from "hypergrowth" companies that are perceived to have grown exponentially, but whose growth actually followed a more quadratic form. About halfway through the article, the author shows another model that's more likely to fit these growth patterns: logistic growth. After initial rapid growth, followed by a period of linear growth, it eventually flattens out, indicating it's at "carrying capacity", or "market saturation".

  • mkrd 2 days ago

    Same thought. I am absolutely blown away by how much Vercel overcharges. I host a similar application on netcup (like Hetzner) for $3 per month, and when it was on HN, it easily handled over 10k requests per hour.

  • JamesonNetworks 3 days ago

    Best I’ve been able to do is around $22 a month on DO, would love to hear alternatives that are cheaper

    • Saris 2 days ago

      DO is quite expensive, Vultr is solid, Hetzner is too and is even cheaper.

    • moffkalast 3 days ago

      Pi 5 + Cloudflare

      • abound 3 days ago

        I run a homelab that isn't too far from this, but I wouldn't recommend it without a few caveats/warnings:

        - Don't host anything media-heavy (e.g. video streaming)

        - Make sure you have reasonable upload speeds (probably 10+ Mbps min)

        - Even behind Cloudflare, make sure you're comfortable with the security profile of what you're putting on the public internet

        The min upload speed is mostly about making sure random internet users (or bots) don't saturate your home internet link.

        • moffkalast 2 days ago

          Oh yeah definitely don't try this unless you have fiber and your ISP isn't too twitchy.

          My suggestion is mainly for static site hosting since the Pi only needs to update the cloudflare cache when you make changes, and it should be able to handle a small db and few background services if you need them.

      • eklavya 3 days ago

        Any guides or blogs on how to do that?

        • moffkalast 2 days ago

          Loads, but it'll depend on what you want to do exactly. I think this should be the approximate list of things:

          - domain at Cloudflare set up to cache requests (this will take the brunt of the site traffic)

          - static IP at home (call your ISP)

          - port forwarding on your router to the Pi for 80 and any other ports you need, maybe a vlan if you're feeling like isolating it

          - a note on the Pi that says "don't unplug, critical infra"

          - the same setup on the Pi as you'd do on a cloud server, ssh with key pairs, fail2ban, etc.

renewiltord 3 days ago

This is cool, dude. Thank you for sharing. Irrespective of the actual numbers I’m always curious how people fund projects like this.

One thing I’ve been interested in is the idea of decentralized handling for this. That is, funds are paid into the project, and every month, if its bills don’t get paid, it dies. If it receives enough to go over its costs, it buys T-bonds for the appropriate duration and then burns them down over time.

Perhaps in the past it would have to be automated but I wonder if in the near future a limited AI agent could be the server and you leave him alone to do his thing.

raybb 2 days ago

I love spliit and use it regularly. Like many times per week.

One thing that drives me crazy is it works really poorly on slow mobile connections. I'd really love to try to add local first or offline first support but I know it would be a significant change. However, even just caching pages like the add expense page would be a nice improvement.

weinzierl 3 days ago

The pay only what you use model is nice when your revenue also scales with use. For my projects I wish there were plans with higher fixed cost and risk only in availability and not in cost.

  • sam0x17 3 days ago

    The only downside with that I've found is that people and orgs tend to overestimate their future usage of X across the board, so with pay-as-you-go, profits rarely match expectations, and tier-based pricing will easily beat you by capturing more $$ from the market. Some notable exceptions are things like file storage, where I find people tend to underestimate what they will need.

xyst 2 days ago

This is a fantastic app. Thanks for sharing the breakdown.

If banks would get their head out of their ass, this would be a native feature with ACH/Zelle/FedNow as the backbone. The organizer creates a group in the bank app and can invite other users, who may have accounts at different banks. Payment requests get satisfied with one click. No more manually checking whether Cash App/bank accounts/Venmo have received payment and then checking off the expense in a third-party app.

Dachande663 3 days ago

I love the idea of this but, given the traffic numbers, this could run on a $4 Digital Ocean droplet and have the same result. They've burnt over a grand just to use vercel. Maybe I'm just older but I don't understand the logic here. A basic VPS, setup once, would have the same result and would be neutral in cost (it's how I run my own little free apps). Maybe the author is lucky enough that $100/mo doesn't really affect them or they're happy for it to pay for the convenience (my assumption).

  • scastiel 3 days ago

    Running a database accessed that many times on a $4 Digital Ocean droplet? I'd be very curious to see that ;)

    The web hosting costs basically nothing. Most of the cost comes from the database.

    • ndriscoll 3 days ago

      6k visits per week * 5 page views per visit is one view per 20 seconds on average. Even very modest hardware with naively written application code should have no problem handling thousands of CRUD database queries per second (assuming every query doesn't need a table scan or something).

      Modern computers are mind-bogglingly powerful. An old laptop off eBay can probably handle the load for business needs for all but the very largest corporations.
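
      As a rough sketch of that claim (a hypothetical schema, using SQLite via the better-sqlite3 npm package rather than the app's actual Postgres setup; numbers will vary by hardware):

      ```ts
      // Toy CRUD micro-benchmark against a local SQLite file. The schema and row
      // counts are made up for illustration; run after `npm install better-sqlite3`.
      import Database from "better-sqlite3";

      const db = new Database("bench.db");
      db.pragma("journal_mode = WAL"); // WAL mode: readers don't block the writer

      db.exec(`CREATE TABLE IF NOT EXISTS expenses (
        id INTEGER PRIMARY KEY,
        group_id INTEGER NOT NULL,
        amount_cents INTEGER NOT NULL
      )`);

      const insert = db.prepare("INSERT INTO expenses (group_id, amount_cents) VALUES (?, ?)");
      const insertMany = db.transaction((n: number) => {
        for (let i = 0; i < n; i++) insert.run(i % 1000, 100 + i);
      });

      let t = Date.now();
      insertMany(100_000);
      console.log(`100k inserts: ${Date.now() - t} ms`);

      const byId = db.prepare("SELECT * FROM expenses WHERE id = ?");
      t = Date.now();
      for (let i = 1; i <= 100_000; i++) byId.get(i);
      console.log(`100k primary-key reads: ${Date.now() - t} ms`);

      // For comparison: 6k visits/week * 5 views ≈ 30k views/week ≈ 0.05 requests/second.
      ```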

      • horsawlarway 3 days ago

        So many people don't seem to understand how efficient modern machines are.

        As someone who is literally using old laptops to host things from my basement on my consumer line (personal, non-commercial) and a business line (commercial)...

        I can host this for under 50 bucks a year, including the domain and power costs, and accounting for offsite backup of the data.

        I wish people understood just how much the "cloud" is making in pure profit. If you're already a software dev... you can absolutely manage the complexity of hosting things yourself for FAR cheaper. You won't get five 9s of reliability (not that you're getting that from any major cloud vendor anyways without paying through the nose and a real SLA) but a small UPS will easily get you to 99% uptime - which is absolutely fine for something like this.

        • immibis 3 days ago

          As DHH said somewhere, it's incredible that the modern cloud stack has managed to get PROGRAMMERS to be scared of COMPUTERS. Seriously, what's with that? That shouldn't even be possible?

          If you can understand programming, you can understand Linux. Might take a while to be really confident, but do you need incredible confidence when you have backups? :)

          • GreenWatermelon 3 days ago

            Not just somewhere, but in Rails World 2024 Opening keynote, and it was absolutely hilarious!

            Especially with that meme he showed about Vercel being AWS plus a 500% markup, lmaoo

            Don't be afraid of computers, don't be the pink elephant!

        • beeboobaa3 3 days ago

          The problem is that my coworkers are morons who seem incapable of remembering to run a simple `explain analyze` on their queries. They'd rather just write monstrosities that kindasorta work without giving a single damn about performance.

          It seems like computers are getting more capable, but developers are becoming less capable at roughly the same pace.

          • tbrownaw 3 days ago

            > It seems like computers are getting more capable, but developers are becoming less capable at roughly the same pace.

            "Andy giveth, and Bill taketh away."

            Computers keep getting faster (personified as Andy Grove, from Intel), and software keeps getting slower (Bill Gates, from Microsoft).

          • 9dev 3 days ago

            > It seems like computers are getting more capable, but developers are becoming less capable at roughly the same pace.

            And that makes perfect sense. Why should humans inconvenience themselves to please the machine? If anyone’s at fault, it’s the database for not being smart enough to optimize the query on its own.

            • ndriscoll 3 days ago

              At my last job, we had architects pushing to make everything into microservices despite how absolutely horrible that idea is for performance and scalability (and understanding and maintainability and operations and ability for developers to actually run/test the code). The database can't do anything to help you when you split your queries onto different db instances on different VMs for no reason.

              I heard we had a 7 figure annual compute spend, and IIRC we only had a few hundred requests per second peak plus some batch jobs for a few million accounts. A single $160 N100 minipc could probably handle the workload with better reliability than we had if we hadn't gone down that particular road to insanity.

              • ffsm8 3 days ago

                > ... microservices despite how absolutely horrible that idea is for performance and scalability

                Heh, reminds me of a discussion I had with a coworker roughly 6 months ago. I tried to explain to them that the ability to scale each microservice separately almost never improves the actual performance of the platform as a whole - after all, you still need network calls between each service, and you could've also just started the monolith twice. That would most likely have needed less RAM too, even if each instance will likely consume more - after all, you now need fewer applications running to serve the same request.

                This discussion took place in the context of a b2e saas platform with very moderate usage, almost everything being plain CRUD. Like 10-15k simultaneous users making data entries etc.

                I'm always unsure how I should feel after such discussions. On the one hand, I'm pretty sure he probably thinks that I'm dumb for not getting microservices. On the other hand... Well... ( ꈍ ᴗ ꈍ )

            • beeboobaa3 3 days ago

              [flagged]

              • 9dev 3 days ago

                That’s beside the point I’m making. Technology should develop towards simplifying humanity’s life, not making it more complicated. It’s a good thing we don’t have to use clever loop constructs anymore, because compilers do it for us. It’s a good thing we don’t have to obsess over the right varchar size anymore, because Postgres‘ text does the right thing anyway.

                It’s a systemic problem. You’re going to lose the battle against human nature: ever noticed how, after moving from a college dorm into a house, people suddenly manage to fill all the space with things? It’s not like the dorm was enough to fit everything they ever needed, but they had to accommodate themselves to it. The constraint is artificial, exhausting to keep up, and, once gone, will no longer be adhered to.

                If a computer suddenly becomes more powerful, developers aren’t going to keep up their good performance-optimisation habits, because they had those only out of necessity in the first place.

                • beeboobaa3 2 days ago

                  > Technology should develop towards simplifying humanity’s life, not making it more complicated

                  I agree with this statement for normal people. Not for software developers. You're just begging for stagnation. Your job is literally dealing with computers and making them do neat stuff. When you refuse to do that because "computers should be making my life easier" you should really find another line of employment where you're a consumer of software, not a producer.

      • tmpz22 3 days ago

        You're right but I'll play devil's advocate for teaching purposes:

        * Usage won't be uniformly distributed and you may need to deal with burst traffic for example when a new version is released and all your users are pulling new config data.

        * Your application data may be very important to your users and keeping it on a single server is a significant risk.

        * Your users may be geographically distributed such that a user on the other side of the world may have a severely degraded experience.

        * Not all traffic is created equal, and, especially paired with burst traffic, one expensive operation like a heavy analytical query from one user could cause timeouts for another user.

        Vercel does not solve all of these problems, but they are problems that may be exacerbated by a $4 droplet.

        All said, I still highly encourage developers not to sell their soul to a SaaS product that couldn't care less about them and their use case, and to consider minimal infrastructure and complexity in order to have more success with their projects.

        • Quothling 3 days ago

          Is this really playing the devil's advocate though? I know this is a simplification but Stack Overflow launched on a couple of IIS servers and rode their exponential growth rather well. Sure they added more than "a couple" of web servers and improved their SQL server quite a bit, but as far as I recall they didn't even shift to CDN until five or six years after they grew. Eventually they moved into the cloud, but Spliit doesn't even have a fraction of the traffic SO did in its early days. As such I don't think any of the challenges you mention are all that relevant in the context aside from having backup. Perhaps also some redundancy by having two $4 droplets?

          Is the author even getting paid for their services though? If they aren't then why would they care? I don't mean that as rude as it sounds, but why would they pay that much money so people can use their product for free?

        • ffsm8 3 days ago

          * that's just static files. Even a $4 droplet will hardly ever have issues serving that, even with hundreds of simultaneous requests.

          * Okay, I guess that means we should use 2? So that's $8 now.

          * Vercel really doesn't help you there beyond serving static files from cdn. That hardly matters at this scale, you should keep in mind that you "only" add about 100ms of latency by serving from the other side of the globe. While that has an impact, it's not really that much. And you can always use another cdn too. They're very often free for html/js/css

          * Burst traffic is an issue, especially trolls that just randomly DOS your public servers for shits and giggles. That's pretty much the only one vercel actually helps you against. But so would others, they're not the only ones providing that service, and most do it for free.

          Frankly, the only real and valid reason is the previously mentioned one: they've likely got the money and don't mind spending it on the ecosystem. And if they like it... who are we to interfere? Aside from pointing out how massively they're overpaying - but they've gotta be able to handle that if they're willing to publish an article like this.

          • jasonm23 3 days ago

            People use Vercel ... because...

            ...haven't worked it out yet, all I can come up with is "they don't know any better".

            Surely that can't be true?

        • hypeatei 2 days ago

          > may be geographically distributed such that a user on the other side of the world may have a severely degraded experience.

          Okay, am I crazy or can you not really solve this without going full on multi-region setup of everything? Maybe your web server is closer to them but database requests are still going back to the "main" region which will have latency.

          • fulafel a day ago

            Some serverless DB services claim to offer transparent geo-replication (e.g. AWS DynamoDB, and MS Cosmos, which is known for being expensive though).

            But also most apps don't need low latency.

          • sdenton4 2 days ago

            Personally I'm digging a hole through the center of the earth to send data via pulsing laser to the far side. But other people can choose to waste their money on multi region relocation, sure.

      • npsomaratna 3 days ago

        My understanding is that DO VPS’ are underpowered (as are VPS offerings from most other VPS vendors). Dollar for dollar, bare metal stuff from Hetzner, OVH, etc are far more powerful.

        That said, I completely agree: a $4/month DO VPS can run MySQL and should easily handle this load; in fact, I’ve handled far bigger loads in practice.

        On a tangent: any recommendations for good US-based bare metal providers (with a convenience factor comparable to OVH, etc)?

        • diggan 3 days ago

          > good US-based bare metal providers

          The times I've needed it, DataPacket (not based in US, but has servers in the US) and Vultr (based in the US) have both been good to me.

          • mminer237 3 days ago

            In a trademark travesty, I must ask DataPacket.com or DataPacket.net?

            • diggan 3 days ago

              Sorry, I was referring to datapacket.com

          • npsomaratna 3 days ago

            Thank you so much. I’ll take a look at these.

        • mxuribe 3 days ago

          Hetzner is of course not U.S. based, but has expanded to 2 U.S. sites (Oregon, I think, and Virginia)... so that could be an option, maybe. Caveat: I have not used Hetzner in the U.S., so I can't speak to their quality.

          • npsomaratna 3 days ago

            That’s news thank you. I’ll check this out.

            • mxuribe 3 days ago

              Uh, actually, at a quick glance, it seems the U.S. sites are more for their cloud offering and maybe not bare metal servers... I think (sadly): https://www.hetzner.com/cloud

    • fs0c13ty00 3 days ago

      My open source service, lrclib.net, handles approximately 200 requests per second at peak (yes you read that right, it's approximately 12000 requests per minute) on a simple €13 Hetzner cloud server (4 AMD based VCPU, 8GB RAM). I'd love to write a blog post about how I made it possible sometime in the future, but basically, I cheated by using Rust together with SQLite3 and some caching.
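
      (The real service is written in Rust; this is just a rough TypeScript sketch of the general "cache hot reads in front of SQLite" idea. Table and column names are invented for illustration.)

      ```ts
      // Illustration only: serve hot lookups from an in-memory cache and fall back
      // to a prepared SQLite statement on a miss. Schema and names are hypothetical.
      import Database from "better-sqlite3";

      const db = new Database("lyrics.db");
      db.exec("CREATE TABLE IF NOT EXISTS lyrics (track_id INTEGER PRIMARY KEY, body TEXT)");
      const byTrack = db.prepare("SELECT * FROM lyrics WHERE track_id = ?");

      const cache = new Map<number, { value: unknown; expires: number }>();
      const TTL_MS = 60_000; // arbitrary TTL

      function getLyrics(trackId: number): unknown {
        const hit = cache.get(trackId);
        if (hit && hit.expires > Date.now()) return hit.value; // cache hit: no disk I/O
        const value = byTrack.get(trackId); // cache miss: one indexed SQLite lookup
        cache.set(trackId, { value, expires: Date.now() + TTL_MS });
        return value;
      }

      console.log(getLyrics(42));
      ```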

      I was surprised by the cost of Vercel in that blog post too, which is why I dislike all kinds of serverless/lambda/managed services. For me, having a dozen people subscribing to $1-$2/month sponsorship on GitHub Sponsors is enough to cover all the costs. Even if no one donates, I’d still have no trouble keeping the project running on my own.

    • diggan 3 days ago

      > Running a database accessed that many times on a $4 Digital Ocean droplet?

      How many times per second is the DB actually accessed? As far as I can tell from the metrics, they're doing ~1.7 requests/minute; you'll have a hard time finding a DB that couldn't handle that.

      In fact, I'd bet you'd be able to host that website (the database) in a text file on disk without any performance issues whatsoever.

    • Dachande663 3 days ago

      I didn't mean it quite so insultingly, but yes, even a very modest server would handle that kind of load easily. You're not particularly high throughput (a few requests per second?) and I imagine the database is fairly efficient (you're not storing pages of text or binary blobs). I think you'd be pleasantly surprised by what a little VPS can do.

    • codazoda 3 days ago

      I think it would be fine. I run a little private analytics service for my own websites. That service isn't as busy but handles ~11k requests per month. It logs to a SQLite database. It does this on a little Raspberry Pi 400 in my home office and it's not too busy. The CPU sits at 1% to 3% on average. Obviously there are a lot of differences in my setup but I would think you could handle 10x the traffic with a small VPS without any trouble at all.

      You can read a little bit more about my analytics setup here:

      https://joeldare.com/private-analtyics-and-my-raspberry-pi-4...

    • avree 3 days ago

      It’s surprising that you ask for advice on this topic in your blog, but then are very dismissive (complete with sarcastic wink) of the advice?

    • explain 3 days ago

      Running the 800th most popular website in the world (25-50M pageviews per day) on a 1GB VPS (Spring Boot, MariaDB, Redis)

      Very possible.

    • klabb3 3 days ago

      You could run it on Cloudflare workers for free with plenty to spare. You get 5M reads/100k writes per day on D1.

      OTOH, if you want managed Postgres, it seems like you always have to pay a fairly high minimum.

    • trollied 3 days ago

      "that many times". It's nearly zero traffic.

    • Saris 2 days ago

      They have under 1k visits per day; unless it's a really heavy app for some reason, just about any basic VPS should handle a webserver + DB for that just fine.

      It does feel like the tech community as a whole has forgotten how simple and low resource usage hosting most things is, maybe due to the proliferation of stuff like AWS trying to convince us that we need all this crazy stuff to do it?

    • bdlowery 3 days ago

      https://f5bot.com/ was free for like 8 years and it processed hundreds of thousands of db records a day, and it barely cost anything.

    • tosh 3 days ago

      [flagged]

Onavo 3 days ago

What they need is a payment provider integration so you can pay immediately by ACH or credit card. That can also be a monetisation option for them.

  • Reubend 3 days ago

    There are dozens of other apps that do that already, and I don't think this one needs to follow. Staying open source, free, and convenient for cash transactions is better in my opinion.

  • dabeeeenster 3 days ago

    You just invented money laundering

    • Onavo 3 days ago

      Not really, the KYC is usually done on the payment layer depending on which payment platform you use. If you are doing your own ACH, yes, you will need KYC. But if you are using something like stripe connect or dots.dev then KYC is their problem.

jedberg 3 days ago

I'll email this to you, but you could save a ton of money using a serverless database solution like Supabase or NeonDB.

BigBalli 2 days ago

I would consider Firebase for the database (best effort/cost ratio I've found) or self-hosting the DB.

diggan 3 days ago

Do I read something wrong, or do the stats amount to ~400 daily visitors with ~2500 page views per day? That's about ~1.7 requests per minute... And they pay $115/month for this?

I'm 99% sure I'm reading something wrong, as that's incredibly expensive unless this is hosting LLM models or something similar, but it seems like it's a website for sharing expenses?

  • Vegenoid 3 days ago

    I think this is just the natural conclusion of the new generation of devs being raised in the cloud and picking a scalable serverless PaaS like Vercel as the default option for any web app.

    A more charitable reading is that they pick the technologies that the jobs they want are hiring for, even if they don’t make sense for this simple application.

    • diggan 3 days ago

      > I think this is just the natural conclusion of the new generation of devs being raised in the cloud and picking a scalable serverless PaaS like Vercel as the default option for any web app.

      I'm not sure. I'm also "new generation of devs", I suppose; cloud had just entered the beginning of the hype cycle when I started out professionally. Most companies/individuals at that point were pushing for "everything cloud", but after experiencing how expensive it really is, you start to look around for alternatives.

      I feel like that's just about having an "engineering mindset" rather than about what generation you belong to.

      • mxuribe 3 days ago

        > ...after experiencing how expensive it really is, you start to look around for alternatives...

        One would think that would be the common-sense case... but in corporate America - at least at the last handful of companies I worked at - some companies are *only now getting workloads up to the cloud*, so they have not yet felt the cost pain. Or, in other cases, firms are living in the cloud and have seen the exorbitant costs, but move waaaaay toooo sloooow to migrate workloads off the cloud (or hybridize them in smart ways for their business). Or, in still other cases I have seen, instead of properly analyzing the function and costs of cloud usage - and truly applying an engineering mindset to the matter - some of these so-called IT leaders (who are too busy with PowerPoint slides) will simply lay off people and "achieve savings" that way.

        Welcome to being a technologist employed at one of several/many American corporations in 2024!

      • Vegenoid 3 days ago

        Certainly, I just mean that we are hitting a point where there can be professional devs, with multiple years of experience at tech companies successfully building software, who have only ever known and worked with a PaaS to deploy an app.

        • consteval 3 days ago

          It's frustrating too because deployment technologies and tools continue to get better and better. It's never been easier to deploy an application + database to some arbitrary computer. You can do it declaratively, no SSH, no random shell scripts, no suspicious fiddling.

          Also, sidenote: for small stuff you can just deploy in your home. I've done it before. It's really not that scary, and odds are you have a computer lying around. The only "spooky" part is relying on my ISP router. I don't trust that thing, but that can be fixed.

          • majoe 2 days ago

            >It's never been easier to deploy an application + database to some arbitrary computer. You can do it declaratively, no SSH, no random shell scripts, no suspicious fiddling.

            May I ask, what you are using?

            • consteval 5 hours ago

              Ansible + Docker. The only "catch" is you still have to manage the host. It's trivial with Debian stable, and really the goal is to have as little on the host as possible and as much containerized as possible, so you can automate.

    • x0x0 3 days ago

      Or they're optimizing for not being a sysadmin, which some people can't do, and which even some of the people who can do find to be very ungratifying work. For a project that runs on this person's enthusiasm, that seems not crazy.

      It's certainly possible to spin up your own db backup scripts, monitor that, make sure it gets offsite to an s3 bucket or something, set yourself a calendar reminder to test that all once a month, etc... but if I had to write out a list of things that I enjoy doing and a list of things that I don't, that work would feature heavily on the "yeah, but no" list.

      • Vegenoid 3 days ago

        Yes, if you don't want to do that work and are happy to pay someone else to take care of it, then that is great. But if you like making free web apps, relying on a PaaS will get expensive.

      • immibis 3 days ago

        If you become a sysadmin, not only do you save $100 per month but you can also add it to your CV.

        DHH (Rails founder) thinks you should dare to connect a server to the internet: https://world.hey.com/dhh/dare-to-connect-a-server-to-the-in...

        (I already submitted this once, but given the discussion here, I think it's worth posting again, if my rate limit allows it)

        • ryandrake 3 days ago

          > The merchants of complexity thrive when they can scare you into believing that even the simplest things are too dangerous to even attempt by yourself these days.

          Awesome first sentence! I know I'm going to agree with the article just from that. This applies to so many things in life, too. We've been taught that so many things people routinely did in the past are now scary and impossible.

        • x0x0 3 days ago

          > you can also add it to your CV

          That can backfire and give an employer the idea you want to do that work though. I not only hate it, but nobody gives a damn until stuff breaks and then everyone is mad. You rarely get rewarded for stuff silently sitting there and working.

          edit: to be clear, I think doing it yourself once is great experience. And I've run small web apps on a single server, all the way from supervisord -> nginx -> passenger -> rails with pg and redis. I'd rather build features or work on marketing.

    • joshdavham 3 days ago

      > new generation of devs being raised in the cloud

      I unfortunately sorta put myself in this category where my PaaS of choice is Firebase. For this cost-splitting app however, what would you personally recommend if not Vercel? Would you recommend something like a Digital Ocean Droplet or something else? What are the best alternatives in your opinion?

      • Vegenoid 3 days ago

        Yes, I believe a Droplet or VPS (virtual private server) from some other provider would be sufficient. Digital Ocean isn't the cheapest, but it's pretty frictionless, slick, and has a lot of good tutorial articles about setting up servers.

        You'd have a Linux machine (the VPS) that would have at least 3 programs running (or it is running Docker, with these programs running inside containers):

        - Node.js

        - the database (likely MySQL or PostgreSQL)

        - Nginx or Apache

        You'd set up a DNS record pointing your domain at the VPS's IP address. When someone visits your website, their HTTP requests will be routed to port 80 or 443 on the VPS. Nginx will be listening on those ports, and forward (aka proxy) the requests to Node, which will respond back to Nginx, which will then send the response back to the user.
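
        A minimal sketch of the app side of that flow (the port number is arbitrary; Nginx terminates TLS on 443 and proxies here):

        ```ts
        // Tiny Node HTTP server that only listens on localhost. The reverse proxy
        // (Nginx/Apache/Caddy) handles TLS and forwards requests to this port.
        import { createServer } from "node:http";

        const server = createServer((req, res) => {
          res.writeHead(200, { "Content-Type": "application/json" });
          res.end(JSON.stringify({ ok: true, path: req.url }));
        });

        // Binding to 127.0.0.1 keeps the app unreachable from the internet directly;
        // only the reverse proxy on the same machine can talk to it.
        server.listen(3000, "127.0.0.1", () => {
          console.log("app listening on http://127.0.0.1:3000");
        });

        // The matching Nginx directive is roughly: proxy_pass http://127.0.0.1:3000;
        ```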

        There are of course security and availability concerns that are now your responsibility to handle and configure correctly in order to reach the same level of security and availability provided by a good PaaS. That's what you're paying the PaaS for. However, it is not too difficult to reach a level of security and availability that is more than sufficient for a small, free web app such as this one.

        • maccard 3 days ago

          I don’t think that the difference is $110/month, but surely reading that you realise there’s a lot more going on there than “point vercel at a git repo and you’re done”. I don’t know how long it would take me to install docker and configure the above, but it’s certainly a few hours. I tried vercel for the first time a few weeks ago, and I had a production ready site online with a custom domain in about 5 minutes.

          I’ve commented here before that on AWS (which I’m fairly familiar with) I could set up ECS with a load balancer and have a simple web app with rds running in about 30 minutes, and literally never have to touch the infra again.

          • TRiG_Ireland 3 days ago

            I'm an old-school PHP web developer, and my immediate thought is to go to OVH or similar and get a VPS running Ubuntu. A quick run of sudo apt install lamp-server^ and I'm ready to go.

        • wonger_ 3 days ago

          Could you continue on about security and availability? This is exactly the gentle intro I've been looking for.

          I'm guessing rate limiting, backups, and monitoring are important, but I'm not sure how to go about it.

          • mrngm 2 days ago

            I'm not entirely on the same page as the parent comment regarding "[t]hat's what you're paying a good PaaS for" in terms of security and availability. If the platform is down, having a service level agreement (SLA) is nice, but worthless because your application is also unavailable. Depending on how integrated your application is with said platform, migrating to another platform is difficult. If the platform cut corners regarding customer data separation (you know, because you can be cheaper than the competition), your users' passwords may be next on HIBP (haveibeenpwned.com).

            This is of course a rather pessimistic view of platforms. Perhaps the sweet spot, which the parent commenter is probably referring to, is something where you have more control over the actual applications running, exposed network services, etc., such as a virtual machine or even dedicated hardware. This does require more in-depth knowledge of the systems involved (a good guideline, though I'm unsure where I picked this up, is to have knowledge of one abstraction layer above and below the system you're involved in). This also means you'll need to invest a lot of time in your own platform.

            If you're looking for a gentle intro into security and availability, have a look at the OWASP Top Ten[0] that shows ten subjects on web application security with prevention measures and example attacks. A more deep dive in security concepts can be found on the Arch Linux wiki[1]; it also focuses on hardening computer systems, but for a start look at 1. Concepts, 2. Passwords, 5. Storage, 6. User setup, 11. Networks and Firewall. From 14. See Also, perhaps look into [2], not necessarily for the exact steps involved (it's from 2012), but for the overall thought process.

            As for availability in an internet-accessible service, look into offering your services from multiple, distinct providers that are geographically separate. Automate the setup of your systems and data distribution, such that you can easily add or switch providers should you need to scale up. Have at least one external service regularly monitor your publicly-accessible infrastructure. Look into fail-over setups using round robin DNS, or multiple CDNs.

            But I suppose that's just the tip of the iceberg.

            [0] https://owasp.org/Top10/ [1] https://wiki.archlinux.org/title/Security [2] https://www.debian.org/doc/manuals/securing-debian-manual/in...

            • Vegenoid 2 days ago

              > I'm not entirely on the same page as the parent comment regarding "[t]hat's what you're paying a good PaaS for" in terms of security and availability. If the platform is down, having a service level agreement (SLA) is nice, but worthless because your application is also unavailable.

              > If the platform cut corners regarding customer data separation (you know, because you can be cheaper than the competition), your users' passwords may be next on HIBP (haveibeenpwned.com).

              This all applies to running on a VPS in the cloud too. You have to own much more of the stack to avoid this than is usually realistic for one person running a free web app.

              What I mean about the security and availability being provided for you is that you don't have to worry about configuring a firewall, configuring SSH and Nginx, patching the OS, etc.

          • Vegenoid 2 days ago

            TBH there's more that goes into it than I really want to type out here. LLMs are a good resource for this kind of thing, they generally give correct advice. A quick overview:

            Security looks like:

            - Ensure SSH (the method by which you'll access the server) is secured. Here is a good article of steps to take to secure SSH on a new server (but you don't have to make your username 16 random characters like the article says): https://hiandrewquinn.github.io/til-site/posts/common-sense-...

            - Have a firewall running, which will prevent incoming network connections until you explicitly open ports on the firewall. This helps prevent lack of knowledge and/or misconfiguration of other programs on the server from burning you. The easiest firewall is ufw ("uncomplicated firewall"). Here is a DigitalOcean article that goes into more depth than you probably need at first, or ask Claude/ChatGPT some questions about ufw: https://www.digitalocean.com/community/tutorials/how-to-set-...

            - Keep the OS and programs (esp. Nginx/Apache and Node) up to date.

            Availability looks like:

            - Have a backup of important data (the database). You can set up a cron job that runs a shell script on a schedule, dumps the database to a file (e.g. with mysqldump or pg_dump), and then copies that file to your backup destination, which could be some cloud storage or another VPS (see the sketch at the end of this comment). If you can, backing up to 2 separate destinations is better than one, keeping a history of backups is good, and doing "health checks" of the backup system and the backups is good (meaning periodically check that the backup system is working as intended and that you could restore from a backup if needed).

            - Ability to respond to outages, or failure of the host (the server/VPS). This means either having another machine that can be failed over to (probably overkill if you don't have paying customers and an SLA), or being able to spin up a new server and deploy the app quickly if the server gets borked somehow and goes down. To do that you have some options: keep a clear list of instructions that you can perform manually relatively quickly (slowest and most painful), or automate the deployment process. This is what something like Ansible is for, or you can just use shell scripts. Using Docker can speed up and simplify deployment, since you're building an image that can then be deployed on a new server pretty simply. You will of course also need the backup of the data that you've hopefully been taking.

            - Rate limiting may not be necessary depending on the popularity of your site, but it can be useful or necessary and the simplest way is to put your website behind Cloudflare: https://developers.cloudflare.com/learning-paths/get-started...

            There are "better" techniques to do all of those that require more know-how, which can prevent and handle more failure scenarios faster or more gracefully, and would be used in a professional context.

  • roflmaostc 3 days ago

    Yeah, I'm confused too. Running some sort of VPS would totally do the job, no?

    • diggan 3 days ago

      I'm fairly sure you could host this on a last-gen Raspberry PI at home, if you live close to where your users are :)

      • Aachen 3 days ago

          You definitely don't need a last-gen model. Someone else did the math upthread and came to one request every 20 seconds. If you factor in burstiness, and that a particularly bad burst slows down the system a little so the next request takes even longer, etc. (ask me how I learned that lesson), it's probably good to budget for handling multiple requests per second. For this application, my understanding is you've got a handful of people in your group that you're splitting a couple of expenses with, so the data processing is small beans and it'll definitely run on a first-gen Pi if you optimise it properly, or perhaps a 2nd-3rd gen if you don't want to spend the time.

trevor-e 3 days ago

I've been going down the VPS rabbit hole lately since I have some toy projects I want to host and really don't like the unpredictable pricing model of these "pay as you go" providers like Vercel. E.g. I really love Supabase but it's hard to justify jumping straight to the $25/month plan in combination with Vercel costs.

I was surprised how extremely easy it is to get set up with Coolify on a Hetzner VPS, which has preset install options for NextJS + Supabase + Posthog + many others. And I get the standard autodeploy on commit functionality. The open-source versions are missing some features, and I don't get the slick Vercel admin interface, but for a pet project it works great. I'm also by no means a sysadmin expert, but with ChatGPT it's pretty easy to figure things out now.

hahahacorn 3 days ago

The inefficiency is bonkers but understandable. I could host this app for like ~$60/year, generously, with little to no devops work. It's painful to see the creator paying out of pocket for such a great project because the Vercel marketing introduced such massive inefficiencies to the ecosystem.

Even less when I pay for a dedicated machine running all of my hobby projects. Gratuitous Kamal 2 plug. Run your personal projects all on one machine.

  • JamesonNetworks 3 days ago

    Where would you host this for $60 a year?

    • misiek08 3 days ago

      I would use hosting with SSH access. I am based in Poland, so we have MyDevil.net. But you can also just rent a VPS for $5, though you have to take care of setting everything up.

      First thing I thought while reading was Firebase - it's interesting how much it would cost there.

    • hahahacorn 3 days ago

      A Hetzner CPX11 in Ashburn - 150 ms latencies to Europe are totally fine for this use case. With 15k groups and 162k expenses (guesstimating 30k users, email logs per expense, etc.), you're not even pushing 2 gigabytes of disk space (conservatively), nor are you doing anything computationally expensive or stressful for the DB under normal load. With decent app & DB design, like proper indexing, 2 vCPUs and 2 GB RAM is more than enough.

tempfile 3 days ago

So that's how vercel makes their money.

  • skwee357 3 days ago

    That, and a bunch of Twitter “indie hackers” who get traffic spikes that result in bills of hundreds of dollars. Seriously, just get a VPS.

  • rozap 3 days ago

    Their marketing team needs a raise.

rikafurude21 3 days ago

For reference, 100 dollars a month gets you this bare metal server on hetzner: Intel® Core™ i9-13900, 64 GB DDR5 ECC, 2 x 1.92 TB

... Should be more than enough to handle 2 requests per minute, could probably handle 100x of that.

  • ndriscoll 3 days ago

    My i5-6600k at home can handle ~15k requests per second for a toy social media app with postgresql assembling the xml to send to the client (though I've done some batching optimization and used rust for my application server to hit that). Passmark cpubenchmark suggests a 13900 should be 6-8x more capable than that.

    So it should be able to handle somewhere in the ballpark of 2,000,000x the required load, or maybe 100,000x without the application level optimization.

    (TLS reduces this by a factor of ~10 if you're doing handshakes each time. Despite what blogs claim, as far as I can tell, if your CPU doesn't have QAT, TLS is very expensive)
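
    Spelling out that envelope math (the figures are the ones quoted above; this is just the arithmetic, not a new measurement):

    ```ts
    // Back-of-the-envelope only; the benchmark numbers come from the comment above.
    const observedReqPerSec = 15_000;  // i5-6600k toy-app measurement
    const cpuMultiplier = 6;           // low end of the quoted 6-8x for an i9-13900
    const requiredReqPerSec = 2 / 60;  // ~2 requests/minute for this app

    const headroom = (observedReqPerSec * cpuMultiplier) / requiredReqPerSec;
    console.log(`~${Math.round(headroom).toLocaleString()}x the required load`); // ≈ 2,700,000x

    // A ~10x hit for doing full TLS handshakes on-CPU still leaves a few hundred
    // thousand times the needed capacity.
    ```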

    • wongarsu 3 days ago

      If you're on Hetzner you can get a load balancer with TLS termination for $5/month. It's hidden in the cloud category but fully supports dedicated servers.

      Of course doing SSL on the server itself is more secure, but if that's a performance bottleneck the load balancer can be a cost effective compromise

      • kkielhofner 3 days ago

        Yes Cloudflare and all of that but they’ll do it for free.

        Then you get to determine gains you may get from caching and other potential optimizations from one of the best eyeball connected providers in the world. Oh plus the ability to fend off the largest DDoS attacks ever seen.

        Cloudflare tunnels enable you to do all of this through an encrypted tunnel without exposing the machine/services to the internet at all. Cloudflare will still MITM all traffic but so does Hetzner (obviously). At least with the tunnel the connection is persistent so you don’t incur TLS handshaking, etc CPU overhead with each client connection.

        Bonus points - you can move hosting providers without any hassle, configure hosting provider redundancy (Hetzner + whoever), all of that stuff.

carlosjobim 3 days ago

Yet another testimony to how utterly few people are willing to pay for what they use in the abuse system called "open source". People, start charging for your work, and leave the freeloaders behind!

> A short disclaimer: I don’t need donations to make Spliit work. I am lucky enough to have a full-time job that pays me enough to live comfortably and I am happy to give some of the money I earn to the community.

And this is why open source will finally die, because being comfortably employed while still having surplus time and energy to work for free is an increasingly rare thing among the younger generations.

A better way to "give back to the community", instead of making open source software, would be to purchase software from other indie developers.

  • aniviacat 3 days ago

    > People, start charging for your work, and leave the freeloaders behind!

    We already have a profit-oriented market. And we have empirical evidence that profit-oriented markets do not like open source (for their primary products).

    > being comfortably employed while still having surplus time and energy to work for free is an increasingly rare thing among the younger generations.

    edit: removed anecdote

    The cost of living will never rise so much that the upper 50% can't easily make enough money. (Otherwise what? The other 150 million people go homeless?)

    And unless our industry sees a major shift, which I don't see happening, software engineers will continue being comfortably in the upper 50%.

    • carlosjobim 3 days ago

      > We already have a profit-oriented market. And we have empirical evidence that profit-oriented markets do not like open source (for their primary products).

      That's a given. If you open source your code, other developers will steal it and sell your software. Just like billion-dollar tech companies are the main beneficiaries today of open source that some guy made for free. Excuse me, I meant for $42 in donations.

      • immibis 3 days ago

        That's why I make all my software AGPL now.

        I haven't published any software at all recently. But if I did (anything non-trivial), it would be AGPL. Or even SSPL.

        Permissive licensing (MIT, BSD, Unlicense, public domain, etc) is a scam to make you work for companies for free - if your software is worth anything to them, that is. They told developers they should use MIT licenses so more people would use their software. That's true. They didn't ask whether that was a good thing.

        • samatman 3 days ago

          If you don't want to give away software, and it sounds like you don't, then. Don't.

          Perhaps you're under the impression that I blindly click the button for a permissive license? No. I read it first. I know what it allows. That's why I choose it.

          I think it's nice when companies make money, for the record. Pays for houses, puts food on the table, sends kids to summer camp and college. Some of them even make a lot of money. That's fine too.

          If they want to use my software in the process, more power to them. That's why I put my name next to the copyright notice, on a license which says in plain English that they can do that.

          • immibis 3 days ago

            Did you know that every dollar a company makes gets taken away from someone? It's zero-sum if you aren't close to Jerome Powell. Why assume the dollar is better in the hands of the company owner than whoever had it before?

            • samatman 2 days ago

              Every dollar a company makes is given to them by someone, in exchange for something else.

              I go to the supermarket, they don't take my money. I pull it out of my pocket and swipe it into their coffers. They have food, you see. Which I can eat. Unlike money.

              Strange how a game you describe as zero sum has built the prosperity of the modern world. I wonder if something is missing from your understanding of how that game is actually played.

            • aniviacat 2 days ago

              The economy is not a zero sum game. We (non-politicians/non-billionaires) have significantly more resources than we had 100 years ago, and we will have significantly more resources in 100 years than we do now. And open source developers are a small part of why.

              • immibis 2 days ago

                The economy is not a zero sum game because we produce more stuff. Money, however, is a zero sum game unless you are Jerome Powell. We produce more stuff because we work hard to produce it, not because venture capitalists have larger bank accounts.

                Especially in relation to open-source software, this should be obvious. Software exists because someone wrote it, not because a company owner was paid for access to it. Programmers may be paid to write software. However when comparing two worlds, one in which a programmer wrote some software for free, and another in which a programmer was paid $1 by a venture capitalist who received $5 from a customer who is now out $5, it's not at all clear that the second world is better. Especially since in the second world, the customer has to keep paying and has to contend with software full of ads that make it slow.

                • samatman a day ago

                  This is straightforwardly, nakedly, embarrassingly illiterate and wrong.

                  Money has velocity. The faster it moves around, the more there is. If you're a waiter and you get tipped the same serial-numbered twenty dollar bill five times, you've earned a hundred bucks, not twenty.

                  When economic productivity increases, there's more to buy, and more people doing and making valuable things which others want to pay them for. This increases the velocity of money, it moves around faster, so it isn't zero sum.

                  This is taught early in any course of study in economics. Since you don't know the most basic and fundamental facts about the subject, it's not surprising that your conclusions make negative sense.

                  You could spend two weeks of evenings on YouTube and never again reveal your ignorance in such a naked way. I highly recommend this. You're making perpetual-motion class arguments in a place where people know the second law of thermodynamics. Step your game up.

      • aniviacat 3 days ago

        When I write open source libraries I consider the ones benefitting to be the general public.

        Even if my libraries were used only by mega corporations (which they aren't) there would still be a benefit to the public: If companies have lower cost, they will charge lower prices, benefitting customers / the general public. (And yes, they will lower prices. Most markets are not monopolies.)

        • carlosjobim 3 days ago

          Open source never benefits the general public, because open source developers never make a product polished and user-friendly enough to be usable by the general public.

          Instead, open source mainly benefits other developers. But at the end of the chain there has to be a product that is of use to non-developers, because developing isn't for development's sake. And the person who makes that product reaps all the monetary benefits from the work that the others have made.

          If FOSS people made complete products which were end user friendly, I'd buy the argument of benefitting the general public.

          • aniviacat 3 days ago

            > developing isn't for development's sake.

            [citation needed]

            > the person who makes that product reaps all the monetary benefits from the work that the others have made

            Which means that they can offer their product for a lower price, which then benefits the general public.

            Companies being able to operate cheaper / more efficiently does benefit the general public, as long as the market isn't a monopoly. And as per my above comment, most markets are not monopolies.

            > open source developers never make a product polished and user-friendly enough to be usable by the general public

            I've been using Audacity, Gimp, Inkscape, uBlock Origin, and many others long before I knew what FOSS means. Spliit is also pretty cool ;)

          • AlienRobot 3 days ago

            That's a very interesting perspective, thanks. :-)

  • lccerina 3 days ago

    "Open source will finally die" said on a website likely running on some linux-based server, with some JS frontend, some open source/commercially licensed DB, and communicating with protocols regulated by a non-profit organization. Also in the future maybe reading this page from a device using a RISC-V processor. Sure.

    • carlosjobim 3 days ago

      I hope it brings a tear of joy to the corner of the eyes of those selfless FOSS programmers that they've done their share to help Y Combinator be worth $600 000 000 000. That money is surely better spent on people who deserve it better.

  • jampekka 3 days ago

    If anything is dying, it's proprietary software. Which is great for all of us, because open source is a vastly more efficient system.

  • renewiltord 3 days ago

    Are you the guy with the https://osspledge.com/ billboards around San Francisco? Haha. They’re funny. I enjoyed the art. If it’s actually you, I’d be curious who the illustrator is or if you used generative AI.

    • carlosjobim 3 days ago

      I'm not that guy. I'm against open source and freeloading. Why would multi million dollar CEOs give anything to FOSS programmers when those programmers are already developing their crucial infrastructure for free?

      Work for free for huge companies so they can make billion dollar profits, while at the same time demanding unionization. Refuse to sell your work to consumers who are willing to pay, yet happily provide free tech support to freeloaders who wouldn't give you a cent. What's the logic?

      • quesera 3 days ago

        > What's the logic?

        Some of us like making things, and are happy to share our excess production with the world.

        Like any other good work, it does not require acknowledgement or reciprocation, and the benefits are not part of a zero-sum economy where the giver is harmed by any action of the receiver.

        You're on record as being vehemently anti-OSS. Why does it offend you so much that other people prioritize forms of compensation differently than you do?

        • carlosjobim 3 days ago

          > You're on record as being vehemently anti-OSS.

          That's true, I'm the chief anti-OSS crusader on HN and online. I'll give it a rest after this thread, to breathe and give all a chance to recover strength.

          > Some of us like making things, and are happy to share our excess production with the world.

          Selling those things is still sharing with the world. Most paid software is cheap to purchase.

          If FOSS were an ecosystem where end users had the common(?) courtesy to donate just a little bit to at least one of the projects they use, then I'd have nothing to say. But whenever I use any FOSS code and donate, I usually find myself alone with two or three other people who have donated.

          Unlike most other professions, programming is something most people start as a hobby at a young age. So maybe they don't value their own hard work and effort, even though they've matured past the young hobbyist phase? And then they get misled by open source activists into labouring for free.

          A young artist who publishes their songs online for free in the hopes of becoming famous, will still retain copyright on those works. No record label can come around and start selling those songs without even letting the artist know. Much less stealing and selling the songs of a well-established artist if he/she decides to release music for free.

          I just don't like free loading, and I don't like enablers either.

          • quesera 3 days ago

            Selling a thing comes with greater obligations than giving it away.

            I am unwilling to accept those obligations, in most cases.

            I am, however, perfectly happy to share some of the work that I do back into an ecosystem which I have benefited from. I also volunteer for organizations I care about, and I pick up litter in public parks. :)

            I do not believe that I am being exploited. The Internet is and always has been built on open source -- and as bad as the Internet is, it would be worse if it didn't exist or if it was a proprietary network.

            I think you're taking a real problem (funding of valuable work) and exploding it into an argument against open source, which just doesn't follow for me.

            I do 100% support finding a way to monetarily compensate people who do valuable work and contribute it to the world. Theoretically. Practically, it gets messy real quickly and I don't see a good broad solution.

            • carlosjobim 2 days ago

              > I am unwilling to accept those obligations, in most cases.

              This is the argument I keep hearing every time a discussion about open source gets to this point, and I think it is wrong. Because in truth there is no big commitment if you sell some software for $10 or $20. In the worst case, if it doesn't work for the customer, you give a refund. When you go out to buy a sandwich or a couple of beers for $10, do you think they are worried about any commitment? No, it's "Here you go, enjoy!". You won't have any more obligations than you are willing to take on, just like with open source.

              > I also volunteer for organizations I care about, and I pick up litter in public parks.

              Would you pick up litter that a mega-corp is dumping in the woods, while they keep dumping more and laughing at you?

              • quesera 2 days ago

                > there is no big commitment if you sell some software for $10 or $20

                This ends up not being true. It creates headaches and contracts both explicit and implied. It creates legal requirements and a for-consideration nexus that is far too complicated to contemplate at this level. Also moral obligation, tax liability, _customers_ to serve. No thank you.

                Money changes everything. I don't need that overhead in my life. I've done it before (accepting donations only), and I won't do it again.

                > Would you pick up litter that a mega-corp is dumping

                If a megacorp was diminishing my enjoyment of the park by their litter, then yes sure, if it was of a magnitude that I could solve myself.

                I'd also encourage the application of whatever legal and financial penalties might be available -- just like conflicting use of open source. If a license is violated, then pursue for damages. If the license allows the use in question (e.g. BSD, MIT), then that's a decision made by the licensor.

                • carlosjobim a day ago

                  What is the big headache? I'm curious to know, because I can't see it. I started my first business at a very young age, and had a lot of people around me in my life who tore up heaven and earth, really went ballistic, because in their world you work for somebody else - preferably the government - and receive a salary and that's it. To try to start a small business was one of the worst sins, and surely the IRS and competitors and employees would sue me out of existence just for having a business.

                  I still don't know what it was (is) with these people. Maybe a religious worship of the government, and a fear of the IRS greater than the fear of God? Thinking that if you make a slight mistake, you'll be imprisoned for life. That was the impression they gave. And when developers talk about the big headache of charging for a piece of software, I can't help but think back to that.

                  The truth is – and you know it also – that if you sell software for $10, $20 or even $100, there is no contract nor much headache. You can give the money back to a customer who isn't satisfied and that's it. You can have your customer service as minimal as you prefer. You can also legally earn quite a lot of money on it as a side business before having to think about taxes or incorporation. And when that day comes, well congratulations, now you're supporting yourself as an independent developer!

                  The headache is only in your head.

  • singpolyma3 3 days ago

    To be fair, they're not heavily soliciting donations, and they even actively say that they don't need them. So it's not surprising that people don't prioritise giving anything. Many users probably haven't even thought of it.

anticorporate 3 days ago

Not to be "that guy" but...

To clarify some confusion in this thread, it might be helpful to distinguish "open source" (the application) from "free" (this hosted instance of the application). Munging the two together might lead to some incorrect conclusions. Running a "free" application for others is going to have certain costs. The cost of running an "open source" application is going to depend entirely on the resources that application consumes, which, if run privately, might be a lot less.