tjoff a day ago

Industry will do absolutely anything, except making lightweight sites.

We had instant internet in the late 90s, if you were lucky enough to have a fast connection. The pages were small and there was barely any JavaScript. You can still find such fast-loading, lightweight pages today and the experience is almost surreal.

It feels like the page has completely loaded before you've even released the mouse button.

If at least the user experience were better it might have been tolerable, but we didn't get that either.

  • OtomotO a day ago

    I am currently de-javascripting a React app in a project I am working on.

    It's a blast. It's faster and way more resilient. No more state desync between frontend and backend.

    I admit there is a minimum of javascript (currently a few hundred lines) for convenience.

    I'll add a bit more to preserve the illusion that this is still a SPA.

    I'll kill about 40k lines of React that way and about 20k lines of Kotlin.

    I'll have to rewrite about 30k lines of backend code though.

    Still, I love it.

    • pushupentry1219 a day ago

      Honestly I used to be on the strict noscript JavaScript hate train.

      But if your site works fast. Loads fast. With _a little_ JS that actually improves the functionality and usability? I think that's completely fine. Minimal JS for the win.

      • OtomotO a day ago

        Absolutely.

        I want the basic functionality to work without JS.

        But we have a working application, and users don't hate it and are used to it.

        We rely on modals heavily. And for that I added (custom) JS. It's way simpler than the alternatives, and some things we do are not even possible without JS/WASM (via JS APIs to manipulate the DOM) today.

        I am pragmatic.

        But since you mention it: personally I also use NoScript a lot, and if a site refuses to load without JS it's a hard sell to me unless I already know it.
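
        For the modal case, a minimal sketch on top of the native <dialog> element (the data attributes are made up for illustration; real modal code will do more):

          // Open/close <dialog> modals with a few lines of delegated JS.
          // Assumes markup like:
          //   <button data-opens="confirm">Delete</button>
          //   <dialog id="confirm"> ... <button data-closes>Cancel</button></dialog>
          document.addEventListener('click', (e) => {
            const opener = e.target.closest('[data-opens]');
            if (opener) document.getElementById(opener.dataset.opens).showModal();
            if (e.target.closest('[data-closes]')) e.target.closest('dialog').close();
          });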

      • selimnairb a day ago

        Building a new app at work using Web Components and WebSockets for dynamism. I’m using Bulma for CSS, which is still about 300KiB. However, the site loads instantly. I’m not using a Javascript framework or bundler or any of that (not even npm!), just vanilla Javascript. It’s a dream to program and I love not having the complexity of a framework taking up space in my brain.
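
        A minimal sketch of that pattern (the element name and endpoint are made up):

          // A vanilla custom element that renders whatever the server pushes
          // over a WebSocket; no framework, no bundler.
          class LiveStatus extends HTMLElement {
            connectedCallback() {
              this.textContent = 'connecting...';
              this.socket = new WebSocket(`wss://${location.host}/status`);
              this.socket.onmessage = (event) => { this.textContent = event.data; };
            }
            disconnectedCallback() {
              this.socket?.close();
            }
          }
          customElements.define('live-status', LiveStatus);
          // Used in plain HTML as <live-status></live-status>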

      • starspangled a day ago

        What do you use that bit of good JavaScript for? And what is the excessive stuff that causes slowness and bloat? I'm not a web programmer, just curious.

        • _heimdall 20 hours ago

          My rule of thumb is to render HTML where the state actually lives.

          In a huge majority of cases I come across that is on the server. Some things really are client-side only though, think temporary state responding to user interactions.

          Either way I also try really hard to make sure the UI is at least functional without JS. There are times that isn't possible, but those are pretty rare in my experience.
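
          A sketch of that kind of enhancement (the ids and endpoint are made up): the form posts normally without JS, and JS only upgrades it to avoid a full page reload.

            // Works without JS: <form id="comment-form" method="post" action="/comments">
            // With JS, intercept the submit and swap in the server-rendered fragment.
            const form = document.getElementById('comment-form');
            form?.addEventListener('submit', async (e) => {
              e.preventDefault();
              const response = await fetch(form.action, { method: 'POST', body: new FormData(form) });
              form.outerHTML = await response.text(); // assumes the server returns an HTML fragment
            });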

        • graemep 20 hours ago

          Two examples that come up a lot for me:

          1. Filtering a drop-down list by typing rather than scrolling through lots of options to pick one

          2. Rearranging items with drag and drop

          The excessive stuff is requiring a whole lot of scripts and resources to load before you display a simple page of information.
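
          For the first case, a minimal no-library sketch (the ids are made up; a native <datalist> can also get you part of the way):

            // Type-to-filter over a long <select>.
            // Assumes <input id="filter"> next to <select id="options" size="10">.
            const input = document.getElementById('filter');
            const options = [...document.getElementById('options').options];
            input.addEventListener('input', () => {
              const q = input.value.toLowerCase();
              for (const option of options) {
                option.hidden = !option.text.toLowerCase().includes(q);
              }
            });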

          • LtWorf 17 hours ago

            Doesn't the combo box input field already do this?

    • NetOpWibby a day ago

      Nature is healing. Love to see it.

  • kodama-lens a day ago

    When I was finishing university I bought into the framework-based web-development hype. I thought that "enterprise" web development had to be done this way. So I got some experience by migrating my homepage to a static Vue.js version. Binding view and state by passing the variable's name as a string felt off, extending the build environment seemed unnecessarily complex, and everything was slow and had to be done a certain way. But since everyone was using it, I thought it must be right.

    I got over this view and just finished the new version of my page. Raw HTML with some static-site-generator templating. The HTML size went down 90%, the JS usage went down 97%, and the build time is now 2s instead of 20s. The user experience is better and I get 30% more hits since the new version.

    The web could be so nice if we used less of it.

    • mmcnl 13 hours ago

      Choose the right tool for the job. Every engineering decision is a trade-off. No one blames the hammer when it's used to insert a screw into a wall either.

      SPA frameworks like Vue, React and Angular are ideal for web apps. Web apps and web sites are very different. For web apps, initial page load doesn't matter a lot and business requirements are often complex. For websites it's exactly the opposite. So if all you need is a static website with little to no interactivity, why did you choose a framework?

  • pjmlp a day ago

    Lightweight sites don't make for shiny CVs.

    Even on the backend, the golden goose now is selling microservices, via headless SaaS products connected through APIs; that is certainly going to perform.

    https://macharchitecture.com/

    However, if those are the shovels people are going to buy, then those are the ones we have to stockpile; such is the IT world.

    • Zanfa a day ago

      My feeling is that the microservice fad has passed… for now. But I’m sure it’ll be resurrected in a few years with a different name.

      • pjmlp a day ago

        Nah, it is only really taking off now in enterprise consulting, with products going SaaS. What used to be extension points via libraries is now only possible via webhooks and API calls, which naturally have to run somewhere: either microservices or serverless.

      • _heimdall 20 hours ago

        I've come across quite a few job postings in the last couple of weeks looking for senior engineers with experience migrating monoliths to microservices. Not sure if the fad is still here or if those companies are just slow to get on board.

        There are still good uses for microservices. Specific services can gain a lot from the approach, though the list of those types of services/apps is pretty short in my experience.

      • greenchair 12 hours ago

        Yes it has for early adopters, but there are still lots of dinosaurs out there just now trying it out.

  • wlll 20 hours ago

    My personal projects are all server rendered HTML. My blog (a statically rendered Hugo site) has no JS at all, my project (Rails and server rendered HTML) has minimal JS that adds some nice to have stuff but nothing else (it works with no JS). I know they're my sites, but the experience is just so much better than most of the rest of the web. We've lost so much.

    • mmcnl 13 hours ago

      I have two websites written in JS that render entirely server-side. They are blazing fast, minimal in size and reach 100/100 scores on all criteria with Lighthouse. On top of that they're highly interactive, no build step required to publish a new article.

  • Flex247A a day ago

    Example of an almost instant webpage today: https://www.mcmaster.com/

    • loufe a day ago

      And users clearly appreciate it. I was going over some bolt types with a design guy at my workplace yesterday for a project, and his first instinct was to pull up the McMaster-Carr site to see what was possible. I don't know if we even order from them, since we go through purchasing folks, but the site is just brilliantly simple and elegant.

    • 8n4vidtmkvmk 18 hours ago

      Someone did an analysis of that site on TikTok or YouTube. It's using some tricks to speed things up, like preloading the HTML for the next page on hover and then replacing the shell of the page on click. So pre-rendering and prefetching. Pretty simple to do and effective, apparently.
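
      Roughly this kind of thing, as a sketch of the idea (not McMaster-Carr's actual code):

        // Prefetch same-origin pages on hover, then swap the body on click.
        const cache = new Map();
        document.addEventListener('mouseover', (e) => {
          const link = e.target.closest('a[href^="/"]');
          if (link && !cache.has(link.href)) {
            cache.set(link.href, fetch(link.href).then((r) => r.text()));
          }
        });
        document.addEventListener('click', async (e) => {
          const link = e.target.closest('a[href^="/"]');
          if (!link || !cache.has(link.href)) return;
          e.preventDefault();
          const doc = new DOMParser().parseFromString(await cache.get(link.href), 'text/html');
          document.body.replaceWith(doc.body); // crude; real sites swap only the page shell's contents
          document.title = doc.title;
          history.pushState({}, '', link.href);
        });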

  • nbittich 19 hours ago

    Tried that on my website (bittich.be), it's only 20ish KB gzipped. I could have done better if I didn't use Tailwind CSS :(

cletus a day ago

At Google, I worked on a pure JS speed test. At the time, Ookla was still Flash-based so it wouldn't work on Chromebooks. That was a problem for installers needing to verify an installation. I learned a lot about how TCP (I realize QUIC is UDP) responds to various factors.

I look at this article and consider the result pretty much as expected. Why? Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace. TCP has flow-control and sequencing. QUIC makes you manage that yourself (sort of).

Now there can be good reasons to do that. TCP congestion control is famously out-of-date with modern connection speeds, leading to newer algorithms like BBR [1], but it comes at a cost.

But here's my biggest takeaway from all that and it's something so rarely accounted for in network testing, testing Web applications and so on: latency.

Anyone who lives in Asia or Australia should relate to this. 100ms RTT latency can be devastating. It can take something that is completely responsive and make it utterly unusable. It reduces the bandwidth a connection can support (because of the windows) and makes it less responsive to errors and congestion control efforts (both up and down).

I would strongly urge anyone testing a network or Web application to run tests where they randomly add 100ms to the latency [2].
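
That kind of thing is usually done at the packet level (tc and friends, as in [2]). As a cruder, purely in-browser approximation, a service worker can delay every response (a sketch; the /sw.js path and its registration are assumptions, and it only delays responses rather than each round trip):

  // sw.js: add ~100-150ms of artificial delay to every fetch the page makes.
  // Register from the page with navigator.serviceWorker.register('/sw.js').
  self.addEventListener('fetch', (event) => {
    event.respondWith((async () => {
      const response = await fetch(event.request);
      await new Promise((resolve) => setTimeout(resolve, 100 + Math.random() * 50));
      return response;
    })());
  });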

My point in bringing this up is that the overhead of QUIC may not practically matter, because your effective bandwidth over a single TCP connection (or QUIC stream) may be MUCH lower than your actual raw bandwidth. Put another way, 45% extra data may still be a win, because managing your own congestion control might give you higher effective speed between two parties.

[1]: https://atoonk.medium.com/tcp-bbr-exploring-tcp-congestion-c...

[2]: https://bencane.com/simulating-network-latency-for-testing-i...

  • klabb3 a day ago

    I did a bunch of real world testing of my file transfer app[1]. Went in with the expectation that Quic would be amazing. Came out frustrated for many reasons and switched back to TCP. It’s obvious in hindsight, but with TCP you say “hey kernel send this giant buffer please” whereas UDP is packet switched! So even pushing zeroes has a massive CPU cost on most OSs and consumer hardware, from all the mode switches. Yes, there are ways around it but no they’re not easy nor ready in my experience. Plus it limits your choice of languages/libraries/platforms.
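
    To illustrate the general point (not the app's actual code), a sketch in Node with made-up ports; some QUIC stacks batch sends with sendmmsg/GSO where available:

      const net = require('net');
      const dgram = require('dgram');

      const payload = Buffer.alloc(1 << 20); // 1 MiB of zeroes

      // TCP: hand the whole buffer to the kernel in one call; it segments
      // and paces the bytes with no further syscalls from us.
      const tcp = net.createConnection(9000, '127.0.0.1', () => {
        tcp.end(payload);
      });

      // UDP: we slice into ~MTU-sized datagrams ourselves,
      // one send (and user/kernel transition) per packet.
      const udp = dgram.createSocket('udp4');
      for (let off = 0; off < payload.length; off += 1200) {
        udp.send(payload.subarray(off, off + 1200), 9001, '127.0.0.1');
      }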

    (Fun bonus story: I noticed significant drops in throughput when using battery on a MacBook. Something to do with the efficiency cores I assume.)

    Secondly, quic does congestion control poorly (I was using quic-go so mileage may vary). No tuning really helped, and TCP streams would take more bandwidth if both were present.

    Third, the APIs are weird, man. QUIC itself has multiple streams, which makes it not a drop-in replacement for TCP. However, the idea is to have HTTP/3 be drop-in replaceable at a higher level (which I can't speak to, because I didn't go that route). But worth keeping in mind if you're working at the stream level.

    In conclusion I came out pretty much defeated but also with a newfound respect for all the optimizations and resilience of our old friend tcp. It’s really an amazing piece of tech. And it’s just there, for free, always provided by the OS. Even some of the main issues with tcp are not design faults but conservative/legacy defaults (buffer limits on Linux, Nagle, etc). I really just wish we could improve it instead of reinventing the wheel..

    [1]: https://payload.app/

    • eptcyka a day ago

      One does not need to, and should not, send one packet per syscall.

      • jacobgorm a day ago

        On platforms like macOS that don’t have UDP packet pacing you more or less have to.

      • tomohawk a day ago

        On Linux there is sendmmsg, which can send up to 1024 packets per call, but that is a far cry from a single syscall to send a 1GB file. With GSO it is possible to send even more datagrams per call, but the absolute limit is 64KB * 1024 per syscall, and it is fiddly to pack datagrams so that this works correctly.

        You might think you can send datagrams of up to 64KB, but due to limitations in how IP fragment reassembly works, you really must do your best to not allow IP fragmentation to occur, so 1472 bytes is the largest in most circumstances.

        • Veserv 19 hours ago

          Why does 1 syscall per 1 GB versus 1 syscall per 1 MB have any meaningful performance cost?

          syscall overhead is only on the order of 100-1000 ns. Even at a blistering per core memory bandwidth of 100 GB/s, just the single copy fundamentally needed to serialize 1 MB into network packets costs 10,000 ns.

          The ~1,000 syscalls needed to transmit a 1 GB file would incur excess overhead of 1 ms versus 1 syscall per 1 GB.

          That is at most a 10% overhead if the only thing your system call needs to do is copy the data. As in it takes 10,000 ns total to transmit 1,000 packets meaning you get 10 ns per packet to do all of your protocol segmentation and processing.

          The benchmarks in the paper show that the total protocol execution time for a 1 GB file using TCP is 4 seconds. The syscall overhead for issuing 1,000 excess syscalls should thus be ~1/4000 or about 0.025% which is totally irrelevant.

          The difference between the 4 second TCP number and the 8 second QUIC number can not be meaningfully traced back to excess syscalls if they were actually issuing max size sendmmsg calls. Hell, even if they did one syscall per packet that would still only account for a mere 1 second of the 4 second difference. It would be a stupid implementation for sure to have such unforced overhead, but even that would not be the actual cause of the performance discrepancy between TCP and QUIC in the produced benchmarks.

      • intelVISA 21 hours ago

        Anyone pushing packets seriously doesn't even use syscalls...

    • astrange a day ago

      > (Fun bonus story: I noticed significant drops in throughput when using battery on a MacBook. Something to do with the efficiency cores I assume.)

      That sounds like the thread priority/QoS was incorrect, but it could be WiFi or something.

  • skissane a day ago

    > Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace

    That’s not an inherent property of the QUIC protocol, it is just an implementation decision - one that was very necessary for QUIC to get off the ground, but now it exists, maybe it should be revisited? There is no technical obstacle to implementing QUIC in the kernel, and if the performance benefits are significant, almost surely someone is going to do it sooner or later.

    • conradev a day ago

      Looks like it’s being worked on: https://lwn.net/Articles/989623/

      • throawayonthe 17 hours ago

        It also looks like current QUIC performance issues are a consideration; they're tested in section 4:

        > The performance gap between QUIC and kTLS may be attributed to:

          - The absence of Generic Segmentation Offload (GSO) for QUIC.
          - An additional data copy on the transmission (TX) path.
          - Extra encryption required for header protection in QUIC.
          - A longer header length for the stream data in QUIC.
    • lttlrck a day ago

      For Linux that's true. But Microsoft never added SCTP to Windows; not being beholden to Microsoft and older OS must have been part of the calculus?

      • skissane a day ago

        > But Microsoft never added SCTP to Windows

        Windows already has an in-kernel QUIC implementation (msquic.sys), used for SMB/CIFS and in-kernel HTTP. I don’t think it is accessible from user-space - I believe user-space code uses a separate copy of the same QUIC stack that runs in user-space (msquic.dll), but there is no reason in-principle why Microsoft couldn’t expose the kernel-mode implementation to user space

      • astrange a day ago

        No one ever uses SCTP. It's pretty unclear to me why any OSes do include it; free OSes seem to like junk drawers of network protocols even though they add to the security surface in kernel land.

        • j1elo 21 hours ago

          SCTP is exactly how you establish a data communication link with the very modern WebRTC protocol stack (and is rebranded to "WebRTC Data Channels"). Granted, it is SCTP-over-UDP. But still.

          So yes, SCTP is under the covers getting a lot more use than it seems, still today. However all WebRTC implementations usually bring their own userspace libraries to implement SCTP, so they don't depend on the one from the OS.

        • supriyo-biswas a day ago

          The telecom sector uses SCTP in lots of places.

        • kelnos a day ago

          Does anyone even build SCTP support directly into the kernel? Looks like Debian builds it as a module, which I'm sure I never have and never will load. Security risk seems pretty minimal there.

          (And if someone can somehow coerce me into loading it, I have bigger problems.)

          • jeroenhd a day ago

            Linux and FreeBSD have had it for ages. Anything industrial too. Solaris, QNX, Cisco IOS.

            SCTP is essential for certain older telco protocols, and it was also added to certain protocols developed for LTE. End users probably don't use it much, but the hardware their connections are going through will speak SCTP at some level.

          • rjsw 21 hours ago

            I added it to NetBSD and build it into my kernels, it isn't enabled by default though.

            Am part way through adding NAT support for it to the firewall.

        • lstodd a day ago

          4g/LTE runs on it. So you use it too, via your phone.

          • astrange a day ago

            Huh, didn't know that. But iOS doesn't support it, so it's not needed on the AP side even for wifi calling.

        • spookie a day ago

          And most of those protocols can be disabled under sysctl.conf.

  • bdd8f1df777b a day ago

    As a Chinese whose latency to servers outside China often exceeds 300ms, I'm a staunch supporter of QUIC. The difference is night and day.

  • pests a day ago

    The Network tab in the Chrome console allows you to degrade your connection. There are presets for Slow/Fast 4G and 3G, or you can make a custom preset where you specify download and upload speeds, latency in ms, a packet loss percentage, a packet queue length, and whether to enable packet reordering.

    • lelandfe a day ago

      There's also an old macOS preference pane called Network Link Conditioner that makes the connections more realistic: https://nshipster.com/network-link-conditioner/

      IIRC, Chrome's network simulation just applies a delay after a connection is established

      • mh- a day ago

        I don't remember the details offhand, but yes - unless Chrome's network simulation has been rewritten in the last few years, it doesn't do a good job of approximating real world network conditions.

        It's a lot better than nothing, and doing it realistically would be a lot more work than what they've done, so I say this with all due respect to those who worked on it.

    • youngtaff a day ago

      Chrome’s network emulation is a pretty poor simulation of the real world… it throttles on a per request basis so can’t simulate congestion due to multiple requests in flight at the same time

      Really need something like ipfw, dummynet, tc etc to do it at the packet level

  • attentive a day ago

    > I look at this article and consider the result pretty much as expected. Why? Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace. TCP has flow-control and sequencing. QUIC makes you manage that yourself (sort of).

    This implies that user space is slow. Yet some (most?) of the fastest high-performance TCP/IP stacks are implemented in user space.

    • formerly_proven a day ago

      That's only true if the entire stack is in user mode and talking directly to the NIC, with no kernel involvement beyond setup. This isn't the case with QUIC; it uses the normal sockets API to send/recv UDP.

    • WesolyKubeczek a day ago

      You have to jump contexts for every datagram, and you cannot offload checksumming to the network hardware.

  • Tade0 a day ago

    I've been tasked with improving a system where a lot of the events relied on timing to be just right, so now I routinely click around the app with a 900ms delay, as that's the most that I can get away with without having the hot-reloading system complain.

    Plenty of assumptions break down in such an environment and part of my work is to ensure that the user always knows that the app is really doing something and not just being unresponsive.

  • pzmarzly a day ago

    > I look at this article and consider the result pretty much as expected. Why? Because it pushes the flow control out of the kernel (and possibly network adapters) into userspace. TCP has flow-control and sequencing. QUIC makes you manage that yourself (sort of).

    I truly hope the QUIC in Linux Kernel project [0] succeeds. I'm not looking forward to linking big HTTP/3 libraries to all applications.

    [0] https://github.com/lxin/quic

  • reshlo a day ago

    > Anyone who lives in Asia or Australia should relate to this. 100ms RTT latency can be devastating.

    When I used to (try to) play online games in NZ a few years ago, RTT to US West servers sometimes exceeded 200ms.

    • albertopv a day ago

      I would be surprised if online games used TCP. Anyway, physics is still there, and light speed is fast, but not that fast. In 10ms it travels about 3000km; NZ to the US west coast is about 11000km, so less than 60ms is impossible. Cables are probably much longer, the speed of light is lower in a medium, add network device latency, and 200ms from NZ to the USA is not that bad.

      • reshlo 3 hours ago

        The total length of the relevant sections of the Southern Cross Cable is 12,135km, as it goes via Hawaii.

        The main reason I made my original comment was to point out that the real numbers are more than double what the other commenter called “devastating” latency.

        https://en.wikipedia.org/wiki/Southern_Cross_Cable

      • Hikikomori a day ago

        Speed of light in fiber is about 200 000km/s. Most of the latency is because of distance, modern routers have a forwarding latency of tens of microseconds, some switches can start sending out a packet before fully receiving it.

    • indrora a day ago

      When I was younger, I played a lot of cs1.6 and hldm. Living in rural New Mexico, my ping times were often 150-250ms.

      DSL kills.

      • somat a day ago

        I used to play NetQuake (not QuakeWorld) at up to 800 ms lag; past that was too much for even young, stupid me.

        For those who don't know the difference: NetQuake was the original strict client-server version of Quake. You hit the forward key, it sends that to the server, and the server then sends back where you moved. QuakeWorld was the client-side prediction enhancement that came later: you hit forward, the client moves you forward and sends it to the server at the same time, and if there are differences they get reconciled later.

        For the most part client-side prediction feels better to play. However, when there are network problems or large amounts of lag, a lot of artifacts start to show up: rubberbanding, jumping around, hits that don't register. Pure client-server feels worse, everything gets sluggish and mushy, but movement is a little more predictable and logical and can sort of be anticipated.

        I have not played quake in 20 years but one thing I remember is at past 800ms of lag the lava felt magnetic, it would just suck you in, every time.

  • ec109685 a day ago

    For reasonably long downloads (so it has a chance to calibrate), why don't congestion algorithms increase the number of in-flight packets to a number high enough that bandwidth is fully utilized even over high-latency connections?

    It seems like it should never be the case that two parallel downloads perform better than a single one to the same host.

    • dan-robertson a day ago

      There are two places a packet can be ‘in-flight’. One is light travelling down cables (or the electrical equivalent) or in memory being processed by some hardware like a switch, and the other is sat in a buffer in some networking appliance because the downstream connection is busy (eg sending packets that are further up the queue, at a slower rate than they arrive). If you just increase bandwidth it is easy to get lots of in-flight packets in the second state which increases latency (admittedly that doesn’t matter so much for long downloads) and the chance of packet loss from overly full buffers.

      CUBIC tries to increase bandwidth until it hits packet loss, then cuts bandwidth (to drain buffers a bit) and ramps up and hangs around close to the rate that led to loss, before it tries sending at a higher rate and filling up buffers again. Cubic is very sensitive to packet loss, which makes things particularly difficult on very high bandwidth links with moderate latency as you need very low rates of (non-congestion-related) loss to get that bandwidth.

      BBR tries to do the thing you describe while also modelling buffers and trying to keep them empty. It goes through a cycle of sending at the estimated bandwidth, sending at a lower rate to see if buffers got full, and sending at a higher rate to see if that’s possible, and the second step can be somewhat harmful if you don’t need the advantages of BBR.

      I think the main thing that tends to prevent the thing you talk about is flow control rather than congestion control. In particular, the sender needs a sufficiently large send buffer to store all unacked data (which can be a lot due to various kinds of ack-delaying) in case it needs to resend packets, and if you need to resend some then your send buffer would need to be twice as large to keep going. On the receive side, you need big enough buffers to be able to fill up those buffers from the network while waiting for an earlier packet to be retransmitted.

      On a high-latency fast connection, those buffers need to be big to get full bandwidth, and that requires (a) growing a lot, which can take a lot of round-trips, and (b) being allowed by the operating system to grow big enough.
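
      For a sense of scale, the required window is roughly the bandwidth-delay product; a quick back-of-the-envelope sketch:

        // Bandwidth-delay product: roughly how many bytes must be in flight
        // (and buffered at the sender) to keep a link full.
        const bdpBytes = (bitsPerSecond, rttMs) => (bitsPerSecond / 8) * (rttMs / 1000);

        console.log(bdpBytes(1e9, 100)); // 1 Gbit/s at 100 ms RTT: 12,500,000 bytes (~12.5 MB) in flight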

    • toast0 a day ago

      I've run a big webserver that served decent-sized APK and other app downloads (and a bunch of small files and whatnot). I had to set the maximum outgoing window to keep the overall memory within limits.

      IIRC, servers had 64GB of RAM and sendbufs were capped at 2MB. I was also dealing with a kernel deficiency that would leave the sendbuf allocated if the client disappeared in LAST_ACK. (This stems from a deficiency in the state description in the 1981 RFC, written before my birth.)

      • dan-robertson a day ago

        I wonder if there’s some way to reduce this server-side memory requirement. I thought that was part of the point of sendfile but I might be mistaken. Unfortunately sendfile isn’t so suitable nowadays because of tls. But maybe if you could do tls offload and do sendfile then an OS could be capable of needing less memory for sendbufs.

    • gmueckl a day ago

      Larger windows can reduce the maximum number of simultaneous connections on the sender side.

    • Veserv a day ago

      You can in theory. You just need an accurate model of your available bandwidth and enough buffering/storage to avoid stalls while you wait for acknowledgement. It is, frankly, not even that hard to do it right. But in practice many implementations are terrible, so good luck.

  • superjan a day ago

    As an alternative to simulating latency: how about using a VPN service to test your website via Australia? I suppose that when it's easier to do, it is more likely that people will actually run this test.

    • sokoloff a day ago

      That’s going to give you double (plus a bit) the latency your users in Australia will experience.

      • codetrotter a day ago

        Rent a VPS or physical server in Australia. Then you will have approx the same latency accessing that dev server, that the Australians have reaching servers in your country.

  • api a day ago

    A major problem with TCP is that the limitations of the kernel network stack and sometimes port allocation place absurd artificial limits on the number of active connections. A modern big server should be able to have tens of millions of open TCP connections at least, but to do that well you have to do hacks like running a bunch of pointless VMs.

    • toast0 a day ago

      > A modern big server should be able to have tens of millions of open TCP connections at least, but to do that well you have to do hacks like running a bunch of pointless VMs.

      Inbound connections? You don't need to do anything other than make sure your fd limit is high, and maybe avoid being IPv4-only with too many users behind the same CGNAT.

      Outbound connections is harder, but hopefully you don't need millions of connections to the same destination, or if you do, hopefully they support ipv6.

      When I ran millions of connections through HAProxy (bare TCP proxy, just some peeking to determine the upstream), I had to do a bunch of work to make it scale, but not because of port limits.

jrpelkonen 2 days ago

Curl creator/maintainer Daniel Stenberg blogged about HTTP/3 in curl a few months ago: https://daniel.haxx.se/blog/2024/06/10/http-3-in-curl-mid-20...

One of the things he highlighted was the higher CPU utilization of HTTP/3, to the point where CPU can limit throughput.

I wonder how much of this is due to the immaturity of the implementations, and how much is inherent to the way QUIC was designed?

  • dan-robertson a day ago

    Two recommendations are for improving receiver-side implementations – optimising them and making them multithreaded. Those suggest some immaturity of the implementations. A third recommendation is UDP GRO, which means modifying kernels and ideally NIC hardware to group received UDP packets together in a way that reduces per-packet work (you do lots of per-group work instead of per-packet work). This already exists in TCP and there are similar things on the send side (eg TSO, GSO in Linux), and feels a bit like immaturity but maybe harder to remedy considering the potential lack of hardware capabilities. The abstract talks about the cost of how acks work in QUIC but I didn’t look into that claim.

    Another feature you see for modern tcp-based servers is offloading tls to the hardware. I think this matters more for servers that may have many concurrent tcp streams to send. On Linux you can get this either with userspace networking or by doing ‘kernel tls’ which will offload to hardware if possible. That feature also exists for some funny stuff in Linux about breaking down a tcp stream into ‘messages’ which can be sent to different threads, though I don’t know if it allows eagerly passing some later messages when earlier packets were lost.

  • cj a day ago

    I’ve always been under the impression that QUIC was designed for connections that aren’t guaranteed to be stable or fast. Like mobile networks.

    I never got the impression that it was intended to make all connections faster.

    If viewed from that perspective, the tradeoffs make sense. Although I’m no expert and encourage someone with more knowledge to correct me.

    • dan-robertson a day ago

      I think that’s a pretty good impression. Lots of features for those cases:

      - better behaviour under packet loss (you don’t need to read byte n before you can see byte n+1 like in tcp)

      - better behaviour under client ip changes (which happen when switching between cellular data and wifi)

      - moving various tricks for getting good latency and throughput in the real world into user space (things like pacing, bbr) and not leaving enough unencrypted information in packets for middleware boxes to get too funky

    • fulafel a day ago

      That's how the internet works, there's no guaranteed delivery and TCP bandwidth estimation is based on when packets start to be dropped when you send too many.

  • therealmarv a day ago

    "immaturity of the implementations" is a funny wording here. QUIC was created because there is absolutely NO WAY that all internet hardware (including all middleware etc) out there will support a new TCP or TLS standard. So QUIC is an elegant solution to get a new transport standard on top of legacy internet hardware (on top of UDP).

    In an ideal World we would create a new TCP and TLS standard and replace and/or update all internet routers and hardware everywhere World Wide so that it is implemented with less CPU utilization ;)

    • api a day ago

      A major mistake in IP’s design was to allow middle boxes. The protocol should have had some kind of minimal header auth feature to intentionally break them. It wouldn’t have to be strong crypto, just enough to make middle boxes impractical.

      It would have forced IPv6 migration immediately (no NAT) and forced endpoints to be secured with local firewalls and better software instead of middle boxes.

      The Internet would be so much simpler, faster, and more capable. Peer to peer would be trivial. Everything would just work. Protocol innovation would be possible.

      Of course tech is full of better roads not taken. We are prisoners of network effects and accidents of history freezing ugly hacks into place.

      • kbolino 21 hours ago

        The only mechanism I can think of that could have been used for that purpose, and was publicly known about (to at least some extent) in the late 1970s, would be RSA. That is strong crypto, or at least we know it is when used properly today, but it's unlikely the authors of IP would have known about it. Even if they did, the logistical challenges of key distribution would have sunk its use, and they would almost certainly have fallen into one of the traps in implementing it that took years to discover, and the key sizes that would have been practical for use ca 1980 would be easy to break by the end of the 1990s.

        Simply put, this isn't a road not taken, it's a road that didn't exist.

      • tsimionescu a day ago

        I completely disagree with this take.

        First of all, NAT is what saved the Internet from being forked. IPv6 transition was a pipe dream at the time it was first proposed, and the vast growth in consumers for ISPs that had just paid for expensive IPv4 boxes would never have resulted in them paying for far more expensive (at the time) IPv6 boxes, it would have resulted in much less growth, or other custom solutions, or even separate IPv4 networks in certain parts of the world. Or, if not, it would have resulted in tunneling all traffic over a protocol more amenable to middle boxes, such as HTTP, which would have been even worse than the NAT happening today.

        Then, even though it was unintentional, NAT and CGNAT are what ended up protecting consumers from IP-level tracking. If we had transitioned from IPv4 directly to IPv6, without the decades of NAT, all tracking technology wouldn't have bothered with cookies and so on, we would have had the trivial IP tracking allowed by the one-IP-per-device vision. And with the entrenched tracking adware industry controlling a big part of the Internet and relying on tracking IPs, the privacy extensions to IPv6 (which, remember, came MUCH later in IPv6's life than the original vision for the transition) would never have happened.

        I won't bother going into the other kinds of important use cases that other middle boxes support, that a hostile IPv4 would have prevented, causing even bigger problems. NAT is actually an excellent example of why IPs design decisions that allow middle boxes are a godsend, not a tragic mistake. Now hopefully we can phase out NAT in the coming years, as it's served its purpose and can honorably retire.

        • api 19 hours ago

          The cost of NAT is much higher than you think. If computers could just trivially connect to each other then software might have evolved collaboration and communication features that rely on direct data sharing. The privacy and autonomy benefits of that are enormous, not to mention the reduced need for giant data centers.

          It’s possible that the cloud would not have been nearly as big as it has been.

          The privacy benefits of NAT are minor to nonexistent. In most of the developed world most land connections get one effectively static V4 IP which is enough for tracking. Most tracking relies primarily on fingerprints, cookies, apps, federated login, embeds, and other methods anyway. IP is secondary, especially with the little spies in our pockets that are most people’s phones.

          • tsimionescu an hour ago

            End to end connectivity without a third party server for discovery is either complicated for the end-user (manually specifying IPs, ports, etc) or it relies on inherently insecure techniques like multicast/broadcast. And once you introduce a third party server that both peers connect to, establishing a connection even through NAT is not that much harder. And yes, NAT does have some costs, but transitioning to IPv6 also does, and I don't think that the Internet justified that cost at the time IPv4 addresses first started running out. NAT's cost is much more diffuse and in the future.

            We'll see if this more direct communication actually happens as IPv6 becomes ubiquitous, but I for one doubt it. Especially since ISPs are not at all friendly to residential customers trying to run servers, often giving out dynamic prefixes or small subnets (/128s even!) even on IPv6. And I think the LTE network is decent evidence in support of my doubts: it was built from the ground up with IPv6-only internally, and there are no stable IP guarantees anywhere.

            As to the privacy benefits, those are real and have made IP tracking almost useless. Your public IP, even in the developed world, very commonly changes daily or weekly. Even worse for trackers, when it does change, it changes to an IP that someone else was using.

      • johncolanduoni a day ago

        Making IPv4 headers resistant to tampering wouldn't have helped with IPv6 rollout, as routers (both customer and ISP) would still need to be updated to be able to understand how to route packets with the new headers.

        • ajb a day ago

          The GP's point is that if middle boxes couldn't rewrite the header, NAT would be impossible. And if NAT were impossible, IPv4 would have died several years ago, because NAT allowed more computers than addresses.

          • tsimionescu a day ago

            Very unlikely. Most likely NAT would have happened at other layers of the stack (HTTP, for example), causing even more problems. Or the growth of the Internet would have stalled dramatically, as ISPs would have either increased prices dramatically to account for investments in new and expensive IPv6 hardware, or simply stopped accepting new subscribers.

            • ajb a day ago

              Your first scenario is plausible; the second I'm not sure about. Due to the growth rate, central routers had a very fast replacement cycle anyway, and edge devices mostly operated at layer 2, so they didn't much care about IP. (Maybe there was some device in the middle that would have had a shorter lifespan?) I worked at a major router semiconductor vendor, and I can tell you that all the products supported IPv6 at the hardware level for many, many years before significant deployment, and did not use it as a price differentiator. (Sure, they were probably buggy for longer than necessary, but that would have been shaken out earlier if the use had come earlier.) So I don't think the cost of routers was the issue.

              The problem with ipv6 in my understanding was that the transitional functions (nat-pt etc) were half baked and a new set had to be developed. It is possible that disruption would have occurred if that had to be done against an earlier address exhaustion date.

      • ocdtrekkie a day ago

        This ignores... a lot of reality. Like the fact that when IP was designed, the idea of every individual network device having to run its own firewall was impractical performance-wise, and decades later... still not really ideal.

        There's definitely some benefits to glean from a zero trust model, but putting a moat around your network still helps a lot and NAT is probably the best accidental security feature to ever exist. Half the cybersecurity problems we have are because the cloud model has normalized routing sensitive behavior out to the open Internet instead of private networks.

        My middleboxes will happily be configured to continue to block any traffic that refuses to obey them. (QUIC and ECH inclusive.)

        • codexon a day ago

          Even now, you can saturate a modern cpu core with only 1 million packets per second.

      • AndyMcConachie a day ago

        A major mistake of the IETF was to not standardize IPv4 NAT. Had it been standardized early on there would be fewer problems with it.

      • bell-cot a day ago

        > It would have forced IPv6 migration immediately (no NAT) and forced endpoints to be secured...

        There's a difference between "better roads not taken", and "taking this road would require that most of our existing cars and roads be replaced, simultaneously".

      • dcow a day ago

        Now that’s a horse of a different color! I’m already pining for this alt reality. Middle-boxes and everyone touching them ruined the internet.

  • paulddraper a day ago

    Those performance results surprised me too.

    His testing has CPU-bound quiche at <200MB/s while nghttp2 was at >900MB/s.

    I wonder if the CPU was throttled.

    Because if the HTTP/3 implementation took 4x the CPU, that could be interesting but not necessarily a big problem if the absolute value was very low to begin with.

lysace 2 days ago

> We find that over fast Internet, the UDP+QUIC+HTTP/3 stack suffers a data rate reduction of up to 45.2% compared to the TCP+TLS+HTTP/2 counterpart.

Haven't read the whole paper yet, but below 600 Mbit/s is implied as being "Slow Internet" in the intro.

  • cj a day ago

    In other words:

    Enable http/3 + quic between client browser <> edge and restrict edge <> origin connections to http/2 or http/1

    Cloudflare (as an example) only supports QUIC between client <> edge and doesn’t support it for connections to origin. Makes sense if the edge <> origin connection is reusable, stable, and “fast”.

    https://developers.cloudflare.com/speed/optimization/protoco...

    • dilyevsky a day ago

      Cloudflare tunnels work over quic so this is not entirely correct

  • Dylan16807 2 days ago

    Just as important is > we identify the root cause to be high receiver-side processing overhead, in particular, excessive data packets and QUIC's user-space ACKs

    It doesn't sound like there's a fundamental issue with the protocol.

  • dathinab a day ago

    They also mainly identified a throughput reduction due to latency issues caused by ineffective/too many syscalls in how browsers implement it.

    But such a latency issue isn't majorly increasing battery usage (compared to a CPU usage issue which would make CPUs boost). Nor is it an issue for server-to-server communication.

    It basically "only" slows down high bandwidth transmissions on end user devices with (for 2024 standards) very high speed connection (if you take effective speeds from device to server, not speeds you where advertised to have bough and at best can get when the server owner has a direct pairing agreement with you network provider and a server in your region.....).

    Doesn't mean the paper is worthless; browsers should improve their implementations, and it highlights that.

    But the title of the paper is basically 100% click bait.

    • ec109685 a day ago

      How is it clickbait? The title implies that QUIC isn't as fast as other protocols over fast internet connections.

      • dathinab a day ago

        Because it's the QUIC _implementations in browsers_ that are not as fast as the browsers' non-QUIC implementations, on connections most people would not just call fast but very fast (in the context of browser usage), while still being definitely 100% fast enough for every browser use case done today (sure, it theoretically might reduce video bitrate, that is, if it isn't already capped to a smaller rate anyway, which AFAIK it basically always is).

        So "Not Quick Enough" is plain out wrong, it is fast enough.

        The definition of "Fast Internet" misleading.

        And even "QUIC" is misleading as it normally refers to the protocol while the benchmarked protocol is HTTP/3 over QUIC and the issue seem to be mainly in the implementations.

  • Aurornis 2 days ago

    Internet access is only going to become faster. Switching to a slower transport just as Gigabit internet is proliferating would be a mistake, obviously.

    • ratorx 2 days ago

      It depends on whether it’s meaningfully slower. QUIC is pretty optimized for standard web traffic, and more specifically for high-latency networks. Most websites also don’t send enough data for throughput to be a significant issue.

      I’m not sure whether it’s possible, but could you theoretically offload large file downloads to HTTP/2 to get the best of both worlds?

      • pocketarc a day ago

        > could you theoretically offload large file downloads to HTTP/2

        Yes, you can! You’d have your websites on servers that support HTTP/3 and your large files on HTTP/2 servers, similar to how people put certain files on CDNs. It might well be a great solution!

      • kijin a day ago

        High-latency networks are going away, too, with Cloudflare eating the web alive and all the other major clouds adding PoPs like crazy.

    • tomxor 2 days ago

      In terms of maximum available throughput it will obviously become greater. What's less clear is if the median and worst throughput available throughout a nation or the world will continue to become substantially greater.

      It's simply not economical enough to lay fibre and put 5G masts everywhere (5G LTE bands cover less area due to being higher frequency, and so are also limited to being deployed in areas with a high enough density to be economically justifiable).

      • nine_k a day ago

        Fiber is the most economical solution, it's compact, cheap, not susceptible to electromagnetic interference from thunderstorms, not interesting for metal thieves, etc.

        Most importantly, it can be heavily over-provisioned for peanuts, so your cable is future-proof, and you will never have to dig the same trenches again.

        Copper only makes sense if you already have it.

        • tomxor a day ago

          Then why isn't it everywhere, it's been practical for over 40 years now.

          • nine_k a day ago

            It is everywhere in new development. I remember Google buying tons of "dark fiber" capacity from telcos like 15 years ago; that fiber was likely laid for future needs 20-25 years ago. New apartment buildings in NYC just get fiber, with everything, including traditional "cable TV" with BNC connectors, powered by it.

            But telcos have colossal copper networks, and they want to milk the last dollars from it before it has to be replaced, with digging and all. Hence price segmenting, with slower "copper" plans and premium "fiber" plans, obviously no matter if the building has fiber already.

            Also, passive fiber interconnects have much higher losses than copper with RJ45s. This means you want to have no more than 2-3 connectors between pieces of active equipment, including from ISP to a building. This requires more careful planning, and this is why wiring past the apartment (or even office floor or a single-family house) level is usually copper Ethernet.

          • BenjiWiebe 7 hours ago

            I think our phone lines (the only buried cable here that can do data) are probably >40 years old. They're still selling DSL over it.

    • jiggawatts 2 days ago

      Here in Australia there’s talk of upgrading the National Broadband Network to 2.5 Gbps to match modern consumer Ethernet and WiFi speeds.

      I grew up with 2400 baud modems as the super fast upgrade, so talk of multiple gigabits for consumers is blowing my mind a bit.

      • Kodiack a day ago

        Meanwhile here in New Zealand we can get 10 Gbps FTTH already.

        Sorry about your NBN!

        • wkat4242 a day ago

          Here in Spain too.

          I don't see a need for it yet though. I'm a really heavy user (IT specialist with more than a hundred devices on my networks) and I really don't need it.

          • jiggawatts a day ago

            These things are nice-to-have until they become sufficiently widespread that typical consumer applications start to require the bandwidth. That comes much later.

            E.g.: 8K 60 fps video streaming benefits from data rates up to about 1 Gbps in a noticeable way, but that's at least a decade away from mainstream availability.

            • notpushkin a day ago

              The other side of this particular coin is, when such bandwidth is widely available, suddenly a lot of apps that have worked just fine are now eating it up. I'm not looking forward to 9 gigabyte Webpack 2036 bundles everywhere :V

              • wkat4242 19 hours ago

                Yeah for me it's mostly ollama models lol. It is nice to see it go fast. But even on my 1gbit it feels fast enough.

            • wkat4242 19 hours ago

              Yeah the problem here is also that I don't have the router setup to actually distribute that kind of bandwidth. 2.5Gbit max..

              And the internal network is 1 Gbit too. So it'll take (and cost) more than just changing my subscription.

              Also my TV is still 1080p lol

      • TechDebtDevin 2 days ago

        Is Australia's ISP infrastructure nationalized?

        • jiggawatts a day ago

          It's a long story featuring nasty partisan politics, corrupt incumbents, Rupert Murdoch, and agile upstarts doing stealth rollouts at the crack of dawn.

          Basically, the old copper lines were replaced by the NBN, which is a government-owned corporation that sells wholesale networking to telcos. Essentially, the government has a monopoly, providing the last-mile fibre links. They use nested VLANs to provide layer-2 access to the consumer telcos.

          Where it got complicated was that the right-wing government was in the pocket of Rupert Murdoch, who threatened them with negative press before an upcoming election. They bent over and grabbed their ankles like the good little Christian school boys they are, and torpedoed the NBN network technology to protect the incumbent Fox cable network. Instead of fibre going to all premises, the NBN ended up with a mix of technologies, most of which don't scale to gigabit. It also took longer and cost more, despite the government responsible saying they were making these cuts to "save taxpayer money".

          Also for political reasons, they were rolling it out starting at the sparse rural areas and leaving the high-density CBD regions till last. This made it look bad, because if they spent $40K digging up the long rural dirt roads to every individual farmhouse, it obviously won't have much of a return on the taxpayer's investment... like it would have if deployed to areas with technology companies and their staff.

          Some existing smaller telcos noticed that there was a loophole in the regulation that allowed them to connect the more lucrative tech-savvy customers to their own private fibre if it's within 2km of an existing line. Companies like TPG had the entire CBD and inner suburban regions of every major city already 100% covered by this radius, so they proceeded to leapfrog the NBN and roll out their own 100 Mbps fibre-to-the-building service half a decade ahead. I saw their unmarked white vans stealthily rolling out extra fibre at like 3am to extend their coverage area before anyone in the government noticed.

          The funny part was that FttB uses VDSL2 boxes in the basement for the last 100m going up to apartments, but you can only have one per building because they use active cross-talk cancellation. So by the time the NBN eventually got around to wiring the CBD regions, they got to the apartments to discover that "oops, too late", private telcos had gotten there first!

          There were lawsuits... which the government lost. After all, they wrote the legislation, they were just mad that they hadn't actually understood it.

          Meanwhile, some other incumbent fibre providers that should have disappeared persisted like a stubborn cockroach infestation. I've just moved to an apartment serviced by OptiComm, which has 1.1 out of 5 stars on Google... which should tell you something. They even have a grey fibre box that looks identical to the NBNCo box except it's labelled LBNCo with the same font so that during a whirlwind apartment inspection you might not notice that you're not going to be on the same high-speed Internet as the rest of the country.

          • dbaggerman a day ago

            To clarify, NBN is a monopoly on the last mile infrastructure which is resold to private ISPs that sell internet services.

            The history there is that Australia used to have a government run monopoly on telephone infrastructure and services (Telecom Australia), which was later privatised (and rebranded to Telstra). The privatisation left Telstra with a monopoly on the infrastructure, but also a requirement that they resell the last mile at a reasonable rate to allow for some competition.

            So Australia already had an existing industry of ISPs that were already buying last mile access from someone else. The NBN was just a continuation of the existing status quo in that regard.

            > They even have a grey fibre box that looks identical to the NBNCo box except it's labelled LBNCo with the same font

            Early in my career I worked for one of those smaller telcos trying to race to get services into buildings before the NBN. I left around the time they were talking about introducing an LBNCo brand (only one of the reasons I left). At the time, they weren't part of Opticomm, but did partner with them in a few locations. If the brand is still around, I guess they must have been acquired at some point.

            • jiggawatts a day ago

              I heard from several sources that what they do is give the apartment builder a paper bag of cash in exchange for the right to use their wires instead of the NBN. Then they gouge the users with higher monthly fees.

              • dbaggerman a day ago

                When I was there NBNCo hadn't really moved into the inner city yet. We did have some kind of financial agreement with the building developer/management to install our VDSL DSLAMs in their comms room. It wouldn't surprise me if those payments got shadier and more aggressive as the NBN coverage increased.

          • TechDebtDevin a day ago

            Thanks for the response! Very interesting. Unfortunately the USA is a tumor on this planet. Born and Raised, this place is fucked and slowly fucking the whole world.

  • nh2 a day ago

    In Switzerland you get 25 Gbit/s for $60/month.

    In 30 years it will be even faster. It would be silly to have to use older protocols to get line speed.

    • 77pt77 a day ago

      Now do the same in Germany...

  • wkat4242 a day ago

    For local purposes that's certainly true. It seems that quic trades a faster connection establishment for lower throughput. I personally prefer tcp anyway.

  • nine_k a day ago

    Gigabit connections are widely available in urban areas. The problem is not theoretical, but definitely is pretty recent / nascent.

    • Dylan16807 a day ago

      A gigabit connection is just one prerequisite. The server also has to be sending very big bursts of foreground/immediate data or you're very unlikely to notice anything.

  • Fire-Dragon-DoL 2 days ago

    That is interesting though. 1gbit is becoming more common

    • schmidtleonard 2 days ago

      It's wild that 1gbit LAN has been "standard" for so long that the internet caught up.

      Meanwhile, low-end computers ship with a dozen 10+Gbit class transceivers on USB, HDMI, Displayport, pretty much any external port except for ethernet, and twice that many on the PCIe backbone. But 10Gbit ethernet is still priced like it's made from unicorn blood.

      • Aurornis a day ago

        > Meanwhile, low-end computers ship with a dozen 10+Gbit class transceivers on USB, HDMI, Displayport, pretty much any external port except for ethernet, and twice that many on the PCIe backbone. But 10Gbit ethernet is still priced like it's made from unicorn blood.

        You really can’t think of any major difference between 10G Ethernet and all of those other standards that might be responsible for the price difference?

        Look at the supported lengths and cables. 10G Ethernet over copper can go an order of magnitude farther over relatively generic cables. Your USB-C or HDMI connections cannot go nearly as far and require significantly more tightly controlled cables and shielding.

        That’s the difference. It’s not easy to accomplish what they did with 10G Ethernet over copper. They used a long list of tricks to squeeze every possible dB of SNR out of those cables. You pay for it with extremely complex transceivers that require significant die area and a laundry list of complex algorithms.

        • schmidtleonard a day ago

          There was a time when FFE, DFE, CTLE, and FEC could reasonably be considered an extremely complex bag of tricks by the standards of the competition. That time passed many years ago. They've been table stakes for a while in every other serial standard. Wifi is beating ethernet at the low end, ffs, and you can't tell me that air is a kinder channel. A low-end PC will ship with a dozen transceivers implementing all of these tricks sitting idle, while it'll be lucky to have a single 2.5Gbe port and you'll have to pay extra for the privilege.

          No matter, eventually USB4NET will work out of the box. The USB-IF is a clown show and they have tripped over their shoelaces every step of the way, but consumer Ethernet hasn't moved in 20 years so this horse race still has a clear favorite, lol.

        • reshlo a day ago

          You explained why 10G Ethernet cables are expensive, but why should it be so expensive to put a 10G-capable port on the computer compared to the other ports?

          • kccqzy a day ago

            Did you completely misunderstand OP? The 10G Ethernet cables are not expensive. In a pinch, even your Cat 5e cable is capable of 10G Ethernet albeit at a shorter distance than Cat 6 cable. Even then, it can be at least a dozen times longer than a similar USB or HDMI or DisplayPort cable.

            • reshlo a day ago

              I did misunderstand it. Looking at it again now, they spent the entire post talking about how difficult the cables are, except for the very last sentence, where they mention die area once, and it's still not clear that they're talking about die area for something inside the computer rather than a chip that goes in the cable.

              > Look at the supported lengths and cables. … relatively generic cables. Your USB-C or HDMI connections cannot go nearly as far and require significantly more tightly controlled cables and shielding. … They used a long list of tricks to squeeze every possible dB of SNR out of those cables.

              • chgs a day ago

                Their point was that those systems (HDMI, bits of USB-C, etc.) put the complexity in very expensive, very short cables.

                Meanwhile a 10G port on my home router will run over copper for far longer. Not that I'm a fan, given the power use; fibre is much easier to deal with and will run for miles.

      • jsheard 2 days ago

        Those very fast consumer interconnects are distinguished from ethernet by very limited cable lengths though, none of them are going to push 10gbps over tens of meters nevermind a hundred. DisplayPort is up to 80gbps now but in that mode it can barely even cross 1.5m of heavily shielded copper before the signal dies.

        In a perfect world we would start using fiber in consumer products that need to move that much bandwidth, but I think the standards bodies don't trust consumers with bend radiuses and dust management so instead we keep inventing new ways to torture copper wires.

        • crote a day ago

          > In a perfect world we would start using fiber in consumer products that need to move that much bandwidth

          We are already doing this. USB-C is explicitly designed to allow for cables with active electronics, including conversion to & from fiber. You could just buy an optical USB-C cable off Amazon, if you wanted to.

          • Dylan16807 a day ago

            When you make the cable do the conversion, you go from two expensive transceivers to six expensive transceivers. And if the cable breaks you need to throw out four of them. It's a poor replacement for direct fiber use.

        • schmidtleonard a day ago

          Sure you need fiber for long runs at ultra bandwidth, but short runs are common and fiber is not a good reason for DAC to be expensive. Not within an order of magnitude of where it is.

          • Dylan16807 a day ago

            These days, passive cables that support ultra bandwidth are down to like .5 meters.

            For anything that wants 10Gbps lanes or less, copper is fine.

            For ultra bandwidth, going fiber-only is a tempting idea.

      • michaelt 2 days ago

        Agree that a widespread faster ethernet is long overdue.

        But bear in mind, standards like USB4 only support very short cables. It's impressive that USB4 can offer 40 Gbps - but it can only do so on 1m cables. On the other hand, 10 gigabit ethernet claims to go 100m on CAT6A.

        • crote a day ago

          USB4 does support longer distances, but those cables need active electronics to guarantee signal integrity. That's how you end up with Apple's $160 3-meter cable.

          • chgs a day ago

            A 3m 100g dac is 1/3 the price

      • nijave 2 days ago

        2.5Gbps is becoming pretty common and fairly affordable, though

        My understanding is that right around 10Gbps you start to hit limitations with the shielding/type of cable and the power needed to transmit over Ethernet.

        When I was looking to upgrade at home, I had to get expensive PoE+ injectors and splitters to power the switch in the closet (where there's no outlet) and 10Gbps SFP+ transceivers are like $10 for fiber or $40 for Ethernet. The Ethernet transceivers hit like 40-50C

        • crote a day ago

          The main issue is switches, really. 5Gbps USB NICs are available for $30 on Amazon, or $20 on AliExpress. 10Gbps NICs are $60, so not exactly crazy expensive either.

          But switches haven't really kept up. A simple unmanaged 5-port or 8-port 2.5GigE switch isn't too bad, but anything beyond that gets tricky. 5GigE switches don't seem to exist, and you're already paying $500 for a budget-brand 10GigE switch with basic VLAN support. You want PoE? Forget it.

          The irony is that at 10Gbps fiber suddenly becomes quite attractive. A brand-new SFP+ NIC can be found for $30, with DACs only $5 (per side) and transceivers $30 or so. You can get an actually-decent switch from Mikrotik for less than $300.

          Heck, you can even get brand-new dualport SFP28 NICs for $100, or as little as $25 on Ebay! Switch-wise you can get 16 ports of 25Gbps out of a $800 Mikrotik switch: not exactly cheap, but definitely within range for a very enthusiastic homelabber.

          The only issue is that wiring your home for fiber is stupidly expensive, and you can't exactly use it to power access points either.

          • maccard a day ago

            > The only issue is that wiring your home for fiber is stupidly expensive

            What do you mean by that? My home isn't wired for ethernet. I can buy 30m of CAT6 cable for £7, or 30m of fibre for £17. For home use, that's a decent amount of cable, and even spending £100 on cabling will likely run cables to even the biggest of houses.

            • hakfoo a day ago

              Isn't the expensive part more the assembly aspect? For Cat 6 the plugs and keystone jacks add up to a few dollars per port, and the crimper is like $20. I understand building your own fibre cables -- if you want to thread them through walls without the connectors pre-attached, for example -- involves more sophisticated glass-fusion tools that are fairly expensive.

              A rental service might help there, or a call-in service-- the 6 hours of drilling holes and pulling fibre can be done by yourself, and once it's all cut to rough length, bring out a guy who can fuse on 10 plugs in an hour for $150.

              • maccard a day ago

                Thanks - I genuinely didn't know. I assumed that you could "just" crimp it like CAT6, but a quick google leads me to spending quite a few hundred pounds on something like this[0].

                That said;

                > A rental service might help there, or a call-in service-- the 6 hours of drilling holes and pulling fibre can be done by yourself, and once it's all cut to rough length, bring out a guy who can fuse on 10 plugs in an hour for $150.

                If you were paying someone to do it (rather than DIY) I'd wager the cost would be similar, as you're paying them for 6 hours of labour either way.

                [0] https://www.cablemonkey.co.uk/fibre-optic-tool-kits-accessor...

              • Dylan16807 a day ago

                If you particularly want to use a raw spool, then yes that's an annoying cost. If you buy premade cables for an extra $5 each then it's fine.

                • hakfoo a day ago

                  A practical drawback to premade cables is the need for a larger hole to accommodate the pre-attached connector. There's also a larger gap that needs to be plugged around the cable to prevent leaks into the wall.

                  My ordinary home-centre electric drill and an affordable ~7mm masonry bit lets me drill a hole in stucco large enough to accept bare cables with a very narrow gap to worry about.

                • inferiorhuman a day ago

                  Where are you finding them for that cheap? OP is talking about 20GBP for a run of fiber. If I look at, for instance, Ubiquiti, their direct attach cables start at $13 for 0.5 meter cables.

                  • Dylan16807 a day ago

                    I was looking at patch cables. Ubiquiti's start at $4.80

              • chgs a day ago

                My single-mode pass-through keystones were about the same price as Cat 5 ones, and pre-made cables were no harder to run than unterminated Cat 5.

        • akira2501 a day ago

          Ironically, 2.5 Gbps is created by taking a 10GBASE-T module and effectively underclocking it. I wonder if "automatic speed selection" is around the corner, with modules that automatically connect at anywhere from 100Mbps to 10Gbps based on available cable quality.

          • cyberax a day ago

            My 10G modules automatically drop down to 2.5G or 1G if the cable is not good enough. There's also 5G, but I have never seen it work better than 2.5G.

            • akira2501 a day ago

              Oh man. I've been off the IT floor for too long. Time to change my rhetoric, y'all have been around the corner for a while.

              Aging has its upsides and downsides I guess.

            • chgs a day ago

              I don't think my 10G coppers will drop to 10M. 100M sure, but 10M rings a bell.

        • Dylan16807 a day ago

          > My understanding is right around 10Gbps you start to hit limitations with the shielding/type of cable and power needed to transmit/send over Ethernet.

          If you decide you only need 50 meters, that reduces both power and cable requirements by a lot. Did we decide to ignore the easy solution in favor of stagnation?

      • Fire-Dragon-DoL a day ago

        It passed it! Here there are offers up to 3gbit residential (Vancouver). I had 1.5gbit for a while. Downgraded to 1gbit because, while I love fast internet, right now nobody in the home uses it enough to affect 1gbit speed.

      • Dalewyn a day ago

        There is an argument to be made that gigabit ethernet is "good enough" for Joe Average.

        Gigabit ethernet works out to roughly 100MB/s of transfer speed over copper wire, or ~30MB/s over wireless, accounting for overhead and degradation. That is more than fast enough for most people.
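
        As a rough sanity check on that figure (a sketch; the ~6% framing overhead is an assumed ballpark, not a measured number):

          # Back-of-the-envelope conversion from link rate to usable transfer speed.
          link_rate_bits = 1_000_000_000      # gigabit ethernet, bits per second
          framing_overhead = 0.06             # assumed Ethernet/IP/TCP overhead
          usable_mb_per_s = link_rate_bits / 8 * (1 - framing_overhead) / 1e6
          print(f"~{usable_mb_per_s:.0f} MB/s")  # ~118 MB/s in theory, ~100 MB/s in practice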

        10gbit is seemingly made from unicorn blood and 2.5gbit is seeing limited adoption because there simply isn't demand for them outside of enterprise who have lots of unicorn blood in their banks.

  • paulddraper a day ago

    > below 600 Mbit/s is implied as being "Slow Internet" in the intro

    Or rather, not "Fast Internet"

Tempest1981 2 days ago

From September:

QUIC is not quick enough over fast internet (acm.org)

https://news.ycombinator.com/item?id=41484991 (327 comments)

  • lysace 2 days ago

    My personal takeaway from that: Perhaps we shouldn't let Google design and more or less unilaterally dictate and enforce internet protocol usage via Chromium.

    Brave/Vivaldi/Opera/etc: You should make a conscious choice.

    • ratorx a day ago

      Having read through that thread, most of the (top) comments are somewhat related to the lackluster performance of the UDP/QUIC stack and thoughts on how meaningful the speeds in the test are. There is a single comment suggesting HTTP/2 was rushed (because server push was later deprecated).

      QUIC is also acknowledged as being quite different from the Google version, and incorporating input from many different people.

      Could you expand more on why this seems like evidence of Google unilaterally dictating bad standards? None of the changes in protocol seem objectively wrong (except possibly Server Push).

      Disclaimer: Work at Google on networking, but unrelated to QUIC and other protocol level stuff.

      • lysace a day ago

        > Could you expand more on why this seems like evidence that Google unilaterally dictating bad standards?

        I guess I'm just generally disgusted by the way Google is poisoning the web in the worst way possible: by pushing ever more complex standards. Imagine the complexity of the web stack in 2050 if we continue to let Google run things. It's Microsoft's old embrace-extend-and-extinguish scheme taken to the next level.

        In short: it's not you, it's your manager's manager's manager's manager's strategy that is messed up.

        • ratorx a day ago

          This is making a pretty big assumption that the web is perfectly fine the way it is and never needs to change.

          In reality, there are perfectly valid reasons that motivate QUIC and HTTP/2 and I don’t think there is a reasonable argument that they are objectively bad. Now, for your personal use case, it might not be worth it, but that’s a different argument. The standards are built for the majority.

          All systems have tradeoffs. Increased complexity is undesirable, but whether it is bad or not depends on the benefits. Just making a blanket statement that increasing complexity is bad, and that the runaway effects of it in 2050 would be worse, does not seem particularly useful.

          • lysace a day ago

            Nothing is perfect. But gigantic big-bang changes (like from HTTP 1.1 to 2.0), enforced by a browser monoculture and a dominant company with several thousand individually well-meaning Chromium software engineers like yourself - yeah, pretty sure that's bad.

            • jsnell a day ago

              Except that HTTP/1.1 to HTTP/2 was not a big bang change on the ecosystem level. No server or browser was forced to implement HTTP/2 to remain interoperable[0]. I bet you can't point to any of this "enforcement" you claim happened. If other browsers implemented HTTP/2, it was because they thought that the benefits of H2 outweighed any downsides.

              [0] There are non-browser protocols that are based on H2 only, but since your complaint was explicitly about browsers, I know that's not what you had in mind.

              • lysace a day ago

                You are missing the entire point: Complexity.

                It's not your fault, in case you were working on this. It was likely the result of a strategy decision made at Google/Alphabet exec level.

                Several thousand very competent C++ software engineers don't come cheap.

                • jsnell a day ago

                  I mean, the reason I was discussing those specific aspects is that you're the one who brought them up. You made the claim about how HTTP/2 was a "big bang" change. You're the one who made the claim that HTTP/2 was enforced on the ecosystem by Google.

                  And it seems that you can't support either of those claims in any way. In fact, you're just pretending that you never made those comments at all, and have once again pivoted to a new grievance.

                  But the new grievance is equally nonsensical. HTTP/2 is not particularly complex, and nobody on either the server or browser side was forced to implement it. Only those who thought the minimal complexity was worth it needed to do it. Everyone else remained fully interoperable.

                  I'm not entirely sure where you're coming from here, to be honest. Like, is your belief that there are no possible tradeoffs here? Nothing can ever justify even such minor amounts of complexity, no matter how large the benefits are? Or do you accept that there are tradeoffs, and "just" disagree with every developer who made a different call on this when choosing whether to support HTTP/2 in their (non-Google) browser or server?

                  • lysace a day ago

                    Edit: this whole comment is incorrect. I was really thinking about HTTP 3.0, not 2.0.

                    HTTP/2 is not "particularly complex?" Come on! Do remember where we started.

                    > I'm not entirely sure where you're coming from here, to be honest. Like, is your belief that there are no possible tradeoffs here? Nothing can ever justify even such minor amounts of complexity, no matter how large the benefits are? Or do you accept that there are tradeoffs, and are "just" disagree with every developer who made a different call on this when choosing whether to support HTTP/2 in their (non-Google) browser or server?

                    "Such minor amounts of complexity". Ahem.

                    I believe there are tradeoffs. I don't believe that HTTP/2 met that tradeoff between complexity vs benefit. I do believe it benefitted Google.

                    • jsnell a day ago

                      "We" started from you making outlandish claims about HTTP/2 and immediately pivoting to a new complaint when rebutted rather than admit you were wrong.

                      Yes, HTTP/2 is not really complex as far as these things go. You just keep making that assertion as if it were self-evident, but it isn't. Like, can you maybe just name the parts you think are unnecessarily complex? And then we can discuss just how complex they really are, and what the benefits are.

                      (Like, sure, having header compression is more complicated than not having it. But it's also an amazingly beneficial tradeoff, so it can't be what you had in mind.)

                      > I believe there are tradeoffs. I don't believe that HTTP/2 met that tradeoff between complexity vs benefit.

                      So why did Firefox implement it? Safari? Basically all the production-level web servers? Google didn't force them to do it. The developers of all of that software had agency, evaluated the tradeoffs, and decided it was worth implementing. What makes you a better judge of the tradeoffs than all of these non-Google entities?

                      • lysace a day ago

                        Yeah, sorry, I mixed up 2.0 (the one that still uses TCP) with 3.0. Sorry for wasting your time.

        • bawolff a day ago

          > It's Microsoft's old embrace-extend-and-extinguish scheme taken to the next level.

          It literally is not.

          • lysace a day ago

            Because?

            Edit: I'm not the first person to make this comparison. Witness the Chrome section in this article:

            https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...

            • bawolff a day ago

              While it may be possible to make the comparison for other things Google does (they have done a lot of things), it makes no sense for QUIC/HTTP3.

              What are they extending in this analogy? HTTP3 is not an extension of HTTP. What are they extinguishing? There is no plan to get rid of HTTP1/2, since you still need it in lots of networks that don't allow UDP.

              Additionally, it's an open standard, with an RFC, and multiple competing implementations (including Firefox, and I believe experimental support in Safari). The entire point of embrace, extend, extinguish is that the extension is not well specified, making it difficult for competitors to implement. That is simply not what is happening here.

              • lysace a day ago

                What I meant with Microsoft's Embrace, extend, and extinguish (EEE) scheme taken to the next level is what Google has done to the web via Chromium:

                They have several thousand C++ browser engineers (and as many web standards people as they could get their hands on, early on). Combined with a dominant browser market share, this has let them dominate browser standards, and even internet protocols. They have abused this dominant position to eliminate all competitors except Apple and (so far) Mozilla. It's quite clever.

                • bawolff a day ago

                  > They have abused this dominant position to eliminate all competitors except Apple and (so far) Mozilla.

                  But that's like all of them. Except Edge, but that was mostly dead before Chrome came on the scene.

                  It seems like you are using embrace, extend, extinguish to just mean "be successful", but that's not what the term means. Being a market leader is not the same thing as embrace, extend, extinguish. Neither is putting competition out of business.

                • Dylan16807 a day ago

                  > What I meant with Microsoft's Embrace, extend, and extinguish (EEE) scheme taken to the next level is what Google has done to the web via Chromium

                  I think this argument is reasonable, but QUIC isn't part of the problem.

                • jauntywundrkind a day ago

                  Microsoft just did shit, whatever they wanted. Google has worked with all the w3c committees and other browsers with tireless commitment to participation, with endless review.

                  It's such a tired sad trope of people disaffected with the web because they can't implement it by themselves easily. I'm so exhausted by this anti-progress terrorism; the world's shared hypermedia should be rich and capable.

                  We also see lots of strong progress these days from newcomers like Ladybird, and Servo seems to be gearing up to be more browser-like.

                  • lysace a day ago

                    Yes, Google found the loophole: brute-force standards complexity by hiring thousands of very competent engineers eager to leave their mark on the web and eager to get promoted. The only thing they needed was lots of money, and they had just that.

                    I think my message here is only hard to understand if your salary (or personal worth etc) depends on not understanding it. It's really not that complex.

                    • bawolff a day ago

                      > I think my message here is only hard to understand if your salary (or personal worth etc) depends on not understanding it. It's really not that complex.

                      Just because someone disagrees with you, doesn't mean they don't understand you.

                      However, if you think Google is making standards unnecessarily complex, you should read some of the standards from the 2000s (e.g. SAML).

            • ratorx a day ago

              Contributing to an open standard seems to be the opposite of the classic example.

              Assume that change X for the web is positive overall. Currently Google’s strategy is to implement in Chrome and collect data on usefulness, then propose a standard and have other people contribute to it.

              That approach seems pretty optimal. How else would you do it?

              • lysace a day ago

                [flagged]

                • ratorx a day ago

                  How does this have any relevance to my comment?

                  • lysace a day ago

                    How does your comment have any relevance to what we are discussing throughout this thread?

        • yunohn a day ago

          This is one of those HN buzzword medley comments that has only rant, no substance.

          - MS embrace extend extinguish

          - Google is making the world complex

          - Nth level manager is messed up

          None of the above was connected to deliver a clear point, just thrust into the comment to sound profound.

    • GuB-42 a day ago

      Maybe, but QUIC is not bad as a protocol. The problem here is that OSes are not as well optimized for QUIC as they are for TCP. Just give it time, the paper even has suggestions.

      QUIC has some debatable properties, like mandatory encryption, or running over UDP instead of being a protocol directly on top of IP like TCP, but there are good reasons for these choices, related to ossification.

      Yes, Google pushed for it, but I think it deserves its approval as a standard. It is not perfect but it is practical, they don't want another IPv6 situation.

    • vlovich123 2 days ago

      So because the Linux kernel isn't as optimized for QUIC as it has been for TCP, we shouldn't design new protocols? Or it should be restricted to academics that had tried and failed for decades and would have had all the same problems even if they succeeded? And all of this only really matters in a data center environment, not the general internet QUIC was designed for?

      This is an interesting hot take.

      • lysace 2 days ago

        I'm struggling to see how my comment could be parsed the way you seem to have parsed it. In what way did or would my comment restrict your ability to design new protocols? Please explain.

        • vlovich123 a day ago

          Because you imply in that comment that it should be someone other than Google developing new protocols, while in another you say that the protocols are already too complex, implying stasis is the preferred state.

          You’re also factually incorrect in a number of ways such as claiming that HTTP/2 was a Google project (it’s not and some of the poorly thought out ideas like push didn’t come from Google).

          The fact of the matter is that other attempts at "next gen" protocols had taken place. Google's is the only one that won out. Part of it is because they were one of the few properties that controlled enough web traffic to try something. Another is that they explicitly learned from the mistakes the academics had been making and took market effects into account (i.e. not requiring software updates of middleware boxes). I'd say all things considered Internet connectivity is better off because QUIC got standardized. Papers like this simply point to current inefficiencies of today's implementations - those can be fixed. These aren't intractable design flaws of the protocol itself.

          But you seem to really hate Google as a starting point so that seems to color your opinion of anything they produce rather than engaging with the technical material in good faith.

          • lysace a day ago

            I don't hate Google. I admire it for what it is: an extremely efficient and inherently scalable corporate structure designed to exploit the Internet and the web in the most brutal and profitable way imaginable.

            It's just that their interests in certain aspects don't align with ours.

  • chgs a day ago

    QUIC is all about an advertising company guaranteeing delivery of adverts to the consumer.

    As long as the adverts arrive quickly the rest is immaterial.

kachapopopow 2 days ago

This sounds really really wrong. I've achieved 900mbps speeds on quic+http3 and just quic... Seems like a bad TLS implementation? Early implementation that's not efficient? The CPU usage seemed pretty avg at around 5% on gen 2 epyc cores.

  • kachapopopow a day ago

    This is actually very well known: the current QUIC implementations in browsers are *not stable* and are built on either rustls or in some other similarly hacky way.

    • vasilvv a day ago

      I'm not sure where rustls comes from -- Chrome uses BoringSSL, and last time I checked, Mozilla implementation used NSS.

    • AlienRobot a day ago

      Why am I beta testing unstable software?

      • stouset 16 hours ago

        You’re the one choosing to use it.

        • AlienRobot 15 hours ago

          Okay, which browser doesn't come with it enabled by default? Chrome, Vivaldi, and Firefox do. Am I supposed to use Edge?

      • FridgeSeal a day ago

        Because Google puts whatever they want in their browser for you to beta test and you’ll be pleased about it, peasant /s.

spott 2 days ago

Here “fast internet” is 500Mbps, and the reason is that QUIC seems to be CPU-bound above that.

I didn't look closely enough at their test system to tell whether this is a problem only on basic consumer systems or still a problem for high-performance desktops.

exabrial 2 days ago

I wish QUIC had a non-TLS mode... if I'm developing locally I really just want to see what's going over the wire sometimes, and this adds a lot of unneeded friction.

  • guidedlight 2 days ago

    QUIC reuses parts of the TLS specification (e.g. handshake, transport state, etc).

    So it can’t function without it.

  • krater23 2 days ago

    You can add the private key of your server to Wireshark and it will automatically decrypt the packets.

    • jborean93 a day ago

      This only works for RSA key exchange and, I believe, ciphers that do not provide forward secrecy. QUIC uses TLS 1.3, and all the cipher suites in that protocol provide forward secrecy, so they cannot be decrypted in this way. You'll have to use a tool that provides the TLS session secrets through the SSLKEYLOGFILE format.
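
      For example, a client you control can write that key log itself (a minimal sketch in Python; the file path is arbitrary, and browsers will do the same thing if launched with the SSLKEYLOGFILE environment variable set):

        # Python 3.8+: dump TLS session secrets in SSLKEYLOGFILE format so that
        # Wireshark (TLS protocol preferences -> "(Pre)-Master-Secret log filename")
        # can decrypt the capture, including forward-secret TLS 1.3 sessions.
        import ssl, urllib.request

        ctx = ssl.create_default_context()
        ctx.keylog_filename = "/tmp/tls-keys.log"   # arbitrary path for this sketch

        opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))
        opener.open("https://example.com").read()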

      • giuscri 17 hours ago

        Like which one?

lbriner 20 hours ago

Funny though, we all implicitly buy into "QUIC is the new http/2" or whatever because fast = good without really understanding the details.

It's like buying the new 5G cell phone because it is X times faster than 4G even though 1) My 4G phone never actually ran at the full 4G speed and 2) The problem with any connection is almost never due to the line speed of my internet connection but a misbehaving DNS server/target website/connection Mux at my broadband provider. "But it's 5G"

Same thing cracks me up when people advertise "fibre broadband" for internet by showing people watching the TV like the wind is blowing in their hair, because that's how it works (not!). I used to stream on my 8Mb connection so 300Mb might be good for some things but I doubt I would notice much difference.

p1necone 2 days ago

I thought QUIC was optimized for latency - loading lots of little things at once on webpages, and video games (which send lots of tiny little packets - low overall throughput but highly latency sensitive), and such. I'm not surprised that it falls short when overall throughput is the only thing being measured.

I wonder if this can be optimized at the protocol level by detecting usage patterns that look like large file transfers or very high bandwidth video streaming and swapping over to something less cpu intensive.

Or is this just a case of less hardware/OS level optimization of QUIC vs TCP because it's new?

  • zamalek a day ago

    It seems that syscalls might be the culprit (ACKs happen entirely inside the kernel for TCP, whereas anything over UDP has to ACK from userspace). I wonder if eBPF could be extended for protocol development.

ec109685 a day ago

Meanwhile fast.com (and presumably the Netflix CDN) is using HTTP/1.1 still.

  • dan-robertson a day ago

    Why do you need multiplexing when you are only downloading one (video) stream? Are there any features of http/2 that would benefit the Netflix use case?

    • jeltz a day ago

      QUIC handles packet loss better. But I do not think there is any benefit from HTTP2.

      • dan-robertson a day ago

        Yeah I was thinking the same thing – in some video contexts with some video codecs you may care more about latency and may be able to get a video codec that can cope with packet loss instead of requiring retransmission – except it seemed it wouldn’t apply too much to Netflix where the latency requirement is lower and so retransmission ought to be fine.

        Maybe one advantage of HTTP/3 would be handling ip changes but I’m not sure this matters much because you can already resume downloads fine in HTTP/1.1 if the server supports range requests (which it very likely does for video)

jpambrun 21 hours ago

This paper seems to be neglecting the effect of latency and packet loss. From my understanding, the biggest issue with TCP is the window sizing that gets cut every time a packet gets lost or arrives out of order, thus killing throughput. Latency makes that more likely to happen and makes the effect last longer.

This paper needs multiple latency simulations, some packet loss and latency jitter to have any value.

  • dgacmu 21 hours ago

    This is a bit of a misunderstanding. A single out-of-order packet will not cause a reduction; TCP uses three duplicate ACKs as a loss signal. So the packet must have been reordered to arrive after 3 later packets.

    Latency does not increase the chances of out of order packet arrival. Out of order packet arrival is usually caused by multipath or the equivalent inside a router if packets are handled by different stream processors (or the equivalent). Most routers and networks are designed to keep packets within a flow together to avoid exactly this problem.

    However, it is fair to say that traversing more links and routers probably increases the chance of out of order packet delivery, so there's a correlation in some way with latency, but it's not really about the latency itself - you can get the same thing in a data center network.
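
    For anyone unfamiliar with the mechanism, a minimal sketch of the "three duplicate ACKs" rule (illustrative only, with made-up ACK numbers; real TCP stacks do considerably more):

      # Sketch of the fast-retransmit heuristic: three duplicate ACKs for the
      # same sequence number are treated as a loss signal, while one or two
      # duplicates (e.g. from mild reordering) are ignored.
      def duplicate_ack_counter():
          last_ack, dup_count = None, 0

          def on_ack(ack_no):
              nonlocal last_ack, dup_count
              if ack_no == last_ack:
                  dup_count += 1
                  if dup_count == 3:
                      return "fast-retransmit"   # loss assumed, cwnd reduced
              else:
                  last_ack, dup_count = ack_no, 0
              return "ok"

          return on_ack

      on_ack = duplicate_ack_counter()
      # A packet arriving one position late only produces one duplicate ACK:
      print([on_ack(a) for a in (1000, 1000, 2000)])
      # A lost packet makes every later arrival repeat the same ACK:
      print([on_ack(a) for a in (3000, 3000, 3000, 3000)])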

Thaxll a day ago

QUIC is pretty much what serious online games have been doing for the last 20 years.

andsoitis a day ago

Designing for resource-constrained systems typically comes with making tradeoffs.

Once the resource constraint is eliminated, you're no longer getting the benefit of that tradeoff but are still paying the costs.

skybrian 2 days ago

Looking at Figure 5, Chrome tops out at ~500 Mbps due to CPU usage. I don't think many people care about these speeds? Perhaps not using all available bandwidth for a few speedy clients is an okay compromise for most websites? This inadvertent throttling might improve others' experiences.

But then again, being CPU-throttled isn't great for battery life, so perhaps there's a better way.

  • jeroenhd a day ago

    These caps are a massive pain when downloading large games or OS upgrades for me as the end user. 500mbps is still fast but for a new protocol looking to replace older protocols, it's a big downside.

    I don't really benefit much from http/3 or QUIC (I don't live in a remote area or host a cloud server) so I've already considered disabling either. A bandwidth cap this low makes a bigger impact than the tiny latency improvements.

10000truths 2 days ago

TL;DR: Nothing that's inherent to QUIC itself, it's just that current QUIC implementations are CPU-bound because hardware GRO support has not yet matured in commodity NICs.

But throughput was never the compelling aspect of QUIC in the first place. It was always the reduced latency. A 1-RTT handshake including key/cert exchange is nothing to scoff at, and the 2-RTT request/response cycle that HTTP/3-over-QUIC offers means that I can load a blog page from a rinky-dink server on the other side of the world in < 500 ms. Look ma, no CDN!
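
To put rough numbers on that (a back-of-the-envelope sketch; the 120 ms RTT is an assumed illustrative value, not a figure from the article):

  # Time to first response byte, ignoring server think time and DNS.
  rtt = 0.120  # seconds, assumed round trip to "the other side of the world"

  tcp_tls13 = 3 * rtt  # TCP handshake (1) + TLS 1.3 handshake (1) + request/response (1)
  quic_1rtt = 2 * rtt  # QUIC handshake incl. TLS (1) + request/response (1)
  quic_0rtt = 1 * rtt  # resumed connection sending 0-RTT data in the first flight

  print(f"TCP+TLS1.3: {tcp_tls13*1000:.0f} ms, "
        f"QUIC 1-RTT: {quic_1rtt*1000:.0f} ms, "
        f"QUIC 0-RTT: {quic_0rtt*1000:.0f} ms")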

  • o11c a day ago

    There's also the fact that TCP has an unfixable security flaw - any random middlebox can inject data (without needing to block packets) and break the connection. TLS can only add Confidentiality and Integrity; it can do nothing about the missing Availability.

    • ChocolateGod a day ago

      > There's also the fact that TCP has an unfixable security flaw - any random middleware can inject data (without needing to block packets) and break the connection

      I am unsure how this is a security flaw of TCP? Any middleman could block UDP packets too and get the same effect, or modify UDP packets in an attempt to cause the receiving application to crash.

      • o11c a day ago

        In order to attack UDP, you have to block all routes through which traffic might flow. This is hard; remember, the internet tries to be resilient.

        In order to attack TCP, all you have to do is spy on a single packet (very easy) to learn the sequence number, then you can inject a wrench into the cogs and the endpoints will reject all legitimate traffic from each other.
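
        Roughly why one observed packet is enough (a simplified sketch of the receiver-side acceptance check; real stacks follow RFC 5961 and are somewhat stricter):

          # A classic TCP receiver accepts a segment (including a forged RST) if its
          # sequence number lands inside the current receive window, so an attacker
          # who has seen one in-flight sequence number can aim into that window.
          def in_window(seq: int, rcv_nxt: int, rcv_wnd: int) -> bool:
              return (seq - rcv_nxt) % 2**32 < rcv_wnd   # modulo 2**32 wraparound

          rcv_nxt, rcv_wnd = 1_000_000, 65_535
          print(in_window(1_000_100, rcv_nxt, rcv_wnd))    # True: lands in-window
          print(in_window(500_000_000, rcv_nxt, rcv_wnd))  # False: blind guessing is much harder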

        • jeroenhd a day ago

          That's only true if you use the kernel TCP stack. You can replicate the slow QUIC stack and do everything in user mode to get control back over what packets you accept (i.e. reject any that don't fit your TLS stream).

    • suprjami a day ago

      What does that have to do with anything here? This post is about QUIC performance, not TCP packet injection.

      • o11c a day ago

        "Accept worse performance in order to fix security problems" is a standard tradeoff.

        • suprjami a day ago

          QUIC was invented to provide better performance for multiplexed HTTP/3 streams and the bufferbloat people love that it avoids middlebox protocol interference.

          QUIC has never been about "worse performance" to avoid TCP packet injection.

          Anybody who cares about TCP packet injection is using crypto (IPSec/Wireguard). If performant crypto is needed there are appliances which do it at wirespeed.

jvanderbot 2 days ago

Well, latency/bandwidth tradeoffs make sense. After bufferbloat mitigations my throughput halved on my router. But for gaming while everyone else is streaming, it makes sense to settle for half a gigabit.

kibwen a day ago

How does it compare to HTTP/1 on similar benchmarks?

AlienRobot a day ago

Anecdote: I was having trouble accessing wordpress.org. When I started using Wordpress, I could access the documentation just fine, but then suddenly I couldn't access the website anymore. I dual boot Linux, so it wasn't Windows' fault. I could ping them just fine. I tried three different browsers with the same issue. It's just that when I accessed the website, it would get stuck and not load at all, and sometimes pages would just stop loading mid-way.

Today I found the solution. Disable "Experimental QUIC Protocol" in Chrome settings.

This makes me kind of worried because I've had issues accessing wordpress.org for months. There was no indication that this was caused by QUIC. I only managed to realize it because there was a QUIC-related error in devtools that appeared only sometimes.

I wonder what other websites are rendered inaccessible by this protocol and users have no idea what is causing it.

superkuh 2 days ago

Since QUIC was designed for Fast Internet as used by megacorporations like Google and Microsoft, how it performs at these scales does matter, even if it doesn't on an individual person's end.

Without its designed-for use case, all it does is slightly help mobile platforms that don't want to hold open a TCP connection (for energy use reasons) and bring in fragile "CA TLS"-only operation in an environment where cert lifetimes are trending down to single months (Apple's latest proposal, etc.).

  • dathinab 2 days ago

    Not really, it's (mainly) designed by companies like Google to connect to all of their end users.

    An internet connection becoming so low-latency that receiver-side processing becomes the dominant cost is, in practice, not that relevant. Sure, theoretically you can hit it with e.g. 5G, but in practice even with 5G many real-world situations won't. Most importantly, such a slowdown isn't necessarily bad for Google and co., as it only adds a limited amount of strain on their services, infrastructure, and the internet, and things remain fast enough that most users won't care for most Google and co. use cases.

    Similarly, being slow due to receiver delays isn't necessarily bad enough to cause user-noticeable battery issues. One of the main reasons seems to be the many user<->kernel boundary crossings, which are slow due to cache misses/evictions etc., but which also don't boost your CPU clock (which is one of the main ways to drain your battery, besides the screen).

    Also, as the article mentions, the main issue is suboptimal network stack usage in browsers (including Chrome), not necessarily a fundamental issue in the protocol. Which brings us to inter-service communication at Google and co., which doesn't use any of the tested network stacks but very highly optimized ones instead. I mean, it really would be surprising if such network stacks were slow, as there was exhaustive performance testing during the design of QUIC.

austin-cheney 2 days ago

EDITED.

I prefer WebSockets over anything analogous to HTTP.

Comment edited because I mentioned performance conditions. Software developers tend to make unfounded assumptions/rebuttals about performance conditions they have not tested.

  • sleepydog a day ago

    QUIC is a reliable transport. It's not "fire and forget"; there is a mechanism for recovering lost messages that is similar to, but slightly superior to, TCP's. QUIC has the significant advantage of 0- and 1-RTT connection establishment, which can hide latency better than TCP's 3-way handshake.

    Current implementations have some disadvantages compared to TCP, but they are not inherent to the protocol; they just highlight the decades of work done to make TCP scale with network hardware.

    Your points seem better directed at HTTP/3 than QUIC.

  • akira2501 2 days ago

    I'd use them more, but WebSockets are just unfortunately a little too hard to implement efficiently in a serverless environment, I wish there was a protocol that spoke to that environment's tradeoffs more effectively.

    The current crop aside from WebSockets all seem to be born from taking a butcher knife to HTTP and hacking out everything that gets in the way of time to first byte. I don't think that's likely to produce anything worthwhile.

    • austin-cheney a day ago

      That is a fair point. I wrote my own implementation of WebSockets in JavaScript and learned much in doing so, but it took tremendous trial and effort to get right. Nonetheless, the result was well worth the effort. I have a means to communicate to the browser and between servers that is real time with freedom to extend and modify it at my choosing. It is unbelievably more responsive than reliance upon HTTP in any of its forms. Imagine being able to execute hundreds of end-to-end test automation scenarios in the browser in 10 seconds. I can do that, but I couldn't with HTTP.

  • bawolff a day ago

    This is an insane take.

    Just to pick at one point of this craziness, you think that communicating over web sockets does not involve round trips????

  • Aurornis a day ago

    > QUIC is faster than prior versions of HTTP, but its still HTTP. It will never be fast enough because its still HTTP:
    > * String headers
    > * round trips
    > * many sockets, there is additional overhead to socket creation, especially over TLS

    QUIC is a transport. HTTP can run on top of QUIC, but the way you’re equating QUIC and HTTP doesn’t make sense.

    String headers and socket opening have nothing to do with the performance issues being discussed.

    String headers aren't even a performance issue at all. The amount of processing done for even the most excessive use of string headers is completely trivial relative to all of the other processing that goes into sending 1,000,000,000 bits per second (Gigabit) over the internet, which is the order of magnitude target being discussed.

    I don’t think you understand what QUIC is or even the prior art in HTTP/2 that precedes these discussions of QUIC and HTTP/3.

    • austin-cheney a day ago

      > String headers aren’t even a performance issue at all.

      That is universally incorrect. String instructions require parsing, as strings are for humans and binary is for machines. There is always performance overhead to string parsing, and it is relatively trivial to measure. I have performance tested this in my own WebSocket and test automation applications. That performance difference scales in logarithmic fashion with the quantity of messages sent/received. I encourage you to run your own tests.

      • jiggawatts a day ago

        Both HTTP/2 and HTTP/3 use binary protocol encoding and compressed (binary) headers. You're arguing a straw man that has little to do with reality.

  • quotemstr 2 days ago

    > * String headers
    > * round trips
    > * many sockets, there is additional overhead to socket creation, especially over TLS
    > * UDP. Yes, in theory UDP is faster than TCP but only when you completely abandon integrity.

    Have you ever read up on the technical details of QUIC? Every single of one of your bullets reflects a misunderstanding of QUIC's design.

    • Aurornis a day ago

      Honestly the entire comment is a head scratcher, from comparing QUIC to HTTP (different layers of the stack) or suggesting that string headers are a performance bottleneck.

      Websockets are useful in some cases where you need to upgrade an HTTP connection to something more. Some people learn about websockets and then try to apply them to everything, everywhere. This seems to be one of those cases.

  • FridgeSeal a day ago

    QUIC isn’t HTTP, QUIC is a protocol that operates at a similar level to UDP and TCP.

    HTTP/3 is HTTP over QUIC. HTTP protocols v2 and onwards use binary headers. QUIC, by design, does 0-RTT handshakes.

    > Yes, in theory UDP is faster than TCP but only when you completely abandon integrity

    The point of QUIC, is that it enables application/userspace level reconstruction with UDP levels of performance. There’s no integrity being abandoned here: packets are free to arrive out of order, across independent sub-streams, and the protocol machinery puts them back together. QUIC also supports full bidirectional streams, so HTTP/3 also benefits from this directly. QUIC/HTTP3 also supports multiple streams per client with backpressure per substream.

    Web-sockets are a pretty limited special case, built on-top of HTTP and TCP. You literally form the http connection and then upgrade it to web-sockets, it’s still TCP underneath.

    Tl;Dr: your gripes are legitimate, but they refer to HTTP/1.1 at most, QUIC and HTTP/3 are far more sophisticated and performant protocols.

    • austin-cheney a day ago

      WebSockets are not built on top of HTTP, though that is how they are commonly implemented. WebSockets are faster when HTTP is not involved. A careful reading of RFC6455 shows it only mentions that the handshake and its response must be a static string resembling a header in the style of RFC2616 (HTTP), but a single static string is not HTTP. This is easily provable if you attempt your own implementation of WebSockets.

      • deathanatos a day ago

        … I mean, in theory someone could craft some protocol that just starts with speaking Websockets or starts with some other handshake¹, I suppose, but the overwhelming majority of the uses of websockets out there are going to be over HTTP, as that's what a browser speaks, and the client is quite probably a browser.

        > A careful reading of RFC6455 only mentions the handshake and its response must be a static string resembling a header in style of RFC2616 (HTTP), but a single static string is not HTTP.

        You're going to have to cite the paragraph, then, because that is most definitely not what RFC 6455 says. RFC 6455 says,

        > The handshake consists of an HTTP Upgrade request, along with a list of required and optional header fields.

        That's not "a single static string". You can't just say "are the first couple of bytes of the connection == SOME_STATIC", as that would not be a conforming implementation. (That would just be a custom protocol with its own custom upgrade-into-Websockets, as mentioned in the first paragraph, but if you're doing that, you might as well just ditch that and just start in Websockets.)

        ¹(i.e., I grant the RFC's "However, the design does not limit WebSocket to HTTP, and future implementations could use a simpler handshake", but making use of that to me that puts us solidly in "custom protocol" land, as conforming libraries won't interoperate.)
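
        (For the curious, the part of the handshake that makes the response non-static: the server has to echo back a digest of the client-supplied Sec-WebSocket-Key. A minimal sketch of that computation, using the example key from RFC 6455 itself:)

          # Sec-WebSocket-Accept = base64(SHA-1(key + fixed GUID)) per RFC 6455.
          import base64, hashlib

          WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

          def accept_value(sec_websocket_key: str) -> str:
              digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
              return base64.b64encode(digest).decode("ascii")

          print(accept_value("dGhlIHNhbXBsZSBub25jZQ=="))  # -> s3pPLMBiTxaQ9kYGzzhZRbK+xOo=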

        • austin-cheney a day ago

          That is still incorrect. Once the handshake completes, the browser absolutely doesn't care about HTTP with regard to message processing over WebSockets. Therefore just achieve the handshake by any means and WebSockets will work correctly in the browser. The only browser-specific behavior of any importance is that RFC6455 masking will be applied to all messaging leaving the browser, and masked messaging entering the browser will fail.

          > You can't just say

          I can say that, because I have my own working code that proves it cross browser and I have written perf tools to analyze it with numbers. One of my biggest learnings about software is to always conduct your own performance measurements because developers tend to be universally wrong about performance assumptions and when they are wrong they are frequently wrong by multiple orders of magnitude.

          As far as custom implementation goes you gain many liberties after leaving the restrictions of the browser as there are some features you don’t need to execute the protocol and there are features of the protocol the browser does not use.

          • deathanatos 17 hours ago

            > That is still incorrect. Once the handshake completes the browser absolutely doesn’t care about HTTP with regard to message processing over WebSockets.

            I never made any claim to the contrary.

            > Therefore just achieve the handshake by any means and WebSockets will work correctly in the browser.

            At which point you're parsing a decent chunk of HTTP.

            > I can say that, because I have my own working code that proves it

            Writing code doesn't prove anything; code can have bugs. According to the standard portion I quoted, your code is wrong. A conforming request isn't required to match.

            > I have written perf tools to analyze it with numbers. One of my biggest learnings about software is to always conduct your own performance measurements because developers tend to be universally wrong about performance assumptions and when they are wrong they are frequently wrong by multiple orders of magnitude.

            Performance has absolutely nothing to do with this.

            Even if such an implementation appears to work today in browsers, this makes situations with a still-conforming UA damn near impossible to debug, and there are no guarantees about header ordering, casing, etc. that would ensure it continues to work. Worse, non-conformant implementations like this are the sort of thing that results in ossification.

            • austin-cheney 14 hours ago

              In my own implementation I wrote a queue system to force message ordering and support offline messaging state and so forth. Control frames can be sent at any time irrespective of message ordering without problems, however.

              In the end an in house implementation that allows custom extensions is worth far more than any irrational unfounded fears. If in the future it doesn’t work then just fix the current approach to account for those future issues. In the meantime I can do things nobody else can because I have something nobody else is willing to write.

              What’s interesting is that this entire thread is about performance concerns. If you raise a solution that people find unfamiliar all the fear and hostility comes out. To me such contrary behavior suggests performance, in general, isn’t a valid concern to most developers in comparison to comfort.