I think code to implement http/1.1 in whatever software stack they use would have been shorter than the blog post...
I think you’re severely underestimating the complexity of http/1.1. It’s definitely much simpler than http/2, but it’s a lot of code that needs to be maintained.
To write the code from scratch, sure.
But I'm thinking a few lines of nginx config to proxy http 1.1 to 2
Nginx can't use http2 upstreams, some other reverse proxies can though.
Yes; the web server I use for my site is about twice the size of that blog post. Though, I think that if you drop the file-listing functionality you may be able to get it closer.
probably not - it can be quite poorly defined in places and the edge cases can be very fiddly. by pushing for http/2 it encourages more users to pick it up imo
http/2 surely not simpler?
I feel like securing against request smuggling is simpler with http/2. That is of course only one aspect.
Ultimately though, it's not like this is getting rid of http/1.1 in general, just DNS over http/1.1. I imagine the real reason is simply that nobody was using it. Anyone not on the cutting edge is using normal DNS, everyone else is using http/2 (or 3?) for DNS. It is an extremely weird middle ground to use DNS over http/1.1. I'm guessing the Venn diagram was empty.
Request smuggling is an issue when reverse proxying and multiplexing multiple front-end streams over a shared HTTP/1.1 connection on the backend. HTTP/2 on the front-end doesn't resolve that issue, though the exploit techniques are slightly different. In fact, HTTP/2 on the front-end is a deceptive solution to the problem because HTTP/2 is more complex (the binary framing doesn't save you, yet you still have to deal with unexpected headers--you can still send Content-Length headers, for example) and the exploits less intuitive.
HTTP/1.1 is a simpler protocol and easier to implement, even with chunked Transfer-Encoding and pipelining. (For one thing, there's no need to implement HPACK.) It's the attempt to build multiplexing tunnels across it that is problematic, because buggy or confused handling of the line-delimited framing between ostensibly trusted endpoints opens up opportunities for desync that, in a simple 1:1 situation, would just be a stupid bug, no different from any other protocol implementation bug.
Because HTTP/2 is more complicated, there are arguably more opportunities for classic memory safety bugs. Contrary to common wisdom, there's not a meaningful difference between text and binary protocols in that regard; if anything, text-based protocols are more forgiving of bugs, which is why they tend to promote and ossify the proliferation of protocol violations. I've written HTTP and RTSP/RTP stacks several times, including RTSP/RTP nested inside bonded HTTP connections (what Quicktime used to use back in the day). I've also implemented MIME message parsers. The biggest headache and opportunity for bugs, IME, is dealing with header bodies, specifically the various flavors of structured headers, and unfortunately HTTP/2 doesn't directly address that--you're still handed a blob to parse, same as HTTP/1.1 and MIME generally. HTTP/2 does partially address the header folding problem, but it's common to reject folded headers in HTTP/1.x implementations, something you can't do in e-mail stacks, unfortunately.
Arguably, the complexity issue is not only the protocols themselves but also the fact that, thanks to the companies pushing HTTP/2 and 3, there are now multiple (competing/overlapping/incompatible) protocols.
For example, people passing requests received by HTTP/2 frontends to HTTP/1.1 backends
Having to support http/1.1 and http/2 is definitely not simpler.
HTTP/2 is basically HTTP/1.1, just carried over a custom binary protocol bolted on top of TLS.
According to the RFC:
>The messages in classic UDP-based DNS [RFC1035] are inherently unordered and have low overhead. A competitive HTTP transport needs to support reordering, parallelism, priority, and header compression to achieve similar performance. Those features were introduced to HTTP in HTTP/2 [RFC7540]. Earlier versions of HTTP are capable of conveying the semantic requirements of DoH but may result in very poor performance.
I'd bet basically all their clients are using HTTP/2 and they don't see the point in maintaining a worse version just for compatibility with clients that barely exist.
Mikrotik DoH user here. While I don't use Quad9, I do use 1.1.1.1. I hope they don't follow suit before Mikrotik get a chance to add HTTP/2 support (if ever).
You should look into dnscrypt[0][1]. Easy and lots of options. jedisct1, cofyc, and many others have done a great job over the last decade here.
0. https://dnscrypt.info
1. https://www.dnscrypt.org
> However, we are reaching the end of life for the libraries and code that support HTTP/1.1
What libraries are ending support for HTTP/1.1? That seems like an extremely bad move and somewhat contrived.
HTTP versions less than 2 have serious unresolvable security issues related to http request/response smuggling and stream desynchronization.
https://http1mustdie.com/
If you're using a reverse proxy, maybe. I don't think it's sufficient to kill a whole version of HTTP because of that.
There is an argument HTTP/2 was created by CDNs for CDNs (reverse proxies)
I have an alternative...
Rather than throwing HTTP/1.1 into the garbage can, why don't we throw Postel's Law [0] into the garbage where it belongs.
Every method of performing request smuggling relies on making an HTTP request that violates the spec. A request that sends both Content-Length and Transfer-Encoding is invalid. Sending two Content-Lengths is invalid. Two Transfer-Encoding headers are allowed -- they should be treated as a comma-separated list -- so allow them and treat them as such, or canonicalize them as a single header if you're forwarding to something downstream.
But for fuck's sake, there's literally no reason to accept requests that contain most of the methods that smuggling relies upon. Return a 400 Bad Request and move on. No legit client sends these invalid requests unless they have a bug, and it's not your job as a server to work around their bug.
[0] Aka, The Robustness Principle, "Be conservative in what you send, liberal in what you accept."
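To make the strict approach concrete, here is a rough sketch in Python (my own illustration, not taken from any real server) of rejecting the framing ambiguities that smuggling depends on instead of guessing; the `framing_error` helper and its header representation are hypothetical:

```python
# Sketch only: reject ambiguous framing up front instead of guessing.
# Assumes the request head has already been split into (name, value) pairs.
def framing_error(headers: list[tuple[str, str]]) -> str | None:
    """Return a reason to answer 400 Bad Request, or None if framing is sane."""
    cls = [v.strip() for k, v in headers if k.lower() == "content-length"]
    tes = [v.strip() for k, v in headers if k.lower() == "transfer-encoding"]

    # Content-Length together with Transfer-Encoding is the classic CL.TE / TE.CL setup.
    if cls and tes:
        return "Content-Length and Transfer-Encoding are mutually exclusive"

    # Conflicting or malformed Content-Length values are invalid.
    if cls and (len(set(cls)) > 1 or not all(v.isdigit() for v in cls)):
        return "conflicting or malformed Content-Length"

    # Multiple Transfer-Encoding headers form one comma-separated list;
    # canonicalize it and require "chunked" to be the final coding if present.
    codings = [c.strip().lower() for v in tes for c in v.split(",") if c.strip()]
    if "chunked" in codings and codings[-1] != "chunked":
        return "chunked must be the final transfer coding"

    return None
```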
I wonder too, for a DNS query do you ever need keepalive or chunked encoding? HTTP/1.0 seems appropriate and http2 seems overkill
DNS seems like exactly the scenario where you would want http2 (or http1.1 pipelining, but nobody supports that). You need to make a bunch of DNS requests at once, and don't want to have to wait a round trip to make the next one.
ok, multiple requests makes sense for keepalive (or just support a "batch" query; it's http already, why adhere so tightly to the udp protocol)
http/1.0 w/ keepalive is common (Amazon S3, for example) and is a perfectly suitable, simple protocol for this
Keepalive is not really what you want here.
For this use case you want to be able to send off multiple requests before receiving their responses (you want to prevent head-of-line blocking).
If anything, keep-alive is probably counterproductive. If that is your only option, it's better to just make separate connections.
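For illustration, here is a rough sketch of what that buys in practice, using the third-party httpx and dnspython packages (my choice for the example, not anything a particular resolver ships): several DoH queries are multiplexed over one HTTP/2 connection, so none of them waits behind another's round trip. The resolver URL is a placeholder.

```python
# Sketch: concurrent DoH GETs multiplexed over a single HTTP/2 connection.
# Assumes the third-party httpx (with HTTP/2 support installed) and dnspython
# packages; dns.example.net is a placeholder, not a real endpoint.
import asyncio
import base64

import dns.message
import httpx

RESOLVER = "https://dns.example.net/dns-query"  # placeholder DoH endpoint

def encode_query(name: str, rdtype: str = "A") -> str:
    q = dns.message.make_query(name, rdtype)
    q.id = 0  # RFC 8484 suggests ID 0 so GET responses are cacheable
    return base64.urlsafe_b64encode(q.to_wire()).decode().rstrip("=")

async def resolve_all(names: list[str]) -> None:
    async with httpx.AsyncClient(http2=True) as client:
        tasks = [
            client.get(
                RESOLVER,
                params={"dns": encode_query(name)},
                headers={"accept": "application/dns-message"},
            )
            for name in names
        ]
        # All queries are in flight at once; HTTP/2 streams keep them from
        # serializing behind one slow response the way plain HTTP/1.1 would.
        for resp in await asyncio.gather(*tasks):
            msg = dns.message.from_wire(resp.content)
            print(msg.question[0].name, [str(rr) for rr in msg.answer])

asyncio.run(resolve_all(["example.com", "example.org", "example.net"]))
```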
makes sense but I still would prefer to solve that problem with "batch" semantics at a higher level rather than depend on the wire protocol to bend over backwards
I never understood DOH over DOT. It makes sense if you want to hide DNS lookups so that people cannot block the DNS queries to ad and other scam networks.
Thanks to the ossification of the internet, every new protocol or protocol extension needs to be over HTTPS.
DoT works fine, it's supported on all kinds of operating systems even if they don't advertise it, but DoH arrived in browsers. Some shitty ISPs and terrible middleboxes also block DoT (though IMO that should be a reason to switch ISPs, not a reason to stop using DoT).
On the hosting side, there are more options for HTTP proxies/firewalls/multiplexers/terminators than there are for DNS, so it's easier to build infra around DoH. If you're just a small server, you won't need more than an nginx stream proxy, but if you're doing botnet detection and redundant failovers, you may need something more complex.
> though IMO that should be a reason to switch ISPs, not a reason to stop using DoT
If you have that choice. There are many countries that really want to control what their citizens see and can access at this point. If we had DoH + ECH widely adopted, it would heavily limit their power.
> Thanks to the ossification of the internet, every new protocol or protocol extension needs to be over HTTPS.
If someone can tell you're using HTTPS instead of some other TLS-encrypted protocol, that means they've broken TLS.
> If someone can tell you're using HTTPS instead of some other TLS-encrypted protocol, that means they've broken TLS.
Lots of clients just tell the world. ALPN is part of the unencrypted client hello.
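As a sketch of what that means (using Python's standard ssl module; whether a given client sends ALPN at all varies, so treat this as illustrative): the protocol list is set before the handshake and travels in the plaintext ClientHello, so an on-path box can tell a "dot" connection from ordinary "h2"/"http/1.1" traffic without decrypting anything.

```python
# Sketch: the ALPN list offered here is visible in the unencrypted ClientHello,
# so an observer can distinguish DoT from ordinary HTTPS even though everything
# after the handshake is opaque. The hostname is a placeholder.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["dot"])               # typical DoT client
# ctx.set_alpn_protocols(["h2", "http/1.1"])  # typical browser / DoH client

with socket.create_connection(("dns.example.net", 853)) as raw:
    with ctx.wrap_socket(raw, server_hostname="dns.example.net") as tls:
        print("negotiated:", tls.selected_alpn_protocol())
```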
I’d say nowadays 443/tcp is the only port that you’ll find open in any usable network, anything else is part of a corporate network whack-a-mole game. So while DoH and DoT traffic shouldn’t be distinguishable, 853/tcp is surely a weird port in the grand scheme of things.
My ISP (my area is serviced by 1 more but they offer lower speeds) blocks the DoT port. They cannot block 443. If they start blocking popular DoH domains, I can use any of the mirrors or run my own over https://wongogue.in/catpics/
Anything that doesn't provide raw access at the internet protocol layer (other than RPF to prevent spoofing) shouldn't qualify as an internet provider.
Well…some countries (and some regions) don’t get a choice. We do what we can.
Anyone who has their DNS filtered, e.g., by ISPs or hotel networks that redirect the DNS port, can use DoH to work around the problem.
DOH gets around malicious network providers blocking DOT traffic to enforce their own DNS services for "efficiency" reasons.
Most ISPs just want to sell your data and with encrypted client hello and DOH they’re losing visibility into what you’re doing.
Except encrypted client hello (ECH) is just a draft and isn't being used server side on the public www
If I'm wrong then please provide some examples of servers that support ECH
Don't you just intercept traffic to well-known recursive resolvers? And then drop packets to ports other than 53?
That's the beauty of DoH - you don't have to pick a resolver which uses a dedicated IP. You can even stand your own up behind a CDN and blocking it would mean blocking HTTPS traffic to the CDN.
If I'm an evil monetizing ISP or a great firewall, I don't really need to catch 100% of the traffic I'm trying to prevent. If there's a handful of people who can circumvent my restrictions, that's fine. As long as I get all the people trying to use popular DNS, that's good enough.
If I really do need to get that last bit, there's always other analysis to be done (request/response size/cadence, always talks to host X before making connections to other hosts, etc)
Not 100% of people need/care about such workarounds either though, so it works out.
For true government level interest in what you are doing, it's a much harder conversation than e.g. avoiding ISPs making a buck intercepting with wildcard fallbacks and is probably going to need to extend to something well beyond just DoH if one is convinced that's their primary concern.
Well, that’s T-Mobile for you.
They force you to stay behind their NAT and recently started blocking VPN connections to home labs even.
DOT picked an odd port, DOH uses 443. Otherwise they both have the benefits of TLS.
Because if you're on the kind of malicious network that's the reason to use encrypted DNS at all, then your connection attempts on port 853 will probably just get blocked wholesale. DoH is better since it looks the same as all other HTTPS traffic.
And you can still block ad and scam domains with DoH. Either do so with a browser extension, in your hosts file, or with a local resolver that does the filtering and then uses DoH to the upstream for any that it doesn't block.
> And you can still block ad and scam domains with DoH.
How?
There are certain browsers that ignore your DNS settings and talk directly to DoH servers. How could I check what the browser is requesting through an SSL session?
Do you want me to spoof a cert and put it on a MITM node?
These are my nameservers:
nameserver 10.10.10.65
nameserver 10.10.10.66
If the browser plays along, then talking to these is the safest bet for me, because it runs AdGuardHome and removes any ad or malicious (these are interchangeable terms) content by returning 0.0.0.0 for those queries. I use DoT as the uplink so the ISP cannot look into my traffic, and I use http->https upgrades for everything.
For me, DoH makes it harder to filter the internet.
There are a plethora of ways to control whether the browser uses its own DoH or the system DNS. Some inside the browser itself, some in the machine's OS, and some from the local network.
You can also configure the browser to use your chosen DoH server directly, but this is often as much work as just telling the browser to use the system DNS server and setting that up as DoH anyways.
AdGuard has a DoH server. Just configure your browser to use https://dns.adguard-dns.com/dns-query for it.
DoQ is better than either dot/doh
DNSCurve is better than DoQ
It's both. In oppressive countries (Iran, China, Russia) where all traffic is filtered, DOH is supposed to help keep things concealed, too.
RFC 8484:
"5.2. HTTP/2
HTTP/2 [RFC7540] is the minimum RECOMMENDED version of HTTP for use with DoH."
One paper I read some years ago reported that DoH is faster than DoT, but for multiple queries in a single TCP connection outside the browser I find that DoT is faster.
I use a local forward proxy for queries with HTTP/2. (Using libnghttp2 is another alternative.) In my own case (YMMV), HTTP/2 is not significantly faster than using HTTP/1.1 pipelining.
For me, streaming TCP queries with DoT blows DoH away
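For anyone curious what streaming TCP queries over DoT looks like, here is a rough sketch using Python's ssl module and the third-party dnspython package (my own illustration, not the setup described above): each message gets the two-byte length prefix from RFC 1035's TCP framing, several queries are written back-to-back on one TLS connection, and the responses are read off the same stream.

```python
# Sketch: several DNS queries streamed over one DoT connection (TCP/853 + TLS).
# Assumes the third-party dnspython package; Quad9's public DoT endpoint is used
# as an example. Responses can arrive out of order in general; matching them to
# queries by message ID is omitted for brevity, as is error handling.
import socket
import ssl
import struct

import dns.message

def read_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed mid-message")
        buf += chunk
    return buf

names = ["example.com", "example.org", "example.net"]
ctx = ssl.create_default_context()

with socket.create_connection(("9.9.9.9", 853)) as raw:
    with ctx.wrap_socket(raw, server_hostname="dns.quad9.net") as tls:
        # Write all queries up front; RFC 1035 TCP framing = 2-byte length prefix.
        for name in names:
            wire = dns.message.make_query(name, "A").to_wire()
            tls.sendall(struct.pack("!H", len(wire)) + wire)
        # Then read the responses back off the same stream.
        for _ in names:
            (length,) = struct.unpack("!H", read_exact(tls, 2))
            msg = dns.message.from_wire(read_exact(tls, length))
            print(msg.question[0].name, [str(rr) for rr in msg.answer])
```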
HTTP/1.1 is still heavily used in embedded systems.
But is DoH? If your library is too old to support http2, what are the chances you've upgraded the DNS resolver to a DoH resolver?
Luckily it's pretty easy to run your own DoH server if you're deploying devices in the field, and there are alternatives to Quad9.
It's not about age, it's about complexity. An HTTP/1.1 client is trivial to implement.
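As far as the client side goes, here is roughly the minimum (a sketch that ignores chunked bodies, redirects, and keep-alive; a robust server, which is what matters for a resolver, is a very different amount of work):

```python
# Sketch: a minimal HTTP/1.1 GET client over a raw socket. It skips chunked
# transfer coding, redirects, TLS, and persistent connections, which is exactly
# why a simple client is far less work than a correct server.
import socket

host, path = "example.com", "/"
request = (
    f"GET {path} HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request)
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

head, _, body = response.partition(b"\r\n\r\n")
print(head.decode("latin-1").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"
```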
We're talking about an HTTP/1.1 server here
NextDNS has a DOH3 (as in, http/3) endpoint but afaict it doesn't seem to always use http/3.