But I'm thinking a few lines of nginx config could proxy HTTP/1.1 to HTTP/2.
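If that's the route, here's a minimal sketch of the direction nginx supports natively: terminating HTTP/2 from clients and speaking HTTP/1.1 to the upstream (nginx's proxy module doesn't talk HTTP/2 to backends). Hostnames, ports, and cert paths are placeholders:

    server {
        listen 443 ssl http2;
        server_name doh.example.com;

        ssl_certificate     /etc/ssl/doh.crt;
        ssl_certificate_key /etc/ssl/doh.key;

        location /dns-query {
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # allow upstream keep-alive
            proxy_pass http://127.0.0.1:8053;
        }
    }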
>The messages in classic UDP-based DNS [RFC1035] are inherently unordered and have low overhead. A competitive HTTP transport needs to support reordering, parallelism, priority, and header compression to achieve similar performance. Those features were introduced to HTTP in HTTP/2 [RFC7540]. Earlier versions of HTTP are capable of conveying the semantic requirements of DoH but may result in very poor performance.
I'd bet basically all their clients are using HTTP/2 and they don't see the point in maintaining a worse version just for compatibility with clients that barely exist.
Ultimately though, it's not like this is getting rid of HTTP/1.1 in general, just DNS over HTTP/1.1. I imagine the real reason is simply that nobody was using it. Anyone not on the cutting edge is using normal DNS; everyone else is using HTTP/2 (or 3?) for DNS. It's an extremely weird middle ground to use DNS over HTTP/1.1. I'm guessing the Venn diagram was empty.
HTTP/1.1 is a simpler protocol and easier to implement, even with chunked Transfer-Encoding and pipelining. (For one thing, there's no need to implement HPACK.) It's trying to build multiplexing tunnels across it that is problematic, because buggy or confused handling of the line-delimited framing between ostensibly trusted endpoints opens up opportunities for desync that, in a simple 1:1 situation, would just be a stupid bug, no different from any other protocol implementation bug.
Because HTTP/2 is more complicated, there are arguably more opportunities for classic memory safety bugs. Contrary to common wisdom, there's not a meaningful difference between text and binary protocols in that regard; if anything, text-based protocols are more forgiving of bugs, which is why they tend to promote and ossify the proliferation of protocol violations.

I've written HTTP and RTSP/RTP stacks several times, including RTSP/RTP nested inside bonded HTTP connections (what QuickTime used to use back in the day). I've also implemented MIME message parsers. The biggest headache and opportunity for bugs, IME, is dealing with header bodies, specifically the various flavors of structured headers, and unfortunately HTTP/2 doesn't directly address that--you're still handed a blob to parse, same as HTTP/1.1 and MIME generally. HTTP/2 does partially address the header folding problem, but it's common to reject folded headers in HTTP/1.x implementations, something you can't do in e-mail stacks, unfortunately.
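To make the "handed a blob" complaint concrete, here's the classic trap, as a simplified Python sketch (real quoted-strings also allow backslash escapes, which this ignores):

    # Naive comma-splitting of a header blob breaks as soon as a
    # quoted-string contains a comma; you need at least this much state.
    def split_header_list(value):
        parts, buf, in_quotes = [], [], False
        for ch in value:
            if ch == '"':
                in_quotes = not in_quotes
                buf.append(ch)
            elif ch == "," and not in_quotes:
                parts.append("".join(buf).strip())
                buf = []
            else:
                buf.append(ch)
        parts.append("".join(buf).strip())
        return parts

    print(split_header_list('text/html;q=0.9, application/json;title="a, b"'))
    # -> ['text/html;q=0.9', 'application/json;title="a, b"']
    # whereas value.split(",") would cut the quoted string in half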
For example, people passing requests received by HTTP/2 frontends to HTTP/1.1 backends
What libraries are ending support for HTTP/1.1? That seems like an extremely bad move and somewhat contrived.
Rather than throwing HTTP/1.1 into the garbage can, why don't we throw Postel's Law [0] into the garbage where it belongs.
Every method of performing request smuggling relies on making an HTTP request that violates the spec. A request that sends both Content-Length and Transfer-Encoding is invalid. Sending two Content-Lengths is invalid. Two Transfer-Encoding headers are allowed -- they should be treated as a comma-separated list -- so allow them and treat them as such, or canonicalize them as a single header if you're transforming the request for something downstream.
But for fuck's sake, there's literally no reason to accept requests that contain most of the patterns that smuggling relies upon. Return a 400 Bad Request and move on. No legit client sends these invalid requests unless it has a bug, and it's not your job as a server to work around their bug. (A sketch of this reject-first policy follows the footnote.)
[0] Aka, The Robustness Principle, "Be conservative in what you send, liberal in what you accept."
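A minimal sketch of that reject-first policy, assuming `headers` is a hypothetical list of (name, value) pairs as parsed off the wire:

    # Refuse any request whose framing is ambiguous instead of guessing.
    def framing_is_unambiguous(headers):
        cl = [v.strip() for k, v in headers if k.lower() == "content-length"]
        te = [v.strip() for k, v in headers if k.lower() == "transfer-encoding"]
        if cl and te:
            return False   # Content-Length + Transfer-Encoding together: 400
        if len(set(cl)) > 1:
            return False   # two conflicting Content-Lengths: 400
        # Multiple Transfer-Encoding headers are fine: treat them as one
        # comma-separated list, e.g. ", ".join(te), when forwarding.
        return True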
HTTP/1.0 with keep-alive is common (Amazon S3, for example) and is a perfectly suitable, simple protocol for this.
For this use case you want to be able to send off multiple requests before receiving their responses (you want to prevent head-of-line blocking).
If anything, keep-alive is probably counterproductive. If that is your only option, it's better to just make separate connections.
For DNS this might come up in parsing a document format. E.g. in HTML: first you see a <script> tag, so you fire off the DNS request for that and go back to parsing. Before you get the DNS result, you see an <img> tag for a different domain and want to fire off the DNS request for that too. With a batch method you would have to wait until you have all the domain names before sending off the request (this might get more important if you are receiving the file you are parsing over the network and you don't know if the next packet containing the next part of the file is 1ms away or 2000ms away).
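A rough sketch of the non-batched approach (the parser stream here is a hypothetical stand-in): each lookup is launched the moment its name appears, rather than waiting for the whole document.

    import asyncio

    async def discovered_names():
        # Stand-in for an incremental HTML parser: names trickle in as
        # more of the document arrives over the network.
        for name in ["example.com", "example.org"]:
            await asyncio.sleep(0.01)
            yield name

    async def main():
        loop = asyncio.get_running_loop()
        lookups = []
        async for name in discovered_names():
            # fire off the lookup immediately; don't wait for the result
            lookups.append(asyncio.ensure_future(loop.getaddrinfo(name, 443)))
        for task in lookups:
            print(await task)

    asyncio.run(main())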
the problem with relying on the wire protocol to streamline requests that should've been batched is that it lacks the context to do it well
DoT works fine, it's supported on all kinds of operating systems even if they don't advertise it, but DoH arrived in browsers. Some shitty ISPs and terrible middleboxes also block DoT (though IMO that should be a reason to switch ISPs, not a reason to stop using DoT).
On the hosting side, there are more options for HTTP proxies/firewalls/multiplexers/terminators than there are for DNS, so it's easier to build infra around DoH. If you're just a small server, you won't need more than an nginx stream proxy, but if you're doing botnet detection and redundant failovers, you may need something more complex.
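For that small-server case, something along these lines is about all it takes (addresses are placeholders); the stream block just relays the TLS bytes, leaving the backend to terminate TLS itself:

    stream {
        server {
            listen 443;                 # DoH port, passed through as-is
            proxy_pass 127.0.0.1:8443;  # your actual DoH server
        }
    }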
If someone can tell you're using HTTPS instead of some other TLS-encrypted protocol, that means they've broken TLS.
Lots of clients just tell the world. ALPN is part of the unencrypted ClientHello.
Most ISPs just want to sell your data, and with Encrypted Client Hello and DoH they're losing visibility into what you're doing.
If I really do need to get that last bit, there's always other analysis to be done (request/response size/cadence, always talks to host X before making connections to other hosts, etc)
For true government-level interest in what you are doing, it's a much harder conversation than e.g. avoiding ISPs making a buck by intercepting with wildcard fallbacks, and it's probably going to need to extend to something well beyond just DoH if one is convinced that's their primary concern.
They force you to stay behind their NAT and recently started blocking VPN connections to home labs even.
If I'm wrong then please provide some examples of servers that support ECH
Whoever designed TLS did not expect third parties, so-called "content delivery networks", "cloud providers", etc., wanting to offer hosting to an unlimited number of customers ($$) on a limited pool of IP addresses
The problem of cleartext SNI was solved in 2011, well before "QUIC" existed:
http://curvecp.org/addressing.html
Without TLS and without SNI anyone can host multiple HTTPS sites on a single IP address
And you can still block ad and scam domains with DoH. Either do so with a browser extension, in your hosts file, or with a local resolver that does the filtering and then uses DoH to the upstream for any that it doesn't block.
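A minimal sketch of such a filtering resolver (using the dnslib package; the blocklist and upstream URL are placeholders): answer blocked names with 0.0.0.0 locally, forward everything else to a DoH upstream as RFC 8484 wire-format POSTs.

    import socket
    import requests
    from dnslib import DNSRecord, RR, QTYPE, A

    BLOCKLIST = {"ads.example.com."}
    UPSTREAM = "https://dns.quad9.net/dns-query"

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 5353))
    while True:
        data, addr = sock.recvfrom(4096)
        query = DNSRecord.parse(data)
        if str(query.q.qname).lower() in BLOCKLIST:
            # Answer blocked names locally with 0.0.0.0
            reply = query.reply()
            reply.add_answer(RR(query.q.qname, QTYPE.A, ttl=60, rdata=A("0.0.0.0")))
            sock.sendto(reply.pack(), addr)
        else:
            # Forward the raw wire-format query to the DoH upstream
            resp = requests.post(UPSTREAM, data=data,
                                 headers={"Content-Type": "application/dns-message"})
            sock.sendto(resp.content, addr)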
How?
There are certain browsers that ignore your DNS settings and talk directly to DoH servers. How could I check what the browser is requesting through an SSL session?
Do you want me to spoof a cert and put it on a MITM node?
These are my nameservers:
nameserver 10.10.10.65
nameserver 10.10.10.66
If the browser plays along, then talking to these is the safest bet for me, because it runs AdGuard Home and removes any ad or malicious (these are interchangeable terms) content by returning 0.0.0.0 for those queries. I use DoT as the uplink so the ISP cannot look into my traffic, and I use http->https upgrades for everything. For me, DoH makes it harder to filter the internet.
You can also configure the browser to use your chosen DoH server directly, but this is often as much work as just telling the browser to use the system DNS server and setting that up as DoH anyways.
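In Firefox, for example, that direct-DoH setup is a pair of about:config prefs (the URI below is a placeholder):

    network.trr.mode = 3    # 3 = DoH only; 2 = try DoH, fall back to system DNS
    network.trr.uri  = https://dns.example/dns-query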
"5.2. HTTP/2
HTTP/2 [RFC7540] is the minimum RECOMMENDED version of HTTP for use with DoH."
One paper I read some years ago reported that DoH is faster than DoT, but for multiple queries in a single TCP connection outside the browser, I find that DoT is faster.
I use a local forward proxy for queries with HTTP/2. (Using libnghttp2 is another alternative.) In my own case (YMMV), HTTP/2 is not significantly faster than using HTTP/1.1 pipelining.
For me, streaming TCP queries with DoT blows DoH away
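Concretely, a sketch of streaming queries over one DoT connection (using the dnslib package and Quad9's DoT endpoint; swap in your own resolver). Each DNS message carries a 2-byte length prefix per RFC 7858:

    import socket, ssl, struct
    from dnslib import DNSRecord

    def recv_exact(sock, n):
        data = b""
        while len(data) < n:
            data += sock.recv(n - len(data))
        return data

    names = ["example.com", "example.org", "example.net"]
    ctx = ssl.create_default_context()
    with socket.create_connection(("9.9.9.9", 853)) as tcp:
        with ctx.wrap_socket(tcp, server_hostname="dns.quad9.net") as tls:
            for name in names:                       # send all queries up front
                q = DNSRecord.question(name).pack()
                tls.sendall(struct.pack("!H", len(q)) + q)
            for _ in names:                          # then collect the answers
                (length,) = struct.unpack("!H", recv_exact(tls, 2))
                reply = DNSRecord.parse(recv_exact(tls, length))
                # NB: a real client matches replies to queries by DNS ID,
                # since the server may answer out of order.
                print(reply.q.qname, reply.short())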
Luckily it's pretty easy to run your own DoH server if you're deploying devices in the field, and there are alternatives to Quad9.