HTTP 2.0

Screenshot of the World Wide Web (Nexus) Web Browser in use

When the HTTP protocol was initially defined, it was extremely simple: the client opens a socket, sends the GET command followed by the URL, and gets in return the headers, a blank line, and then the content of the page. A simple protocol for a simple job: distributing academic content. The protocol was refined a bit in 1999, but stayed essentially the same.
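
To make that concrete, here is a minimal sketch in Python of that early exchange: open a socket, send a GET for a path, read back the headers, a blank line and the page. The host example.com and the HTTP/1.0 request line are my own choices for illustration, since few servers still answer the bare original form.

    import socket

    # Open a TCP connection to the web server (host and port chosen for illustration).
    sock = socket.create_connection(("example.com", 80))

    # The early protocol: GET followed by the path. The HTTP/1.0 version tag and
    # Host header are concessions to servers that no longer speak the bare form.
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

    # Read until the server closes the connection: headers, a blank line, the page.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk
    sock.close()

    headers, _, body = response.partition(b"\r\n\r\n")
    print(headers.decode("latin-1"))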

Version 2.0 of the HTTP protocol is taking shape, and it is largely based on Google's SPDY protocol. Where version one was simple and ended up changing the world, version two is rather complex and aims at being more efficient. This makes sense: the goal is to keep the web as it is while making better use of network resources, but there is a certain irony in seeing HTTP adopt features that were present in some networking protocols but deemed too complicated: binary format, multiplexing.
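
As a rough illustration of that binary format, the sketch below unpacks the fixed 9-octet header that HTTP/2 puts in front of every frame; the 31-bit stream identifier at the end is what carries the multiplexing. This is a toy decoder, not a real HTTP/2 implementation.

    import struct

    def parse_frame_header(data: bytes) -> dict:
        """Decode the 9-byte HTTP/2 frame header: length, type, flags, stream id."""
        if len(data) < 9:
            raise ValueError("need at least 9 bytes")
        # 24-bit payload length (split as 1 + 2 bytes here), 8-bit type, 8-bit flags,
        # then 32 bits whose low 31 bits are the stream identifier.
        length_hi, length_lo, frame_type, flags, stream = struct.unpack(">BHBBI", data[:9])
        return {
            "length": (length_hi << 16) | length_lo,
            "type": frame_type,
            "flags": flags,
            "stream_id": stream & 0x7FFFFFFF,
        }

    # Example: the header of an empty SETTINGS frame (type 0x4) on stream 0.
    print(parse_frame_header(bytes.fromhex("000000040000000000")))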

For sure, the environment where HTTP 2 will be released is very different from the one version 1 saw. The protocol that underpins HTTP is TCP, but that protocol is becoming less and less visible, and less and less usable. In 1991 any new protocol would just use a new port; nowadays the only port that is universally usable is port 80, which is reserved… for HTTP. HTTP 1.0 delegated multiplexing to TCP, i.e. a browser would open multiple connections in parallel; HTTP 2.0 is all about squeezing the maximum out of one connection.

As often in computer science, the lower levels of the software stack fossilize and new systems are built on top of them while the lower levels are slowly forgotten. Maybe this is the case with all human-built structures?

Screenshot of the World Wide Web browser running on NeXTSTEP. © Tim Berners-Lee, public domain.

5 thoughts on “HTTP 2.0”

  1. Any sufficiently complicated network protocol contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of TCP.

  2. The internet routes around any limit. With SOAP and all this web services stuff, all the traffic between applications could use the single port that was still open, 80. I suppose that someone will design “ports” or a similar concept inside this HTTP2, and firewalls will block some, and everything will go through the last open one that we all need to access Google & Facebook. I suppose this is a consequence of security + lack of flexibility from IT + laziness from programmers.

    I observe the same fossilization in databases for different reasons:

    • SQL (and more generally the relational database) was invented for humans (SELECT mydata FROM mytable). Now humans use other tools that generate SQL queries for them (BusinessObjects, ORM, web pages…).
    • To manipulate this data, SQL was okay. Now many technicians use ETL tools (Informatica, Oracle Data Integrator…), sometimes very useful, sometimes unable to do easily what a single SQL query would do painlessly.
    • The worst: SQL is able to store links between objects with constraints. I see more and more applications using the database as a sort of filesystem and keeping all the logic, constraints and links in the Java or C++ layer, or only as some meta-information that you cannot understand with another reporting tool. Extreme case: putting a BLOB in the database (see link)
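
    To make the contrast concrete, here is a small sketch using Python's built-in sqlite3 (table and column names invented for the example): in the first version the database itself enforces the link between objects, in the second it only stores an opaque blob that the application layer alone understands.

        import json
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("PRAGMA foreign_keys = ON")

        # The relational way: the link and its constraint live in the database.
        db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
        db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
                   " customer_id INTEGER NOT NULL REFERENCES customer(id),"
                   " amount REAL NOT NULL)")
        db.execute("INSERT INTO customer VALUES (1, 'Alice')")
        db.execute("INSERT INTO orders VALUES (1, 1, 9.90)")      # accepted
        try:
            db.execute("INSERT INTO orders VALUES (2, 42, 5.0)")  # no customer 42
        except sqlite3.IntegrityError as error:
            print("the database enforced the link:", error)

        # The 'filesystem' way: an opaque blob, invisible to any reporting tool.
        db.execute("CREATE TABLE blob_store (id INTEGER PRIMARY KEY, payload BLOB)")
        db.execute("INSERT INTO blob_store VALUES (1, ?)",
                   (json.dumps({"customer": 42, "amount": 5.0}).encode(),))
        # Nothing stops the dangling reference to customer 42 here.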

    • Ports exist in HTTP with at least 2 variants: virtual hosts and paths.
      DPI cannot do much against TLS. It is a trendy idea to consider SSL as a layering error. See for example all the new protocols that put crypto below transmission control (SCTP over DTLS, CurveCP, …)

  3. DPI (Deep Packet Inspection) is basically about analysing the protocol within packets, so it will just need to be adapted to HTTP2 packets.

    Another example of absurd layering of protocols is JSONx, which is a way of encoding JSON as XML.
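
    As a rough sketch of what that looks like, the snippet below turns a small JSON value into JSONx-style XML; the element names follow my recollection of the JSONx draft and are illustrative rather than normative.

        import json

        # Toy converter: each JSON value becomes an XML element named after its
        # type, and object members carry a "name" attribute. Illustrative only.
        def to_jsonx(value, name=""):
            attr = f' name="{name}"' if name else ""
            if isinstance(value, dict):
                inner = "".join(to_jsonx(v, k) for k, v in value.items())
                return f"<json:object{attr}>{inner}</json:object>"
            if isinstance(value, list):
                inner = "".join(to_jsonx(v) for v in value)
                return f"<json:array{attr}>{inner}</json:array>"
            if isinstance(value, bool):
                return f"<json:boolean{attr}>{str(value).lower()}</json:boolean>"
            if isinstance(value, (int, float)):
                return f"<json:number{attr}>{value}</json:number>"
            if value is None:
                return f"<json:null{attr}/>"
            return f"<json:string{attr}>{value}</json:string>"

        print(to_jsonx(json.loads('{"answer": 42, "ok": true}')))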
