Wait for it – HTTP/2 begins Working Group Last Call!

Updated August 7, 2014 with ALPN as proposed standard

Working Group Last Call (WGLC) was finally announced for HTTP/2, ending on September 1st! This is an important milestone in the standards process.

With the publication of the sixth implementation draft, the technical requirements for HTTP/2 are now reaching closure in the working group. For the extremely curious, the next steps to proposed standard are outlined in Document Shepherding from Working Group Last Call to Publication.

The rapid progress of HTTP/2 demonstrates the benefits of extensive face-to-face interim meetings with committed implementers and the use of a GitHub repository to accept technical contributions to the draft and track outstanding issues. It’s an innovative model for developing standards that has now been adopted by the IETF Transport Layer Security (TLS) working group for TLS 1.3.

HTTP/2 at IETF 90 in Toronto

Gabriel Montenegro, Rob Trace, and David Walp from Microsoft attended the recent IETF HTTPbis meetings in Toronto with me.


Sunday Night Reception at IETF 90

The related meeting materials are available:



Originally, the HTTPbis working group had not planned for HTTP/2 agenda items at IETF 90. But there were a few lingering issues that we wanted to close before WGLC, such as mandatory-to-implement cipher suites.

In addition, we reviewed an update to the Alternative Services draft, which is the first instance of a non-critical HTTP/2 extension. You may recall that extensions for HTTP/2 were introduced at the interim meeting in NYC back in June. As I wrote then:

Mike Bishop from Microsoft shared an Extension Frames in HTTP/2 proposal on the HTTPbis mailing list prior to the NYC meeting. The proposal resonated with many participants since it reflected discussions from an earlier interim meeting hosted by Microsoft in Bellevue. The HTTP/2 editor, Martin Thomson, simplified and refined the proposal for discussion at the interim meeting.
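The extension model rests on a simple rule at the framing layer: an endpoint must ignore and discard frames whose type it does not recognize, so new frame types can be deployed without breaking existing peers. A minimal sketch of that rule, assuming the 9-octet frame header used in the implementation drafts (24-bit length, 8-bit type, 8-bit flags, 31-bit stream identifier) and hypothetical buffer handling:

```python
import struct

# Frame types defined by the core HTTP/2 draft; any other value is an
# extension frame that a plain endpoint must silently skip.
KNOWN_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x2: "PRIORITY", 0x3: "RST_STREAM",
               0x4: "SETTINGS", 0x5: "PUSH_PROMISE", 0x6: "PING",
               0x7: "GOAWAY", 0x8: "WINDOW_UPDATE", 0x9: "CONTINUATION"}

def parse_frame(buf):
    """Parse one frame from buf; return (type_name, payload, rest).

    type_name is None for an unrecognized (extension) frame type,
    signalling the caller to discard the payload and carry on.
    """
    # 9-octet header: 3-byte length, 1-byte type, 1-byte flags,
    # 4 bytes whose low 31 bits are the stream identifier.
    length = int.from_bytes(buf[0:3], "big")
    ftype = buf[3]
    stream_id = struct.unpack("!I", buf[5:9])[0] & 0x7FFFFFFF
    payload = buf[9:9 + length]
    rest = buf[9 + length:]
    return KNOWN_TYPES.get(ftype), payload, rest
```

For example, a PING frame (type 0x6, 8-octet payload, stream 0) parses to `("PING", <8 bytes>, b"")`, while a frame with an unassigned type such as 0x0A comes back with `None` as the type name, and the connection continues undisturbed.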

Before I left Toronto, I made a pilgrimage to the statue of the reclusive pianist Glenn Gould in front of the CBC studios:


Update on Application-Layer Protocol Negotiation (ALPN)

HTTP/2 requires Application-Layer Protocol Negotiation (ALPN) to negotiate the protocol securely within the TLS handshake. After the Internet Engineering Steering Group (IESG) approved ALPN as a Proposed Standard, it was recently published as RFC 7301, Transport Layer Security (TLS) Application-Layer Protocol Negotiation Extension. Congratulations to the co-authors from Cisco, Google, Microsoft, and Orange!

ALPN support is planned for the OpenSSL 1.0.2 feature release. ALPN status for open source libraries and languages is tracked here.
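To make the negotiation concrete, here is a minimal sketch of an ALPN-enabled TLS client configuration using Python’s standard-library `ssl` module (which exposes ALPN when the underlying OpenSSL supports it); the fallback to `http/1.1` is an illustrative choice, not a requirement:

```python
import ssl

def make_h2_context():
    """Build a TLS client context that offers HTTP/2 via ALPN.

    The protocol identifiers are the IANA-registered strings:
    "h2" for HTTP/2 over TLS, with "http/1.1" as a fallback.
    """
    context = ssl.create_default_context()
    # The client advertises its list in preference order; the server
    # picks one protocol (or fails the handshake if none overlap).
    context.set_alpn_protocols(["h2", "http/1.1"])
    return context

# After wrapping a connected socket with
#   context.wrap_socket(sock, server_hostname=host)
# and completing the handshake, selected_alpn_protocol() on the
# resulting socket returns "h2" if the server agreed to HTTP/2,
# or None if no common protocol was negotiated.
```

Because the choice happens inside the TLS handshake itself, the client knows it is speaking HTTP/2 before sending a single application byte, avoiding an extra upgrade round trip.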

Are we there yet? Getting closer.

As captured in the NYC interim minutes, the proposed schedule for completing HTTP/2 looks like this:

  • 8/1-9/1 Working group last call
  • 10/1 Begin IETF Last Call
  • 11/9-11/14 Address IETF Last Call issues at IETF 91 as necessary
  • 12/4 IESG Telechat to review HTTP/2 as Proposed Standard
  • 1/2015 Address IESG comments and submit to RFC Editor
  • 2/2015 Publish HTTP/2 as an RFC

The list of HTTP/2 implementations continues to grow. Now would be a good time to commit to deploying an implementation to help validate the WGLC draft. I’ll continue to update you on the progress of this important standard and Microsoft’s support.

5 thoughts on “Wait for it – HTTP/2 begins Working Group Last Call!”

  1. This is great. I see, though, that your implementation (http2-katana) is the only implementation that does not support NPN. I am not familiar with the protocol at all, but it seems to me that a server must support a negotiation method (any) in order to make use of HTTP2. Is this an indication that Microsoft will not be supporting NPN in the future and browsers would have to support ALPN (if a browser wants to support Microsoft servers)?

  2. Pingback: Pie in the Sky (August 8th, 2014) | CmdExec Technology News

  3. Hi, you seem to be someone keen on improving things. I’m working with application performance tools, and I see a need for some information to be presented to the web server by the browser (i.e. a set of HTTP request headers) that indicates various bandwidth constraints on the browser, for the web site, the CDN(s) it uses, and key third parties.

    One tool I use adds a JavaScript agent to the browser and then artificially requests files of various sizes to determine the speed, which adds load on the datacentre and the device. It would make sense for the browser to do this calculation using the resources that actually contribute to the page from each of the domains it is accessing.

    Firstly, does that make sense? And if so, how do I get this discussion going in the group proposing the new HTTP specification?

  4. Pingback: Architecting Websites For The HTTP/2 Era