---
Author:
 - Xinyang YU
Author Profile:
 - https://linkedin.com/in/xinyang-yu
tags:
 - networking
Creation Date: 2023-08-18T12:56:00
Last Date: 2025-10-02T22:17:23+08:00
References:
draft:
description:
---
## Abstract
---
[[HTTP 1.1]] but with the following 4 improvements
1. [[#HTTP Multiplexing]]
2. [[#Server Push]]
3. Compressing the [[HTTP Headers]] with [[HPACK]]
4. HTTP messages are binary-encoded instead of ASCII-encoded, which is more efficient to parse, though no longer human-readable
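Point 3 can be illustrated with a toy sketch. This is not the real [[HPACK]] wire format (which uses a static table, a dynamic table, and Huffman coding); it only shows the core idea that a header field sent literally once can later be replaced by a tiny table index:

```python
# Toy illustration of HPACK-style header compression (NOT the real wire
# format): once a header field has been sent as a literal and added to a
# shared table, later occurrences are replaced by a 1-byte index.

def compress(headers, table):
    out = []
    for field in headers:                     # field = "name: value"
        if field in table:
            out.append(bytes([table[field]])) # 1-byte index reference
        else:
            table[field] = len(table) + 1     # add to the shared table
            out.append(field.encode())        # send the literal once
    return b"".join(out)

table = {}
req = ["user-agent: curl/8.0", "accept: text/html", "cookie: session=abc123"]
first = compress(req, table)   # all literals; table gets populated
second = compress(req, table)  # same headers again -> just 3 index bytes
print(len(first), len(second))
```

Since real-world requests repeat most headers (cookies, user agent) on every request, the savings compound across a connection.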

## HTTP Stream
---
- An HTTP stream consists of multiple [[#HTTP Frame]]s

>[!important]
> Each HTTP stream (a pair of [[HTTP Request]]/[[HTTP Response]]) doesn't need to be sent in order within the same [[TCP Connection]], so HTTP responses can arrive in a different order from that of the HTTP requests. This is achieved with the HTTP Stream ID (流标示符) carried in each HTTP frame.

### HTTP Frame
![[http1.2_frame.png|500]]

- An abstraction that allows us to divide an [[HTTP Request]] and [[HTTP Response]] into multiple pieces
- There are two types: the HTTP header frame and the HTTP data frame
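
A minimal sketch of the fixed 9-byte frame header defined in RFC 7540 §4.1 (24-bit payload length, 8-bit type, 8-bit flags, 31-bit stream ID), assuming we already have the raw bytes off the wire:

```python
# Parse the fixed 9-byte HTTP/2 frame header (RFC 7540 §4.1).
# The top bit of the last 4 bytes is reserved, so the stream ID is 31 bits.
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS"}  # the two types named above

def parse_frame_header(raw: bytes):
    length = int.from_bytes(raw[0:3], "big")          # 24-bit payload length
    ftype, flags = raw[3], raw[4]                     # 8-bit type, 8-bit flags
    stream_id = int.from_bytes(raw[5:9], "big") & 0x7FFFFFFF  # clear reserved bit
    return length, FRAME_TYPES.get(ftype, "OTHER"), flags, stream_id

# A HEADERS frame header for stream 1 carrying a 16-byte payload
# (flags=0x4 is END_HEADERS):
raw = (16).to_bytes(3, "big") + bytes([0x1, 0x4]) + (1).to_bytes(4, "big")
print(parse_frame_header(raw))  # (16, 'HEADERS', 4, 1)
```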

## HTTP Multiplexing
---
- An [[HTTP 2.0]] feature, powered by [[#HTTP Stream]], that solves [[Head-of-Line Blocking (队头堵塞)#HTTP Head-of-Line Blocking]] in [[HTTP 1.1]]. Each request-response pair is assigned a unique identifier (stream), allowing the client and server to send and **receive responses out of order**, independent of when the requests were made
- Usually comes with [[TLS]]

>[!success] Better performance
> With multiple [[HTTP Request]]s in flight on one [[TCP Connection]] at the same time, waiting time is greatly reduced, aka better performance.

>[!important] On high-loss or high-latency networks, HTTP/1.1 can actually be faster
> Because we will still have [[Head-of-Line Blocking (队头堵塞)#TCP Head-of-Line Blocking]], HTTP 1.1 may perform better with its multiple [[TCP Connection]]s.
>
> To elaborate: HTTP/2 multiplexes many requests over a **single TCP connection**. If one packet on that connection is lost, the entire connection stalls until retransmission happens. In contrast, HTTP/1.1 can open 6-8 parallel TCP connections per domain, so a packet loss only stalls that one connection, not all of them.

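The out-of-order behaviour described above can be sketched in a few lines (illustrative only; the stream IDs and payloads are made up): frames from different streams interleave on one connection, and the receiver reassembles each message by its stream ID.

```python
# Sketch of multiplexed delivery: frames tagged with a stream ID may
# arrive interleaved and in any order, and the receiver groups them back
# into complete per-stream messages.

def reassemble(frames):
    streams = {}
    for stream_id, chunk in frames:              # frames arrive interleaved
        streams.setdefault(stream_id, []).append(chunk)
    return {sid: b"".join(chunks) for sid, chunks in streams.items()}

# Responses for streams 1 and 3 interleaved on the wire:
wire = [(3, b"<css"), (1, b"<htm"), (3, b">"), (1, b"l>")]
print(reassemble(wire))  # {3: b'<css>', 1: b'<html>'}
```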
## Server Push
---
- An [[HTTP 2.0]] improvement that allows the server to push a [[Network Object]] it thinks the client needs, without receiving any specific [[HTTP Request]] for that object. This **reduces the number of round trips taken**

>[!caution]
> However, when a client requests a web page, many network objects may be pushed in response. This creates a potential [[DDoS]] vector: a single HTTP request can trigger multiple [[HTTP Response]]s.
>
> If not tuned carefully, the server might push unnecessary or already-cached resources, **wasting bandwidth**.

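The caution above can be sketched as a toy push policy (the names here, like `PUSH_MAP` and `client_cache`, are hypothetical, not a real server API): a server that checks what the client already holds before pushing avoids wasting bandwidth on cached resources.

```python
# Toy push policy: for each page, push its sub-resources, but skip any
# the client is believed to already have cached (real servers would need
# a hint such as a cache digest for this).

PUSH_MAP = {"/index.html": ["/style.css", "/app.js"]}  # hypothetical mapping

def resources_to_push(path, client_cache):
    return [r for r in PUSH_MAP.get(path, []) if r not in client_cache]

print(resources_to_push("/index.html", set()))           # push both
print(resources_to_push("/index.html", {"/style.css"}))  # skip cached CSS
```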