HTTP/1.1 vs HTTP/2 vs HTTP/3 (QUIC) for Frontend Performance
HTTP/1.1
Connection-heavy baseline
- Parallel socket overhead
- Limited multiplexing
HTTP/2
Multiplexed streams
- Fewer connections
- TCP-level loss still matters
HTTP/3 (QUIC)
UDP-based transport
- Improved loss recovery
- Better connection migration
Frontend impact
UX outcome lens
- Protocol != full perf fix
- Measure LCP/INP outcomes
Core Lens
Protocol upgrades help only when combined with cache policy and asset prioritization.
Flow
HTTP protocol evolution reflects attempts to reduce network inefficiencies that affect web performance. HTTP/1.1 relied on multiple TCP connections to achieve concurrency. HTTP/2 introduced multiplexing to send multiple requests through one connection. HTTP/3 replaces TCP with QUIC, a transport protocol built on UDP that improves packet loss recovery and connection mobility.
For frontend engineers, these differences influence how quickly critical resources arrive, how resilient page loads are under poor connectivity, and how effectively browsers can utilize available bandwidth.
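One practical way to see which protocol actually served each asset is the browser's Resource Timing API, whose entries expose a `nextHopProtocol` field with the negotiated ALPN identifier (`"http/1.1"`, `"h2"`, or `"h3"`). A minimal sketch — `summarizeProtocols` is a hypothetical helper name, not a standard API:

```javascript
// Count resources by negotiated protocol from Resource Timing entries.
// Each entry's nextHopProtocol is the ALPN ID: "http/1.1", "h2", or "h3".
function summarizeProtocols(entries) {
  const counts = {};
  for (const entry of entries) {
    const proto = entry.nextHopProtocol || "unknown";
    counts[proto] = (counts[proto] || 0) + 1;
  }
  return counts;
}

// In a browser:
// console.log(summarizeProtocols(performance.getEntriesByType("resource")));
```

Running this in the console quickly reveals whether a page is actually benefiting from HTTP/2 or HTTP/3, or silently falling back to HTTP/1.1.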
Quick Decision Guide
- HTTP/2 improves resource loading efficiency through multiplexing over a single TCP connection.
- HTTP/3 (QUIC) improves recovery under packet loss and network transitions.
- Protocol upgrades reduce transport bottlenecks but do not fix poor asset strategy or heavy JavaScript.
- Mobile networks benefit most from QUIC because connection migration and loss recovery are faster.
- Real improvements should be validated through real-user metrics such as LCP, TTFB, and resource timing.
Evolution of HTTP Protocols
Each protocol version addresses different network inefficiencies:
HTTP/1.1: limited concurrency achieved through multiple parallel TCP connections
HTTP/2: multiplexed streams over a single TCP connection
HTTP/3: QUIC over UDP, with independent streams and faster loss recovery
The goal of each generation is reducing latency and improving delivery reliability for modern resource-heavy web applications.
HTTP/1.1 Baseline Model
HTTP/1.1 relied on multiple TCP connections to achieve parallel resource loading.
Browsers typically opened several connections per origin to load assets concurrently.
Problems with this approach:
- every connection pays its own TCP (and TLS) handshake before any data flows
- per-origin connection limits cap parallelism, so remaining requests queue
- each connection maintains separate congestion-control state, wasting bandwidth
These issues became severe as pages began loading dozens or hundreds of resources.
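To make the queueing cost concrete, here is a toy model (an illustration with made-up numbers, not a measurement) of loading many similar-sized assets over a fixed per-origin connection limit: requests beyond the limit wait in waves.

```javascript
// Toy model: N equal assets, each taking `assetMs` to fetch, over `maxConns`
// parallel connections per origin. Assets load in ceil(N / maxConns) waves.
function http1LoadTimeMs(assetCount, maxConns, assetMs) {
  const waves = Math.ceil(assetCount / maxConns);
  return waves * assetMs;
}

// 30 assets at 100 ms each over the classic 6-connection-per-origin limit:
// 5 waves -> 500 ms, versus a single 100 ms wave if all 30 requests
// could be multiplexed over one connection at once.
```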
TCP Slow Start and Connection Overhead
Every new TCP connection starts with limited throughput due to congestion control mechanisms.
Repeated connections therefore cause:
- extra round trips spent on handshakes rather than data
- throughput that must ramp up from a small congestion window on every new connection
- wasted capacity when connections close before slow start finishes
HTTP/1.1 pages with many assets often suffered from inefficient throughput because each connection restarted slow start.
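The slow-start cost can be sketched numerically. Assuming an initial congestion window of 10 segments of about 1,460 bytes that doubles every round trip — a simplification of real congestion control — the round trips needed to deliver a payload on a fresh connection look like this:

```javascript
// Round trips to deliver `bytes` under idealized slow start:
// the congestion window starts at `initCwnd` segments and doubles every RTT.
// This ignores loss, pacing, and congestion avoidance - it is only a sketch.
function slowStartRounds(bytes, initCwnd = 10, mss = 1460) {
  let rounds = 0;
  let cwnd = initCwnd;
  let sent = 0;
  while (sent < bytes) {
    sent += cwnd * mss; // one full window delivered this round trip
    cwnd *= 2;          // window doubles for the next round trip
    rounds += 1;
  }
  return rounds;
}

// A ~500 KB bundle needs several round trips on a fresh connection -
// a cost every additional HTTP/1.1 connection pays again from scratch.
```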
HTTP/2 Multiplexing Model
HTTP/2 allows multiple request-response streams over a single TCP connection.
Advantages:
- many concurrent request-response streams share one connection
- only one handshake and one congestion-control context per origin
- header compression (HPACK) and stream prioritization reduce per-request overhead
Multiplexing allows browsers to request many resources without opening multiple TCP connections.
Head-of-Line Blocking Explained
HTTP/2 removes head-of-line blocking at the application layer because requests are multiplexed.
However, TCP still enforces in-order delivery at the transport layer. If a packet is lost, every stream sharing the connection must wait until the missing packet is retransmitted, so a single lost packet can stall all active streams.
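The difference can be sketched with a toy delivery model (illustrative numbers only): under TCP, a retransmit delay on one packet pushes back every stream queued behind it; under QUIC, only the stream that lost the packet waits.

```javascript
// Toy model: each stream's data would arrive at `arrivalMs`, but one
// stream loses a packet and retransmission adds `retransmitMs`.
// TCP's in-order delivery stalls every stream sharing the connection;
// QUIC only delays the stream that actually lost data.
function completionTimes(streams, lostStreamId, retransmitMs, transport) {
  return streams.map(({ id, arrivalMs }) => {
    const stalled = transport === "tcp" || id === lostStreamId;
    return { id, doneMs: arrivalMs + (stalled ? retransmitMs : 0) };
  });
}

const streams = [
  { id: "css", arrivalMs: 40 },
  { id: "js", arrivalMs: 50 },
  { id: "img", arrivalMs: 60 },
];
// Lose a packet on the "js" stream; retransmission costs 100 ms.
// tcp:  css 140, js 150, img 160  (every stream stalls)
// quic: css 40,  js 150, img 60   (only js stalls)
```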
HTTP/3 and QUIC Transport
HTTP/3 replaces TCP with QUIC, which runs over UDP.
Key improvements:
- streams are delivered independently, so loss on one stream does not stall the others
- the transport and TLS 1.3 handshakes are combined, cutting connection setup round trips
- connections are identified by connection IDs rather than IP/port tuples, enabling migration
Because QUIC manages streams independently, packet loss affecting one stream does not block others.
Connection Migration and Mobile Networks
QUIC supports connection migration.
If a mobile device switches networks (for example Wi-Fi to cellular), the connection can continue without a full reconnection handshake.
This significantly improves reliability for mobile users whose connectivity often changes during browsing.
CDN and Edge Infrastructure Reality
Protocol performance depends heavily on CDN and edge deployment.
Modern CDNs often terminate connections at the edge and communicate with origin servers using different protocols internally.
This means:
- the protocol the browser negotiates mainly optimizes the last mile between user and edge
- HTTP/3 benefits only materialize if the CDN enables and negotiates it
- edge-to-origin transport is tuned separately and may still run HTTP/1.1 or HTTP/2
Frontend Performance Implications
Protocol improvements amplify good frontend architecture but do not replace it.
Even under HTTP/3:
- render-blocking scripts and stylesheets still delay first paint
- oversized images and heavy JavaScript bundles still dominate load time
- poor cache policy still forces repeated downloads
Protocol efficiency multiplies the benefits of optimized resource loading but cannot compensate for poor asset strategy.
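Validation should come from real-user metrics. In the browser, LCP is reported as largest-contentful-paint entries via a PerformanceObserver, where the latest entry is the current candidate. The reducer below (the name `latestLcpMs` is ours) is a small, unit-testable helper, with the browser wiring shown in comments:

```javascript
// LCP is the startTime of the *last* largest-contentful-paint entry seen,
// since later entries supersede earlier candidates.
function latestLcpMs(entries) {
  if (entries.length === 0) return undefined;
  return entries[entries.length - 1].startTime;
}

// Browser wiring (not runnable outside a page context):
// new PerformanceObserver((list) => {
//   const lcp = latestLcpMs(list.getEntries());
//   // report `lcp` to your RUM endpoint, segmented by protocol
// }).observe({ type: "largest-contentful-paint", buffered: true });
```

Segmenting reported LCP by negotiated protocol is how you verify that an HTTP/3 rollout actually moved user-visible numbers rather than assuming it did.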
Interview Rubric (Senior Level)
A strong interview answer should explain:
- why HTTP/1.1 needed multiple connections and what those connections cost
- how HTTP/2 multiplexing helps and where TCP head-of-line blocking remains
- what QUIC changes at the transport layer and why mobile users benefit most
- how to validate improvements with real-user metrics such as LCP, TTFB, and resource timing
Weak answers list protocol features. Strong answers connect transport behavior to user-visible performance.