It's important to understand why the overhead of an HTTP request has such an impact.
In its simplest form, an HTTP request consists of opening a socket, sending the request on the open socket and reading the response.
To open a socket, the client's TCP/IP stack sends a TCP SYN packet to the server. The server responds with a SYN-ACK, and the client responds to that with an ACK.
So before a single byte of application data even reaches the server, at least one and a half round trips have already been spent on the handshake alone.
Then the client sends the request and waits while the server parses it, finds the requested data, and sends it back. That's another round trip, plus some server-side overhead (hopefully a small one, although I've seen some slow servers), plus the time to transmit the actual data. And that's the best case, assuming no network congestion, which would cause packets to be dropped and retransmitted.
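To make the cost concrete, here is a minimal sketch in Python that performs a single HTTP request by hand and times the two phases separately. The host example.com is just a placeholder, and the numbers you see will depend entirely on your network:

```python
import socket
import time

HOST = "example.com"  # placeholder host for illustration
PORT = 80

start = time.monotonic()

# Opening the socket triggers the TCP three-way handshake (SYN, SYN-ACK, ACK).
sock = socket.create_connection((HOST, PORT))
connected = time.monotonic()

# Send a minimal HTTP/1.1 request on the now-open socket.
request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")
sock.sendall(request)

# Read the response until the server closes the connection.
response = b""
while chunk := sock.recv(4096):
    response += chunk
finished = time.monotonic()
sock.close()

print(f"handshake + connect: {connected - start:.3f}s")
print(f"request + response:  {finished - connected:.3f}s")
```

Even against a nearby server, the connect phase alone typically costs a full round trip before any application data moves.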
Every chance you get to avoid this overhead, you should take.
Modern browsers issue multiple requests in parallel to hide some of this overhead, and HTTP keep-alive (persistent connections) lets several requests reuse the same socket, which helps a little more. But in general, network round trips are bad for performance and should be avoided.
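If you do have to make several requests to the same server, reusing one connection at least avoids repeating the handshake for each of them. Here's a small sketch using Python's standard-library http.client, again with example.com as a placeholder; the first request pays for the connection setup, while the later ones ride on the already-open socket:

```python
import http.client
import time

HOST = "example.com"  # placeholder host; any HTTP/1.1 server with keep-alive works

# One connection object means one TCP handshake; subsequent requests
# reuse the already-open socket instead of opening a new one each time.
conn = http.client.HTTPConnection(HOST, 80)

for path in ("/", "/", "/"):
    start = time.monotonic()
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()  # the body must be fully read before the connection can be reused
    print(f"GET {path}: {resp.status} in {time.monotonic() - start:.3f}s")

conn.close()
```

In a quick run you should see the first request take noticeably longer than the ones that follow it, which is the handshake cost showing up in the numbers.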