If you actually want to benchmark real-world code, use tools like Xdebug and XHProf.
Xdebug is great when you're working in dev/staging, while XHProf is the tool for production: it's safe to run there (as long as you read the instructions). The results of any single page load aren't going to be as relevant as seeing how your code performs while the server is being hammered by a million other things and resources become scarce. That raises another question: are you bottlenecking on CPU? RAM? I/O?
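As a rough sketch of how XHProf is commonly wired up for production sampling (assuming the xhprof extension and its bundled xhprof_lib are installed; the 1-in-100 rate, the XHPROF_ROOT path, and the 'my_app' namespace are placeholders of mine, not anything standard):

```php
<?php
// Profile roughly 1 in 100 requests so the overhead stays negligible
// under real production load. The sampling rate is an arbitrary choice.
$profiling = extension_loaded('xhprof') && mt_rand(1, 100) === 1;

if ($profiling) {
    // Collect CPU and memory counters alongside wall-clock time.
    xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);
}

// ... the application handles the request as normal ...

if ($profiling) {
    $data = xhprof_disable();

    // XHPROF_ROOT is a placeholder for wherever the xhprof package
    // lives; both of these files ship with it.
    include_once XHPROF_ROOT . '/xhprof_lib/utils/xhprof_lib.php';
    include_once XHPROF_ROOT . '/xhprof_lib/utils/xhprof_runs.php';

    $runs = new XHProfRuns_Default();
    $runs->save_run($data, 'my_app');
}
```

And for a crude first answer to the CPU-vs-I/O question without any extension at all, you can compare wall-clock time against the CPU time the process actually consumed; a large gap means the request spent its life waiting (database, disk, network) rather than computing. A minimal sketch using only built-ins:

```php
<?php
$wallStart  = microtime(true);
$usageStart = getrusage();

// ... the work you want to measure ...

$usageEnd = getrusage();
$wallMs   = (microtime(true) - $wallStart) * 1000;

// ru_utime = user-mode CPU time, ru_stime = kernel-mode CPU time.
$cpuMs = (($usageEnd['ru_utime.tv_sec'] - $usageStart['ru_utime.tv_sec'])
        + ($usageEnd['ru_stime.tv_sec'] - $usageStart['ru_stime.tv_sec'])) * 1000
       + (($usageEnd['ru_utime.tv_usec'] - $usageStart['ru_utime.tv_usec'])
        + ($usageEnd['ru_stime.tv_usec'] - $usageStart['ru_stime.tv_usec'])) / 1000;

printf("wall %.1f ms, cpu %.1f ms, peak mem %.1f MB\n",
    $wallMs, $cpuMs, memory_get_peak_usage(true) / 1048576);
```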
You also need to look beyond just the code you are running in your scripts to how your scripts/pages are being served. What web server are you using? As an example, I can make nginx + PHP-FPM seriously outperform mod_php + Apache, which in turn gets trounced for serving static content by a good CDN.
The next thing to consider is what you are trying to optimise for:
- Is the speed with which the page renders in the user's browser the number one priority?
- Is getting each request to the server thrown back out as quickly as possible, with the smallest CPU consumption, the goal?
The former can be helped by doing things like gzipping all resources sent to the browser, yet doing so could (in some circumstances) push you further away from achieving the latter.
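For example, PHP's built-in ob_gzhandler makes that trade visible in a few lines; a minimal sketch, where $page is a placeholder for whatever your script produces:

```php
<?php
// ob_gzhandler inspects the client's Accept-Encoding header itself
// and only compresses when the browser supports it. Compression
// spends server CPU on every response in order to shrink the bytes
// on the wire: good for time-to-render in the browser, bad for a
// pure "minimum CPU per request" goal on a loaded server.
if (function_exists('ob_gzhandler')) {
    ob_start('ob_gzhandler');
} else {
    ob_start(); // zlib not available: plain buffering, no compression
}

echo $page; // $page is a placeholder for the response body
```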
Hopefully all of the above helps show that carefully isolated 'lab' testing will not reflect the variables and problems you will encounter in production, and that you must identify your high-level goal, and then what you can do to get there, before heading off down the micro/premature-optimisation route to hell.
He who fights dragons too long becomes a dragon himself; gaze too long into the abyss, and the abyss gazes back into you…