At Atlassian, we have add-ons we develop and use to enable more powerful workflows within our products. Using Atlassian Connect, we can write those add-ons in any language. We often write those add-ons using Node.js. But once in a while we run into a performance problem with an add-on. I had to learn how to profile a Node.js app, so we could live happier lives with less waiting.
If you poke at node --help, it doesn't appear that there are a lot of options. Since Node.js is just a front end for V8, you can find the rest hidden away behind node --v8-options. I wouldn't jump to typing out that command in your terminal just yet: there are a couple hundred options that you could pass to the V8 engine. A little searching turns up V8's --prof ("Log statistical profiling information (implies --log-code).").
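One way to hunt through that couple hundred options is to filter them; a quick sketch (the exact flag descriptions vary by Node.js/V8 version):

```shell
# List V8's options and filter for the profiling flag.
# Output varies by Node.js/V8 version.
node --v8-options | grep -A 1 -- --prof
```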
If you run your Node.js application with the --prof flag (e.g. node --prof app.js), it generates a -v8.log file of stack samples, e.g. isolate-0x101804600-v8.log. I believe Node.js added this in v0.12.x, so if you're stuck on an older version you might be out of luck. Peeking in the log, it appears to be a CSV file with entries of execution details. The profiler generates data every few moments, taking a "tick" by peeking at the stack to see where your program is spending time. The raw data is a bit hard to use, but there's a nice profiling visualization tool in the V8 repository to collate the data for you.
This profiling visualization tool processes the file in your browser and provides a plot and
ordered call information. Select the file your
node instance generated and hit "Start". The
profile information will give you a block of text which might start off like this (path edited so it
fits on the screen):
The top of the profiler's usage list shows a few things of interest. UglifyJS is the first bit of userland code showing up. But UglifyJS doesn't help us find a problem with normal requests, so you can change the range of ticks (effectively time, though each tick should align with the next iteration of the event loop) in the page to exclude the app startup, and thus one-time costs such as UglifyJS.
The tool also generates a plot, but I'll admit I haven't figured out how to interpret that data. If someone knows of some good instructions on interpreting it, feel free to share them in the comments.
So between node --prof and the visualization tool, you should be able to see bottlenecks in your application! The --prof flag isn't perfect, though: if you need more detail, you'll need to instrument your code instead of sampling it to get better results.
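As a rough sketch of what instrumentation looks like, you can bracket a call with process.hrtime to measure its exact cost rather than a statistical estimate. Here slowFunction is a hypothetical stand-in for whatever code you suspect:

```javascript
// A minimal sketch of manual instrumentation. slowFunction is a
// hypothetical stand-in for code you suspect is a bottleneck.
function slowFunction() {
  var total = 0;
  for (var i = 0; i < 1e7; i++) {
    total += i;
  }
  return total;
}

// process.hrtime gives [seconds, nanoseconds] of high-resolution time.
var start = process.hrtime();
slowFunction();
var diff = process.hrtime(start);
console.log('slowFunction took %d ms', diff[0] * 1e3 + diff[1] / 1e6);
```

Unlike --prof's sampling, this measures the exact wall-clock cost of one call, at the price of editing your code.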