Speed Up the jQuery Code: Selectors’ Cache

Short Experiment

Here’s a short optimization of a small chunk of jQuery code. The experiment measures the cost of repeated DOM element access: in this case, changing the innerHTML property of an element via jQuery’s .html() method.

I’ve measured both variants with the console.time() method, so the timings should be directly comparable. In the first case I call .html() on a fresh jQuery selector in every iteration:

for (var i = 0; i < 10000; i++) {
    $('#container').html('test'); // the selector runs on every pass
}

While in the second case I “cache” it before the loop:

var t = $('#container'); // the selector runs once
for (var i = 0; i < 10000; i++) {
    t.html('test');
}
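The difference between the two patterns can be illustrated outside the browser by counting how many times the lookup actually runs. Here query() is a hypothetical stand-in for $('#container'), not jQuery itself:

```javascript
// query() is a hypothetical stand-in for an expensive selector call
var lookups = 0;
function query(sel) {
    lookups++; // count each simulated DOM traversal
    return { html: function (s) { return this; } };
}

// Uncached: the lookup runs on every iteration
lookups = 0;
for (var i = 0; i < 10000; i++) {
    query('#container').html('test');
}
var uncached = lookups; // 10000

// Cached: the lookup runs exactly once
lookups = 0;
var t = query('#container');
for (var j = 0; j < 10000; j++) {
    t.html('test');
}
var cached = lookups; // 1

console.log('uncached:', uncached, 'cached:', cached);
```

The selector work itself may be cheap or expensive depending on the engine and the document, but the call-count ratio is the same either way.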

and here’s the full code in the first case:

    <title>jQuery Cache</title>
    <div id="container"></div>
    <script src="/scripts/library.js"></script>
    <script>
    $(document).ready(function() {
        console.time('uncached');
        for (var i = 0; i < 10000; i++) $('#container').html('test');
        console.timeEnd('uncached');
    });
    </script>

The Results

As expected, the second, “cached” approach gave better results. As the number of iterations grows, the cached method pulls further ahead of the uncached one. The difference here is small, but imagine having more than one selector in the loop’s body.


Note that the times are in milliseconds. That is not much by itself, but when you deal with large data and have more than one selector in the loop’s body, it can become critical!
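When the loop body touches several selectors, caching each one in its own variable gets verbose. A tiny memoizing wrapper gives the same effect; this is my own sketch, not part of jQuery, and find() is a hypothetical stand-in for the selector engine:

```javascript
// find() is a hypothetical stand-in for an expensive selector engine call
var calls = 0;
function find(sel) {
    calls++;
    return { sel: sel };
}

// Memoize lookups by selector string
function makeCached(lookup) {
    var cache = {};
    return function (sel) {
        if (!(sel in cache)) cache[sel] = lookup(sel);
        return cache[sel];
    };
}

var $c = makeCached(find);
for (var i = 0; i < 10000; i++) {
    $c('#container');
    $c('#sponsors');
}
console.log('engine calls:', calls); // one per distinct selector, i.e. 2
```

The obvious caveat: memoized DOM results go stale if the document changes between iterations, so this only applies when the matched elements are stable for the duration of the loop.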


7 thoughts on “Speed Up the jQuery Code: Selectors’ Cache”

  1. The difference that you saw was probably so small because your DOM was so small and simple.

    If you try this with some “real world” html, where the DOM is larger, and the selectors are more complex, you’ll probably see a much greater difference.

  2. “Little” things like this become _very_ important when you’re developing complicated sites used by hundreds, thousands, or even more people on a daily basis. That’s when “just a few milliseconds” really start making a difference 🙂

  3. @Teppo: It’s Javascript. Does the number of users visiting the site even come into play here? It’s not as if you’re saving bandwidth or server CPU cycles with a “cached” query vs. an “ad-hoc” query in this circumstance… all of the code is executed on the client’s machine–not on the server.

  4. @haliphax – You’re missing the point – it’s about client-side performance. If you want to change a lot of UI stuff or update many different parts of the DOM tree, you’ll have to find a way to reference those Nodes (through a selection engine or DOM methods). You don’t want your website to suck or be stuck in the 90s, do you?

    I don’t think that your test is stressing the right area. I thought from the title it’d be simply reading / selecting / parsing the DOM, but you’re also setting the innerHTML of an element as well (which requires it to re-parse that text and create a DOM Node tree).

    It seems like a test truer to this title (simply caching selectors) would be something like this:

        // Without caching: the selector runs on every iteration
        $([1,2,3,4,5]).each(function () {
            for (var i = 0, start = +new Date; i < 100000; ++i) {
                $('#sponsors');
            }
            console.log(+new Date - start);
        });

        // With caching: the selector runs once, in the loop initializer
        $([1,2,3,4,5]).each(function () {
            for (var i = 0, $sponsors = $('#sponsors'), start = +new Date; i < 100000; ++i) {
                // nothing re-selected here
            }
            console.log(+new Date - start);
        });

    Which yielded these results (on a site I used to work on, http://lalive.com):


    So, here we see a more pronounced difference.

  5. @Dan Beam – Of course I know I’m accessing the DOM, and I know it’s a slow operation. While there are different optimization techniques, I think the test here is fair precisely because both versions use the same DOM access methods. So only the selector lookup should account for the difference in the results!

