Why is processing a sorted array faster than processing an unsorted array?

Here is a piece of C++ code that shows some very peculiar behavior. For some strange reason, sorting the data miraculously makes the code almost six times faster:

#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    // Generate data
    const unsigned arraySize = 32768;
    int data[arraySize];

    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // !!! With this, the next loop runs faster.
    std::sort(data, data + arraySize);

    // Test
    clock_t start = clock();
    long long sum = 0;

    for (unsigned i = 0; i < 100000; ++i)
    {
        // Primary loop
        for (unsigned c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }

    double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

    std::cout << elapsedTime << std::endl;
    std::cout << "sum = " << sum << std::endl;
}

Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds.

With the sorted data, the code runs in 1.93 seconds.

Initially, I thought this might be just a language or compiler anomaly, so I tried Java:

import java.util.Arrays;
import java.util.Random;

public class Main
{
    public static void main(String[] args)
    {
        // Generate data
        int arraySize = 32768;
        int[] data = new int[arraySize];

        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c)
            data[c] = rnd.nextInt(256);  // values in [0, 255], matching the C++ version

        // !!! With this, the next loop runs faster
        Arrays.sort(data);

        // Test
        long start = System.nanoTime();
        long sum = 0;

        for (int i = 0; i < 100000; ++i)
        {
            // Primary loop
            for (int c = 0; c < arraySize; ++c)
            {
                if (data[c] >= 128)
                    sum += data[c];
            }
        }

        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}

1 Answer
As hinted at above, the culprit is this if-statement:

if (data[c] >= 128)
    sum += data[c];

Notice that the data is evenly distributed between 0 and 255. When the data is sorted, roughly the first half of the iterations will not enter the if-statement. After that, they will all enter the if-statement.

This is very friendly to the branch predictor, since the branch goes the same direction many times in a row. Even a simple saturating counter will correctly predict the branch, except for the few iterations right after it switches direction.

Quick visualization:

T = branch taken
N = branch not taken

data[] = 0, 1, 2, 3, 4, ... 126, 127, 128, 129, 130, ... 250, 251, 252, ...
branch = N  N  N  N  N  ...   N    N    T    T    T  ...   T    T    T  ...
       = NNNNNNNNNNNN ... NNNNNNNTTTTTTTTT ... TTTTTTTTTT  (easy to predict)
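
To see this concretely, here is a minimal sketch (not part of the original answer) of a toy 2-bit saturating counter: states 0-1 predict "not taken", states 2-3 predict "taken". It counts mispredictions for the exact taken/not-taken sequence produced by data[c] >= 128, and also reruns on shuffled data to preview the random case discussed next:

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <random>

// Toy 2-bit saturating counter. Returns how often it mispredicts
// the outcome of "data[c] >= 128" over the given array.
static unsigned countMispredictions(const int* data, unsigned n)
{
    int state = 0;            // start at "strongly not taken"
    unsigned misses = 0;
    for (unsigned c = 0; c < n; ++c)
    {
        bool taken = (data[c] >= 128);
        bool predicted = (state >= 2);
        if (predicted != taken)
            ++misses;
        // Saturating update: drift one step toward the actual outcome.
        if (taken && state < 3) ++state;
        if (!taken && state > 0) --state;
    }
    return misses;
}

int main()
{
    const unsigned arraySize = 32768;
    static int data[arraySize];
    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    std::sort(data, data + arraySize);
    std::cout << "sorted:   " << countMispredictions(data, arraySize)
              << " mispredictions" << std::endl;

    std::shuffle(data, data + arraySize, std::mt19937(0));
    std::cout << "shuffled: " << countMispredictions(data, arraySize)
              << " mispredictions" << std::endl;
}

On the sorted input this toy counter should mispredict only a couple of times, right at the 128 boundary; on the shuffled input it should mispredict roughly half the time.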

However, when the data is completely random, the branch predictor is rendered useless because it can't predict random data. Thus there will probably be around a 50% misprediction rate (no better than random guessing).

data[] = 226, 185, 125, 158, 198, 144, 217, 79, 202, 118,  14, 150, 177, 182, 133, ...
branch =   T,   T,   N,   T,   T,   T,   T,  N,   T,   N,   N,   T,   T,   T,   N  ...
       = TTNTTTTNTNNTTTN ...   (completely random - hard to predict)
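
If the branch itself is the bottleneck, a common workaround is to remove it entirely with bit arithmetic, leaving nothing to predict. Here is a sketch of a branch-free rewrite of the primary loop (my sketch, not from the original answer; it assumes an arithmetic right shift for negative ints, which mainstream compilers provide and C++20 guarantees):

// (data[c] - 128) >> 31 is all ones when data[c] < 128 and all zeros
// otherwise, so the mask keeps data[c] only when data[c] >= 128.
for (unsigned c = 0; c < arraySize; ++c)
{
    int t = (data[c] - 128) >> 31;   // -1 if data[c] < 128, else 0
    sum += ~t & data[c];             // adds data[c] only when >= 128
}

With this rewrite, sorted and unsorted data should run at about the same speed, because the data dependence has moved out of the control flow and into the arithmetic.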
