BlackWasp™


Parallel and Asynchronous
.NET 4.0+

Limiting Loop Parallelism

For most parallel loops it's fine to allow the Task Parallel Library to determine the level of parallelism. The number of concurrent operations is set automatically to try to achieve better performance. In other cases it's important to limit parallelism.

ParallelOptions

When you include a parallel For or ForEach loop in your code you will usually just provide the size or scope of the loop and an Action delegate describing the loop's body. When you do, the Task Parallel Library (TPL) automatically adds parallel processing to the loop. This often means that every available processor core is used in order to execute the entire loop in the shortest possible time.

There are some situations where you might not want to use all of the available processors. Some algorithms are improved with a little parallelism but become inefficient when using lots of concurrent threads. This can lead to diminishing returns, or even lower performance, as the number of utilised cores increases. Another situation where you may wish to limit parallel tasks is where you are performing a long-running process that could impact the execution of other software. Here you might decide to use half of the available cores so that the user can continue to work.

To place a limit on the number of concurrent tasks that a single loop will use, you provide that loop with a ParallelOptions object. We've seen this class used before when providing cancellation tokens to parallel loops or to tasks launched using Parallel.Invoke. Here we need a different property, namely MaxDegreeOfParallelism. This integer property specifies the maximum number of concurrent operations that the scheduler is permitted to use for the loop. If you want to limit a loop to a specific fraction of the processors installed in a computer, you need to determine the number of cores and calculate the correct value for the property.
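The number of logical processors can be read from Environment.ProcessorCount. As a minimal sketch, the following calculates a limit of half of the available cores, guarding against a result of zero on a single-core machine:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Environment.ProcessorCount reports the number of logical processors.
        // Use at most half of them, with a minimum of one.
        int halfCores = Math.Max(1, Environment.ProcessorCount / 2);

        ParallelOptions po = new ParallelOptions();
        po.MaxDegreeOfParallelism = halfCores;

        Parallel.For(0, 10, po, i =>
        {
            Console.WriteLine("{0} on Task {1}", i, Task.CurrentId);
        });
    }
}
```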

The ParallelOptions class is found in the System.Threading.Tasks namespace, so add the following using directive:

using System.Threading.Tasks;

Before we limit the parallelism, try running the code below. This parallel For loop cycles through twenty numbers, outputting each with the current task ID, allowing us to see the number of tasks in play. The MaxDegreeOfParallelism is not set so it is possible that every available core will be used. NB: Not specifying a maximum degree of parallelism is equivalent to supplying a value of -1.

Parallel.For(0, 20, i =>
{
    Console.WriteLine("{0} on Task {1}", i, Task.CurrentId);
});

/* OUTPUT

0 on Task 1
2 on Task 2
4 on Task 1
6 on Task 1
7 on Task 1
8 on Task 1
9 on Task 1
10 on Task 1
11 on Task 1
12 on Task 1
13 on Task 1
14 on Task 1
15 on Task 1
5 on Task 2
17 on Task 2
18 on Task 2
16 on Task 1
3 on Task 3
1 on Task 4

*/

The example output shows that the loop created four tasks. Your results may vary, of course. If we wanted to limit the loop to two concurrent tasks, and therefore at most two processor cores in use at once, we can set MaxDegreeOfParallelism to 2. The ParallelOptions object is passed as the third argument to Parallel.For.

Try running the example code to see the restriction applied.

ParallelOptions po = new ParallelOptions();
po.MaxDegreeOfParallelism = 2;

Parallel.For(0, 20, po, i =>
{
    Console.WriteLine("{0} on Task {1}", i, Task.CurrentId);
});

/* OUTPUT

0 on Task 1
1 on Task 1
2 on Task 1
3 on Task 1
4 on Task 1
5 on Task 1
10 on Task 2
11 on Task 2
12 on Task 2
13 on Task 2
14 on Task 2
15 on Task 2
16 on Task 2
17 on Task 2
18 on Task 2
19 on Task 2
7 on Task 2
8 on Task 2
9 on Task 2
6 on Task 1

*/

You can limit the parallelism of ForEach loops in the same manner. The following code demonstrates this with a limit of three concurrent tasks. Note that Enumerable.Range is found in the System.Linq namespace, so this sample also requires a using directive for System.Linq.

ParallelOptions po = new ParallelOptions();
po.MaxDegreeOfParallelism = 3;

Parallel.ForEach(Enumerable.Range(0,19), po, i =>
{
    Console.WriteLine("{0} on Task {1}", i, Task.CurrentId);
});

/* OUTPUT

0 on Task 1
3 on Task 1
4 on Task 1
5 on Task 1
6 on Task 1
7 on Task 1
8 on Task 1
9 on Task 1
10 on Task 1
1 on Task 2
15 on Task 2
16 on Task 2
17 on Task 2
18 on Task 2
2 on Task 3
11 on Task 1
12 on Task 1
13 on Task 1
14 on Task 1

*/
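As mentioned earlier, ParallelOptions is also accepted by Parallel.Invoke, so the same limit can be applied to a set of independent actions. A brief sketch, assuming four simple actions:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        ParallelOptions po = new ParallelOptions();
        po.MaxDegreeOfParallelism = 2;

        // At most two of these actions execute concurrently.
        Parallel.Invoke(po,
            () => Console.WriteLine("Action A on Task {0}", Task.CurrentId),
            () => Console.WriteLine("Action B on Task {0}", Task.CurrentId),
            () => Console.WriteLine("Action C on Task {0}", Task.CurrentId),
            () => Console.WriteLine("Action D on Task {0}", Task.CurrentId));
    }
}
```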
6 April 2013