
Guide To BenchmarkDotNet: A Benchmarking Library For .NET Developers


Introduction

BenchmarkDotNet is a powerful, lightweight, open-source library widely used by .NET developers to benchmark their code. It is a .NET Foundation project, currently maintained by Andrey Akinshin (project lead) and Adam Sitnik.

Before going into the details of BenchmarkDotNet, let us briefly look at what it means to benchmark code and why it is required.

What is a benchmark?

Benchmarking is the act of assessing the relative performance of a piece of code. In simple terms, a benchmark is a test that tells you whether a modification to your code has improved its performance, worsened it, or left it unaffected. It gives you performance metrics for the methods used in your application, which you can then rely on during code optimization. Depending upon the extent of the changes you make, a benchmark may have a wide scope or be a micro-benchmark that assesses minute changes.

Overview of BenchmarkDotNet

The BenchmarkDotNet library transforms the methods used in your application into benchmarks. It also enables you to share reproducible measurement experiments.

BenchmarkDotNet has been used by over 4,500 projects to date. To name a few: Mono, ASP.NET Core, ML.NET, Entity Framework Core, dotnet/runtime (.NET Core runtime and libraries), Roslyn (the C# and Visual Basic compiler), .NET Docs and TensorFlow.NET.

Features supported by BenchmarkDotNet

  • Operating systems: Windows, Linux, macOS
  • Programming languages: C#, F#, Visual Basic
  • Architectures: x86, x64, ARM, ARM64 and Wasm
  • Runtimes: .NET 5+, .NET Framework 4.6.1+, .NET Core 2.0+, Mono, CoreRT
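As an illustration of the runtime support listed above, here is a minimal sketch of targeting several runtimes at once (the class name and the chosen monikers are illustrative; which RuntimeMoniker values work depends on the SDKs installed on your machine):

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Jobs;

// Run the same benchmarks against several runtimes in one go
[SimpleJob(RuntimeMoniker.Net48)]        // .NET Framework 4.8
[SimpleJob(RuntimeMoniker.NetCoreApp31)] // .NET Core 3.1
[SimpleJob(RuntimeMoniker.Mono)]         // Mono
public class MultiRuntimeBenchmarks
{
    [Benchmark]
    public int Sum()
    {
        int total = 0;
        for (int i = 0; i < 1_000; i++) total += i;
        return total;
    }
}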

Pros of BenchmarkDotNet

  1. Simplicity

Very complicated performance experiments can be designed in a declarative style using BenchmarkDotNet's simple APIs. For instance, to compare benchmarks with each other, mark one of them as the baseline via [Benchmark(Baseline = true)]; all the other benchmarks will then be compared against it.
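A minimal sketch of a baseline comparison (the class name, method names and sleep durations are illustrative):

using System.Threading;
using BenchmarkDotNet.Attributes;

public class BaselineComparison
{
    [Benchmark(Baseline = true)]  // every other benchmark is reported relative to this one
    public void Sleep50() => Thread.Sleep(50);

    [Benchmark]                   // expected to show a Ratio of roughly 2.00 in the summary
    public void Sleep100() => Thread.Sleep(100);
}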

  2. Automation

Reliable benchmarks require a lot of boilerplate code (i.e. sections of code that are repeated multiple times with minor variations). While writing such repetitive code, you are likely to make a mistake that spoils your measurements. BenchmarkDotNet handles such situations for you. It also performs certain advanced tasks, such as measuring managed memory traffic and printing disassembly listings of your benchmarks, as the sketch below shows.
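Both of these diagnostics can be requested declaratively (the class and method names here are illustrative):

using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]       // adds managed-memory columns (Gen 0/1/2, Allocated) to the summary
[DisassemblyDiagnoser]  // prints a disassembly listing for each benchmark
public class DiagnosedBenchmarks
{
    [Benchmark]
    public string Join() => string.Join(",", new[] { "a", "b", "c" });
}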

  3. Reliability

BenchmarkDotNet allows you to achieve high measurement precision. It tries to choose the best benchmarking parameters, striking a good trade-off between measurement precision and the total time taken to run all the benchmarks.

It protects you from most benchmarking pitfalls, such as deciding the number of method invocations, the number of actual iterations and so on. The library handles all of this on its own based on statistical metrics. It comprises numerous heuristics, checks, hacks and tricks that make your results more reliable. Should you still want to pin these parameters yourself, a sketch follows.
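Fixing the iteration parameters manually can be expressed declaratively. A sketch (parameter names per recent BenchmarkDotNet versions; older versions call iterationCount targetCount):

using BenchmarkDotNet.Attributes;

[SimpleJob(warmupCount: 3, iterationCount: 20)]  // 3 warmup iterations, 20 measured iterations
public class PinnedIterations
{
    [Benchmark]
    public double Sqrt() => System.Math.Sqrt(123.456);
}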

  4. Friendliness

BenchmarkDotNet performs the core part of performance assessment, i.e. analyzing the performance data, and presents the results in a user-friendly form. It gives a summary table that contains a lot of useful data about the executed benchmarks. By default, it includes only the most important columns, but the column set is customizable and adaptive: it depends on the benchmark definition and the measured values.

BenchmarkDotNet also alerts you to any unusual properties of your performance distributions. Besides, it shows only the essential information depending on your results: the summary stays small for primitive cases and is extended only for complicated ones. Additional statistics and visualizations, however, can always be added manually.

Config 

A config in BenchmarkDotNet is a set of jobs, columns, exporters, loggers, diagnosers, analyzers and validators used to build benchmarks. A short definition of each of these terms follows, with a combined sketch after the list.

Jobs: A job is a set of characteristics describing the way to run a benchmark. One or more jobs can be specified for each benchmark.

Columns: They refer to the columns in the summary table.

Exporters: An exporter enables exporting the results of your benchmark in various formats. CSV, HTML and Markdown exporters are enabled by default. By default, the result files are located in the .\BenchmarkDotNet.Artifacts\results directory.

Loggers: They enable logging of the results of your benchmarks. By default, the log is written to the console and to a <BenchmarkName>.log file.

Diagnosers: They attach to your benchmarks to retrieve useful information.

Toolchains: BenchmarkDotNet generates, builds and executes a new console app for every benchmark, which provides process-level isolation. A toolchain contains the app generator, builder and executor.

Default toolchains:

  • Roslyn for Full .NET Framework and Mono
  • dotnet cli for .NET Core and CoreRT

Analyzers: An analyzer analyzes the summary of each benchmark and produces appropriate warnings wherever necessary. 

Validators: A validator validates each benchmark before its execution and produces validation errors. If any of those errors is critical, then execution of all the benchmarks fails.

Filters: They allow you to run only a subset of the specified benchmarks.

Orderers: They enable customization of the order of benchmark results in the summary table.
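As a combined sketch of how these pieces fit together (method names per recent BenchmarkDotNet versions; older versions use a single Add(...) method instead of the AddJob/AddExporter/AddLogger/AddDiagnoser family, and the class and method names below are illustrative):

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Diagnosers;
using BenchmarkDotNet.Exporters;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Loggers;

public class MyConfig : ManualConfig
{
    public MyConfig()
    {
        AddJob(Job.Default.WithWarmupCount(3));   // a job describing how to run the benchmarks
        AddExporter(MarkdownExporter.GitHub);     // write the summary as GitHub-flavoured Markdown
        AddLogger(ConsoleLogger.Default);         // log progress to the console
        AddDiagnoser(MemoryDiagnoser.Default);    // collect managed-memory statistics
    }
}

// Attach the config to a benchmark class
[Config(typeof(MyConfig))]
public class ConfiguredBenchmarks
{
    [Benchmark]
    public int Parse() => int.Parse("42");
}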

Components of benchmarking architecture

  • Benchmarks – a web application comprising various scenarios to benchmark
  • BenchmarksServer – a web application that queues jobs which can run custom web applications to be benchmarked.
  • BenchmarksClient – a web application that queues jobs to create custom client loads on a web application
  • BenchmarksDriver – a command-line application that can enqueue server and client jobs and display the results locally.
  • A database server that can run any or all of PostgreSql, Sql Server, MySql, MongoDb

Visit this GitHub repository to know about the step-wise installation of the architecture.

Practical Implementation

Here’s an example of a console application which demonstrates how to benchmark C# code using BenchmarkDotNet.

Installation 

Create a new console application. Then install the BenchmarkDotNet NuGet package, for instance from the command line as shown below.
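A sketch using the dotnet CLI (the project name Demo is illustrative):

dotnet new console -n Demo
cd Demo
dotnet add package BenchmarkDotNet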

Create a benchmarking class

using System.Collections.Generic;
using System.Text;
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]  // diagnoser that reports managed-memory allocations
public class Example
{
    private const int ItemsCount = 10000;

    /* Use the [Benchmark] attribute on top of each method
       that is to be benchmarked. */

    [Benchmark]
    // method to concatenate strings using StringBuilder
    public string A()
    {
        var strbuilder = new StringBuilder();
        for (int i = 0; i < ItemsCount; i++)
        {
            strbuilder.Append("Item" + i);
        }
        return strbuilder.ToString();
    } // end of A()

    [Benchmark]
    // method to concatenate strings collected in a generic List
    public string B()
    {
        var list = new List<string>(ItemsCount);
        for (int i = 0; i < ItemsCount; i++)
        {
            list.Add("Item" + i);
        }
        // join the collected items into a single string
        return string.Join(string.Empty, list);
    } // end of B()
} // end of class Example

Main method

In the Main method of the Program.cs file, the entry point, the BenchmarkRunner class must be invoked to tell BenchmarkDotNet to run the benchmarks in the specified class (here, the Example class).

using BenchmarkDotNet.Running;

static void Main(string[] args)
{
    // Discovers and runs all [Benchmark] methods in the Example class
    var summaryReport = BenchmarkRunner.Run<Example>();
}

Run the benchmark

Note: Always run your project in release mode when benchmarking. The C# compiler performs optimizations in release mode that are not applied in debug mode, and by default BenchmarkDotNet refuses to benchmark a non-optimized build, reporting a validation error.

Suppose the name of the project file is Demo.csproj. To run the benchmark, issue the following command at a command prompt:

dotnet run -p Demo.csproj -c Release

If you omit the configuration parameter (-c Release) in the above command, benchmarking will be attempted on the non-optimized debug build and will therefore fail with a validation error.

Analyze the summary report

Once the benchmarks have executed, a summary of the results is displayed in the console window. It contains information about the application’s performance, as well as details about the environment in which the benchmarks were executed, e.g. the BenchmarkDotNet version, operating system, computer hardware, .NET version and much more.
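The shape of such a report looks roughly like the following (all values are placeholders, not real measurements; the actual header and numbers depend entirely on your machine and versions):

BenchmarkDotNet=v0.x.x, OS=Windows 10
Intel CPU, 1 CPU, 8 logical and 4 physical cores
.NET SDK=x.x.x

| Method |     Mean |    Error |   StdDev | Allocated |
|------- |---------:|---------:|---------:|----------:|
|      A | x.xxx ms | x.xxx ms | x.xxx ms |   x.xx MB |
|      B | x.xxx ms | x.xxx ms | x.xxx ms |   x.xx MB |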


