JMH is a Java harness for building, running, and analysing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM.
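To give a sense of what a JMH benchmark looks like, here is a minimal sketch of a benchmark class. The class name, field, and measured loop are illustrative, not part of any generated project; the archetype described below produces a similar skeleton for you.

```java
package org.sample;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Illustrative benchmark: measures the cost of a simple summing loop.
// @State holds per-thread benchmark state so the input is not constant-folded.
@State(Scope.Thread)
public class MyBenchmark {

    private int bound = 1_000;

    // Each @Benchmark method is measured by the JMH harness;
    // returning the result prevents dead-code elimination.
    @Benchmark
    public int sumLoop() {
        int sum = 0;
        for (int i = 0; i < bound; i++) {
            sum += i;
        }
        return sum;
    }
}
```

JMH discovers `@Benchmark` methods through its annotation processor at build time, which is why the Maven setup below matters.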

Basic Considerations

The recommended way to run a JMH benchmark is to use Maven to set up a standalone project that depends on the jar files of your application. This approach is preferred to ensure that the benchmarks are correctly initialized and produce reliable results. It is possible to run benchmarks from within an existing project, and even from within an IDE; however, the setup is more complex and the results are less reliable.

In all cases, the key to using JMH is enabling the annotation- or bytecode-processors to generate the synthetic benchmark code. Maven archetypes are the primary mechanism used to enable this. We strongly recommend new users make use of the archetype to set up the correct environment.

Preferred Usage: Command Line

  • Setting up the benchmarking project. The following command will generate a new JMH-driven project in the test folder:
    $ mvn archetype:generate \
              -DinteractiveMode=false \
              -DarchetypeGroupId=org.openjdk.jmh \
              -DarchetypeArtifactId=jmh-java-benchmark-archetype \
              -DgroupId=org.sample \
              -DartifactId=test \
              -Dversion=1.0
    

    If you want to benchmark an alternative JVM language, use another archetype artifact ID from the list of existing ones; this usually amounts to replacing java with the other language's name in the artifact ID shown above. Using alternative archetypes may require additional changes in the build configuration; see the pom.xml in the generated project.
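    For example, generating a Kotlin benchmark project only swaps the artifact ID (jmh-kotlin-benchmark-archetype is one of the published archetypes); the other parameters here mirror the Java command above:

    ```shell
    # Same generation command, but using the Kotlin archetype
    $ mvn archetype:generate \
              -DinteractiveMode=false \
              -DarchetypeGroupId=org.openjdk.jmh \
              -DarchetypeArtifactId=jmh-kotlin-benchmark-archetype \
              -DgroupId=org.sample \
              -DartifactId=test \
              -Dversion=1.0
    ```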

  • Building the benchmarks. After the project is generated, you can build it with the following Maven command:
    $ cd test/
    $ mvn clean install
  • Running the benchmarks. After the build is done, you will get a self-contained executable JAR, which holds your benchmark and all essential JMH infrastructure code:
    $ java -jar target/benchmarks.jar

    Run with -h to list the available command-line options.
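    As an illustration (the benchmark name pattern and option values here are only examples), a run can select benchmarks by regex and override the warmup, measurement, and forking defaults:

    ```shell
    # Run only benchmarks whose name matches "MyBenchmark",
    # with 5 warmup iterations (-wi), 5 measurement iterations (-i),
    # and a single forked JVM (-f)
    $ java -jar target/benchmarks.jar MyBenchmark -wi 5 -i 5 -f 1
    ```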

When dealing with large projects, it is customary to keep the benchmarks in a separate subproject, which then depends on the tested modules via the usual build dependencies.
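One way to express that dependency in the benchmark subproject's pom.xml is a plain Maven dependency on the module under test (the coordinates below are placeholders):

```xml
<dependency>
    <!-- the tested module; groupId/artifactId/version are illustrative -->
    <groupId>org.sample</groupId>
    <artifactId>my-app-core</artifactId>
    <version>1.0</version>
</dependency>
```

Keeping benchmarks in their own module means the main build never depends on JMH, and the benchmark JAR can be rebuilt and rerun independently.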

...