Optimize Code Coverage with JaCoCo Maven Plugin
Unlock reliable code coverage with the JaCoCo Maven plugin. Guide to configuring reports, checks, multi-module projects, and seamless CI integration.

You’ve got tests. They pass locally. CI is green. Then a refactor lands, one branch wasn’t really exercised, and a bug slips into production anyway.
That’s usually the moment teams start caring about coverage. Not as a vanity metric, but as a way to answer a blunt question: did the tests execute the code paths we’re betting the release on?
The jacoco maven plugin is the tool most Java teams reach for when they need that answer inside a Maven build. It’s simple enough to add in one sitting, but the substantial work starts after the first report appears. Multi-module builds, integration tests, XML output for SonarQube, flaky-looking gaps around exceptions, and build-breaking thresholds are where most tutorials stop being useful. This guide stays on those parts.
Beyond Passing Tests to Proving Quality
A passing test suite can still leave dangerous blind spots.
A controller test might hit the happy path and never touch the validation branch. A service test might mock away the code that throws. An integration test might run, succeed, and still never count toward coverage because the JaCoCo agent was never attached. From the outside, everything looks healthy. Inside the codebase, you’re flying with partial instruments.
That’s why coverage matters. Not because a high number magically means quality, but because coverage exposes where your confidence is earned and where it’s assumed.

The reason the jacoco maven plugin keeps showing up in serious Java projects is straightforward. It’s firmly established in the ecosystem and has over 16,900 direct dependents on Maven Central, which is a strong signal that teams trust it for Maven-based coverage workflows, as shown on Maven Central repository data for jacoco-maven-plugin.
What JaCoCo actually tells you
JaCoCo gives you several views of test effectiveness:
- Line coverage helps you see which executable lines were hit.
- Branch coverage shows whether decision points such as if/else paths were exercised.
- Method coverage tells you whether methods ran at all.
- Class coverage gives a broader package and module-level picture.
- Cyclomatic complexity helps you spot code that needs more thoughtful testing.
Those metrics matter most during change. When you rename internals, split services, move logic down a layer, or add a pricing rule under deadline pressure, the coverage report becomes less about optics and more about whether the safety net is real.
Practical rule: Treat coverage as a debugging instrument for your test suite, not a scorecard for your ego.
The jacoco maven plugin earns its place because it fits into the Maven lifecycle cleanly. Add the right plugin executions, run the build, and you get a report your team can inspect without introducing a separate testing platform just to answer basic questions. That’s exactly what you want in an early product build. Less ceremony, more signal.
Your First JaCoCo Report in Under 10 Minutes
The fastest useful setup is a single-module Maven project with one report generated during a normal build. No custom scripts. No separate command that someone forgets to run. Just Maven doing the work every time tests run.

Drop this into your pom.xml under <build><plugins>:
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.12</version>
  <executions>
    <execution>
      <id>default-prepare-agent</id>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <id>default-report</id>
      <phase>test</phase>
      <goals>
        <goal>report</goal>
      </goals>
      <configuration>
        <dataFile>${project.build.directory}/jacoco.exec</dataFile>
        <outputDirectory>${project.reporting.outputDirectory}/jacoco</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
Then run:
mvn clean verify
With that, Maven generates the report in target/site/jacoco/index.html. The official JaCoCo Maven documentation also calls out the standard report execution pattern and the mvn clean verify workflow in its JaCoCo Maven plugin guide.
What the two goals do
If you remember only two goals, remember these.
| Goal | Job | Why it matters |
|---|---|---|
| prepare-agent | Attaches the JaCoCo runtime agent before tests run | Without it, no execution data is collected |
| report | Turns execution data into HTML, XML, and CSV reports | Without it, you have raw data but nothing readable |
prepare-agent is the one people underestimate. It’s not cosmetic. It’s the step that injects the agent into the JVM used by your test plugins.
report is the conversion step. It reads the execution data file and produces something humans and CI tools can inspect.
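One detail worth knowing: prepare-agent works by writing the agent flags into the argLine property that Surefire and Failsafe pick up. If your build defines its own argLine, reference that property explicitly or the agent flags get overwritten. A sketch, assuming the default property name (the -Xmx value is just an illustrative extra JVM option):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- @{argLine} is the property prepare-agent populates; the late
         @{...} syntax preserves the agent flags alongside your own options -->
    <argLine>@{argLine} -Xmx512m</argLine>
  </configuration>
</plugin>
```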
The setup that works reliably
For a normal unit test setup, keep things boring:
- Put the plugin in the project POM.
- Bind prepare-agent before tests run.
- Bind report so report generation happens as part of the lifecycle.
- Run mvn clean verify.
- Open target/site/jacoco/index.html.
That’s enough for most first projects.
What doesn’t work is ad hoc usage where one developer runs a coverage command manually and everyone else assumes the report is current. Coverage has to be part of the build, or it becomes stale almost immediately.
If a report isn’t generated automatically, the team stops trusting it.
The gotcha that burns the most time
The big one is Maven Surefire or Failsafe configuration that prevents the JaCoCo agent from being used.
If you set forkCount=0 (or the legacy forkMode=never), Surefire runs tests in the Maven process itself, the argLine the JaCoCo agent is injected through never takes effect, and you can end up staring at 0% coverage even though tests clearly ran. That isn’t a testing failure. It’s an instrumentation failure.
Check your test plugins carefully if coverage looks impossible:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- do not set forkCount to 0 -->
  </configuration>
</plugin>
Separate unit and integration reports
As soon as you have both unit tests and integration tests, split the execution data. Don’t dump everything into one file and hope the lifecycle sorts it out.
Use separate report executions like this:
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.12</version>
  <executions>
    <execution>
      <id>default-prepare-agent</id>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <!-- attaches the agent for integration tests and writes jacoco-it.exec by default -->
      <id>default-prepare-agent-integration</id>
      <goals>
        <goal>prepare-agent-integration</goal>
      </goals>
    </execution>
    <execution>
      <id>default-report</id>
      <phase>test</phase>
      <goals>
        <goal>report</goal>
      </goals>
      <configuration>
        <dataFile>${project.build.directory}/jacoco.exec</dataFile>
        <outputDirectory>${project.reporting.outputDirectory}/jacoco</outputDirectory>
      </configuration>
    </execution>
    <execution>
      <id>report-integration</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>report</goal>
      </goals>
      <configuration>
        <dataFile>${project.build.directory}/jacoco-it.exec</dataFile>
        <outputDirectory>${project.reporting.outputDirectory}/jacoco-it</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
That keeps unit and integration coverage from stepping on each other.
A quick walkthrough can help if you want to compare your setup against a visual example:
<iframe width="100%" style="aspect-ratio: 16 / 9;" src="https://www.youtube.com/embed/E5KMvKrs89Q" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
What to inspect in the generated report
Once index.html opens, don’t obsess over the top-level summary first. Drill down into:
- Packages with low branch coverage
- Service classes with lots of conditional logic
- Exception-heavy adapters
- Classes that show line coverage but weak branch coverage
That’s usually where real risk lives.
The jacoco maven plugin becomes useful the moment it changes a testing decision. If the report shows a validation branch never executes, write the missing test. If the report shows a mapper package isn’t worth enforcing, exclude it later with intent. Coverage should change behavior, not just produce a screenshot for pull requests.
Enforce Coverage Thresholds to Fail the Build
A coverage report that nobody acts on becomes background noise.
Teams look at it once, nod, and move on. A month later the number is lower, nobody remembers why, and the build still passes. If you want coverage to matter, make it part of the contract of the build.
A practical check configuration
JaCoCo’s check goal lets you define minimum standards in pom.xml. A useful starting point is to enforce branch coverage at the class level, because line coverage alone can hide weak tests that execute code without exercising decisions.
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.12</version>
  <executions>
    <execution>
      <id>default-prepare-agent</id>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <id>default-report</id>
      <phase>test</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
    <execution>
      <id>jacoco-check</id>
      <phase>verify</phase>
      <goals>
        <goal>check</goal>
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>CLASS</element>
            <limits>
              <limit>
                <counter>BRANCH</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.80</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
Run the usual command:
mvn clean verify
If a class drops below the configured threshold, the build fails.
Why branch coverage is usually the better gate
Line coverage answers, “did this line run?”
Branch coverage answers, “did the code choose both paths?”
That second question is closer to what breaks real features. Validation, pricing, retries, fallback logic, permission checks, and feature flags all live in branches.
A simple comparison helps:
| Metric | Good for | Weakness |
|---|---|---|
| Line coverage | Spotting untouched code | Can look healthy while conditional logic stays weakly tested |
| Branch coverage | Testing decisions and paths | More demanding, sometimes noisy on trivial classes |
| Cyclomatic complexity | Identifying risky code that needs deeper tests | Not a pass/fail gate by itself |
Quality gate advice: Start with branch coverage on classes that contain business logic. Don’t start by punishing DTOs, generated code, and framework glue.
What works in practice
Early-stage teams usually do better with a narrow but real threshold than with broad rules nobody can maintain.
Good candidates for enforcement:
- Core services: Pricing, billing, authentication, workflow orchestration
- Business rules: Validation and decision-heavy logic
- Code that changes often: Areas under active feature work
Poor candidates for strict enforcement:
- Generated classes
- Simple configuration holders
- Thin data carriers
- Framework bootstrap code
If you gate everything with no nuance, people game the metric. If you gate nothing, the report fades into the background. The middle ground works. Enforce meaningful coverage where regressions hurt, then tighten over time as the codebase becomes more disciplined.
Taming JaCoCo in Multi-Module Maven Projects
Single-module coverage is easy. Multi-module coverage is where teams start copying plugin config into five places, generating five partial reports, and still not knowing their actual project-wide status.
That’s the normal pain point with the jacoco maven plugin in larger Maven builds. Each module can produce its own execution data and report, but leadership, CI, and code review usually need one coverage view across the whole build.

The standard Maven way
The classic approach is to create an aggregator module at the root. Submodules collect coverage data. The aggregator then combines it and generates one unified report.
A common layout looks like this:
parent-project/
  pom.xml
  module-a/
  module-b/
  module-c/
  coverage-aggregate/
In each submodule, configure JaCoCo to collect execution data during tests. Keep that part minimal. The goal is to produce .exec files consistently.
Example submodule setup:
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.12</version>
  <executions>
    <execution>
      <id>prepare-agent</id>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Then in the aggregator module, wire the report generation. The exact merge mechanics vary by project structure, but the pattern is always the same: submodules produce execution data, aggregator consumes it, one report comes out.
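One concrete way to wire it is the plugin’s report-aggregate goal, which reads execution data and classes from the modules the aggregator depends on. A sketch for the coverage-aggregate POM, assuming it declares module-a, module-b, and module-c as dependencies:

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.12</version>
  <executions>
    <execution>
      <id>report-aggregate</id>
      <phase>verify</phase>
      <goals>
        <!-- combines .exec files from dependency modules into one report -->
        <goal>report-aggregate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

By default the combined report lands under target/site/jacoco-aggregate in the aggregator module.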
Why this gets messy fast
The manual aggregator pattern works. It’s also where configuration drift starts.
Common failure modes include:
- Different destFile conventions across modules
- Some modules generating reports, others only execution data
- Paths that work locally but break in CI
- Submodules excluded from aggregation by accident
- Redundant site output when paired with maven-site-plugin
For a small team, that means the coverage setup becomes its own mini-project. Nobody wants that.
The minute coverage config needs a diagram to maintain, it stops being “just a plugin” and starts becoming build debt.
The simpler route with Easy JaCoCo
If your goal is a single project-wide report with less plumbing, the Easy JaCoCo Maven plugin is the more practical option for many teams. The project documents that standard JaCoCo often requires manual aggregator modules, while Easy JaCoCo automates aggregation into a single report with project-wide checks and produces jacoco-aggregate/jacoco.xml for SonarQube CI integration, as described in the Easy JaCoCo Maven plugin project.
That matters because it removes a lot of boilerplate you’d otherwise maintain by hand.
When to use which approach
Use the standard aggregator model if:
- You need tight control over a legacy build
- Your organization already has a stable parent POM strategy
- You want to stay close to native JaCoCo behavior
Use Easy JaCoCo if:
- You want one project-wide report without hand-rolling an aggregator module
- Your team is small and build maintenance time matters
- You need XML output for SonarQube without bespoke wiring
- You’d rather spend time fixing tests than fixing coverage plumbing
A quick decision table helps:
| Situation | Better fit |
|---|---|
| Established enterprise parent POM with strict conventions | Standard aggregator |
| Growing startup app splitting into modules for the first time | Easy JaCoCo |
| CI needs one XML file for analysis tools | Easy JaCoCo |
| Team wants fully custom aggregation behavior | Standard aggregator |
Multi-module advice that saves time
The biggest mistake in modular builds is trying to make every child module “smart.” Keep child modules boring. Let them collect data. Centralize aggregation and checks at the top.
A few habits keep the setup sane:
- Standardize paths early: Don’t let each module invent its own naming scheme.
- Keep report generation centralized: Per-module HTML is useful for local debugging, but project-level reporting should live in one place.
- Review module boundaries: Missing coverage often reflects misplaced logic, not just missing tests.
- Make CI consume the aggregate XML: Humans can inspect HTML locally, but automation should depend on a single machine-readable report.
For founders and small teams, that last point is the one that changes behavior. Once the whole repo has a single visible coverage artifact, discussions stop revolving around “which module is this report from?” and shift to “which business path still isn’t tested?”
Integrating Coverage into Your CI CD Pipeline
Local coverage is useful. Automated coverage is what makes it stick.
Once the jacoco maven plugin runs in CI on every push or pull request, coverage stops being an occasional manual check and becomes part of delivery. That’s where it starts improving code review, release confidence, and regression detection.

What your pipeline should produce
A useful CI coverage job should do three things every time:
- Run the Maven build with tests
- Generate the JaCoCo report artifacts
- Expose machine-readable XML for quality tools and HTML for humans
If you already configured prepare-agent, report, and optional check goals in your POM, the pipeline itself can stay very small.
GitHub Actions example
This workflow runs Maven, generates coverage during the build, and uploads the HTML report directory as an artifact:
name: build-and-coverage
on:
  push:
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Java
        uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '21'
          cache: maven
      - name: Build and run coverage
        run: mvn clean verify
      - name: Upload JaCoCo HTML report
        uses: actions/upload-artifact@v4
        with:
          name: jacoco-html-report
          path: target/site/jacoco
That’s enough for a single-module project.
For multi-module builds, point the artifact path at the aggregate report directory your build generates. Don’t assume target/site/jacoco if aggregation writes elsewhere.
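For example, if a coverage-aggregate module produces the combined report, the upload step might look like this (the module name and path here are assumptions — adjust them to your layout):

```yaml
- name: Upload aggregate JaCoCo report
  uses: actions/upload-artifact@v4
  with:
    name: jacoco-aggregate-report
    # default output directory of the report-aggregate goal in the aggregator module
    path: coverage-aggregate/target/site/jacoco-aggregate
```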
Jenkins pipeline example
If you run Jenkins, the same idea applies. Keep the pipeline thin and let Maven own the coverage logic:
pipeline {
  agent any
  stages {
    stage('Checkout') {
      steps {
        checkout scm
      }
    }
    stage('Build and Test') {
      steps {
        sh 'mvn clean verify'
      }
    }
  }
  post {
    always {
      archiveArtifacts artifacts: 'target/site/jacoco/**', fingerprint: true
      junit 'target/surefire-reports/*.xml'
    }
  }
}
If you have integration tests with Failsafe or a multi-module aggregate output, adjust the archived paths accordingly.
CI habit: Archive the HTML report even if your team mainly uses SonarQube. When a pull request looks suspicious, engineers still want something they can open and inspect quickly.
Generating XML for analysis tools
Many teams care less about the HTML and more about jacoco.xml, because that’s what static analysis tools commonly ingest.
If your report goal is running, JaCoCo can generate XML alongside HTML and CSV in the report output directory. In practice, that means your CI can publish the report while another tool reads the XML from the generated site output.
A straightforward pattern is:
- Developers inspect HTML locally or from build artifacts
- Quality tools consume jacoco.xml
- The Maven check goal remains the hard gate
This split works well because it gives both humans and automation the format they need.
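If your pipeline only consumes certain formats, recent plugin versions let the report goal restrict its output. A sketch to adapt — keeping XML for tooling and HTML for humans:

```xml
<execution>
  <id>default-report</id>
  <goals>
    <goal>report</goal>
  </goals>
  <configuration>
    <!-- limit output to the formats your pipeline actually reads -->
    <formats>
      <format>XML</format>
      <format>HTML</format>
    </formats>
  </configuration>
</execution>
```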
What to avoid in CI
Coverage pipelines usually fail for boring reasons, not exotic ones.
Avoid these habits:
- Running mvn test when your checks live in verify
- Uploading artifacts from the wrong module path
- Assuming XML exists without confirming the report output
- Mixing unit and integration execution data carelessly
- Treating CI coverage as separate from local coverage
Your build should behave the same way in both places. If the command locally is mvn clean verify, that should usually be the command in CI too.
A clean flow for teams
A healthy coverage workflow looks like this:
| Stage | Output | Who uses it |
|---|---|---|
| Maven test execution | Execution data | JaCoCo internals |
| JaCoCo report generation | HTML, XML, CSV | Developers and CI tools |
| JaCoCo check | Pass or fail build | Entire team |
| Artifact upload | Archived report | Reviewers and maintainers |
That setup is enough for most product teams. You don’t need a sprawling platform to make coverage actionable. You need one Maven command, one report path, one quality gate, and one place in CI where everyone knows to look.
Troubleshooting Elusive Coverage Gaps and Errors
The most annoying JaCoCo problems happen after you’ve done the “correct” setup.
Tests pass. The build runs. The report exists. But some lines you know executed still show as uncovered, the report is unexpectedly sparse, or a package you thought you excluded still appears. Teams lose time here because the failure looks conceptual when it’s usually mechanical.
Why exception paths can look uncovered
One subtle issue comes from how JaCoCo places probes in bytecode. JaCoCo may report lines as uncovered in exception-throwing paths because an exception can interrupt execution before the next probe runs. That low-probe design keeps overhead down, but it can make executed lines appear missed in the report, a behavior discussed in the JaCoCo mailing list explanation of uncovered lines in exception paths.
That means the report isn’t necessarily lying. It’s reflecting how instrumentation works.
A typical place this shows up is code like:
public void process(Order order) {
    validate(order);
    throw new IllegalStateException("broken state");
}
You may hit validate(order) and still not get the visual result you expected around the throw path.
Don’t “fix” this by writing nonsense tests just to satisfy a visual gap. First decide whether the code path is actually validated in a meaningful way.
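If the path is worth validating, assert on the exception itself rather than merely touching the line. A self-contained sketch — Order, validate, and process are stand-ins mirroring the snippet above, not real project code:

```java
class Order {}

public class ExceptionPathDemo {
    static void validate(Order order) {
        if (order == null) {
            throw new IllegalArgumentException("order required");
        }
    }

    static void process(Order order) {
        validate(order);
        throw new IllegalStateException("broken state");
    }

    public static void main(String[] args) {
        // A meaningful test pins down the exception type and message,
        // which also executes the throw path under instrumentation.
        try {
            process(new Order());
            throw new AssertionError("expected IllegalStateException");
        } catch (IllegalStateException expected) {
            if (!"broken state".equals(expected.getMessage())) {
                throw new AssertionError("unexpected message: " + expected.getMessage());
            }
        }
        System.out.println("exception path exercised");
    }
}
```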
How to debug suspicious gaps
When coverage looks wrong, work through the basics in order:
- Check test plugin forks: If Surefire or Failsafe is configured incorrectly, the agent may never attach.
- Check report inputs: Confirm the expected .exec file exists where the report execution expects it.
- Check lifecycle phases: A report bound too early can run before the right tests produce coverage data.
- Check exclusions: Overly broad excludes can remove real code from analysis.
- Check bytecode realities: Exception-heavy code and generated code often need more careful interpretation.
If your tests rely heavily on static mocking, constructor interception, or framework-driven execution, it’s also worth reviewing whether the test style itself is obscuring what code runs. This is one reason I prefer keeping business logic in plain services and using advanced mocking sparingly. If static behavior is already complicating your test setup, this guide on mocking static methods with Mockito is a useful companion read.
Empty or near-empty reports
An empty report usually points to one of a few causes.
| Symptom | Likely cause | First check |
|---|---|---|
| Report exists but shows almost nothing | Agent didn’t attach | Surefire or Failsafe config |
| Unit tests covered, integration tests missing | Wrong lifecycle phase or shared data file confusion | Separate unit and integration data files |
| HTML generated but expected XML missing | Output assumptions don’t match build config | Actual files in report directory |
| Some classes absent entirely | Exclusions or generated code filtering | Plugin configuration and package patterns |
A lot of coverage bugs are path bugs.
Version and compatibility friction
JaCoCo itself is usually stable. Friction tends to come from combinations of Java version, Maven version, test plugin version, and old parent POM defaults.
When the build behaves oddly:
- Confirm the JaCoCo plugin version is current enough for your Java level
- Check whether your parent POM overrides Surefire or Failsafe
- Look for old plugin management entries
- Inspect the effective POM if behavior makes no sense
- Make sure debug information is present in class files if you expect line numbers and source highlighting
That last point matters more than people realize. If line numbers and source highlighting are missing, the report becomes much less useful even if overall instrumentation technically worked.
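Javac emits debug information by default, but a parent POM can turn it off. A quick way to rule that out, assuming the standard maven-compiler-plugin:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <!-- debug is true by default; an explicit false somewhere in the
         POM hierarchy strips line numbers and degrades source highlighting -->
    <debug>true</debug>
  </configuration>
</plugin>
```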
Exclusions need discipline
Excluding code is sometimes correct. Generated clients, DTOs, or framework scaffolding can dilute the signal.
But exclusions become dangerous when they’re used to inflate coverage instead of focusing it.
Good reasons to exclude:
- Generated sources
- Thin transport objects
- Code you don’t own and won’t test directly
Bad reasons:
- Complex classes with weak tests
- Hard-to-test logic that should be refactored
- Adapters that keep breaking in production
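When an exclusion is genuinely warranted, declare it once in the plugin configuration so report and check agree. A sketch — the package and naming patterns here are assumptions to adapt to your codebase:

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- generated sources, assuming they compile into a 'generated' package -->
      <exclude>**/generated/**</exclude>
      <!-- thin data carriers, assuming a *Dto naming convention -->
      <exclude>**/*Dto.class</exclude>
    </excludes>
  </configuration>
</plugin>
```

Patterns match against class file paths, so exclusions stay visible in one place instead of being scattered across executions.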
The most productive mindset is simple. If JaCoCo highlights a strange gap, assume neither the tool nor the test is guilty yet. Investigate the build wiring first, then the bytecode reality, then the test design.
Ship with Confidence
The jacoco maven plugin is at its best when it disappears into the build and reports the truth.
That truth starts with a local HTML report, gets stronger when you add a real check gate, and becomes operational when CI publishes artifacts and XML on every change. In larger repos, it only stays useful if you solve the multi-module story cleanly instead of letting each module drift into its own partial setup.
Coverage still isn’t the goal. Reliable change is the goal. Coverage is one of the clearest signals you can automate to support that.
Use it to pressure-test refactors. Use it to catch shallow tests. Use it to keep business logic honest. And when JaCoCo reports something strange, don’t panic and start rewriting tests at random. The sharp teams treat coverage as engineering feedback, not as a beauty contest.
If you want hands-on help getting a Java project, test stack, CI pipeline, or AI-assisted workflow into a shape you can ship, Jean-Baptiste Bolh works directly with founders, developers, and small teams to unblock builds, tighten architecture, and move from “it runs on my machine” to a real release.