In today’s fast-paced development world, speed is everything. For teams working in large codebases such as monorepos, slow Continuous Integration (CI) pipelines quickly become a bottleneck, hindering productivity and delaying release cycles. At Voltera Group, we specialize in identifying and solving these critical pain points.
Recently, we tackled the challenge of long CI times within our React monorepo. By applying a series of strategic optimizations, we successfully reduced our CI time by 70%, dramatically improving developer experience and project efficiency. Here’s how Voltera Group made it happen.

Understanding the Problem: What Was Slowing Us Down?
Before implementing any solutions, the first step was to pinpoint the exact causes of the sluggish CI performance in our React monorepo. Through in-depth analysis, we identified several key issues:
Monorepo size: Our monorepo housed multiple React applications, shared components, and utilities, all built and tested on every run. The result was redundant work: everything was rebuilt even when a change was isolated to a single package.
Overtesting: Every push triggered a complete run of all tests, even when only a specific part of the codebase had been updated.
Inefficient caching: We weren’t fully leveraging modern caching techniques, leading to unnecessary rebuilds from scratch.
Sequential job execution: Tasks such as linting, testing, and building were running sequentially, further slowing down the pipeline.
These inefficiencies were costing valuable development time. At Voltera Group, we know that understanding the root cause is crucial, so we set out to address each of these issues systematically.

Leveraging CI Optimization Tools: Turbocharging Our Workflow
One of the most impactful changes we made was integrating CI optimization tools designed specifically for monorepos. To tackle our React monorepo’s performance issues, Voltera Group introduced key tools like Nx and Turborepo.
Nx Integration: Nx, a powerful build system for monorepos, was at the heart of our solution. Nx allowed us to analyze project dependencies and selectively build or test only the projects affected by changes. By reducing the scope of each CI run, we eliminated unnecessary builds and improved overall performance.
Targeted Builds and Tests: With Nx, we streamlined our pipeline so that only the impacted projects were rebuilt or tested when code changes were pushed. This selective approach slashed unnecessary workloads, speeding up our CI pipeline significantly.
Incremental Builds: Nx’s incremental build cache feature allowed us to avoid rebuilding parts of the code that remained unchanged. This was crucial in reducing build times when only minor components or specific packages were modified.
Dependency Graph Analysis: Nx’s visual dependency graph provided clear insights into how different parts of the monorepo were interrelated. This enabled us to further optimize build and test processes by focusing on the most relevant parts of the project.
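In practice, this selective flow reduces to a couple of Nx CLI invocations in the pipeline. A minimal sketch of the relevant CI step (the base branch name is an assumption about our setup):

```shell
# Build and test only the projects affected by changes since main;
# Nx walks the dependency graph and replays cached results for
# anything untouched.
npx nx affected --target=build --base=origin/main --head=HEAD
npx nx affected --target=test --base=origin/main --head=HEAD

# Render the dependency graph locally to see what "affected" covers.
npx nx graph
```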

Parallelization: Running Jobs Simultaneously
Previously, our CI pipeline ran tasks like testing and building in sequence, which significantly slowed down the process. Voltera Group took advantage of parallel execution to speed things up.
Concurrent Testing: We reconfigured our CI environment to split test suites across multiple runners and execute them in parallel, drastically cutting down testing time.
Parallel Build Processes: Similarly, we adjusted our pipeline to run build processes across different applications in parallel. This allowed us to distribute workloads more effectively and optimize the build process.
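Combined with Nx, parallelism is largely a flag. A sketch of the parallelized pipeline step, assuming up to three concurrent tasks per runner:

```shell
# Let Nx schedule independent affected projects concurrently,
# up to three tasks at a time on this runner.
npx nx affected --target=test --base=origin/main --parallel=3
npx nx affected --target=build --base=origin/main --parallel=3
```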

Optimized Caching: Faster Builds with Reusable Artifacts
Caching plays a vital role in CI performance, and optimizing it can save valuable time. At Voltera Group, we fine-tuned our caching strategies to ensure faster builds and tests.
Node_modules Caching: We cached the node_modules directory between builds, eliminating the need to reinstall every package each time a CI job was triggered and speeding up the pipeline.
Test Result Caching: For tests that didn’t require re-running unless relevant parts of the code were changed, we implemented test result caching. By reusing previous test results, we further reduced time spent on redundant testing phases.
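The node_modules strategy boils down to keying the cache on the lockfile, so the cache invalidates only when dependencies actually change. Here is a minimal, self-contained sketch; the throwaway lockfile and local cache directory stand in for a real project and CI cache store:

```shell
# Sketch of lockfile-keyed caching: the cache key is a hash of
# package-lock.json, so it changes only when dependencies change.
# A throwaway lockfile stands in for a real one.
workdir=$(mktemp -d)
printf 'react@18.2.0\n' > "$workdir/package-lock.json"

key="node-modules-$(sha256sum "$workdir/package-lock.json" | cut -c1-16)"
cache_dir="$workdir/.ci-cache/$key"

if [ -d "$cache_dir" ]; then
  status="hit"    # restore node_modules from the cache
else
  status="miss"   # would run `npm ci`, then copy node_modules into the cache
  mkdir -p "$cache_dir"
fi
echo "first run: $status"

# A later run with an unchanged lockfile resolves to the same key and hits.
[ -d "$cache_dir" ] && echo "second run: hit"
```

The same keying idea carries over to test-result caching: Nx keys its computation cache on each project's inputs, so an unchanged project's test run is replayed rather than re-executed.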

Smarter Test Suites: Only Testing What Matters
Running an entire test suite for every minor code change was one of the major bottlenecks we identified. Voltera Group adopted a smarter approach to testing, ensuring that only the necessary tests were executed.
Selective Testing: Nx’s selective testing feature allowed us to focus testing efforts solely on the parts of the codebase that had been changed. This minimized the number of tests run during each CI job, saving considerable time.
Test Splitting and Sharding: We broke up large test suites into smaller chunks, running them in parallel. This test-sharding approach ensured that no single job was overwhelmed and that testing was completed more quickly.
Isolating Flaky Tests: Flaky tests were a recurring source of delay in our CI pipelines. We identified and quarantined them so they no longer blocked the pipeline, then fixed their underlying issues, improving overall test reliability and eliminating unnecessary reruns.
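Sharding and quarantining can both be expressed as Jest invocations. A sketch assuming Jest 28 or later (for --shard) and a hypothetical quarantine/ directory holding the flaky suites:

```shell
# Each of four runners executes one shard of the main suite,
# skipping quarantined flaky tests (SHARD_INDEX is 1..4 per runner).
npx jest --shard="${SHARD_INDEX:-1}/4" --testPathIgnorePatterns='/quarantine/'

# Flaky suites still run, but in a separate, non-blocking step.
npx jest --testPathPattern='/quarantine/' || true
```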

CI Environment Tweaks: Fine-Tuning for Speed
In addition to making code and workflow changes, Voltera Group made some key adjustments to our CI environment for maximum speed.
Upgrading CI Runners: We moved to more powerful CI runners with higher CPU and memory capabilities. This allowed us to handle parallel tasks more efficiently, further reducing build and test times.
Docker Optimization: For projects that used Docker, we slimmed our images and improved layer caching, preventing full image rebuilds and minimizing Docker-related delays.
Conditional Workflows: We implemented conditional workflows, where jobs or steps were skipped if they weren’t relevant to the changes being made. This prevented unnecessary work from being triggered and kept our CI pipeline lean.
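The last two ideas combine naturally into a single cache-aware, conditional build step. A sketch with illustrative paths and registry name (both are assumptions, not our actual setup):

```shell
# Skip the image build entirely when nothing relevant changed.
if git diff --quiet origin/main...HEAD -- packages/web/ Dockerfile; then
  echo "no relevant changes; skipping image build"
  exit 0
fi

# Otherwise reuse layers from the previous CI image so only
# changed layers are rebuilt.
docker pull registry.example.com/web:ci-cache || true
docker build --cache-from=registry.example.com/web:ci-cache \
  -t registry.example.com/web:ci-cache .
docker push registry.example.com/web:ci-cache
```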