The 4 stages of flakiness (part 2/3): log failed tests with a Jest custom reporter

Mickael Meausoone · HMH Engineering · 7 min read · May 26, 2022

Photo by Aleksandr Barsukov on Unsplash

The story so far: we have flaky tests and want to log failures so everything can be rerun later. Have a look at part 1 if you haven’t read it or need a refresher!
My first idea was to parse the test output to build a list of failed tests, and though it could (maybe) have been done, a co-worker suggested using a custom Jest reporter instead.

Creating a custom reporter would let us both control the output of the test run and build the report file. We settled on showing failed tests in “detail”, meaning the error message, the title of the failed test, and the file and line where it happened, without the HTML debug output, plus the usual summary of how many tests passed, failed, were skipped, etc.

Sounds cool, but how do I do that? Finding information on Jest reporters was a bit tricky… The documentation is a good start, but I was in for a dive into the Jest source code!

1. Adding a custom reporter

Before starting any real work, let’s just add something basic and see how it goes. Jest has a “reporters” option that we can use in the jest.config.js file, and according to the docs it looks like this:

"reporters": ["<rootDir>/my-custom-reporter.js"]

So it’s an array: you can have as many reporters as you need, and if you still want to use the default reporter, easy-peasy, just add “default”:

"reporters": ["default", "<rootDir>/my-custom-reporter.js"]
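An entry in that array can also be a [path, options] pair, which is how you pass configuration into the reporter’s constructor. A small sketch of a full config file (the outputFile option name here is my own invention, not something Jest defines):

```javascript
// jest.config.js — a reporter entry can be a [path, options] pair;
// the options object is handed to the reporter's constructor.
module.exports = {
  reporters: [
    'default',
    ['<rootDir>/my-custom-reporter.js', { outputFile: 'failed-tests.txt' }],
  ],
};
```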

The docs have this example, and that’s as good as any hello world:

class MyCustomReporter {
  constructor(globalConfig, options) {
    this._globalConfig = globalConfig;
    this._options = options;
  }

  onRunComplete(contexts, results) {
    console.log('Custom reporter output:');
    console.log('GlobalConfig: ', this._globalConfig);
    console.log('Options: ', this._options);
  }
}

module.exports = MyCustomReporter;

I decided to create the file at the root of the project and expected “rootDir” to be exactly that, the root. But… that’s not exactly the case.
Now picture me trying to understand why nothing is happening. First, Jest says it can’t find the file and shows an error. Ok, FINE. Easy. Update the path, no more error… That means it’s loading and working, right? Nope. Whaaat?

1.1. Finding Nemo

Our main project is a monorepo, with a main Jest configuration at the root and a lot of packages, each one with a jest.config.js importing and extending a “base” configuration file.
The repo uses Lerna to run tests, which in turn executes:
yarn workspace @name test
Locally I tend to use the basic
yarn jest filepath

  1. With the yarn workspace command, the tests run using the package config (which extends the base config jest.config.base.js).
  2. When using yarn jest, the root Jest config is used as well as the package config, but the “reporters” option is only read from the root one (not sure why).

So when using yarn jest filepath, the root Jest config was used but my custom reporter was not applied, since the base config is not imported there.

1.2. Connecting the dots

Those are my options:

  • add the “reporters” option to root jest.config.js
  • OR run the yarn workspace command

Oh! But wait. I did mention rootDir, right?

  • when using yarn workspace, “rootDir” is the package / workspace folder (the project root is 3 folders up):
    ‘<rootDir>/../../../my-custom-reporter.js’
  • and for yarn jest this is actually the root of the project.
    ‘<rootDir>/my-custom-reporter.js’

This can be confusing, but once I understood that the configurations were loaded differently depending on the command used, I finally managed to have both working.
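The two rootDir variants above can live side by side, roughly like this (file locations and the displayName value are illustrative, not our actual setup):

```javascript
// jest.config.base.js — loaded via `yarn workspace @name test`,
// where <rootDir> is the package folder, so the project root is 3 folders up.
module.exports = {
  reporters: ['default', '<rootDir>/../../../my-custom-reporter.js'],
};

// jest.config.js at the project root — loaded via `yarn jest filepath`,
// where <rootDir> really is the project root.
module.exports = {
  reporters: ['default', '<rootDir>/my-custom-reporter.js'],
};
```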

2. Writing the reporter

The documentation became a bit lacking as soon as I started having more questions and a specific goal:
https://jestjs.io/docs/configuration#reporters-arraymodulename--modulename-options

Thankfully I could find external resources:
https://javascript.plainenglish.io/writing-a-jest-test-reporter-cb7c123ec211

But in the end the best way was to find existing reporters as reference, both from external parties and from Jest:

https://github.com/sojungko/jest-custom-reporter
https://github.com/topics/jest-reporter
https://github.com/facebook/jest/tree/main/packages/jest-reporters/src

Next are the lifecycle functions that can be used in the reporter.

2.1. onRunStart(aggregatedResults, options)
This lifecycle method is called at the start of the run, before any test is evaluated, and receives two arguments.

aggregatedResults: an object holding information about the tests. At this point all we have is the number of test suites (2 files) and the start time.

{
  numFailedTestSuites: 0,
  numFailedTests: 0,
  numPassedTestSuites: 0,
  numPassedTests: 0,
  numPendingTestSuites: 0,
  numPendingTests: 0,
  numRuntimeErrorTestSuites: 0,
  numTodoTests: 0,
  numTotalTestSuites: 2,
  numTotalTests: 0,
  openHandles: [],
  snapshot: {
    added: 0,
    didUpdate: false,
    failure: false,
    filesAdded: 0,
    filesRemoved: 0,
    filesRemovedList: [],
    filesUnmatched: 0,
    filesUpdated: 0,
    matched: 0,
    total: 0,
    unchecked: 0,
    uncheckedKeysByFile: [],
    unmatched: 0,
    updated: 0
  },
  startTime: 1652692338326,
  success: false,
  testResults: [],
  wasInterrupted: false
}

options:

{ estimatedTime: 0, showStatus: true }

2.2. onTestCaseResult(testContext, testCaseResult)
One “it” block has been evaluated, how did it go? This method is called after each test block and receives two arguments.

testContext is an object with this kind of structure:

{
  context: {
    config: {
      automock: false,
      cache: true,
      name: '@package/name',
      cwd: '/Users/UserName/path/to/package',
      // ... not much else needed there usually
    }
  },
  // notice this is an absolute path
  path: '/Users/UserName/path/to/package/index.test.js'
}

testCaseResult: this is the report for the evaluated test

{
  ancestorTitles: [ 'describe label' ],
  duration: 3,
  failureDetails: [],
  failureMessages: [],
  fullName: 'describe label test label', // parent + current label
  invocations: 1,
  location: null,
  numPassingAsserts: 0,
  // on failure, failureDetails & failureMessages are filled
  status: 'passed',
  title: 'test label'
}
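To make this concrete, here is a minimal sketch of a reporter that collects failures as they come in through onTestCaseResult (the FailureCollector name and the shape of the stored entries are my own choices, not Jest's):

```javascript
// Sketch: collect failing test cases as they complete.
class FailureCollector {
  constructor() {
    this.failedTests = [];
  }

  // Called by Jest after each it/test block.
  onTestCaseResult(test, testCaseResult) {
    if (testCaseResult.status === 'failed') {
      this.failedTests.push({
        path: test.path, // absolute path to the test file
        fullName: testCaseResult.fullName, // "describe label test label"
        messages: testCaseResult.failureMessages,
      });
    }
  }
}

module.exports = FailureCollector;
```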

2.3. onTestResult(testContext, testResult, aggregatedResults)
Called after a test suite has been evaluated, which means a collection of test blocks, usually a file.

testContext: same as previous description, config and test path.

testResult: report for the current file that finished running, includes the collection of testCaseResult for all tests of that suite.

{
  leaks: false,
  numFailingTests: 0,
  numPassingTests: 4,
  numPendingTests: 0,
  numTodoTests: 0,
  openHandles: [],
  perfStats: {
    end: 1652692341282,
    runtime: 2153,
    slow: false,
    start: 1652692339129
  },
  skipped: false,
  snapshot: {
    added: 0,
    fileDeleted: false,
    matched: 0,
    unchecked: 0,
    uncheckedKeys: [],
    unmatched: 0,
    updated: 0
  },
  testFilePath: '/Users/UserName/path/to/index.test.js',
  testResults: [
    // ... array of "testCaseResult" ...
  ],
  displayName: { color: 'white', name: '@package/name' },
  failureMessage: null
}

Special mention for failureMessage here, since we are going to need it. When a test fails, this is a big string with the error message and details; the related failureMessages array on each testCaseResult holds the same kind of content:

failureMessages: [
  'Error: \x1B[2mexpect(\x1B[22m\x1B[31mreceived\x1B[39m\x1B[2m).\x1B[22mtoEqual\x1B[2m(\x1B[22m\x1B[32mexpected\x1B[39m\x1B[2m) // deep equality\x1B[22m\n' +
    '\n' +
    'Expected: \x1B[32m2\x1B[39m\n' +
    'Received: \x1B[31m3\x1B[39m\n' +
    '    at Object.<anonymous> (/Users/UserName/path/index.test.js:10:32)\n' +
    '    at Promise.then.completed (/Users/UserName/path/node_modules/jest-circus/build/utils.js:391:28)\n' +
    '    at new Promise (<anonymous>)\n' +
    '    at callAsyncCircusFn'
],

aggregatedResults: same structure as before with updated values.

2.4. onRunComplete(contexts, aggregatedResults)
This method is called after finishing all tests and gets context information (workspace name, path and more) as well as final results as arguments.

Just a note: our setup uses Lerna and runs tests by calling this for each updated package:
yarn workspace @packagename test

This means that for each package we have calls to:

  • onRunStart,
  • onTestCaseResult for each test in the first test file,
  • onTestResult when the first file is finished, same for second test file, etc.
  • and finally onRunComplete when all tests in the package have run.

Then the same is done for the next package.

contexts: this is a Set of context objects with { config } (the content of config is the same as seen before).

aggregatedResults: same structure as before, but this time with a full summary of tests, failure and success counts, as well as an array of testResult.

Feel free to add some console.log calls in there to check the actual content for yourself.

3. What we want to do

If possible, we want to reuse parts of the default Jest reporter. The summary is nice, and we need to display test failures, but without the big HTML dump when something fails during React component testing. There’s no need to display each passing test in detail; the summary will be enough, as it shows the totals of passing and failing tests.

Keep in mind this is intended for CI use, not local debugging, and too much output would just add noise there.

Also let’s not forget the end goal and add a nice collection of all files and package names that failed, written in a file.

3.1. The default Jest reporter source code is a great read
You can actually use parts of it in your code:

Import:

const { utils } = require('@jest/reporters');

Used like this:

const testOptions = {
  currentTestCases: this._currentTestCases,
  estimatedTime: this._estimatedTime,
  roundTime: true,
  width: process.stdout.columns,
};

console.log(`\n\n${utils.getSummary(this._aggregatedResults, testOptions)}\n\n`);

This shows a nice summary and only needs currentTestCases and aggregatedResults to work.
Like in the original code, we are going to collect those in the lifecycle functions. aggregatedResults is easy since we get it directly as an argument; currentTestCases is an array to which we push testContext and testCaseResult when onTestCaseResult() is called.

One thing that is easy to miss: when onTestResult() is called (at the end of a test suite), Jest’s default reporter filters currentTestCases to remove the entries for that suite; you won’t get the right stats in your summary if you forget to do that.
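Put together, the bookkeeping looks roughly like this (the class and field names are mine; only the lifecycle method names come from Jest):

```javascript
// Sketch of the summary bookkeeping, mirroring what the default reporter does.
class SummaryBookkeeping {
  constructor() {
    this._currentTestCases = [];
    this._aggregatedResults = null;
  }

  onTestCaseResult(test, testCaseResult) {
    this._currentTestCases.push({ test, testCaseResult });
  }

  onTestResult(testContext, testResult, aggregatedResults) {
    this._aggregatedResults = aggregatedResults;
    // The easy-to-miss part: drop the entries for the suite that just
    // finished, otherwise the summary counts them twice.
    this._currentTestCases = this._currentTestCases.filter(
      ({ test }) => test.path !== testResult.testFilePath,
    );
  }
}

module.exports = SummaryBookkeeping;
```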

3.2. Getting test failure specific information is a bit hacky
I was half expecting a big structure that would let us read and select information from failures, but no, failureMessages is just a big string. This leaves us with only regular expressions to extract what we need.

Though I can write regular expressions, my preference goes to tested and proven code, so for example these lines from the @hipstersmoothie repo:
https://github.com/hipstersmoothie/jest-github-reporter/blob/master/src/create-check.ts

const errorMessage = message.split('\n').slice(0, 2).join('\n').trim();
const numbers = message.match(new RegExp(`${testFilePath}:(\\d+):\\d+`));
const startLine = numbers
  ? Number(numbers[1])
  : testResult?.location?.line || 0;

Depending on what you do with those error messages, you might want to strip ANSI characters for formatting, but in this case this is not necessary.
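Wrapped into a function, the extraction might look like this (the parseFailure name and the sample message in the test are made up; note the file path goes into the regex unescaped, which works as long as it has no regex special characters beyond dots):

```javascript
// Extract the short error message and the failing line number from one
// entry of failureMessages.
function parseFailure(message, testFilePath) {
  const errorMessage = message.split('\n').slice(0, 2).join('\n').trim();
  const numbers = message.match(new RegExp(`${testFilePath}:(\\d+):\\d+`));
  return {
    errorMessage,
    line: numbers ? Number(numbers[1]) : 0,
  };
}

module.exports = { parseFailure };
```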

3.3. Test file path is absolute
You will have to make it relative yourself to avoid showing a full local path, for example using context.config.cwd and testResult.testFilePath.

4. Get me the code!

Working on this was very interesting, and it took a bit of trial and error to make everything work as I wanted, so I hope this saves you some time too:

That file will display a summary of tests as well as condensed output when a test fails. It will also build a comma-separated list of failed tests like this:

@packagename path/to/package/index.test.js,@otherpackage path/to/other/file.test.js

The goal is to be able to run this command line:

yarn workspace @packagename test path/to/package/index.test.js

Feel free to adapt to your specific needs.

Watch out for part 3 and the detailed article on retrying failed tests in Jenkins!

Thanks for reading!

Mickael Meausoone, Staff Software Engineer, passionate about JavaScript and web tech in general, curious and likes solving interesting problems. Will code for chocolate.
