random-failures

Commits

List of commits on branch master.
Verified
ee03509e026d6904e5660ad5454ef606a505644b

Merge pull request #19 from yurloc/typos

jhrcek committed 5 years ago
Verified
e65bb2e3f671e659e3dba93594997c493812f5de

Fix typos

yurloc committed 5 years ago
Unverified
9638a738c441f0c0fac6ab8d14c4be25fd154352

Bump to lts-13.5 and remove useles generated on info

committed 6 years ago
Unverified
7d1ec9beaeb4051eab07bf8ecd016a7a0abb9792

Make imports in Config consistent

committed 6 years ago
Unverified
4732236fac21340596bc84334adc36545b3587f6

Deduplicate failures list after url and GH info changes

committed 6 years ago
Unverified
43060ea13acaf210226abb2fd84e894568966c54

Make most file paths configurable from CLI

committed 6 years ago

README

The README file for this repository.

Random failure analysis

The goal of this project is to make it possible to identify flaky tests by aggregating test failure data from RHBA Jenkins.

The project consists of 2 parts:

  • A Haskell program which
    • downloads test results of all unstable builds of jobs from the master pullrequests folder of RHBA Jenkins. For each failure it saves five items: job URL, test class name, test method name, stack trace, and build date (a sketch of such a record appears after this list).
    • searches the local filesystem for the path of each test class (starting from the folder where all kiegroup repositories are cloned) in order to provide links to the source on GitHub.
  • A single-page Elm application for browsing the data scraped by the above program. It is deployed at janhrcek.cz/random-failures/ and updated with new data on a weekly basis.
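
To make the shape of the scraped data concrete, here is a minimal sketch of what such a failure record could look like in Haskell, assuming an aeson-based JSON encoding. The module, type, and field names are illustrative assumptions, not necessarily the project's actual ones.

    {-# LANGUAGE DeriveGeneric #-}

    module TestFailure where

    import Data.Aeson (FromJSON, ToJSON)
    import Data.Text (Text)
    import Data.Time (UTCTime)
    import GHC.Generics (Generic)

    -- One scraped failure: the five items listed above.
    -- Field names are hypothetical; the real project may use different ones.
    data TestFailure = TestFailure
      { jobUrl     :: Text    -- URL of the Jenkins job whose build failed
      , testClass  :: Text    -- fully qualified test class name
      , testMethod :: Text    -- test method name
      , stackTrace :: Text    -- failure stack trace
      , buildDate  :: UTCTime -- date of the build
      } deriving (Eq, Show, Generic)

    instance ToJSON TestFailure
    instance FromJSON TestFailure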

Updating the report

RHBA Jenkins only archives the last 14 days of builds, so it's necessary to scrape test failure data periodically (roughly once a week). The data are persisted in a JSON file, which is then deployed to the gh-pages branch of this repository together with the interactive HTML application for browsing it.
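
The persistence step could then be roughly as simple as decoding the existing file, appending the freshly scraped failures, and writing it back. The following is a hedged sketch using aeson's encodeFile and decodeFileStrict together with the hypothetical TestFailure type from the sketch above; it is not the project's actual code.

    import Data.Aeson (decodeFileStrict, encodeFile)
    import Data.List (nub)
    import Data.Maybe (fromMaybe)

    import TestFailure (TestFailure)

    -- Hypothetical update step: merge newly scraped failures into the
    -- persisted JSON file, dropping exact duplicates.
    updateFailuresFile :: FilePath -> [TestFailure] -> IO ()
    updateFailuresFile path newFailures = do
      previous <- fromMaybe [] <$> decodeFileStrict path  -- assumes the file already exists
      encodeFile path (nub (previous ++ newFailures))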

The scraping process is automated; everything can be done by running ./cli.sh at the root of this project. The script will

  1. build and run the scraper program, which outputs all the failures into frontend/dist/failures.json
  2. build and run the front-end report
  3. copy the contents of frontend/dist to the root directory of this repo on the gh-pages branch
  4. push the updated report to the gh-pages branch
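
Since most file paths are configurable from the CLI (see the commit list above), the scraper presumably parses at least the output path before step 1 writes frontend/dist/failures.json. Below is a rough sketch of such an entry point using optparse-applicative; the flag name and default are assumptions based on this README, not the project's real interface.

    import Options.Applicative

    -- Hypothetical CLI options; only the output path is shown here.
    newtype Options = Options { outputFile :: FilePath }

    optionsParser :: Parser Options
    optionsParser = Options
      <$> strOption
            ( long "output"
           <> metavar "FILE"
           <> value "frontend/dist/failures.json"  -- default assumed from step 1 above
           <> showDefault
           <> help "File to which the scraped failures are written" )

    main :: IO ()
    main = do
      Options out <- execParser (info (optionsParser <**> helper) fullDesc)
      putStrLn ("Writing scraped failures to " ++ out)
      -- scraping and JSON persistence (e.g. updateFailuresFile above) would go here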