
random-failures


Commits

List of commits on branch master.
  • a2ce6619bc8398c6dd061164defa1b993f18863d: Bump to lts-13.1 (unverified, committed 6 years ago)
  • 99ccf9d684ed511250bf694bd653d31d69df1c08: Take kiegroup-dir and jenkins-folder from CLI (unverified, committed 6 years ago)
  • f8b6a50a5b374a15d586c4768a98835d603681c5: Update to latest elm dependencies (unverified, committed 6 years ago)
  • 1fa39f64f9e6daea89ea04fe12305e1e4a55cf64: Decompose BuildDurations (unverified, committed 6 years ago)
  • b6ad02c0fd3852bdf5f539d9308d3d30e6297287: Replace getMasterPrJobUrls by getJobsRecursively (unverified, committed 6 years ago)
  • 346ed332dd62b1ccde2e6ed9ab4b68b4dce2db33: Recursively retrieve jobs urls from directory url (unverified, committed 6 years ago)

README


Random failure analysis

The goal of this project is to make it possible to identify flaky tests by aggregating test failure data from RHBA Jenkins.

The project consists of two parts:

  • A Haskell program which
    • downloads the test results of all unstable builds of jobs from the master pull requests folder of RHBA Jenkins. For each failure it saves five items: job URL, test class name, test method name, stack trace, and build date (see the sketch after this list).
    • searches the local filesystem (starting from the folder where all kiegroup repositories are cloned) for the path of each test class, in order to provide links to the corresponding source on GitHub.
  • A single-page Elm application for browsing the data scraped by the program above. It is deployed at janhrcek.cz/random-failures/ and updated with new data on a weekly basis.
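
As a rough illustration of what the scraper records per failure (the module, field names, and types below are assumptions made for this sketch, not the project's actual definitions), the five items map naturally onto a record with a derived JSON encoding:

    {-# LANGUAGE DeriveGeneric #-}
    -- Illustrative sketch only; names and types are assumptions.
    module TestFailure where

    import Data.Aeson (ToJSON)
    import Data.Text (Text)
    import Data.Time (UTCTime)
    import GHC.Generics (Generic)

    data TestFailure = TestFailure
      { url        :: Text     -- URL of the Jenkins job whose build failed
      , testClass  :: Text     -- fully qualified name of the test class
      , testMethod :: Text     -- name of the failing test method
      , stackTrace :: Text     -- stack trace reported by the test runner
      , date       :: UTCTime  -- date of the build that produced the failure
      } deriving (Show, Generic)

    -- Generic deriving produces a JSON object keyed by the field names.
    instance ToJSON TestFailure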

Updating the report

RHBA Jenkins only archives the last 14 days of builds, so the test failure data needs to be scraped periodically (roughly once a week). The data is persisted in a JSON file, which is then deployed to the gh-pages branch of this repository together with the interactive HTML application for browsing it.
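
Given a ToJSON instance like the one sketched above, persisting the scraped data can be a one-liner with Aeson's encodeFile; the output path below matches the one the scraper writes to, while the helper name is hypothetical:

    import Data.Aeson (encodeFile)

    -- Hypothetical helper: write all scraped failures to the JSON file
    -- that the front-end report reads.
    writeFailures :: [TestFailure] -> IO ()
    writeFailures = encodeFile "frontend/dist/failures.json"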

The scraping process is automated: everything can be done by running ./cli.sh at the root of this project. The script will

  1. build and run the scraper program, which outputs all the failures into frontend/dist/failures.json
  2. build and run the front-end report
  3. copy the contents of frontend/dist to the root directory of this repo on the gh-pages branch
  4. push the updated report to the gh-pages branch