The job application is split into two categories, described as follows:
All of this work was done in a Linux environment only; using it on macOS or Windows might require some different steps, so read carefully the steps about installing and configuring the applications.
A Command Line Interface (CLI) application to show info about some projects. This info is processed from data consumed from a Representational State Transfer (REST) Application Programming Interface (API) that accepts only the GET method, and its response is JSON (Unicode): an array of logs. This application needs to sort and analyze those logs, presenting the following:
- Last five tracebacks of all projects;
- Mean and standard deviation of the requests;
- Error and critical counter, grouped by hour and project.
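As a sketch of the statistics above, a minimal TypeScript helper could compute the mean and standard deviation over the logs' `request_duration` fields (the names `LogEntry` and `durationStats` are illustrative, not the project's actual identifiers):

```typescript
// Illustrative log shape: only the field needed for the statistics.
interface LogEntry {
  request_duration?: number;
}

function durationStats(logs: LogEntry[]): { mean: number; stdDev: number } {
  // Only logs that actually carry a request_duration count.
  const durations = logs
    .map((log) => log.request_duration)
    .filter((d): d is number => typeof d === "number");

  const n = durations.length;
  if (n === 0) return { mean: 0, stdDev: 0 };

  const mean = durations.reduce((sum, d) => sum + d, 0) / n;
  // Population standard deviation: square root of the mean squared deviation.
  const variance = durations.reduce((sum, d) => sum + (d - mean) ** 2, 0) / n;
  return { mean, stdDev: Math.sqrt(variance) };
}
```

For example, `durationStats([{ request_duration: 2 }, { request_duration: 4 }])` yields a mean of 3 and a standard deviation of 1.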
The API imposes the following rules:
- 30 requests per minute;
- 1200 logs per request, or the remaining difference when fewer are left.
The application must update the data every minute, taking the already consumed data into account. As discussed with the recruiter, I've decided to use the app's start time as the starting point for the requests.
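The request budget implied by these rules can be sketched in TypeScript (the helper names here are my own, not the project's):

```typescript
// API limits as stated above.
const LOGS_PER_REQUEST = 1200;
const REQUESTS_PER_MINUTE = 30;

// How many GET requests are needed to drain a given backlog of logs,
// at the cap of 1200 logs per request.
function requestsNeeded(pendingLogs: number): number {
  return Math.ceil(pendingLogs / LOGS_PER_REQUEST);
}

// Can that backlog be fetched within a single minute's budget of 30 requests?
function fitsInOneMinute(pendingLogs: number): boolean {
  return requestsNeeded(pendingLogs) <= REQUESTS_PER_MINUTE;
}
```

So up to 36,000 logs (30 × 1200) can be drained per minute; anything beyond that must wait for the next window.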
Host and authorization are inserted into the request header through a .env file, which must be located at the project's root folder in the following format:
HOSTNAME="endpoint-here"
AUTHORIZATION="api-key-here"
The idea behind this project isn't to have access to this information, but to show how it is used throughout the application.
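As a rough sketch of how those two values travel from the .env file into the app, a hand-rolled parser for that exact format could look like the following (a stand-in for a library such as dotenv, which a real project would more likely use; `loadEnv` is a made-up name):

```typescript
import { readFileSync } from "fs";

// Parse KEY="value" lines from a .env-style file into a plain object.
function loadEnv(path: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const line of readFileSync(path, "utf-8").split("\n")) {
    // Matches e.g. HOSTNAME="endpoint-here" (quotes optional).
    const match = line.match(/^(\w+)="?([^"]*)"?$/);
    if (match) vars[match[1]] = match[2];
  }
  return vars;
}
```

The resulting `HOSTNAME` and `AUTHORIZATION` entries are then what gets placed into the request headers below.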
The requests are handled through an HTTPS request with the following headers:
hostname: "endpoint-here"
authorization: "api-key-here"
Each entry in the API response has the following shape:
{
timestamp: number;
level: INFO | DEBUG | ERROR | CRITICAL (string);
project: string;
message: string;
response_code: number;
traceback: string (optional);
request_duration: number (optional);
}
As the API is consumed through a Node application -- and all JavaScript (JS) numbers are floats -- I've modified the response style presented here because, for processing purposes, it doesn't matter whether or not the numbers are float or int.
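The shape above translates directly into TS types the application can consume (a sketch; the project's actual type names may differ):

```typescript
// The four log levels listed in the response schema.
type LogLevel = "INFO" | "DEBUG" | "ERROR" | "CRITICAL";

// One entry of the response array; optional fields follow the schema above.
interface ApiLog {
  timestamp: number;
  level: LogLevel;
  project: string;
  message: string;
  response_code: number;
  traceback?: string;        // optional per the schema
  request_duration?: number; // optional per the schema
}
```

Since every numeric field is just `number` in TS, the float/int distinction mentioned above disappears at the type level as well.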
The interface is pretty simple to use; just follow the instructions listed there.
A series of JS questions. I've implemented some of them in TypeScript (TS) as a proof of concept, located in the tags folder. All of the answered questions asked in this project can be found in the tags.md file.
The answers are what the job description asked for. Even so, I've decided to implement them in Node so I can test them as automatically as possible; that way I'm not tied to always opening my browser to see whether or not they work.
Since the idea is to provide only the answers to the questions, I won't be attaching the website used as a dependency; it is configured in the .env file as:
URL="website-url-here"
The projects are written in Node and need npm to work. Once both are installed, just open the project directory and run the following command to install the dependencies:
npm install
If you run into errors related to package dependencies and want to know how to handle them, read the Security info.
Before running the projects, compile the files. Since they are written in TypeScript (TS) and Node runs JS, run the following command:
npm run build
To run the Dev project, just:
npm run dev
To run the Tags project, just:
npm run tags
To get an uglified version, just run:
npm run uglify
Plain and simple TS with the Microsoft linter standards as a base. As TS is used in both projects, the tsconfig.json file was configured to target only ECMAScript 6.
I've also added code review through Codacy.
Some functions have side effects; they are tagged with __ at the end, and those that are callbacks have it at the beginning.
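A small illustration of that naming convention (the function bodies here are made up for the example; only the underscore placement reflects the rule above):

```typescript
// Shared state mutated by the side-effecting function below.
const cache: string[] = [];

// Side-effecting function: trailing double underscore.
function appendToCache__(entry: string): void {
  cache.push(entry); // mutates the shared cache
}

// Callback: leading double underscore.
const __normalize = (entry: string): string => entry.toUpperCase();

appendToCache__(__normalize("error"));
```

The marker makes it obvious at a call site which functions touch shared state and which are pure callbacks.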
Tests are written with Jest through ts-jest and integrated with Travis CI and Codecov.
When running the tests, there's no need to build beforehand; only the TS files are needed.
To run all of the tests, you will need to set the following environment variables in the .env file:
MOCK_API="some-mock-api-endpoint"
MOCK_API_ERROR="some-invalid-mock-api-endpoint"
MOCK_KEY="some-mock-api-key-authorization"
To help with API mocking, mockapi is used. And alcula helped me calculate the expected mean and standard deviation for the mock data.
To run all tests, just:
npm test
If you run into errors related to package dependencies and want to know how to handle them, read the Security info.
I've added an integration with Snyk to ensure that none of my dependencies have reported bugs or vulnerabilities left unfixed before Continuous Integration (CI) runs, supporting Continuous Delivery (CD).
When Snyk reports errors or bugs that can be fixed, just follow the CLI command to fix them before running -- more info in their docs.
Just talk to me through an issue.
There's no versioning system being used here due to the ephemeral nature of this project.
After the interview they gave me some feedback, so all of the commits after July 18th are updates addressing it.
The Dev and Tags interviews were not on the same day, so each one is only updated after its own interview.
- Reduce the RAM memory consumption;
- Improve calculations.
I've decided to save all the consumed API data each hour; that way, the memory consumption won't be that great.
note: their example was if the application were running for five years ahahahah
To reduce calculations, I've decided to create a new variable storing the last calculated value. This was how it was done in the first versions of the application -- I don't actually remember "when" and "why" I changed the approach.
Talking to the interviewers, they said that the CLI's lack of carousel graphs wasn't a BIG problem, so the data that can't be shown won't be stored in RAM -- it will be saved to CSV files.
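The fix described above can be sketched as a small append-to-CSV helper (the file name, row shape, and function name are all assumptions for illustration, not the project's actual code):

```typescript
import { appendFileSync } from "fs";

// One hourly counter row, as in the error/critical-per-hour-per-project report.
interface HourlyCount {
  hour: string;
  project: string;
  errors: number;
  criticals: number;
}

// Append a row to the CSV archive instead of keeping it in RAM;
// the file is created on first write if it doesn't exist.
function archiveToCsv(path: string, row: HourlyCount): void {
  const line = `${row.hour},${row.project},${row.errors},${row.criticals}\n`;
  appendFileSync(path, line);
}
```

Rows that scroll out of the CLI's view get appended once and dropped from memory, so RAM usage stays bounded no matter how long the app runs.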
The interview still has to be done.
- Just me.
Like many Open-Source Software (OSS) projects, the MIT license is used; more about it in the LICENSE file.