
eco-viz

public | 2 stars | 0 forks | 0 issues

Commits

List of commits on branch master.
2dcd6aec472a679d61a1c0e4d6da04875b709783  Finish README (llisjin committed 7 years ago, unverified)
8bc8cc3e54c9619dca22bc16d512451c9e66bd01  Add preprocessing directions (llisjin committed 7 years ago, unverified)
d28d54caf42fe9cac39f53f26129cc7fb7cc7059  Clean up and fix jsonify bug (llisjin committed 7 years ago, unverified)
024d57044f132c8ab6b4473eda5619b42b88cf00  Fix bipartite x-axis bug, add curved edges (llisjin committed 7 years ago, unverified)
3fc5bd22746a917bf67d6a9342f8e58f821f7ef0  Ordered matrix for bipartite cores (llisjin committed 7 years ago, unverified)
d3084876985ad8e1b54d122bff6e8445007519d2  Local CSS and JS (llisjin committed 7 years ago, unverified)

README

The README file for this repository.

eco-viz

ECOviz is an end-to-end system for summarizing time-evolving networks in a domain-specific manner (i.e., by using the egonets of domain-specific nodes of interest), and interactively visualizing the summary output.

Web Application

After completing the requirements and installation process below, here is how to start the web app:

  1. Start ArangoDB (on Mac OS, the command is /usr/local/opt/arangodb/sbin/arangod &).
  2. Run the following in the repo's root.
source venv/bin/activate
python app.py
  3. Navigate to localhost:3000/tc-viz (ECOviz-Time) or localhost:3000/con-viz (ECOviz-Pair) in your browser (a sketch of these routes is shown after this list).
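
app.py is started in step 2 and serves the two routes visited in step 3. Below is a minimal sketch of what such a Flask app could look like (the jsonify fix in the commit history suggests Flask is used); the template names and the /summary endpoint are illustrative placeholders, not this repo's actual code.

from flask import Flask, render_template, jsonify

app = Flask(__name__)

@app.route('/tc-viz')
def tc_viz():
    # ECOviz-Time view (template name is a placeholder)
    return render_template('tc_viz.html')

@app.route('/con-viz')
def con_viz():
    # ECOviz-Pair view (template name is a placeholder)
    return render_template('con_viz.html')

@app.route('/summary/<subject_id>')
def summary(subject_id):
    # Illustrative JSON endpoint; the real app would pull the subject's
    # summary data from ArangoDB before returning it
    return jsonify({'subject': subject_id})

if __name__ == '__main__':
    app.run(port=3000)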

Requirements

  1. Make sure the required packages are installed. To create an isolated Python environment, install virtualenv. Then run the following in the repo's root.
# Create and activate the virtual environment
virtualenv venv --distribute
source venv/bin/activate


# Install all requirements
pip install -r requirements.txt
  2. Install ArangoDB by following the instructions for your OS.

Installation

Data Generation

We used a UM fMRI dataset that includes 61 subjects in two states: rest and mindful rest. Download the data here. Note that due to memory limitations of your machine, you will likely only be able to analyze a subset of the subjects' data at once.

In this step, the fMRI data will be converted into edge lists according to some threshold and time step parameters that you specify. Then you will import these edge lists into ArangoDB.

  1. The files are originally in <subject_id>_<MindfulRest|Rest>.csv format.
    1. Make two directories, m_rest_brains/ and rest_brains/.
    2. Convert the corresponding mindful/resting files to <subject_id>_roiTC.mat format (a conversion sketch follows this list).
  2. Clone the lisjin branch and follow the "Usage for New Data" instructions to generate edge lists from the fMRI data.
  3. Move all files generated in the previous step to the arango-scripts/edge_lists/ directory in this repo.
  4. Within arango-scripts/edge_lists, run sh conv_all.sh to convert the edge lists to JSON format. Once finished, run mv *.json .. to move the JSON files up one directory.
  5. Start ArangoDB (on Mac OS, the command is /usr/local/opt/arangodb/sbin/arangod &).
    1. Create a database by running arangosh and then db._createDatabase('tc-viz') within the shell.
  6. Run sh import.sh to import all JSON edge lists into your ArangoDB database.
    1. If you make a mistake, delete all graphs in ArangoDB by running sh drop.sh and try importing again.
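
The CSV-to-.mat conversion in step 1.2 can be scripted. The sketch below is only illustrative: it assumes each CSV is a plain comma-separated ROI time-course matrix, that the output variable should be named roiTC, and that the converted mindful-rest/rest files land in the two directories from step 1.1. Adjust these assumptions to whatever the lisjin branch expects.

import glob
import os

import numpy as np
from scipy.io import savemat

def csv_to_mat(csv_path, out_dir):
    # Subject ID is assumed to be everything before the first underscore
    subject_id = os.path.basename(csv_path).split('_')[0]
    # Assumed layout: a plain numeric ROI-by-time matrix in the CSV
    roi_tc = np.loadtxt(csv_path, delimiter=',')
    os.makedirs(out_dir, exist_ok=True)
    savemat(os.path.join(out_dir, subject_id + '_roiTC.mat'), {'roiTC': roi_tc})

# Route mindful-rest files to m_rest_brains/ and rest files to rest_brains/
for path in glob.glob('*_*.csv'):
    name = os.path.basename(path)
    csv_to_mat(path, 'm_rest_brains' if 'MindfulRest' in name else 'rest_brains')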

TimeCrunch

To summarize the time-evolving networks, we use a modified version of TimeCrunch (original paper by Shah et al.). You will run TimeCrunch on the time-evolving graph edge lists, then convert them to JSON format.

  1. Clone the lisjin-tc-egonet branch, which is TimeCrunch modified for egonet extraction, and follow the instructions.
    1. You will need the .txt (not JSON) edge lists generated in step 2 of the previous section.
  2. After running TimeCrunch, you will have one file ending with _greedy.tmodel for each temporal graph.
    1. Run the command mkdir preprocess/tmodels/ in this repo.
    2. Move all files ending in _greedy.tmodel from the lisjin-tc-egonet branch to preprocess/tmodels/.
  3. From within the preprocess/ directory of this repo, run sh parse_all.sh. (This will convert the .tmodel TimeCrunch output into JSON format, which is friendlier to the web app.)
  4. You will now have JSON files that are prefixed by a subject ID (e.g., MH01).
    1. For each subject, run the command mkdir data/<subject_id>/ in the root of this repo.
    2. Move all JSON files of a subject into its corresponding directory in data/ (e.g., MH01*.json will go to data/MH01/; a small helper script is sketched after this list).
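
Steps 4.1 and 4.2 can be done with a small helper like the one below. It assumes the JSON files are still in preprocess/ and that the subject ID is the part of the filename before the first underscore (e.g., MH01); adjust the paths if your layout differs.

import glob
import os
import shutil

for path in glob.glob('preprocess/*.json'):
    filename = os.path.basename(path)
    subject_id = filename.split('_')[0]        # e.g., 'MH01'
    dest_dir = os.path.join('data', subject_id)
    os.makedirs(dest_dir, exist_ok=True)       # mkdir data/<subject_id>/
    shutil.move(path, os.path.join(dest_dir, filename))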

You're done.
