evaporate (public): 484 stars, 46 forks, 12 issues

Commits

List of commits on branch main.
59eda5d34415f71ffb3a72bb902e1505d1f59e83 (Verified)
Merge pull request #32 from xinyi-zhao/main
simran-arora committed 10 months ago

72ce4885401b674bb28db865d3e91905666aa7bb (Unverified)
modify demo
xinyi-zhao committed 10 months ago

35cefa5621b316339b6c048e099c04fd6fc1aefe (Unverified)
arrange the code to demo
xinyi-zhao committed 10 months ago

f0f1a6f6890fc61256efa4e7ed09f9a0984d4b4c (Unverified)
light cleanup
simran-arora committed a year ago

da3df938dfd73e77eb66b7206f1cc9a41ddb129a (Unverified)
update notebook
simran-arora committed a year ago

a25623068ac139c63e2ed766e594327a7fcdac55 (Verified)
Merge pull request #30 from xinyi-zhao/main
simran-arora committed a year ago

README

The README file for this repository.

Evaporate

Evaporate diagram

Code, datasets, and extended writeup for the paper Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes.

Setup

We encourage the use of conda environments:

conda create --name evaporate python=3.8
conda activate evaporate

Clone as follows:

# Evaporate code
git clone git@github.com:HazyResearch/evaporate.git
cd evaporate
pip install -e .

# Weak supervision code
cd metal-evap
git submodule init
git submodule update
pip install -e .

# Manifest (install from source if you want to modify the set of supported models; otherwise, ``setup.py`` installs ``manifest-ml``)
git clone git@github.com:HazyResearch/manifest.git
cd manifest
pip install -e .

Datasets

The data used in the paper is hosted on Hugging Face's datasets platform: https://huggingface.co/datasets/hazyresearch/evaporate.

To download the datasets, run the following commands in your terminal:

git lfs install
git clone https://huggingface.co/datasets/hazyresearch/evaporate

Or download it via Python:

from datasets import load_dataset
dataset = load_dataset("hazyresearch/evaporate")

The code expects the data to be stored at /data/evaporate/, as specified in CONSTANTS in constants.py, though this path can be modified.
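
If you would rather pull the raw dataset files directly into that location, here is a minimal Python sketch using huggingface_hub's snapshot_download (an assumption: it requires a recent huggingface_hub release with local_dir support, and the target path is only an example; it mirrors the git clone above without needing git lfs):

# Download the dataset files straight into the directory the code expects.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="hazyresearch/evaporate",
    repo_type="dataset",
    local_dir="/data/evaporate",  # adjust together with CONSTANTS in constants.py
)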

Running the code

Run closed IE and open IE using the commands:

bash run.sh

The keys in run.sh can be obtained by registering with the LLM provider. For instance, to run inference with the OpenAI API models, create an OpenAI account and generate an API key.
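
As a quick sanity check that your key is picked up, the sketch below uses the Manifest client installed in Setup. This is only an assumption-laden example: the constructor arguments (client_name, client_connection) and the run method are taken from the upstream HazyResearch/manifest README, so adjust them to match your installed version.

# Hedged sketch: verify the OpenAI key works through Manifest before launching run.sh.
import os
from manifest import Manifest

manifest = Manifest(
    client_name="openai",                            # OpenAI completion client
    client_connection=os.environ["OPENAI_API_KEY"],  # the key you registered for
)
print(manifest.run("Say OK."))                       # trivial prompt to confirm connectivity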

The script includes commands for both closed and open IE runs. To walk through the code, look at run_profiler.py. For open IE, the code first uses schema_identification.py to generate a list of attributes for the schema. Next, the code iterates through this list to perform extraction using profiler.py. As functions are generated in profiler.py, evaluate_profiler.py is used to score the function outputs against the outputs of directly prompting the LM on the sample documents.
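
The sketch below summarizes that open IE flow in Python. It is illustrative only: the helper callables stand in for the logic in schema_identification.py, profiler.py, and evaluate_profiler.py, and none of the names correspond to the repository's actual APIs.

# Illustrative sketch of the open IE flow; all names are placeholders.
from typing import Callable, Dict, Iterable, List

def open_ie(
    sample_docs: List[str],
    all_docs: Iterable[str],
    identify_schema: Callable[[List[str]], List[str]],               # schema_identification.py
    generate_functions: Callable[[str, List[str]], List[Callable]],  # profiler.py: synthesize extractors
    prompt_lm: Callable[[str, str], str],                            # direct LM extraction on a document
    score_function: Callable[[Callable, List[str], Dict[str, str]], float],  # evaluate_profiler.py
) -> Dict[str, Dict[str, str]]:
    table: Dict[str, Dict[str, str]] = {}
    for attribute in identify_schema(sample_docs):
        # Generate candidate extraction functions for this attribute.
        candidates = generate_functions(attribute, sample_docs)
        # Score each candidate against the LM's direct extractions on the sample documents.
        gold = {doc: prompt_lm(doc, attribute) for doc in sample_docs}
        best = max(candidates, key=lambda fn: score_function(fn, sample_docs, gold))
        # Apply the selected function across the full set of documents.
        table[attribute] = {doc: best(doc) for doc in all_docs}
    return table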

Citation

If you use this codebase or otherwise find our work valuable, please cite:

@article{arora2023evaporate,
  title={Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes},
  author={Arora, Simran and Yang, Brandon and Eyuboglu, Sabri and Narayan, Avanika and Hojel, Andrew and Trummer, Immanuel and R\'e, Christopher},
  journal={arXiv:2304.09433},
  year={2023}
}