simple-evals


Overview

This repository contains a lightweight library for evaluating language models. We are open sourcing it so we can be transparent about the accuracy numbers we're publishing alongside our latest models.

Benchmark Results

| Model | Prompt | MMLU | GPQA | MATH | HumanEval | MGSM¹ | DROP¹ (F1, 3-shot) |
|---|---|---|---|---|---|---|---|
| o1 | | | | MATH-500² | | | |
| o1-preview | n/a³ | 90.8 | 73.3 | 85.5 | 92.4 | 90.8 | 74.8 |
| o1-mini | n/a | 85.2 | 60.0 | 90.0 | 92.4 | 89.9 | 83.9 |
| o1 (work in progress) | n/a | 92.3 | 77.3 | 94.8 | n/a | n/a | n/a |
| GPT-4o | | | | | | | |
| gpt-4o-2024-08-06 | assistant⁴ | 88.7 | 53.1 | 75.9 | 90.2 | 90.0 | 79.8 |
| gpt-4o-2024-05-13 | assistant | 87.2 | 49.9 | 76.6 | 91.0 | 89.9 | 83.7 |
| gpt-4o-mini-2024-07-18 | assistant | 82.0 | 40.2 | 70.2 | 87.2 | 87.0 | 79.7 |
| GPT-4 Turbo and GPT-4 | | | | | | | |
| gpt-4-turbo-2024-04-09 | assistant | 86.7 | 49.3 | 73.4 | 88.2 | 89.6 | 86.0 |
| gpt-4-0125-preview | assistant | 85.4 | 41.4 | 64.5 | 86.6 | 85.1 | 81.5 |
| gpt-4-1106-preview | assistant | 84.7 | 42.5 | 64.3 | 83.7 | 87.1 | 83.2 |
| Other Models (Reported) | | | | | | | |
| Claude 3.5 Sonnet | unknown | 88.3 | 59.4 | 71.1 | 92.0 | 91.6 | 87.1 |
| Claude 3 Opus | unknown | 86.8 | 50.4 | 60.1 | 84.9 | 90.7 | 83.1 |
| Llama 3.1 405b | unknown | 88.6 | 50.7 | 73.8 | 89.0 | 91.6 | 84.8 |
| Llama 3.1 70b | unknown | 82.0 | 41.7 | 68.0 | 80.5 | 86.9 | 79.6 |
| Llama 3.1 8b | unknown | 68.4 | 30.4 | 51.9 | 72.6 | 68.9 | 59.5 |
| Grok 2 | unknown | 87.5 | 56.0 | 76.1 | 88.4 | n/a | n/a |
| Grok 2 mini | unknown | 86.2 | 51.0 | 73.0 | 85.7 | n/a | n/a |
| Gemini 1.0 Ultra | unknown | 83.7 | n/a | 53.2 | 74.4 | 79.0 | 82.4 |
| Gemini 1.5 Pro | unknown | 81.9 | n/a | 58.5 | 71.9 | 88.7 | 78.9 |
| Gemini 1.5 Flash | unknown | 77.9 | 38.6 | 40.9 | 71.5 | 75.5 | 78.4 |

Background

Evals are sensitive to prompting, and there's significant variation in the formulations used in recent publications and libraries. Some use few-shot prompts or role-playing prompts ("You are an expert software programmer..."). These approaches are carryovers from evaluating base models (rather than instruction/chat-tuned models) and from models that were worse at following instructions.

For this library, we are emphasizing the zero-shot, chain-of-thought setting, with simple instructions like "Solve the following multiple choice problem". We believe that this prompting technique is a better reflection of the models' performance in realistic usage.
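For a concrete sense of what this setting looks like, below is a minimal sketch of a zero-shot, chain-of-thought query against the OpenAI chat API. The instruction wording and example question are illustrative assumptions, not the exact templates used by the evals in this repository.

```python
# Illustrative zero-shot, chain-of-thought query: no few-shot examples and no
# role-playing system prompt, just a plain instruction in the user message.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Solve the following multiple choice problem. "
    "Think step by step, then end with your final answer as a single letter.\n\n"
    "Which planet is closest to the Sun?\n"
    "(A) Venus  (B) Mercury  (C) Mars  (D) Earth"
)

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```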

We will not be actively maintaining this repository or monitoring PRs and Issues. In particular, we're not accepting new evals. Here are the changes we might accept:

  • Bug fixes (hopefully not needed!)
  • Adding adapters for new models
  • Adding new rows to the table above with eval results, given new models and new system prompts.

This repository is NOT intended as a replacement for https://github.com/openai/evals, which is designed to be a comprehensive collection of a large number of evals.

Evals

This repository currently contains the following evals:

  • MMLU: Measuring Massive Multitask Language Understanding
  • MATH: Measuring Mathematical Problem Solving (MATH-500 for o1 models)
  • GPQA: A Graduate-Level Google-Proof Q&A Benchmark
  • DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs
  • MGSM: Multilingual Grade School Math Benchmark
  • HumanEval: Evaluating Large Language Models Trained on Code (Python programming)

Samplers

We have implemented sampling interfaces for the following language model APIs:

  • OpenAI
  • Anthropic (Claude)

Make sure to set the *_API_KEY environment variables before using these APIs.
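To illustrate what an adapter for a new model API tends to look like, here is a rough sketch of a chat-completion sampler: a callable that accepts a list of chat messages and returns the model's text response. The class name and method signature below are assumptions for illustration, not this repository's actual sampler interface.

```python
# Hypothetical sampler adapter: takes a list of chat messages and returns the
# model's text completion. The real samplers in this repository may use a
# different base class and signature; this only illustrates the shape.
from openai import OpenAI


class ExampleChatCompletionSampler:
    def __init__(self, model: str = "gpt-4o-mini", system_message: str | None = None):
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.model = model
        self.system_message = system_message

    def __call__(self, message_list: list[dict[str, str]]) -> str:
        # Prepend the system message (e.g. "You are a helpful assistant.") if one is set.
        if self.system_message:
            message_list = [{"role": "system", "content": self.system_message}] + message_list
        response = self.client.chat.completions.create(
            model=self.model,
            messages=message_list,
        )
        return response.choices[0].message.content
```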

Setup

Due to the optional dependencies, we're not providing a unified setup mechanism. Instead, we're providing instructions for each eval and sampler.

For HumanEval (Python programming):

git clone https://github.com/openai/human-eval
pip install -e human-eval

For the OpenAI API:

pip install openai

For the Anthropic API:

pip install anthropic

Demo

python -m simple-evals.demo

This will launch evaluations through the OpenAI API.
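If you want to drive a single eval from your own script instead of the demo, the pattern is roughly the sketch below. The module and class names (mmlu_eval.MMLUEval, sampler.chat_completion_sampler.ChatCompletionSampler) and the constructor arguments are assumptions about this repository's layout; check demo.py for the actual imports and usage.

```python
# Rough sketch of running one eval programmatically, assuming you run this
# from inside the cloned repository. The imports, class names, and keyword
# arguments below are assumptions -- mirror what demo.py actually does.
from mmlu_eval import MMLUEval
from sampler.chat_completion_sampler import ChatCompletionSampler

sampler = ChatCompletionSampler(model="gpt-4o-mini")  # requires OPENAI_API_KEY
mmlu = MMLUEval(num_examples=20)  # small subset for a quick smoke test
result = mmlu(sampler)            # returns an aggregate result with per-metric scores
print(result.score, result.metrics)
```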

Notes

Legal Stuff

By contributing to evals, you are agreeing to make your evaluation logic and data available under the same MIT license as this repository. You must have adequate rights to upload any data used in an eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI evals will be subject to our usual Usage Policies: https://platform.openai.com/docs/usage-policies.

  1. We believe these evals are saturated for our newer models, but are reporting them for completeness.

  2. For o1 models, we evaluate on MATH-500, which is a newer, IID version of MATH.

  3. o1 models do not support using a system prompt.

  4. assistant refers to the system message in the OpenAI API docs: "You are a helpful assistant."