
Context-GEBC

Code for the LOVEU Challenge 2022 (Track 2: Generic Event Boundary Captioning). Our model takes the whole video clip as input and generates a caption for each time boundary in parallel. With this design, the model can learn the context of each time boundary, so potential boundary-boundary interactions can be modeled.
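
As a rough illustration of this design (a minimal sketch, not the repo's actual code; dimensions, layer counts, and names are assumptions), each learned boundary query is decoded against the full video features in one pass:

import torch
import torch.nn as nn

class ParallelBoundaryCaptioner(nn.Module):
    """Sketch: captions all boundaries of a clip in parallel."""
    def __init__(self, feat_dim=512, num_queries=50, vocab_size=10000):
        super().__init__()
        # One learned query per potential boundary (objq50-style).
        self.queries = nn.Parameter(torch.randn(num_queries, feat_dim))
        layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.caption_head = nn.Linear(feat_dim, vocab_size)

    def forward(self, video_feats):
        # video_feats: (batch, num_frames, feat_dim) for the whole clip.
        q = self.queries.unsqueeze(0).expand(video_feats.size(0), -1, -1)
        # Self-attention among queries models boundary-boundary interaction;
        # cross-attention over video_feats supplies whole-clip context.
        out = self.decoder(q, video_feats)
        return self.caption_head(out)  # per-boundary token logits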

Our method achieves a score of 72.84 on the test set, reaching 2nd place in the challenge. The technical report is available here.

Environment

Our code is adapted from the official implementation of PDVC; please see the original repo for environment preparation.

Data

We use CLIP to extract frame-level features and Omnivore to extract clip-level features. Both are extracted with this pipeline.
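
For reference, frame-level CLIP features can be extracted roughly as follows (a hedged sketch; the repo's pipeline, model variant, and frame sampling may differ):

import clip  # pip install git+https://github.com/openai/CLIP.git
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def extract_frame_features(frames):
    """frames: list of PIL.Image objects sampled from one video."""
    batch = torch.stack([preprocess(f) for f in frames]).to(device)
    with torch.no_grad():
        feats = model.encode_image(batch)  # (num_frames, 512)
    return feats.cpu().numpy()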

Then, put the extracted features under these two folders:

data/gebc/features/clip_gebc
data/gebc/omni_gebc

You can also directly download the officially provided features here, but remember to change visual_feature_folder and feature_dim in the config file accordingly.
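
For example, the relevant entries might look like this (the key names are those mentioned above; the values, and the exact config schema, are assumptions):

visual_feature_folder: data/gebc/features/clip_gebc
feature_dim: 512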

We use VinVL to extract region-level features. The region features of a video are saved as multiple .npy files, where each file contains the region features of one sampled frame. Merge the feature file paths into video_to_frame_index.json in the following format:

{
    "video_id": [
        "frame_1_feat.npy",
        "frame_2_feat.npy",
        ...     
    ],
    ...
}

Then put this file under data/gebc/.
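
A minimal sketch of building this index is shown below (the VinVL output layout under feature_root is a hypothetical assumption):

import json
import os
from collections import defaultdict

feature_root = "data/gebc/vinvl_features"  # assumed layout: <root>/<video_id>/<frame>.npy
index = defaultdict(list)
for video_id in sorted(os.listdir(feature_root)):
    video_dir = os.path.join(feature_root, video_id)
    for fname in sorted(os.listdir(video_dir)):
        if fname.endswith(".npy"):
            index[video_id].append(os.path.join(video_dir, fname))

with open("data/gebc/video_to_frame_index.json", "w") as f:
    json.dump(index, f, indent=4)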

Usage

Train

python train.py --cfg_path ${CONFIG_PATH} --gpu_id ${GPU_ID}
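
For example, to train the subject model on GPU 0 with the config listed in the table below:

python train.py --cfg_path cfgs/gebc/gebc_clip_omni_5e5_objq50_subject.yml --gpu_id 0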

Evaluation

python eval.py --eval_folder ${EVAL_FOLDER} \
 --gpu_id=${GPU_ID} \
 --eval_caption_file=${VAL_ANNO_FILE} \
 --eval_model_path=save/${EVAL_FOLDER}/model-best-dvc.pth \
 --eval_transformer_input_type gt_proposals \
 --eval_tool_version 2018_cider \
 --eval_batch_size ${EVAL_BATCHSIZE}

We train three models to generate the subject, before, and after captions; the corresponding config file and validation annotation file for each are listed below:

| Type | CONFIG_PATH | VAL_ANNO_FILE |
| --- | --- | --- |
| Subject | cfgs/gebc/gebc_clip_omni_5e5_objq50_subject.yml | data/gebc/valset_highest_f1_subject.json |
| Before | cfgs/gebc/gebc_clip_omni_5e5_objq50_before.yml | data/gebc/valset_highest_f1_before.json |
| After | cfgs/gebc/gebc_clip_omni_5e5_objq50_after.yml | data/gebc/valset_highest_f1_after.json |
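
For example, evaluating the subject model could look like the following, assuming train.py saved its checkpoint under save/subject_run (the folder name and batch size here are placeholders):

python eval.py --eval_folder subject_run \
 --gpu_id=0 \
 --eval_caption_file=data/gebc/valset_highest_f1_subject.json \
 --eval_model_path=save/subject_run/model-best-dvc.pth \
 --eval_transformer_input_type gt_proposals \
 --eval_tool_version 2018_cider \
 --eval_batch_size 16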

Acknowledgement

This repo is mainly based on PDVC. We thank the authors for their efforts.