MuG-Bench

Data and code of the Findings of EMNLP'23 paper "MuG: A Multimodal Classification Benchmark on Game Data with Tabular, Textual, and Visual Fields". For any suggestions or questions, feel free to open an issue or email jiaying.lu@emory.edu.

Datasets

The eight datasets used in the paper can be downloaded from https://doi.org/10.6084/m9.figshare.21454413. After downloading and decompressing them under the ./datasets/ directory, its layout looks like:

πŸ“ ./dataset
|-- πŸ“ Pokemon-primary_type
|-- πŸ“ Pokemon-secondary_type
|-- πŸ“ Hearthstone-All-cardClass
|-- πŸ“ Hearthstone-All-set
|-- πŸ“ Hearthstone-Minion-race
|-- πŸ“ Hearthstone-Spell-spellSchool
|-- πŸ“ LeagueOfLegends-Skin-category
|-- πŸ“ CSGO-Skin-quality
|-- CHANGELOG

Each subdirectory represents one dataset; for instance:

πŸ“ ./dataset/Pokemon-primary_type
|-- info.json
|-- train.csv
|-- dev.csv
|-- test.csv
|-- train_images.zip
|-- dev_images.zip
|-- test_images.zip

where info.json stores the meta information of the dataset; train.csv, dev.csv, and test.csv store the raw tabular and textual features of each sample; and train_images.zip, dev_images.zip, and test_images.zip are compressed directories containing the raw images.
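
As a quick sanity check after decompression, a minimal sketch like the one below can load one split. It assumes only the directory layout shown above; the actual column names and image naming scheme are dataset-specific and documented in info.json, so treat the printed output as the source of truth.

# Minimal sketch for inspecting one dataset split.
# Assumes only the layout shown above; column names and image naming
# are dataset-specific (see info.json).
import json
import zipfile
from pathlib import Path

import pandas as pd

dataset_dir = Path("datasets/Pokemon-primary_type")

# Meta information of the dataset (e.g., label column, field types).
with open(dataset_dir / "info.json") as f:
    info = json.load(f)
print(info)

# Raw tabular and textual features of the training split.
train_df = pd.read_csv(dataset_dir / "train.csv")
print(train_df.shape, list(train_df.columns))

# Unpack the raw training images next to the CSVs.
with zipfile.ZipFile(dataset_dir / "train_images.zip") as zf:
    zf.extractall(dataset_dir / "train_images")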

Prerequisites

All dependencies are listed in conda_env.yml. We recommend using conda to manage the environment.

conda env create -n MuG_env --file conda_env.yml
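
After the environment is created, activate it before running any scripts:

conda activate MuG_env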

Reproduce Results

Example scripts for running the unimodal and multimodal classifiers are listed in run_baseline.sh.

For instance, we can use the following script to reproduce the proposed MuGNet model:

# Run the MuGNet model
python -m baselines.MuGNet.exec \
        --dataset_dir datasets/Pokemon_PrimaryType \
        --exp_save_dir exps/Pokemon_PrimaryType_mugnet \
        --fit_time_limit 28800 

where --dataset_dir specifies the dataset directory, --exp_save_dir specifies where the final outputs are stored, and --fit_time_limit specifies the time limit (in seconds) for model fitting.

Another example runs the GBM model:

python -m baselines.autogluon.exec \
        --dataset_dir datasets/Pokemon_PrimaryType \
        --exp_save_dir exps/Pokemon_PrimaryType_GBMLarge \
        --fit_time_limit 28800 \
        --fit_setting GBMLarge
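
To reproduce several settings in one go, a small driver can invoke the same entry point in a loop. The sketch below is a convenience illustration, not part of the released code: the dataset path mirrors the examples above, and the list of fit settings should be taken from run_baseline.sh.

# Hypothetical driver: sweep AutoGluon fit settings over one dataset.
# The dataset path follows the examples above; extend `settings` with
# the values actually used in run_baseline.sh.
import subprocess

dataset = "datasets/Pokemon_PrimaryType"
settings = ["GBMLarge"]

for setting in settings:
    subprocess.run(
        [
            "python", "-m", "baselines.autogluon.exec",
            "--dataset_dir", dataset,
            "--exp_save_dir", f"exps/Pokemon_PrimaryType_{setting}",
            "--fit_time_limit", "28800",
            "--fit_setting", setting,
        ],
        check=True,
    )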

Citing Our Work

@inproceedings{lu2023MuG,
  title     = {MuG: A Multimodal Classification Benchmark on Game Data with Tabular, Textual, and Visual Fields},
  author    = {Jiaying Lu and Yongchen Qian and Shifan Zhao and Yuanzhe Xi and Carl Yang},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2023},
  series    = {Findings-EMNLP'23},
  doi       = {10.18653/v1/2023.findings-emnlp.354},
  month     = {December},
  year      = {2023}
}
