A comprehensive benchmark for general few-shot Named Entity Recognition (NER) evaluation.
- Covers both pre-trained model fine-tuning and advanced few-shot learning algorithms.
- Diverse text from news, reviews, social media, and dialogue.
- 11 datasets across 3 tracks: Public, Enterprise, and In-the-Wild.
- Designed to track advances in few-shot NER research.
The General Few-shot NER Evaluation benchmark is a collection of resources for training, evaluating, and analyzing systems that recognize named entities in text. It comprises 11 datasets spanning three tracks (Public, Enterprise, and In-the-Wild), with text drawn from news, reviews, social media, and dialogue.
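NER systems are conventionally scored with entity-level precision, recall, and F1 over predicted (type, span) pairs. The snippet below is a minimal sketch of such a scorer for BIO-tagged sequences; the function names (`extract_entities`, `entity_f1`) are illustrative, and the benchmark's official evaluation scripts may differ in detail.

```python
from typing import List, Set, Tuple


def extract_entities(tags: List[str]) -> Set[Tuple[str, int, int]]:
    """Collect (type, start, end) spans from one BIO tag sequence."""
    entities: Set[Tuple[str, int, int]] = set()
    start, etype = None, None
    for i, tag in enumerate(tags + ["O"]):  # trailing "O" sentinel flushes the last span
        label = None if tag == "O" else tag[2:]
        if tag.startswith("I-") and label == etype:
            continue  # current span continues
        if etype is not None:
            entities.add((etype, start, i))  # close the previous span
        start, etype = (i, label) if label is not None else (None, None)
    return entities


def entity_f1(gold: List[List[str]], pred: List[List[str]]) -> float:
    """Micro-averaged entity-level F1 over a corpus of tagged sentences."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        g_ents, p_ents = extract_entities(g), extract_entities(p)
        tp += len(g_ents & p_ents)
        fp += len(p_ents - g_ents)
        fn += len(g_ents - p_ents)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


gold = [["B-PER", "I-PER", "O", "B-LOC"]]
pred = [["B-PER", "I-PER", "O", "O"]]
print(round(entity_f1(gold, pred), 3))  # 0.667: one of the two gold entities is recovered
```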
The format of the benchmark is model-agnostic, so any system capable of processing natural language sentences and producing corresponding predictions is eligible to participate. The ultimate goal is to drive research in the development of general and efficient language understanding systems.
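As a concrete illustration of what "model-agnostic" means here, the sketch below defines a minimal prediction interface: tokenized sentences go in, one tag per token comes out. The `FewShotNERSystem` protocol and the `AllOutsideBaseline` placeholder are hypothetical names introduced for this example, not part of the benchmark's official tooling.

```python
from typing import List, Protocol


class FewShotNERSystem(Protocol):
    """Interface a participating system needs to expose:
    tokenized sentences in, one BIO tag per input token out."""

    def predict(self, sentences: List[List[str]]) -> List[List[str]]:
        ...


class AllOutsideBaseline:
    """Trivial placeholder that tags every token as 'O'. A real submission
    would put a fine-tuned pre-trained model or a few-shot learning
    algorithm behind the same predict() signature."""

    def predict(self, sentences: List[List[str]]) -> List[List[str]]:
        return [["O"] * len(sentence) for sentence in sentences]


if __name__ == "__main__":
    system: FewShotNERSystem = AllOutsideBaseline()
    print(system.predict([["Barack", "Obama", "visited", "Paris"]]))
    # -> [['O', 'O', 'O', 'O']]
```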
@inproceedings{huang2020few,
  title={Few-Shot Named Entity Recognition: A Comprehensive Study},
  author={Huang, Jiaxin and Li, Chunyuan and Subudhi, Krishan and Jose, Damien and Balakrishnan, Shobana and Chen, Weizhu and Peng, Baolin and Gao, Jianfeng and Han, Jiawei},
  booktitle={EMNLP},
  year={2021}
}