Meet Few-shot NER!

A Comprehensive Benchmark for General Few-shot Named Entity Recognition Evaluation.

Why General Few-shot NER Evaluation?



Strong Baseline Methods

With both Pre-trained Model Fine-tuning and Advanced Few-shot Learning Algorithms
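For concreteness, here is a minimal sketch of the fine-tuning baseline, cast as standard token classification with the Hugging Face transformers library. The checkpoint name, label set, and example sentence are illustrative assumptions, not the benchmark's exact configuration, and the sketch shows the prediction step only (in the few-shot setting, the model would first be fine-tuned on the handful of labeled support examples).

# A minimal sketch of the pre-trained model fine-tuning baseline,
# cast as token classification. The checkpoint, label set, and
# sentence below are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]  # hypothetical tag set
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels)
)
# In the few-shot setting, fine-tune on the labeled support examples
# first; this sketch shows inference only.

tokens = ["Barack", "Obama", "visited", "Microsoft", "."]
enc = tokenizer(tokens, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    pred_ids = model(**enc).logits.argmax(-1)[0].tolist()

# Keep the prediction for the first sub-token of each word.
word_ids = enc.word_ids()
tags = [labels[pred_ids[i]]
        for i, w in enumerate(word_ids)
        if w is not None and word_ids[i - 1] != w]
print(list(zip(tokens, tags)))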




Diverse Language Domains

Diverse text content from News, Reviews, Social Media, and Dialogue



Multiple Evaluation Tracks over Representative Tasks

11 datasets across 3 tracks: Public, Enterprise, and In-the-Wild.



Leaderboard!

To track advances in Few-shot NER research.



What is General Few-shot NER Evaluation?


The General Few-shot NER Evaluation benchmark is a collection of resources for training, evaluating, and analyzing systems that recognize named entities in text. It consists of:

  • A benchmark of 11 tasks built on established datasets, selected to cover a diverse range of domains, degrees of difficulty, and task types
  • A public leaderboard for tracking performance on the benchmark

The format of the benchmark is model-agnostic, so any system that can process natural language sentences and produce corresponding predictions is eligible to participate. The ultimate goal is to drive research on general and efficient language understanding systems.
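Concretely, a participating system only needs to emit one tag per input token; entity-level scores are then computed over the predicted spans. The sketch below assumes BIO tagging and the seqeval scorer, both common conventions rather than tooling prescribed by the benchmark.

# A minimal sketch of the model-agnostic interface: a system maps each
# tokenized sentence to one tag per token, and entity-level F1 is
# computed over the predicted spans. The BIO-tagged example and the
# seqeval scorer are assumptions, not prescribed tooling.
from seqeval.metrics import classification_report, f1_score

gold = [["B-PER", "I-PER", "O", "B-ORG", "O"]]   # reference tags
pred = [["B-PER", "I-PER", "O", "O", "O"]]       # a system's output

print(f1_score(gold, pred))               # span-level F1
print(classification_report(gold, pred))  # per-type precision/recall/F1

Because scoring operates purely on tag sequences, any model, from a fine-tuned transformer to a prototype-based few-shot learner, can participate.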


Paper


Please cite our paper as follows if you use the benchmark or codebase.

@inproceedings{huang2020few,
  title={Few-Shot Named Entity Recognition: A Comprehensive Study},
  author={Huang, Jiaxin and Li, Chunyuan and Subudhi, Krishan and Jose, Damien and Balakrishnan, Shobana and Chen, Weizhu and Peng, Baolin and Gao, Jianfeng and Han, Jiawei},
  booktitle={EMNLP},
  year={2021}
}

Contact



Have any questions or suggestions? Feel free to reach out to us at https://github.com/few-shot-NER-benchmark.