Multi-source Machine Learning for Natural Language Relation Extraction

Vision

We develop multi-source information extraction techniques, covering both relations and attributes, that are robust to limited labeled data and noise. We leverage semantic information and prompts for zero-shot relation extraction, and visual semantic information for multimodal, multi-feature few-shot relation extraction. We also develop n-ary cross-sentence relation extraction methods for both supervised and unsupervised settings. A small illustration of the prompt-based zero-shot direction follows.
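
As a concrete illustration of the prompt-based zero-shot direction, the sketch below scores verbalized relation templates against a sentence using an off-the-shelf NLI model. The model name, relation set, and templates are illustrative assumptions for this sketch, not the specific methods developed in our projects.

```python
# Minimal sketch of prompt-based zero-shot relation extraction via NLI.
# The model name and relation templates are illustrative assumptions,
# not the resources or methods used in the projects described above.
from transformers import pipeline

# Off-the-shelf NLI model reused as a zero-shot classifier.
nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Verbalized relation templates: each candidate relation becomes a hypothesis.
TEMPLATES = {
    "founded_by": "{obj} founded {subj}.",
    "headquartered_in": "{subj} is headquartered in {obj}.",
    "no_relation": "{subj} and {obj} are unrelated.",
}

def extract_relation(sentence: str, subj: str, obj: str) -> str:
    """Score each relation hypothesis against the sentence; return the best relation."""
    hypotheses = {t.format(subj=subj, obj=obj): r for r, t in TEMPLATES.items()}
    result = nli(sentence,
                 candidate_labels=list(hypotheses),
                 hypothesis_template="{}")  # use the templates as-is
    best_hypothesis = result["labels"][0]
    # Map the winning hypothesis back to its relation label.
    return hypotheses[best_hypothesis]

print(extract_relation(
    "Acme Corp, founded in 2001 by Jane Doe, is based in Berlin.",
    subj="Acme Corp", obj="Jane Doe"))
```

Because the relation inventory lives entirely in the prompt templates, new relation types can be added without any labeled examples, which is what makes this setup zero-shot.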

Funded Projects

Publications