Methods for Structured Knowledge Grounding
In progress
We present a collection of research papers related to structured knowledge grounding tasks.
sk-encoding: Exploring structured knowledge encoding methods (concatenation of text and structured knowledge, positional embedding design, manipulation inside Transformers, etc.) on structured knowledge grounding tasks; a minimal encoding sketch follows this list.
pre-training: Exploring pre-training (unsupervised training data sources, self-supervised tasks, etc.) on structured knowledge grounding tasks.
constrained-decoding: Exploring decoding methods (constrained decoding, etc.) on structured knowledge grounding tasks; a minimal decoding sketch follows this list.
unifying: Exploring the unification of structured knowledge grounding tasks.
prompt-learning: Exploring prompt-learning methods on structured knowledge grounding tasks.
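To make the sk-encoding tag concrete, here is a minimal sketch of its simplest variant: linearizing a table and concatenating it with the input text before feeding both to a language model. The `linearize_table` helper and the `col :` / `row i :` separators are illustrative assumptions; each listed paper (TaBERT, TAPAS, etc.) defines its own format.

```python
# Minimal sketch: encode structured knowledge by concatenating the NL input
# with a linearized table. The separators are illustrative, not any specific
# paper's exact format.

def linearize_table(table: dict) -> str:
    """Flatten a table into a token sequence: header first, then each row."""
    header = " | ".join(table["header"])
    rows = " ".join(
        "row {} : {}".format(i + 1, " | ".join(map(str, row)))
        for i, row in enumerate(table["rows"])
    )
    return f"col : {header} {rows}"

def build_input(question: str, table: dict) -> str:
    """Concatenate the question with the linearized table (sk-encoding)."""
    return f"{question} {linearize_table(table)}"

table = {
    "header": ["City", "Population"],
    "rows": [["Berlin", 3644826], ["Paris", 2140526]],
}
print(build_input("Which city has the larger population?", table))
# -> Which city has the larger population? col : City | Population row 1 : ...
```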
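Likewise for the constrained-decoding tag: a minimal sketch using the `prefix_allowed_tokens_fn` hook of Hugging Face transformers' `generate`. Systems such as PICARD and Synchromesh run full incremental grammar/schema checks during decoding; the static token whitelist below is a toy assumption that only shows where the constraint plugs in.

```python
# Minimal sketch of constrained decoding: at each step, restrict the decoder
# to a whitelist of token ids. Real systems consult a grammar/schema instead.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Toy whitelist of SQL-ish tokens (illustrative only).
allowed_ids = tokenizer(
    "SELECT COUNT ( * ) FROM WHERE city population", add_special_tokens=False
).input_ids + [tokenizer.eos_token_id]

def allowed_tokens(batch_id, input_ids):
    # Called at every decoding step; returns the ids valid as the next token.
    return allowed_ids

inputs = tokenizer("translate to SQL: how many cities?", return_tensors="pt")
output = model.generate(
    **inputs, prefix_allowed_tokens_fn=allowed_tokens, max_new_tokens=16
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```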
A Comprehensive Exploration on WikiSQL with Table-Aware Word Contextualization. NIPS-19 sk-encoding
K-BERT: Enabling Language Representation with Knowledge Graph. AAAI-20 sk-encoding
TAPAS: Weakly Supervised Table Parsing via Pre-training. ACL-20 sk-encoding pre-training
A Simple Language Model for Task-Oriented Dialogue. NIPS-20 sk-encoding
TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data. ACL-20 sk-encoding pre-training
HittER: Hierarchical Transformers for Knowledge Graph Embeddings. EMNLP-21 sk-encoding
GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing. ICLR-21 pre-training
SCoRe: Pre-Training for Context Representation in Conversational Semantic Parsing. ICLR-21 sk-encoding pre-training
Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training. NAACL-21 sk-encoding pre-training
Structure-Grounded Pretraining for Text-to-SQL. NAACL-21 pre-training
Understanding tables with intermediate pre-training. EMNLP-20 pre-training
KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation. EMNLP-20 sk-encoding pre-training unifying
UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering. NAACL-22 sk-encoding unifying
JAKET: Joint Pre-training of Knowledge Graph and Language Understanding. AAAI-22 sk-encoding pre-training
Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training. AAAI-21 pre-training
Table Fact Verification with Structure-Aware Transformer. ACL-20 sk-encoding
Structural Adapters in Pretrained Language Models for AMR-to-Text Generation. EMNLP-21 sk-encoding
Constrained Language Models Yield Few-Shot Semantic Parsers. EMNLP-21 sk-encoding constrained-decoding
Case-based Reasoning for Natural Language Queries over Knowledge Bases. EMNLP-21 prompt-learning
DoT: An efficient Double Transformer for NLP tasks with tables. ACL-21 sk-encoding
Database Reasoning Over Text. ACL-21 sk-encoding
Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills. ACL-22 pre-training
TAPEX: Table Pre-training via Learning a Neural SQL Executor. ICLR-22 pre-training
HTLM: Hyper-Text Pre-Training and Prompting of Language Models. arxiv-21 pre-training
MATE: Multi-view Attention for Table Transformer Efficiency. EMNLP-21 sk-encoding
RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering. ACL-22 prompt-learning
Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System. ACL-22 pre-training unifying
PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models. EMNLP-21 constrained-decoding
FORTAP: Using Formulae for Numerical-Reasoning-Aware Table Pretraining. ACL-22 pre-training
Learning To Retrieve Prompts for In-Context Learning. NAACL-HLT-22 prompt-learning
Multi-Instance Training for Question Answering Across Table and Linked Text. arxiv-21
Synchromesh: Reliable Code Generation from Pre-trained Language Models. ICLR-22 constrained-decoding prompt-learning
UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models. EMNLP-22 unifying
TableFormer: Robust Transformer Modeling for Table-Text Encoding. ACL-22 sk-encoding
Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models. arxiv-22 prompt-learning
In-Context Learning for Few-Shot Dialogue State Tracking. EMNLP-22 prompt-learning
T-RAG: End-to-End Table Question Answering via Retrieval-Augmented Generation. arxiv-22 sk-encoding
ArcaneQA: Dynamic Program Induction and Contextualized Encoding for Knowledge Base Question Answering. COLING-22 sk-encoding
SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for Task-Oriented Dialog Understanding. COLING-22 sk-encoding pre-training
Evaluating the Impact of Model Scale for Compositional Generalization in Semantic Parsing. EMNLP-22 unifying prompt-learning
TaCube: Pre-computing Data Cubes for Answering Numerical-Reasoning Questions over Tabular Data. EMNLP-22 sk-encoding pre-training
PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation. EMNLP-22 pre-training
Natural Language to Code Translation with Execution. arxiv-22 prompt-learning (execution-guided)
R2D2: Robust Data-to-Text with Replacement Detection. arxiv-22 sk-encoding
OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering. NAACL-22 pre-training
Dual-Channel Evidence Fusion for Fact Verification over Texts and Tables. NAACL-22 sk-encoding
Binding Language Models in Symbolic Languages. arxiv-22 prompt-learning