Paper

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TL;DR

BERT introduces a deep bidirectional Transformer encoder pre-trained with Masked Language Modeling and Next Sentence Prediction to learn general-purpose language representations. By pre-training on unlabeled text and then fine-tuning with just one additional output layer, it achieves state-of-the-art results across GLUE, SQuAD, and SWAG, demonstrating strong transfer to both sentence-level and token-level tasks. The work shows that bidirectional context and the joint pre-training tasks are key drivers of performance, and that model size and the amount of pre-training data both have a significant impact on results. This approach establishes a general, versatile pre-training recipe that reduces the need for task-specific architecture engineering in NLP.
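
As a rough illustration of the fine-tuning setup described above, the sketch below adds a single classification layer on top of a pretrained encoder and trains everything end to end. This is a minimal sketch, not the paper's released code: it assumes PyTorch and the Hugging Face `transformers` library, and the checkpoint name `bert-base-uncased`, the label count, and the example sentences are illustrative choices.

```python
# Sketch: fine-tuning BERT for sentence classification with one added output layer.
# Assumes PyTorch + Hugging Face `transformers` (not the paper's original TensorFlow code).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertForSentenceClassification(nn.Module):
    def __init__(self, num_labels: int = 2):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        # The only task-specific parameters: a single linear layer over the [CLS] vector.
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_vector = outputs.last_hidden_state[:, 0]   # representation of the [CLS] token
        return self.classifier(cls_vector)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSentenceClassification(num_labels=2)
batch = tokenizer(["a great movie", "a dull movie"], padding=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1, 0]))
loss.backward()   # gradients flow into every pretrained layer: full fine-tuning
```

The same pattern extends to token-level tasks by replacing the [CLS]-based head with a per-token linear layer over `last_hidden_state`.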

Abstract

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
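
To make the pre-training objective mentioned in the abstract concrete, the sketch below constructs a masked-language-modeling training example. The 15% masking rate and the 80%/10%/10% replacement scheme are taken from the paper; the special-token IDs and vocabulary size follow the released uncased WordPiece vocabulary, and the helper name `mask_tokens` is our own illustration.

```python
# Sketch: building a masked-language-modeling example (not the paper's released code).
import random

VOCAB_SIZE = 30522                         # size of BERT's uncased WordPiece vocabulary
CLS_ID, SEP_ID, MASK_ID = 101, 102, 103    # special-token IDs in the released vocab
IGNORE = -100                              # label value excluded from the MLM loss

def mask_tokens(token_ids, mask_prob=0.15):
    """Return (masked inputs, labels); labels stay IGNORE except at masked positions."""
    inputs, labels = list(token_ids), [IGNORE] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if tok in (CLS_ID, SEP_ID) or random.random() >= mask_prob:
            continue
        labels[i] = tok                      # the model must predict the original token here
        r = random.random()
        if r < 0.8:                          # 80%: replace with [MASK]
            inputs[i] = MASK_ID
        elif r < 0.9:                        # 10%: replace with a random token
            inputs[i] = random.randrange(VOCAB_SIZE)
        # remaining 10%: keep the original token unchanged
    return inputs, labels

# Toy usage: a short [CLS] ... [SEP] sequence with made-up WordPiece IDs.
example = [CLS_ID, 2023, 3185, 2001, 2307, SEP_ID]
masked_inputs, mlm_labels = mask_tokens(example)
print(masked_inputs, mlm_labels)
```

During pre-training this MLM loss is combined with the binary Next Sentence Prediction loss computed from the [CLS] representation over sentence pairs.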