Standardize BertGeneration model card #40250
base: main
Conversation
Thanks!
```diff
- The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using
- [`EncoderDecoderModel`] as proposed in [Leveraging Pre-trained Checkpoints for Sequence Generation
- Tasks](https://huggingface.co/papers/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
+ [BertGeneration](https://huggingface.co/papers/1907.12461) leverages pre-trained BERT checkpoints for sequence-to-sequence tasks using EncoderDecoderModel architecture.
```
Suggested change:
```diff
- [BertGeneration](https://huggingface.co/papers/1907.12461) leverages pre-trained BERT checkpoints for sequence-to-sequence tasks using EncoderDecoderModel architecture.
+ [BertGeneration](https://huggingface.co/papers/1907.12461) leverages pretrained BERT checkpoints for sequence-to-sequence tasks with the [`EncoderDecoderModel`] architecture. BertGeneration adapts the [`BERT`] for generative tasks.
```
```diff
- The abstract from the paper is the following:
+ BertGeneration adapts the powerful BERT encoder for generative tasks by using it in encoder-decoder architectures for tasks like summarization, translation, and text fusion. Think of it as taking BERT's deep understanding of language and teaching it to generate new text based on input context.
```
Suggested change:
```diff
- BertGeneration adapts the powerful BERT encoder for generative tasks by using it in encoder-decoder architectures for tasks like summarization, translation, and text fusion. Think of it as taking BERT's deep understanding of language and teaching it to generate new text based on input context.
```
```diff
- GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both
- encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation,
- Text Summarization, Sentence Splitting, and Sentence Fusion.*
+ You can find all the original BertGeneration checkpoints under the [BERT Generation](https://huggingface.co/models?search=bert-generation) collection.
```
Suggested change:
```diff
- You can find all the original BertGeneration checkpoints under the [BERT Generation](https://huggingface.co/models?search=bert-generation) collection.
+ You can find all the original BERT checkpoints under the [BERT](https://huggingface.co/collections/google/bert-release-64ff5e7a4be99045d1896dbc) collection.
```
```diff
- The model can be used in combination with the [`EncoderDecoderModel`] to leverage two pretrained BERT checkpoints for
- subsequent fine-tuning:
+ <hfoptions id="usage">
+ <hfoption id="Pipeline">
```
```python
import torch
from transformers import pipeline

pipeline = pipeline(
    task="text2text-generation",
    model="google/roberta2roberta_L-24_discofuse",
    torch_dtype=torch.float16,
    device=0
)
pipeline("Plants create energy through ")
```
```diff
- >>> # instantiate sentence fusion model
- >>> sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
- >>> tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
+ from transformers import BertGenerationEncoder, BertGenerationDecoder, BertTokenizer, EncoderDecoderModel
```
Let's show a pretrained example here:
```python
import torch
from transformers import EncoderDecoderModel, AutoTokenizer

model = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")

input_ids = tokenizer(
    "Plants create energy through ", add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
Suggested change:
```diff
- # Using transformers-cli for quick inference
- python -m transformers.models.bert_generation --model google/roberta2roberta_L-24_discofuse --input "This is the first sentence. This is the second sentence."
+ echo -e "Plants create energy through " | transformers run --task text2text-generation --model "google/roberta2roberta_L-24_discofuse" --device 0
```
```diff
- combination with [`EncoderDecoder`].
+ Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
+
+ The example below uses [BitsAndBytesConfig](../main_classes/quantization#transformers.BitsAndBytesConfig) to quantize the weights to 4-bit.
```
Suggested change:
```diff
- The example below uses [BitsAndBytesConfig](../main_classes/quantization#transformers.BitsAndBytesConfig) to quantize the weights to 4-bit.
+ The example below uses [BitsAndBytesConfig](../quantization/bitsandbytes) to quantize the weights to 4-bit.
```
```diff
+ ```python
+ from transformers import BertGenerationEncoder, BertTokenizer, BitsAndBytesConfig
```
Use the same code snippet with a pretrained model from the AutoModel example above.
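A minimal sketch of what that could look like, reusing the checkpoint from the AutoModel example; this assumes `quantization_config` is forwarded through `EncoderDecoderModel.from_pretrained` the same way it is for other models:

```python
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig, EncoderDecoderModel

# 4-bit quantization config to reduce memory (illustrative settings)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# reuse the pretrained checkpoint from the AutoModel example above
model = EncoderDecoderModel.from_pretrained(
    "google/roberta2roberta_L-24_discofuse",
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")

input_ids = tokenizer(
    "Plants create energy through ", add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```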
```diff
+ ## Notes
+
+ - BertGenerationEncoder and BertGenerationDecoder should be used in combination with EncoderDecoderModel for sequence-to-sequence tasks.
```
Suggested change:
```diff
- - BertGenerationEncoder and BertGenerationDecoder should be used in combination with EncoderDecoderModel for sequence-to-sequence tasks.
+ - [`BertGenerationEncoder`] and [`BertGenerationDecoder`] should be used in combination with [`EncoderDecoderModel`] for sequence-to-sequence tasks.
```
<add code example here from https://huggingface.co/docs/transformers/model_doc/bert-generation#usage-examples-and-tips>
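For reference, a sketch along the lines of the linked usage docs could look like this (checkpoint name and token ids as used on that page):

```python
from transformers import BertGenerationEncoder, BertGenerationDecoder, BertTokenizer, EncoderDecoderModel

# use BERT's [CLS] token (id 101) as BOS and [SEP] token (id 102) as EOS
encoder = BertGenerationEncoder.from_pretrained(
    "google-bert/bert-large-uncased", bos_token_id=101, eos_token_id=102
)
# add cross-attention layers and turn the checkpoint into a decoder
decoder = BertGenerationDecoder.from_pretrained(
    "google-bert/bert-large-uncased",
    add_cross_attention=True,
    is_decoder=True,
    bos_token_id=101,
    eos_token_id=102,
)
bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-large-uncased")
input_ids = tokenizer(
    "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
).input_ids
labels = tokenizer("This is a short summary", return_tensors="pt").input_ids

# compute the training loss for fine-tuning
loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
loss.backward()
```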
```diff
- result = tokenizer.decode(outputs[0])
- ```
+ ## Resources
```
You can remove this section
What does this PR do?
#36979
Updated the BertGeneration model card to follow the new standardized format. This standardizes the BertGeneration documentation to match the new template format requested in issue #36979.
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
Who can review?
@stevhliu - Documentation lead who is managing the model card standardization project.
Anyone in the community is free to review the PR once the tests have passed.