Standardize BertGeneration model card #40250


Open · wants to merge 1 commit into main

Conversation

nemitha2005

@nemitha2005 nemitha2005 commented Aug 18, 2025

What does this PR do?

#36979

Updated the BertGeneration model card to follow the new standardized format including:

  • New consistent layout with badges
  • Friendly description written for accessibility
  • Usage examples with Pipeline, AutoModel, and transformers-cli
  • Quantization example with BitsAndBytesConfig
  • Updated resources section with proper links

This standardizes the BertGeneration documentation to match the new template format requested in issue #36979.

Before submitting

Who can review?

@stevhliu - Documentation lead who is managing the model card standardization project.

Anyone in the community is free to review the PR once the tests have passed.

@stevhliu stevhliu (Member) left a comment:

Thanks!

The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using
[`EncoderDecoderModel`] as proposed in [Leveraging Pre-trained Checkpoints for Sequence Generation
Tasks](https://huggingface.co/papers/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
[BertGeneration](https://huggingface.co/papers/1907.12461) leverages pre-trained BERT checkpoints for sequence-to-sequence tasks using EncoderDecoderModel architecture.
stevhliu (Member):

Suggested change
[BertGeneration](https://huggingface.co/papers/1907.12461) leverages pre-trained BERT checkpoints for sequence-to-sequence tasks using EncoderDecoderModel architecture.
[BertGeneration](https://huggingface.co/papers/1907.12461) leverages pretrained BERT checkpoints for sequence-to-sequence tasks with the [`EncoderDecoderModel`] architecture. BertGeneration adapts [`BERT`] for generative tasks.


The abstract from the paper is the following:
BertGeneration adapts the powerful BERT encoder for generative tasks by using it in encoder-decoder architectures for tasks like summarization, translation, and text fusion. Think of it as taking BERT's deep understanding of language and teaching it to generate new text based on input context.
stevhliu (Member):

Suggested change
BertGeneration adapts the powerful BERT encoder for generative tasks by using it in encoder-decoder architectures for tasks like summarization, translation, and text fusion. Think of it as taking BERT's deep understanding of language and teaching it to generate new text based on input context.

*...GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation, Text Summarization, Sentence Splitting, and Sentence Fusion.*
You can find all the original BertGeneration checkpoints under the [BERT Generation](https://huggingface.co/models?search=bert-generation) collection.
stevhliu (Member):

Suggested change
You can find all the original BertGeneration checkpoints under the [BERT Generation](https://huggingface.co/models?search=bert-generation) collection.
You can find all the original BERT checkpoints under the [BERT](https://huggingface.co/collections/google/bert-release-64ff5e7a4be99045d1896dbc) collection.

The model can be used in combination with the [`EncoderDecoderModel`] to leverage two pretrained BERT checkpoints for
subsequent fine-tuning:
<hfoptions id="usage">
<hfoption id="Pipeline">

```python
import torch
from transformers import pipeline

pipeline = pipeline(
    task="text2text-generation",
    model="google/roberta2roberta_L-24_discofuse",
    torch_dtype=torch.float16,
    device=0
)
pipeline("Plants create energy through ")
```

```python
>>> # instantiate sentence fusion model
>>> sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
>>> tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
from transformers import BertGenerationEncoder, BertGenerationDecoder, BertTokenizer, EncoderDecoderModel
```
stevhliu (Member):

Let's show a pretrained example here:

```python
import torch
from transformers import EncoderDecoderModel, AutoTokenizer

model = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")

input_ids = tokenizer(
    "Plants create energy through ", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

Comment on lines +79 to +80
```bash
# Using transformers-cli for quick inference
python -m transformers.models.bert_generation --model google/roberta2roberta_L-24_discofuse --input "This is the first sentence. This is the second sentence."
```
stevhliu (Member):

Suggested change
# Using transformers-cli for quick inference
python -m transformers.models.bert_generation --model google/roberta2roberta_L-24_discofuse --input "This is the first sentence. This is the second sentence."
echo -e "Plants create energy through " | transformers run --task text2text-generation --model "google/roberta2roberta_L-24_discofuse" --device 0

...combination with [`EncoderDecoder`].
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [BitsAndBytesConfig](../main_classes/quantization#transformers.BitsAndBytesConfig) to quantize the weights to 4-bit.
stevhliu (Member):

Suggested change
The example below uses [BitsAndBytesConfig](../main_classes/quantization#transformers.BitsAndBytesConfig) to quantize the weights to 4-bit.
The example below uses [BitsAndBytesConfig](../quantization/bitsandbytes) to quantize the weights to 4-bit.


```python
from transformers import BertGenerationEncoder, BertTokenizer, BitsAndBytesConfig
# ...
```
stevhliu (Member):

Use the same code snippet with a pretrained model from the AutoModel example above
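
A minimal sketch along those lines, assuming `EncoderDecoderModel.from_pretrained` forwards `quantization_config` to the base `from_pretrained` loader; the checkpoint matches the pretrained example above, and the 4-bit settings are illustrative:

```python
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig, EncoderDecoderModel

# 4-bit NF4 quantization with bfloat16 compute
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# assumes EncoderDecoderModel accepts quantization_config like other PreTrainedModel loaders
model = EncoderDecoderModel.from_pretrained(
    "google/roberta2roberta_L-24_discofuse",
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")

input_ids = tokenizer(
    "Plants create energy through ", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```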


## Notes

- BertGenerationEncoder and BertGenerationDecoder should be used in combination with EncoderDecoderModel for sequence-to-sequence tasks.
stevhliu (Member):

Suggested change
- BertGenerationEncoder and BertGenerationDecoder should be used in combination with EncoderDecoderModel for sequence-to-sequence tasks.
- [`BertGenerationEncoder`] and [`BertGenerationDecoder`] should be used in combination with [`EncoderDecoderModel`] for sequence-to-sequence tasks.
<add code example here from https://huggingface.co/docs/transformers/model_doc/bert-generation#usage-examples-and-tips>
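
A sketch adapted from that linked usage example: it wires [`BertGenerationEncoder`] and [`BertGenerationDecoder`] into an [`EncoderDecoderModel`], reusing BERT's [CLS] (101) and [SEP] (102) token ids as BOS/EOS:

```python
from transformers import BertGenerationDecoder, BertGenerationEncoder, BertTokenizer, EncoderDecoderModel

# use BERT's [CLS] token (101) as BOS and [SEP] token (102) as EOS
encoder = BertGenerationEncoder.from_pretrained(
    "google-bert/bert-large-uncased", bos_token_id=101, eos_token_id=102
)
# add cross-attention layers and enable causal masking for the decoder
decoder = BertGenerationDecoder.from_pretrained(
    "google-bert/bert-large-uncased",
    add_cross_attention=True,
    is_decoder=True,
    bos_token_id=101,
    eos_token_id=102,
)
bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-large-uncased")

input_ids = tokenizer(
    "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
).input_ids
labels = tokenizer("This is a short summary", return_tensors="pt").input_ids

# compute the seq2seq loss for fine-tuning
loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
loss.backward()
```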

```python
# ...
result = tokenizer.decode(outputs[0])
```

## Resources
stevhliu (Member):

You can remove this section
