News Report Technology
January 24, 2023

GLIGEN: a new frozen text-to-image generation model with bounding-box grounding

In Brief

GLIGEN, or Grounded-Language-to-Image Generation, is a novel technique that builds on and extends the capability of current pre-trained diffusion models.

Given a caption and bounding boxes as conditioning inputs, GLIGEN generates open-world grounded text-to-image output.

GLIGEN can generate a variety of objects in specific places and styles by leveraging knowledge from a pre-trained text-to-image model.

GLIGEN can also ground human keypoints during text-to-image generation.

Large-scale text-to-image diffusion models have come a long way. However, the current practice is to rely solely on text input, which can limit controllability. GLIGEN, or Grounded-Language-to-Image Generation, is a novel technique that builds on and extends the capability of current pre-trained text-to-image diffusion models by allowing them to be conditioned on grounding inputs.


To preserve the pre-trained model’s extensive concept knowledge, the developers freeze all of its weights and inject the grounding information into new trainable layers through a gated mechanism. Given a caption and bounding boxes as conditioning inputs, GLIGEN generates open-world grounded text-to-image output, and its grounding ability generalizes well to novel spatial configurations and concepts.
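The gated injection described above can be sketched in a few lines. This is a minimal illustration, not GLIGEN's actual code: `gated_residual` and its arguments are hypothetical names, and the key idea is that the new branch is scaled by tanh of a learnable gate initialized to zero, so the frozen model's behavior is untouched at the start of training.

```python
import math

def gated_residual(x, new_branch, gamma):
    # Gated injection sketch: the new trainable layer's output is scaled
    # by tanh(gamma). With gamma initialized to 0, tanh(0) = 0, so the
    # frozen model's output passes through unchanged at initialization;
    # as gamma is trained, the grounding branch's influence grows.
    scale = math.tanh(gamma)
    return [xi + scale * bi for xi, bi in zip(x, new_branch)]

x = [1.0, 2.0]            # hidden state from a frozen transformer block
new_branch = [0.5, -0.5]  # output of the new grounding layer

print(gated_residual(x, new_branch, 0.0))  # [1.0, 2.0] — identical to x at init
```

Because the gate starts closed, training only gradually blends grounding information into the frozen backbone.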

Check out the demo here.

GLIGEN is based on existing pre-trained diffusion models, whose original weights are frozen to retain their massive store of pre-trained knowledge.
  • At each transformer block, a new trainable gated self-attention layer is added to absorb the additional grounding input.
  • Each grounding token carries two types of information: semantic information about the grounded entity (encoded text or image) and spatial information (encoded bounding box or keypoints).
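The grounding-token construction above can be sketched as follows. This is an illustrative assumption about the general recipe, not GLIGEN's exact code: `fourier_embed` and `grounding_token` are hypothetical names, and a real model would fuse the two parts with a small MLP rather than plain concatenation.

```python
import math

def fourier_embed(coords, num_freqs=4):
    # Encode normalized box coordinates (x0, y0, x1, y1) with sin/cos at
    # several frequencies — a common way to turn spatial positions into
    # a fixed-length feature vector.
    out = []
    for c in coords:
        for k in range(num_freqs):
            freq = 2.0 ** k
            out.append(math.sin(freq * math.pi * c))
            out.append(math.cos(freq * math.pi * c))
    return out

def grounding_token(text_emb, box):
    # A grounding token pairs semantic information (a text or image
    # embedding) with spatial information (the encoded bounding box).
    return text_emb + fourier_embed(box)

# Toy 2-dim "text embedding" and a centered box in normalized coordinates.
tok = grounding_token([0.1, 0.2], (0.25, 0.25, 0.75, 0.75))
print(len(tok))  # 34: 2 semantic dims + 4 coords * 4 freqs * 2 (sin, cos)
```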
The newly added modulated layers are continually pre-trained on large-scale grounding data (image-text-box triples), which is more cost-effective than alternatives such as full-model finetuning of the pre-trained diffusion model. Like Lego bricks, different trained layers can be plugged in and out to enable various new capabilities.
For inference, GLIGEN supports scheduled sampling in the diffusion process: at each step the model can choose to use the grounding tokens (by enabling the new layers) or the original diffusion model with its strong prior (by disabling them), balancing generation quality against grounding ability.
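The scheduling decision can be sketched as a simple step threshold. This is a minimal illustration under assumed names and values: `use_grounding` is hypothetical, and the cutoff fraction `tau = 0.3` is an arbitrary example, not a value taken from the paper.

```python
def use_grounding(step, total_steps, tau=0.3):
    # Scheduled sampling sketch: enable the new grounding layers only
    # for the first tau fraction of denoising steps, where coarse layout
    # is decided, then fall back to the original frozen model (with its
    # strong prior) for the remaining fine-detail steps.
    return step < tau * total_steps

schedule = [use_grounding(t, 50, tau=0.3) for t in range(50)]
print(sum(schedule))  # 15 — grounding is active for the first 15 of 50 steps
```

Raising `tau` trades image quality for tighter adherence to the bounding boxes, and vice versa.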
GLIGEN can generate a variety of objects in specific places and styles by leveraging knowledge from a pre-trained text-to-image model.
GLIGEN can also be trained using reference images. The top row shows that reference images can supply finer-grained attributes than text descriptions alone, such as the style and shape of the car. The second row demonstrates that a reference image can also be used as a style image; in this case, grounding it to a corner or edge of the image suffices.
Like other diffusion models, GLIGEN can perform grounded image inpainting, generating objects that closely match the supplied bounding boxes.
GLIGEN can also ground human keypoints during text-to-image generation.



About The Author

Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, Metaverse, and Web3-related fields. His articles attract a massive audience of over a million users every month. He appears to be an expert with 10 years of experience in SEO and digital marketing. Damir has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to be successful in the ever-changing landscape of the internet. 

