Yutong Feng · Linlin Zhang · Hengyuan Cao · Yiming Chen · Xiaoduan Feng · Jian Cao · Yuxiong Wu · Bin Wang
Kunbyte AI | Zhejiang University
- [2025.08.20] 🎉🎉🎉 We release the model weights, inference demo and evaluation benchmark of OmniTry! To experience our advanced version and other related features, please visit our product website k-fashionshop (in Chinese) or visboom (in English).
Note: OmniTry currently requires at least 28GB of VRAM for inference under torch.bfloat16. We will continue working to reduce the memory requirements.
- Create the checkpoint directory:

  ```shell
  mkdir checkpoints
  ```

- Download FLUX.1-Fill-dev into `checkpoints/FLUX.1-Fill-dev`.

- Download the LoRA of OmniTry into `checkpoints/omnitry_v1_unified.safetensors`. You can also download `omnitry_v1_clothes.safetensors`, which is finetuned specifically on clothes data.
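Once the downloads finish, the layout above can be double-checked before launching the demo. A minimal sketch (the helper name is ours, not part of the repo) that reports any missing checkpoint paths:

```python
from pathlib import Path

def missing_checkpoints(root="checkpoints"):
    """Return the expected checkpoint paths that are not present yet."""
    expected = [
        Path(root) / "FLUX.1-Fill-dev",                 # base model directory
        Path(root) / "omnitry_v1_unified.safetensors",  # OmniTry LoRA weights
    ]
    return [str(p) for p in expected if not p.exists()]

if __name__ == "__main__":
    missing = missing_checkpoints()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All checkpoints in place.")
```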
Install the environment with conda:

```shell
conda env create -f environment.yml
conda activate omnitry
```

or with pip:

```shell
pip install -r requirements.txt
```

(Optional) We recommend installing flash-attention to accelerate inference:

```shell
pip install flash-attn==2.6.3
```
Run the gradio demo:

```shell
python gradio_demo.py
```

To switch between different versions of the OmniTry checkpoints, change the `lora_path` in `configs/omnitry_v1_unified.yaml`.
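Switching checkpoints amounts to editing a single line of the config. A small, hypothetical helper that rewrites the `lora_path` entry in place (the exact layout of `configs/omnitry_v1_unified.yaml` is assumed here, not taken from the repo):

```python
import re
from pathlib import Path

def set_lora_path(config_path, new_lora):
    """Point the `lora_path:` entry of a YAML config at a different checkpoint file."""
    text = Path(config_path).read_text()
    # Replace only the value after `lora_path:`, keeping the key and indentation intact.
    updated = re.sub(r"(?m)^(\s*lora_path:\s*).*$", r"\g<1>" + new_lora, text)
    Path(config_path).write_text(updated)

# For example, to switch to the clothes-only finetune:
# set_lora_path("configs/omnitry_v1_unified.yaml",
#               "checkpoints/omnitry_v1_clothes.safetensors")
```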
We present a unified evaluation benchmark for OmniTry. Please refer to OmniTry-Bench.
This project is developed on top of diffusers and FLUX. We appreciate the contributors for their excellent work.
If you find this codebase useful for your research, please cite it with the following BibTeX entry.
@article{feng2025omnitry,
title={OmniTry: Virtual Try-On Anything without Masks},
author={Feng, Yutong and Zhang, Linlin and Cao, Hengyuan and Chen, Yiming and Feng, Xiaoduan and Cao, Jian and Wu, Yuxiong and Wang, Bin},
journal={arXiv preprint arXiv:2508.13632},
year={2025}
}