VLM
Paper Name: Open-Vocabulary Customization from CLIP via Data-Free Knowledge Distillation

Motivations / Recent issues:
Data for knowledge distillation is not always available due to copyright and privacy concerns.
Existing DFKD methods fail because they rely heavily on BatchNorm statistics → unusable for CLIP.
Image-text matching → DFKD for CLIP (see the sketch after this list).
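A quick way to see why image-text matching can stand in for BatchNorm priors: CLIP already scores any image against arbitrary class-name prompts, so the text encoder itself can supervise synthesis. A minimal sketch using the Hugging Face transformers CLIP; the checkpoint name and prompts are illustrative, not from the paper:

```python
# Minimal CLIP image-text matching with Hugging Face transformers.
# Checkpoint and prompts are illustrative choices, not the paper's.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a dog", "a photo of a cat"]  # class-name prompts
image = Image.new("RGB", (224, 224))                # stand-in for a real image

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: cosine similarities scaled by CLIP's learned temperature.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)  # matching distribution over the class prompts
```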

Contributions (sketches after this list):
Invert a surrogate dataset from CLIP guided by class-name text prompts.
Distill a student model from CLIP on the surrogate dataset.
Style dictionary diversification → diversity of the synthetic images.
Class consistency → prevent uncontrollable semantics introduced by the diversification.
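A hypothetical sketch of the inversion step, assuming plain gradient descent on pixels toward class-name prompts, with a cross-entropy term over the prompt set as a stand-in for the paper's class-consistency constraint. The class names, step count, learning rate, and logit scale are all assumptions:

```python
# Hypothetical sketch: invert synthetic images from CLIP text prompts.
# Loss, hyperparameters, and class names are assumptions, not the paper's.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

classes = ["dog", "cat", "car"]  # illustrative class names
tok = tokenizer([f"a photo of a {c}" for c in classes],
                padding=True, return_tensors="pt").to(device)
with torch.no_grad():
    text_feat = F.normalize(model.get_text_features(**tok), dim=-1)

# One synthetic image per class, optimized directly in pixel space
# (CLIP's input normalization is omitted here for brevity).
x = torch.randn(len(classes), 3, 224, 224, device=device, requires_grad=True)
target = torch.arange(len(classes), device=device)
opt = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    img_feat = F.normalize(model.get_image_features(pixel_values=x.sigmoid()), dim=-1)
    logits = 100.0 * img_feat @ text_feat.T  # scaled cosine similarities
    # Class consistency: each image must match its own prompt, not drift to others.
    loss = F.cross_entropy(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```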
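The distillation step could then be standard logit-matching KD on the surrogate images: the student mimics the teacher's zero-shot distribution over the class prompts. The ResNet-18 student, temperature, and KL loss below are my assumptions, not the paper's stated choices:

```python
# Hypothetical distillation step: a small student learns CLIP's zero-shot
# behaviour on the surrogate images. Student and losses are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

num_classes = 3  # matches the prompt set in the inversion sketch
student = resnet18(num_classes=num_classes)
opt = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)

def distill_step(images, teacher_logits, T=4.0):
    """One KD step: soften both distributions, minimize their KL divergence."""
    student_logits = student(images)
    loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with stand-in tensors; in practice these come from the inversion loop.
imgs = torch.rand(8, 3, 224, 224)
t_logits = torch.randn(8, num_classes)
print(distill_step(imgs, t_logits))
```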

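These notes don't spell out the style dictionary itself, so the following is only a generic stand-in: an AdaIN-like perturbation of per-channel image statistics, which shifts style while largely preserving content. It also illustrates why the class-consistency term is needed: unchecked perturbation can push an image's semantics off-class. The perturbation strength is arbitrary:

```python
# Generic style perturbation as a stand-in for the paper's style dictionary:
# randomly re-scale and re-shift per-channel statistics of an image batch.
import torch

def perturb_style(x: torch.Tensor, strength: float = 0.2) -> torch.Tensor:
    """AdaIN-like shift of channel mean/std; strength is an assumed value."""
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.std(dim=(2, 3), keepdim=True) + 1e-5
    new_std = std * (1 + strength * torch.randn_like(std))
    new_mean = mean + strength * torch.randn_like(mean)
    return (x - mean) / std * new_std + new_mean

styled = perturb_style(torch.rand(8, 3, 224, 224))
```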