VLM
Columns: Paper Name · Categories · Type · Concept · Recent issues · Motivations · Contributions · Evaluation List · Target · Conf./Jour. · Year · Link

Paper Name: Open-Vocabulary Customization from CLIP via Data-Free Knowledge Distillation
Type: Loss Function Design
Recent issues:
- Data for knowledge distillation is not always available due to copyright and privacy concerns.
- Existing DFKD methods fail because of their heavy reliance on BatchNorm statistics, which makes them unusable for CLIP.
Concept: Image-Text matching → DFKD for CLIP.
Contributions:
- Invert a surrogate dataset from CLIP based on text prompts.
- Distill a student model from CLIP using the surrogate dataset.
- Style-dictionary diversification → improves the diversity of synthetic images.
- Class consistency → prevents uncontrollable semantics introduced by diversification.
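The contributions above form a two-stage pipeline: (1) invert synthetic "surrogate" images from the frozen teacher using only text prompts, then (2) distill a student on that surrogate set. Below is a minimal numpy sketch of that pipeline, not the paper's method: linear maps stand in for CLIP's frozen image/text encoders, gradient ascent on image-text cosine similarity stands in for model inversion, and a least-squares fit stands in for distillation training; the style-dictionary and class-consistency losses are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IMG, D_EMB, N_CLASSES = 64, 32, 5

# Stand-ins for CLIP's frozen encoders (the real ones are large transformers;
# linear maps are used here only to make the two stages concrete).
W_teacher = rng.normal(size=(D_EMB, D_IMG)) / np.sqrt(D_IMG)   # image encoder
text_emb = rng.normal(size=(N_CLASSES, D_EMB))
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)    # prompt embeddings

def encode(x, W):
    """Project inputs and L2-normalize, CLIP-style."""
    z = x @ W.T
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def invert_surrogate(t, steps=300, lr=0.5):
    """Stage 1: synthesize an input whose teacher embedding matches a text
    prompt, by gradient ascent on cosine similarity (no real data needed)."""
    x = rng.normal(size=D_IMG) * 0.1
    for _ in range(steps):
        z = W_teacher @ x
        nz = np.linalg.norm(z)
        # d cos(z, t) / dx, with t unit-norm and z = W_teacher @ x
        grad = (t / nz - z * (z @ t) / nz**3) @ W_teacher
        x += lr * grad
    return x

surrogate = np.stack([invert_surrogate(t) for t in text_emb])

# Stage 2: distill a student image encoder so it reproduces the teacher's
# embeddings on the surrogate set (least-squares regression as a stand-in
# for gradient-based distillation).
teacher_z = encode(surrogate, W_teacher)
W_student, *_ = np.linalg.lstsq(surrogate, teacher_z, rcond=None)
student_z = encode(surrogate, W_student.T)

# Both models classify a surrogate image by its nearest text prompt.
teacher_pred = (teacher_z @ text_emb.T).argmax(axis=1)
student_pred = (student_z @ text_emb.T).argmax(axis=1)
print("teacher:", teacher_pred)
print("student:", student_pred)
```

On the surrogate set the student reproduces the teacher's image-text predictions exactly, which is the point of distilling through synthesized rather than real data.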