- 👋 Hi, I’m @FFHow
- 👀 I’m interested in trustworthy AI, large language models, and visual generative models.
- 🌱 I’m currently studying at SIGS, Tsinghua University, majoring in computer science and technology.
- 📫 Feel free to contact me at fang-h23@mails.tsinghua.edu.cn
- ✨ My homepage
- 🏫 Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong Province, China
- 🎓 Google Scholar: https://scholar.google.com/citations?user=12237G0AAAAJ&hl=zh-CN
📌 Pinned repositories:
- **Model-Inversion-Attack-ToolBox**: A comprehensive, easy-to-use toolbox for model inversion attacks and defenses.
- **GIFD_Gradient_Inversion_Attack** [ICCV 2023]: Gradient inversion attack, federated learning, generative adversarial networks.
- **CGNC_Targeted_Adversarial_Attacks** [ECCV 2024]: Transferable targeted adversarial attacks, CLIP models, generative adversarial networks, multi-target attacks.
- **CPGC_VLP_Universal_Attacks** [ICCV 2025]: Universal adversarial attacks, multimodal adversarial attacks, VLP models, contrastive learning, cross-modal perturbation generator, generative attacks.
- **CoPA_Contrastive_Paraphrase_Attacks** [EMNLP 2025]: Paraphrasing attacks, LLM-generated text detectors, training-free decoding strategy.
- **CMI_VLD_Hallucination_Mitigation** [NeurIPS 2025]: Large vision-language models, hallucination mitigation, conditional mutual information, token purification.