This project is an implementation of the paper: Parameter-Efficient Transfer Learning for NLP, Houlsby et al. (Google), ICML 2019.
End-to-end fine-tuning of Hugging Face models using LoRA, QLoRA, quantization, and PEFT techniques. Optimized for low-memory environments and efficient model deployment.
An LLM (LLaMA) fine-tuned to work well for mental health assistance.
Fine-tuning LLMs on a conversational medical dataset.
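A minimal sketch of the kind of setup such a repo typically uses, not taken from the project itself: 4-bit QLoRA fine-tuning with transformers, peft, and bitsandbytes. The base model name, rank, and target modules are illustrative assumptions.

```python
# Sketch only: QLoRA-style setup (assumed model name and hyperparameters)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base model

# Load the base model with 4-bit NF4 quantization (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Attach low-rank adapters to the attention projections only
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```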
Advanced NLP MLOps pipeline for misinformation detection, utilizing RoBERTa with LoRA (PEFT) for efficient fine-tuning. This project focuses on cross-domain generalization across the FakeNews-Kaggle and LIAR datasets, featuring robust data engineering, mixed-precision training, and comprehensive metric evaluation.
Parameter-efficient fine-tuning method for dynamic facial expression recognition.
⭐️⭐️⭐️ LLMs RoadMap: helps you understand traditional NLP tasks, parameter-efficient fine-tuning, low-precision fine-tuning, distributed model training, and other engineering topics from the perspective of the transformers repository.
This is the repo for prompt-tuning a language model to improve a given (vague) prompt.
This project leverages FLAN-T5 from Hugging Face to perform dialogue summarization, evaluates fine-tuning with ROUGE, and detoxifies summaries using PPO and PEFT.
Develop a chatbot that can effectively adapt to context and topic shifts in a conversation, leveraging the Stanford Question Answering Dataset to provide informed and relevant responses, and thereby increasing user satisfaction and engagement.
Implementation of Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning of GPT-2 on the SQuAD dataset for question answering, exploring training efficiency, loss masking, and performance metrics such as F1 and Exact Match. Final course project for Deep Learning at the University of Kerman, Spring 2025.
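A hedged sketch of what the LoRA-on-GPT-2 setup described above might look like; the rank, alpha, and loss-masking convention are assumptions, not the project's actual code.

```python
# Sketch only: LoRA adapters on GPT-2's fused attention projection ("c_attn")
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.1,
    target_modules=["c_attn"],  # GPT-2 packs Q, K, V into one linear layer
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Loss-masking idea from the description: supervise only the answer tokens by
# setting the label ids of the prompt (context + question) to -100 so they are
# ignored by the cross-entropy loss.
```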
AI Assistant for Customer Support
Lightweight Python toolkit for fine-tuning image datasets with Parameter Efficient Fine Tuning (PEFT) and ViTs
Language Fusion for Parameter-Efficient Cross-lingual Transfer
LoRA + QLoRA fine-tuning toolkit optimized for Intel Arc Battlemage GPUs.
Domain-specific sentiment analysis model fine-tuned on FinBERT using PEFT (LoRA) to classify financial texts into positive, negative, and neutral sentiment. Achieves high accuracy on domain-specific data with minimal computational cost by leveraging parameter-efficient fine-tuning.
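A minimal sketch of a comparable setup, assuming the ProsusAI/finbert checkpoint and assumed adapter hyperparameters; it is not the repository's actual configuration.

```python
# Sketch only: LoRA adapters on FinBERT for 3-way financial sentiment classification
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained("ProsusAI/finbert", num_labels=3)
tokenizer = AutoTokenizer.from_pretrained("ProsusAI/finbert")

lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.1,
    target_modules=["query", "value"],  # BERT-style attention projections
    task_type="SEQ_CLS",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters and classifier head are trainable
```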
KoRA is a novel PEFT method that introduces inter-adapter communication via a CompositionBlock inspired by the Kolmogorov–Arnold Representation Theorem. It composes query, key, and value adapters into a unified representation — achieving robust generalization and cross-domain transfer with minimal parameter overhead.