Stars
Open agent coding CLI: coding with GLM, Qwen, Kimi, DeepSeek, etc. (you are welcome to use Kode to submit PRs)
Automated workflows for Claude Code. Features spec-driven development for new features (Requirements → Design → Tasks → Implementation) and a streamlined bug-fix workflow for quick issue resolution (…
The Elastic stack (ELK) powered by Docker and Compose.
ChatGLM2-6B: An Open Bilingual Chat LLM | Open-source bilingual dialogue language model
RWKVFF is a library based on harrisonvanderbyl/rwkv_chatbot that adds a number of convenience features.
tinygrad port of the RWKV large language model.
A collection of high-quality Chinese pre-trained models: state-of-the-art large models, the fastest small models, and specialized similarity models.
Large-scale pre-training corpus for Chinese (100 GB of Chinese pre-training data).
Making offline AI models accessible to all types of edge devices.
RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RN…
Milvus is a high-performance, cloud-native vector database built for scalable vector ANN search
GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.
A Chinese LangChain project | 小必应 (Little Bing), Q.Talk, 强聊, QiangTalk
🦜🔗 Build context-aware reasoning applications
Chinese safety prompts for evaluating and improving the safety of LLMs.
✅ Works on a 4 GB GPU | A simple implementation that lets ChatGLM run inference across multiple compute devices (GPU, CPU) on a single machine.
TextGen: implementations of text generation models, including LLaMA, BLOOM, GPT2, BART, T5, SongNet, and more. Provides training and inference for LLaMA, ChatGLM, BLOOM, GPT2, Seq2Seq, BART, T5, UDA, and other models, ready to use out of the box.
"Boosting AI R&D Efficiency: Train Your Own LoRA", covering LoRA training for Llama (Alpaca LoRA) and ChatGLM (ChatGLM Tuning). Training tasks: user story generation, test code generation, code-assisted generation, text-to-SQL, text-to-code…
Apply RLHF directly to ChatGLM to raise or lower the probability of target outputs | Modify ChatGLM output with RLHF only
chatglm-6b fine-tuning / LoRA / PPO / inference; training samples are automatically generated integer and decimal addition, subtraction, multiplication, and division problems; runs on GPU or CPU.
Lark is an open-source Golang IM server project featuring high performance and scalability. It uses a microservice architecture, supports clustering and horizontal scaling to handle high-concurrency workloads, and delivers messages to groups of ten thousand members within seconds.
ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and it is open source.
Interact with your documents using the power of GPT, 100% privately, no data leaks
LLM training code for Databricks foundation models
Provides a practical interactive interface for GPT/GLM and other LLMs, with special optimizations for reading, polishing, and writing papers. Modular design with support for custom quick buttons and function plugins; project analysis and self-explanation for Python, C++, and other codebases; PDF/LaTeX paper translation and summarization; parallel queries to multiple LLMs; and local models such as chatglm3. Integrates Tongyi Qianwen (通义千问), deepseekcoder, iFLYTEK Spark (讯飞星火), ERNIE Bot (文心一言), llama2, rwkv, claude2, m…