A list of public EMG datasets and their papers, with a focus on raw EMG signals.
A collection of projects showcasing RAG, agents, workflows, and other AI use cases
Training and evaluating encoding models to predict fMRI brain responses to naturalistic video stimuli
🦛 CHONK docs with Chonkie ✨ — The no-nonsense RAG library
AllTracker is a model for tracking all pixels in a video.
Project page of the paper "Learning general and distinctive 3D local deep descriptors for point cloud registration" published in IEEE T-PAMI
[ISPRS 2025] The official implementation of the paper CoFF "Cross-Modal Feature Fusion for Robust Point Cloud Registration with Ambiguous Geometry".
Leveraging Inlier Correspondences Proportion for Point Cloud Registration. https://arxiv.org/abs/2201.12094.
Robust online multiband drift estimation in electrophysiology data
Notes for software engineers getting up to speed on new AI developments. Serves as a datastore for https://latent.space writing and product brainstorming, but has cleaned-up canonical references und…
Calibrated inference of spiking from calcium ΔF/F data using deep networks
JerryWu-code/TinyZero
Forked from Jiayi-Pan/TinyZero. A tiny DeepSeek R1-Zero reproduction, run on two A100s.
AMEGA-LLM: Autonomous Medical Evaluation for Guideline Adherence of Large Language Models
Common clinical models in the forms of openEHR archetypes and GDL guidelines
Minimal reproduction of DeepSeek R1-Zero
The LLM's practical guide: From the fundamentals to deploying advanced LLM and RAG apps to AWS using LLMOps best practices
Code repository for emg2pose dataset and model benchmarks
Interactive Medical Image Segmentation: A Benchmark Dataset and Baseline
first base model for full-duplex conversational audio
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
A connectome-constrained deep mechanistic network (DMN) model of the fruit fly visual system in PyTorch.
Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audio codec.
[ICLR 2025] SOTA discrete acoustic codec models with 40/75 tokens per second for audio language modeling