Represent, send, store and search multimodal data
A python native client for easy interaction with a Weaviate instance.
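As a rough illustration (not code taken from this repository), here is a minimal sketch of how the v4 Python client is typically used: connect, insert an object, and run a semantic query. The local connection and the "Article" collection with a configured vectorizer are assumptions.

```python
import weaviate

# Connect to a locally running Weaviate instance (assumes default ports).
client = weaviate.connect_to_local()
try:
    # Fetch an existing collection by name ("Article" is a hypothetical example).
    articles = client.collections.get("Article")

    # Insert a single object.
    articles.data.insert({"title": "Hello Weaviate", "body": "A first object."})

    # Run a semantic (vector) search; requires a vectorizer on the collection.
    response = articles.query.near_text(query="greeting", limit=3)
    for obj in response.objects:
        print(obj.properties["title"])
finally:
    client.close()
```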
Build super simple end-to-end data & ETL pipelines for your vector databases and Generative AI applications
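To give a feel for the load step of such a pipeline, a minimal batch-ingestion sketch with the Weaviate Python client follows; the "Document" collection and the records are hypothetical placeholders, not taken from the project.

```python
import weaviate

client = weaviate.connect_to_local()  # assumes a local Weaviate instance
try:
    documents = client.collections.get("Document")
    records = [
        {"title": "Doc 1", "text": "First document body."},
        {"title": "Doc 2", "text": "Second document body."},
    ]
    # The dynamic batcher sizes and flushes batches automatically.
    with documents.batch.dynamic() as batch:
        for record in records:
            batch.add_object(properties=record)
finally:
    client.close()
```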
🎩 Magic in Pocket / 🪄 Magic in your pocket.
Designed for offline use, this RAG application template is based on Andrej Baranovskij's tutorials. It offers a starting point for building your own local RAG pipeline, independent of online APIs and cloud-based LLM services like OpenAI.
Async bulk data ingestion and querying in various document, graph and vector databases via their Python clients
An example of how to use the Weaviate vector search engine's text2vec-openai module.
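A hedged sketch of what using the text2vec-openai module typically looks like with the v4 Python client: creating a collection whose vectors are generated server-side by the module. The "Article" collection and its properties are illustrative, and the module plus an OpenAI API key are assumed to be configured on the server.

```python
import weaviate
from weaviate.classes.config import Configure, Property, DataType

client = weaviate.connect_to_local()  # assumes a local instance with the module enabled
try:
    client.collections.create(
        "Article",  # hypothetical collection name
        vectorizer_config=Configure.Vectorizer.text2vec_openai(),
        properties=[
            Property(name="title", data_type=DataType.TEXT),
            Property(name="body", data_type=DataType.TEXT),
        ],
    )
finally:
    client.close()
```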
Integrated LLM-based document and data Q&A with knowledge graph visualization
This project demonstrates how to parse emails, process them with OpenAI's GPT-3.5, and load the results into a Weaviate vector database for enhanced search. It uses few-shot prompts and parallel processing to combine NLP techniques with vector search.
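As a sketch of the extraction step only (the prompt wording, fields, and helper are hypothetical, not the project's actual code), a few-shot call to GPT-3.5 via the OpenAI Python SDK might look like this:

```python
from openai import OpenAI

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One worked example teaches the model the expected JSON shape (few-shot).
FEW_SHOT = (
    "Extract sender, subject, and a one-sentence summary as JSON.\n"
    "Email: From: a@example.com\nSubject: Invoice\nPlease find the invoice attached.\n"
    '-> {"sender": "a@example.com", "subject": "Invoice", "summary": "An invoice is attached."}\n'
)

def extract(email_text: str) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You turn raw emails into JSON records."},
            {"role": "user", "content": FEW_SHOT + "Email: " + email_text + "\n->"},
        ],
    )
    return response.choices[0].message.content
```

The returned JSON records could then be batch-inserted into Weaviate as in the ingestion sketch above.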
Generate videos on any topic automatically, harnessing OpenAI for script generation, ElevenLabs for TTS, and Giphy and Unsplash for multimedia
Python SDK for FirstBatch: Real-time personalization using vectorDBs
📃 A contracts clause summarization system using LLM and vector database
Text/Image search for similar products
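One possible shape for such queries with the Weaviate v4 Python client, assuming a "Product" collection backed by an image-capable (multi2vec) vectorizer; collection, file, and property names are illustrative:

```python
import base64
import weaviate

client = weaviate.connect_to_local()
try:
    products = client.collections.get("Product")

    # Image similarity: send the query image as base64.
    with open("query.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    by_image = products.query.near_image(near_image=image_b64, limit=5)

    # Text similarity over the same collection.
    by_text = products.query.near_text(query="red running shoes", limit=5)

    for obj in by_image.objects + by_text.objects:
        print(obj.properties)
finally:
    client.close()
```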
Multilingual semantic search with reranking over a large, pre-vectorized dataset of 10 million Wikipedia documents. It supports dense retrieval, keyword search, and hybrid search.
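The three retrieval modes map naturally onto Weaviate query methods. The sketch below assumes a "Wikipedia" collection with a configured vectorizer (reranking would additionally need a reranker module enabled on the server), and all names are illustrative:

```python
import weaviate

client = weaviate.connect_to_local()
try:
    wiki = client.collections.get("Wikipedia")

    dense = wiki.query.near_text(query="history of aviation", limit=5)            # dense retrieval
    keyword = wiki.query.bm25(query="history of aviation", limit=5)               # keyword (BM25)
    hybrid = wiki.query.hybrid(query="history of aviation", alpha=0.5, limit=5)   # blend of both

    for obj in hybrid.objects:
        print(obj.properties.get("title"))
finally:
    client.close()
```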
Easily create semantic-search-based LLM applications.