Copilot for Obsidian is a free and open-source ChatGPT interface right inside Obsidian. It has a minimalistic design and is straightforward to use.
- 💬 ChatGPT UI in Obsidian.
- 🛠️ Prompt AI with your writing using Copilot commands to get quick results.
- 🚀 Turbocharge your Second Brain with AI.
- 🧠 Talk to your past notes for insights.
My goal is to make this AI assistant local-first and privacy-focused. More features are under construction. Stay tuned!
- Chat with ChatGPT right inside Obsidian in the Copilot Chat window.
- No repetitive login. Use your own API key (stored locally).
- No monthly fee. Pay only for what you use.
- Model selection of GPT-3.5, GPT-4, Azure OpenAI models and more.
- No need to buy ChatGPT Plus to use GPT-4 if you have API access.
- No usage cap for GPT-4 like ChatGPT Plus.
- One-click copying any message as markdown.
- One-click saving the entire conversation as a note.
- Use the active note as context, and start a discussion around it by switching to "QA: Active Note" in the Mode Selection menu.
- NEW in v2.1.0: Unlimited context when chatting with your long note, no more "context length exceeded" errors!!
- This feature is powered by retrieval augmentation with a local vector store. No sending your data to a cloud-based vector search service! (See the sketch after this feature list.)
- Easy commands to simplify, emojify, summarize, translate, change tone, fix grammar, rewrite into a tweet/thread, count tokens and more.
- Set your own parameters like LLM temperature, max tokens, and conversation context according to your needs (please be mindful of the API cost).
- NEW in v2.2.0: User custom prompt support added! Now you can add, apply, and delete your custom prompts, persisted in your local Obsidian environment!
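The "unlimited context" feature above works through retrieval augmentation: the long note is split into chunks, the chunks are embedded and held in a local, in-memory vector index, and only the chunks most relevant to your question are passed to the model. Below is a minimal, illustrative sketch of that pattern using LangChain's in-memory vector store; it is not the plugin's exact implementation, and the chunk sizes shown are just example values.

```typescript
// Sketch of retrieval augmentation over one long note with a local,
// in-memory vector store (LangChain JS). Illustrative only — the plugin's
// internal implementation may differ.
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

async function buildNoteIndex(noteText: string, apiKey: string) {
  // Split the long note into overlapping chunks.
  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,
    chunkOverlap: 100,
  });
  const chunks = await splitter.splitText(noteText);

  // Embed the chunks and keep the vectors in memory — nothing is sent to a
  // hosted vector search service.
  const embeddings = new OpenAIEmbeddings({ openAIApiKey: apiKey });
  return MemoryVectorStore.fromTexts(
    chunks,
    chunks.map((_, i) => ({ chunk: i })),
    embeddings
  );
}

async function retrieveContext(store: MemoryVectorStore, question: string) {
  // Fetch only the most relevant chunks and use them as context, so the
  // full note never has to fit into a single prompt.
  const relevant = await store.similaritySearch(question, 4);
  return relevant.map((doc) => doc.pageContent).join("\n---\n");
}
```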
- Chat with ChatGPT, copy messages to a note, save the entire conversation as a note
- QA around your past note
- Fix grammar and spelling, Summarize, Simplify, Emojify, Remove URLs
- Generate glossary, table of contents
- Translate to a language of your choosing
- Change tone: professional, casual, straightforward, confident, friendly
- Make longer/shorter
- Rewrite into a tweet/thread (see the command sketch after this list)
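For a sense of how commands like these plug into Obsidian, here is a rough sketch of a "fix grammar"-style editor command. The `callChatModel` helper is hypothetical, and this is not the plugin's actual command code.

```typescript
// Sketch: a Copilot-style editor command — grab the selection, send it to an
// LLM with a fixed instruction, and replace the selection with the result.
import { Editor, Plugin } from "obsidian";

// Hypothetical LLM call — replace with your own chat-completion request.
async function callChatModel(prompt: string): Promise<string> {
  return prompt; // placeholder
}

export default class CopilotLikePlugin extends Plugin {
  async onload() {
    this.addCommand({
      id: "fix-grammar",
      name: "Fix grammar and spelling of selection",
      editorCallback: async (editor: Editor) => {
        const selection = editor.getSelection();
        if (!selection) return;
        const fixed = await callChatModel(
          `Fix the grammar and spelling of the following text, keeping its meaning:\n\n${selection}`
        );
        editor.replaceSelection(fixed);
      },
    });
  }
}
```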
The settings page lets you set your own temperature, max tokens, and conversation context according to your needs.
New models will be added as I get access.
You can also use your own system prompt, and choose between different embedding providers such as OpenAI, CohereAI (their trial API is free and quite stable!), and Huggingface Inference API (free but sometimes times out).
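For illustration, here is a rough sketch of how an embedding-provider setting could map onto LangChain's embedding classes. The settings shape is hypothetical and not necessarily how the plugin stores its configuration.

```typescript
// Sketch: selecting an embedding provider behind a single interface.
// The settings interface below is hypothetical.
import { Embeddings } from "langchain/embeddings/base";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { CohereEmbeddings } from "langchain/embeddings/cohere";
import { HuggingFaceInferenceEmbeddings } from "langchain/embeddings/hf";

interface CopilotLikeSettings {
  embeddingProvider: "openai" | "cohereai" | "huggingface";
  openAIApiKey?: string;
  cohereApiKey?: string;
  huggingfaceApiKey?: string;
}

function getEmbeddings(settings: CopilotLikeSettings): Embeddings {
  switch (settings.embeddingProvider) {
    case "cohereai":
      // CohereAI's trial API is free and quite stable.
      return new CohereEmbeddings({ apiKey: settings.cohereApiKey });
    case "huggingface":
      // Free, but the hosted Inference API sometimes times out.
      return new HuggingFaceInferenceEmbeddings({
        apiKey: settings.huggingfaceApiKey,
      });
    default:
      return new OpenAIEmbeddings({ openAIApiKey: settings.openAIApiKey });
  }
}
```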
Copilot for Obsidian is now available as an Obsidian Community Plugin!
- Open the Community Plugins settings page and click on the Browse button.
- Search for "Copilot" in the search bar and find the plugin with this exact name.
- Click on the Install button.
- Once the installation is complete, enable the Copilot plugin by toggling on its switch in the Community Plugins settings page.
Now you can see the chat icon in your left ribbon; clicking on it will open the chat panel on the right. Don't forget to check out the Copilot commands available in the command palette!
- Go to the latest release.
- Download `main.js`, `manifest.json`, `styles.css` and put them under `.obsidian/plugins/obsidian-copilot/` in your vault.
- Open your Obsidian settings > Community plugins, and turn on `Copilot`.
- The chat history is not saved by default. Please use "Save as Note" to save it. The note will have a title `Chat-Year_Month_Day-Hour_Minute_Second`; you can change its name as needed.
- "New Chat" clears all previous chat history. Again, please use "Save as Note" if you would like to save the chat.
- "Use Active Note as Context" creates a local vector index for the active note so that you can chat with super long note! To start the QA, please switch from "Conversation" to "QA: Active Note" in the Mode Selection dropdown.
- You can set a very long context in the setting "Conversation turns in context" if needed.
It's not using my note as context
- Please don't forget to switch to "QA: Active Note" in the Mode Selection dropdown in order to start the QA. Copilot does not have your note as context in "Conversation" mode.
- In fact, you don't have to click the button on the right before starting the QA. Switching to QA mode in the dropdown is enough for Copilot to read the note as context. The button on the right is only for manually rebuilding the index for the active note, for example when you'd like to switch context to another note, or when you think the current index is corrupted because you switched the embedding provider.
- Reference issue: logancyang#51
Unresponsive QA when using Huggingface as the Embedding Provider
- Huggingface Inference API is free to use, but it can frequently return errors such as 503 or 504 when their servers have issues. If this is a problem for you, please consider using OpenAI or CohereAI as the embedding provider instead. Just keep in mind that OpenAI costs more, especially with very long notes as context.
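If you want to smooth over transient 503/504 responses yourself, a generic retry-with-backoff wrapper is one option. This is a general-purpose sketch, not the plugin's code.

```typescript
// Sketch: retry a flaky async call (e.g. an embedding request against the
// Huggingface Inference API) with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off before retrying: 1s, 2s, 4s, ...
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** attempt)
      );
    }
  }
  throw lastError;
}

// Usage: wrap whatever call hits the Inference API.
// const vectors = await withRetry(() => embeddings.embedDocuments(chunks));
```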
"model_not_found"
- You need to have access to some of the models like GPT-4 or Azure ones to use them. If you don't, sign up on their waitlist!
- A common misunderstanding I see is that some people think they have access to the GPT-4 API when they get a ChatGPT Plus subscription. That is not true. You need to be granted access to the GPT-4 API to use the GPT-4 model in this plugin. Please check whether you can successfully use your model in the OpenAI playground first: https://platform.openai.com/playground?mode=chat&model=gpt-4. If not, you can apply for GPT-4 API access here: https://openai.com/waitlist/gpt-4-api. Once you have access to the API, you can use GPT-4 with this plugin without the ChatGPT Plus subscription!
- Reference issue: logancyang#3 (comment)
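One quick way to confirm whether your key actually has GPT-4 access is to list the models visible to it via OpenAI's standard `/v1/models` endpoint. The helper below is just an illustrative sketch.

```typescript
// Sketch: check whether a given model id appears in the list of models your
// API key can access.
async function hasModelAccess(apiKey: string, model = "gpt-4"): Promise<boolean> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`OpenAI API error: ${res.status}`);
  }
  const body = await res.json();
  // The response has a `data` array of model objects with `id` fields.
  return body.data.some((m: { id: string }) => m.id === model);
}
```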
"insufficient_quota"
- It might be because you haven't set up payment for your OpenAI account, or you exceeded your max monthly limit. OpenAI has a cap on how much you can use their API, usually $120 for individual users.
- Reference issue: logancyang#11
"context_length_exceeded"
- GPT-3.5 has a 4096-token context limit and GPT-4 has 8K (a 32K version will be available to the public soon, per OpenAI). So if you set a large max token limit in your Copilot settings, you could get this error. Note that the prompts behind the scenes for Copilot commands also take up tokens, so please limit your message length and max tokens to avoid this error. (For QA with Unlimited Context, use the "QA: Active Note" chain in the dropdown! Requires Copilot v2.1.0.)
- Reference issue: logancyang#1 (comment)
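If you want a rough, client-side sense of whether a message will fit before sending it, you can estimate the token count. Exact counts come from the model's tokenizer (e.g. tiktoken); the ~4-characters-per-token rule below is only a heuristic for English text.

```typescript
// Sketch: a rough guard against context_length_exceeded. Not an exact count.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4); // heuristic: ~4 characters per token
}

function fitsContext(
  prompt: string,
  maxResponseTokens: number,
  contextLimit = 4096 // GPT-3.5; use 8192 for GPT-4
): boolean {
  return estimateTokens(prompt) + maxResponseTokens <= contextLimit;
}
```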
When opening an issue, please include relevant console logs. You can go to Copilot's settings and turn on "Debug mode" at the bottom for more console messages!
- Integration with more LLMs, including open-source and local ones
- More standard prompts that can be used with commands
- Unlimited context for a collection of notes
If you are enjoying Copilot, please support my work by buying me a coffee here: https://www.buymeacoffee.com/logancyang
Please also help spread the word by sharing about the Copilot for Obsidian Plugin on Twitter, Reddit, or any other social media platform you use.
You can find me on Twitter @logancyang.