A CLI tool and library for interacting with various Large Language Models (LLMs).
Séance provides a unified interface to communicate with different providers like OpenAI, Google Gemini, and Anthropic directly from your terminal.
- Unified CLI: A single command to interact with multiple LLM providers.
- Provider Support: Currently supports OpenAI, Google Gemini, Anthropic, OpenRouter, and LMStudio.
- Simple Configuration: Configure all your API keys in a single INI file.
- Extensible: Designed to be easily extendable with new providers.
You can install Séance using nimble:
nimble install seance
Before using Séance, you need to configure your API keys. Create a configuration file at ~/.config/seance/config.ini.
Here is an example configuration:
# ~/.config/seance/config.ini
[seance]
default_provider = gemini
auto_session = true
[openai]
key = sk-...
model = gpt-5-nano
[gemini]
key = ...
[anthropic]
key = ...
[openrouter]
key = ...
[lmstudio]
# No key is needed for LMStudio
endpoint = http://localhost:1234/v1/chat/completions
model = Qwen3-4B-Thinking-2507-MLX-8bit
If your configuration file becomes corrupted, Séance will detect it and offer to delete the file for you.
Once configured, you can use the seance command to interact with your chosen LLM.
# Get a response from the default provider
seance chat "What is the speed of light?"
# Specify a provider for your query
seance chat "Explain the theory of relativity" --provider gemini
# Write your pull request descriptions
git diff main... | seance chat "Write a conventional commit PR"
Séance supports session management, allowing you to continue conversations and manage session history.
By default, Séance will automatically create a session for each conversation. You can disable this behavior by setting auto_session = false in your ~/.config/seance/config.ini file.
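For example:

# ~/.config/seance/config.ini
[seance]
auto_session = false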
- Listing Sessions: To see a list of all your active sessions, use the list command:

  seance list

- Continuing a Session: To continue a previous conversation, use the --session flag with the session ID:

  seance chat "Tell me more about that." --session <session_id>

- Pruning Sessions: To clean up old sessions, use the prune command. This will delete all sessions older than 10 days, or whatever you specify with --days:

  seance prune --days 30

- Disabling Sessions for a Single Chat: To prevent a session from being loaded or saved for a specific chat, use the --no_session flag:

  seance chat "This chat should not be saved." --no_session

- Using a JSON Schema: To force the output to be in a specific JSON format, use the --json flag. For the Gemini, Anthropic, OpenAI, and LMStudio providers, you can also use the --schema flag to provide a JSON schema to which the output must conform:

  # Create a schema file
  echo '{"type": "object", "properties": {"recipe_name": {"type": "string"}}}' > schema.json

  # Use the schema
  seance chat "Give me a recipe for chocolate chip cookies" --provider gemini --json --schema schema.json
Séance provides a clean, simple API for interacting with LLMs programmatically.
Just import seance and start chatting:
import seance
# Basic chat with default provider
let response = chat("What is 2 + 2?")
echo response # "4"
# Conversation context is maintained automatically
let response1 = chat("My name is Alice")
let response2 = chat("What's my name?") # Remembers "Alice"
# Specify provider and model
let response3 = chat("Tell me a joke", some(OpenAI), some("gpt-4o"))
echo response3
# Add a system prompt
let response4 = chat("Explain recursion", systemPrompt = some("You are a helpful coding assistant"))
echo response4
Control conversation context with explicit sessions:
import seance
# Create sessions with system prompts
var workSession = newSession(some("You are a helpful coding assistant"))
var personalSession = newSession(some("You are a friendly personal assistant"))
# Work conversation - much more intuitive!
let workResponse1 = workSession.chat("How do I optimize SQL queries?", some(OpenAI))
let workResponse2 = workSession.chat("What about indexing?") # Continues work context
# Personal conversation (separate context)
let personalResponse = personalSession.chat("What's a good pasta recipe?", some(Anthropic))
# Reset global session
resetSession(some("You are now a creative writing assistant"))
let response = chat("Write a short story about a robot")
The chat functions support these optional parameters:
# Global session chat
proc chat*(content: string,
           provider: Option[Provider] = none(Provider),  # OpenAI, Anthropic, Gemini, OpenRouter
           model: Option[string] = none(string),         # Override model from config
           systemPrompt: Option[string] = none(string),  # Set system prompt
           jsonMode: bool = false,                       # Request JSON output
           schema: Option[string] = none(string)         # Path to a JSON schema file
          ): string

# Session-specific chat
proc chat*(session: var Session,
           content: string,
           provider: Option[Provider] = none(Provider),
           model: Option[string] = none(string),
           systemPrompt: Option[string] = none(string),  # Only used if session is empty
           jsonMode: bool = false,                       # Request JSON output
           schema: Option[string] = none(string)         # Path to a JSON schema file
          ): string
You can also get a JSON response from the providers that support it.
import seance
import std/json
# Get a JSON response
let response = chat("Give me a recipe for chocolate chip cookies", jsonMode = true)
let jsonResponse = parseJson(response)
echo jsonResponse["recipe_name"].getStr()
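The schema parameter from the signatures above can constrain that JSON further. A minimal sketch, assuming the schema.json file from the CLI example is in the working directory:

import seance
import std/json
import std/options  # for some(); seance may already export this

# Reuse schema.json from the CLI example above
let structured = chat("Give me a recipe for chocolate chip cookies",
                      jsonMode = true,
                      schema = some("schema.json"))
echo parseJson(structured)["recipe_name"].getStr()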
Both approaches work:
import seance

# Global session
let globalResponse = chat("Hello!")

# Explicit session (more intuitive)
var session = newSession()
let sessionResponse = session.chat("Hello!")
That's it! No complex message arrays, no role management, just simple text in and text out with automatic conversation handling.
Séance uses Carapace to generate shell-specific completions. You can generate Séance's Carapace specification file as follows:
seance completion
Follow Carapace's documentation to install the completions for your shell.
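With carapace-bin, for example, installation might look like this (the spec path, file name, and activation line are assumptions; check Carapace's documentation for your shell):

# Assumes `seance completion` writes the spec to stdout
seance completion > ~/.config/carapace/specs/seance.yaml

# Enable Carapace-generated completions (bash/zsh)
source <(carapace _carapace)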
To contribute to Séance or run it from source:
# 1. Clone the repository
git clone https://github.com/emre/seance.git
cd seance
# 2. Install development dependencies
nimble install -d --accept
# 3. Run the tests
nimble test
This project is licensed under the MIT License.