AI-Agnostic

Use any model. Switch anytime. Keep everything.

Your prompts, agents, and repositories work across every supported model. Choose the best model for each task, switch providers mid-project, or run your own private models.

Your intelligence layer is model-independent.

AI repositories, prompt libraries, agents, writing styles, and training datasets all sit above the models. They work the same whether you're running GPT, Claude, Gemini, or a privately deployed model. Set different defaults for different modules: one model for drafting, another for review, a third for agent conversations. Switch mid-task without losing context. When a new model launches, your entire workflow works with it immediately.

Model Selection

Choose the right model for every task

Different models excel at different tasks. Set per-module defaults: use one model for high-quality drafting, another for fast agent conversations, and a third for document review. Override defaults on any individual task. The platform routes to your chosen model while maintaining the same context assembly, the same prompt libraries, and the same output quality controls regardless of which model processes the request.

One workflow. Any model. Full control.

Per-Module Defaults

Assign different models to drafting, agents, review, and art generation independently

Hot-Swapping

Switch models mid-task without losing context, prompts, or configuration

Private Model Support

Connect privately deployed LLMs for sensitive matters that require on-premise processing

Choose

Select from any supported model or connect your own

Configure

Set per-module defaults and fallback models

Switch

Hot-swap mid-task with full context preserved
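The hot-swap step above can be sketched as keeping all task state in a model-independent structure, so switching only changes which backend handles the next request (the class and field names here are hypothetical, not the platform's real objects):

```python
class Session:
    """Minimal sketch: conversation context lives outside the model binding."""

    def __init__(self, model: str):
        self.model = model
        self.messages: list[dict] = []   # model-independent context

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def switch_model(self, new_model: str) -> None:
        self.model = new_model           # context is untouched

s = Session("model-a")
s.add("user", "Draft the introduction.")
s.switch_model("model-b")                # mid-task swap, history preserved
```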

Portability

Your AI briefcase travels with you

Every professional carries a briefcase of AI repositories: prompt libraries, agents, writing styles, and training datasets for their document types. Export your entire configuration. Import it elsewhere. If a provider changes pricing or policies, your workflows move with you instantly. Your intelligence is never locked inside any single vendor.
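As a sketch of what such an export might look like (the JSON schema below is an assumption for illustration, not the platform's actual format):

```python
import json

def export_repository(repo: dict) -> str:
    """Serialize the whole configuration to a portable JSON blob."""
    return json.dumps(repo, indent=2, sort_keys=True)

def import_repository(blob: str) -> dict:
    """Restore a repository from a previously exported blob."""
    return json.loads(blob)

# Hypothetical "briefcase" contents for the round trip.
briefcase = {
    "prompts": ["summarize-contract"],
    "agents": [{"name": "reviewer", "model_default": "model-reasoning"}],
    "writing_styles": ["firm-house-style"],
}
```

Because the blob is plain data, nothing in it is tied to a specific vendor's runtime.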

Your repositories are portable. Your models are replaceable.

Fallback model routing

Set backup models for each module. If a primary model is unavailable or experiencing high latency, the platform automatically routes to your designated fallback without interrupting your workflow or losing context.
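A minimal sketch of that fallback behavior, assuming a generic `call_model` function that raises when a model is unavailable (all names here are hypothetical stand-ins, not a real provider API):

```python
def route_with_fallback(task, models, call_model):
    """Try each model in priority order; return the first success."""
    last_error = None
    for model in models:
        try:
            return model, call_model(model, task)
        except RuntimeError as err:      # unavailable or timed out
            last_error = err
    raise RuntimeError(f"all models failed: {last_error}")
```

The task itself never changes, so context carries through to whichever model ultimately answers.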

Cost-aware model selection

Route routine tasks to cost-effective models and reserve premium models for high-stakes drafting. Per-token costs are tracked and visible, so you can optimize spend without sacrificing quality where it matters.
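As an illustration only (the model tiers and per-token prices below are invented for the example, not real rates):

```python
# Hypothetical prices in dollars per million tokens; not real rates.
PRICES = {"model-premium": 15.00, "model-economy": 0.25}

def pick_model(high_stakes: bool) -> str:
    """Route routine work to the economy tier, premium work to the top tier."""
    return "model-premium" if high_stakes else "model-economy"

def estimate_cost(model: str, tokens: int) -> float:
    """Track spend per request from the token count."""
    return PRICES[model] * tokens / 1_000_000
```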

Future-proof architecture

New models are integrated at the platform level. When the next generation of models launches, your existing prompt libraries, agents, and writing styles work with them immediately. No migration, no reconfiguration, no retraining.

Zero vendor lock-in

Your reasoning objects, context files, and configuration are portable. Export your entire AI repository, including prompts, training data, and agent configurations, and import it on another deployment. If a provider changes pricing or policies, switch instantly.

Common questions

Which AI models does the platform support?

The platform integrates with major foundation models including GPT, Claude, and Gemini, and supports connecting privately deployed LLMs. You choose which model to use for each module — drafting, agents, review, and art generation can each run on different models. New models are added at the platform level, so your workflows work with them automatically.

What happens if I switch models mid-project?

Nothing is lost. Hot-swapping preserves your context, prompts, and configuration, so subsequent requests simply route to the new model and you continue exactly where you left off.

Can I use privately hosted models for sensitive matters?

Yes. You can connect privately deployed LLMs for matters that require on-premise processing and assign them as the default for any module, just like hosted models.

What does 'AI-agnostic' actually mean?

Your prompt libraries, agents, writing styles, and repositories sit above the model layer, so they work identically whether requests are processed by GPT, Claude, Gemini, or a privately deployed model. You can change models or providers without changing your workflows.

See model-agnostic drafting in action

Book a 15-minute demo. We'll show you how prompt libraries, agents, and repositories work identically across every supported model.