Vellum Review

8.8/10

Run a personal or team AI assistant across chat, email, memory, channels, and workflows with managed or self-hosted deployment.

Review updated May 2026 · By The AI Way Editorial · Tested 133+ tools across the site · 5 min read
Vellum · AI Agents · CLI Tool · iOS App · Mac App · Web-Based · Freemium from $20.00/mo

Our Verdict

Vellum is worth opening when you want an assistant that does more than answer prompts and can actually persist across memory, channels, tools, and devices. Its strongest edge is the combination of assistant capability with deployment control, permission levels, and cross-surface access, which is much closer to an operating environment than a chat box. But that same depth makes it heavier than a casual assistant, so teams without clear workflow intent may find it more complex than they need.

Try it
Free to start, then pay when the limits stop you. Starts at $20.00 USD.

Pros

  • It stays available across web, macOS, iOS, CLI, and messaging-style channels, which makes the assistant useful in real day-to-day work instead of living in one browser tab.
  • The managed versus self-hosted split gives teams a meaningful trust and deployment choice that many assistant products never explain clearly.
  • Permission levels like strict and conservative make the operational boundary visible before the assistant touches live systems.

Cons

  • The product surface is broad enough that onboarding will feel heavier than a simple personal AI assistant.
  • Its value depends on actually wiring memory, channels, and workflows into real work, so lightweight users may never reach the payoff point.
  • Teams that do not care about deployment control or system permissions may be paying for more platform depth than they need.

Should you use it?

Best for: Running a persistent AI assistant across devices, channels, and workflows when you care about memory, permissions, and where the assistant is deployed.

Skip it if: You only want a lightweight prompt box, or your team does not need an assistant operating across connected tools and channels.

Is it worth the price?

Freemium · Starts at $20.00 USD

The entry pricing is approachable enough to test, but the real cost question is not the monthly number. It is whether you will actually use the memory, channels, deployment control, and workflow depth often enough to justify a platform instead of a simpler assistant.

The Free Tier

A public free starter path exists before paid tiers begin.

Paid Upgrade
$20/month

Paid access expands the assistant beyond the starter path and makes the fuller cross-surface platform practical for ongoing use.

One thing to know before you start

Decide your trust model before you test the assistant. Vellum is much easier to evaluate when you already know whether you want managed convenience or self-hosted control.

What people actually use it for

Give one assistant persistent memory across work apps and communication channels

This is the clearest reason to use Vellum because the product is built around more than one chat surface. The public pages show memory, shared memory, email, Slack, SMS, WhatsApp, browser use, and device apps, which means the assistant can stay useful outside a single conversation thread. That matters when the real goal is continuity across work contexts rather than asking isolated one-off questions in a browser tab.

Run an assistant in a stricter trust model than a normal AI chat app

Vellum is a good fit when the team actually cares where data and tokens live, how files are stored, and how much authority the assistant has before it acts. The public trust and security language makes deployment mode and permission levels first-class product decisions, not hidden implementation details. That is useful for teams that want an assistant in production-adjacent workflows but are not willing to hand it unrestricted access without guardrails.

Use one assistant across desktop, mobile, and command line instead of siloed clients

Many assistant products still feel like one interface attached to one use case. Vellum stands out because the public product surface includes web, macOS, iOS, and CLI access. That is useful when work moves between meetings, desktops, terminals, and messages during the day and the assistant needs to stay available in all of them. It is less compelling for users who only want a browser-based AI helper and do not plan to carry the assistant into different work surfaces.

What does Vellum actually do?

A lot of assistant products still behave like dressed-up chat windows. They can answer questions, but they do not really persist across the places where work happens, and they do not give much control over how much they can do once connected to real systems. Vellum is aimed at a more ambitious problem. The public product pages show an assistant with memory, shared memory, channels, email, calendar, browser use, workspace apps, and device clients across web, macOS, iOS, and CLI. That makes the product feel closer to an assistant operating layer than to a single interface for asking smarter questions.

The strongest signal on the site is not just capability, but control. Vellum publicly explains the difference between managed and self-hosted deployments, where secrets and files are stored, and how permission levels like strict, conservative, relaxed, and full access shape the assistant’s reach. That is important because it tells you the product expects real operational use, not only casual experimentation. Teams that care about trust boundaries can decide whether they want Vellum to stay more locked down or act more aggressively inside connected systems, which gives the platform a level of deployment maturity many assistant tools do not expose clearly.
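The four tier names above (strict, conservative, relaxed, full access) come from Vellum's public pages, but the product does not publish how the ladder is enforced internally. As a mental model only, here is a minimal Python sketch of how an ordered permission ladder can gate assistant actions; the action names and auto-run thresholds are hypothetical, not Vellum's actual policy:

```python
# Conceptual sketch of an ordered permission ladder, NOT Vellum's implementation.
# Tier names are from Vellum's public docs; everything else is an assumption.

# Tiers ordered from most restrictive to least restrictive.
TIERS = ["strict", "conservative", "relaxed", "full access"]

# Hypothetical mapping: the minimum tier at which each action runs
# without a human approval step.
MIN_TIER_FOR_AUTO = {
    "read_calendar": "strict",          # read-only actions always run
    "send_email": "conservative",
    "browse_web": "relaxed",
    "run_shell_command": "full access",  # highest-impact action
}

def runs_automatically(action: str, tier: str) -> bool:
    """True if `action` runs without approval at permission level `tier`."""
    return TIERS.index(tier) >= TIERS.index(MIN_TIER_FOR_AUTO[action])

print(runs_automatically("send_email", "strict"))        # False: needs approval
print(runs_automatically("send_email", "conservative"))  # True
print(runs_automatically("run_shell_command", "relaxed")) # False
```

The point of the sketch is the shape of the decision, not the specific thresholds: each step up the ladder widens what the assistant may do on its own, which is why choosing a tier before connecting live systems matters.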

The tradeoff is that all this depth only pays off if you genuinely need it. If your use case is mostly prompt writing or occasional research help, Vellum’s memory model, channel support, deployment choices, and permission system can feel like more platform than problem. Even though the entry pricing is not extreme, the real cost is adoption complexity: deciding how the assistant should behave, where it should run, what tools it should access, and how much authority it should get. Vellum works best for people and teams who already know they want an assistant embedded into real workflows, not for someone shopping for the lightest possible AI companion.

What you can do with it

  • Run one assistant across chat, email, memory, workflows, and connected channels instead of keeping context trapped in separate apps.
  • Choose between managed and self-hosted deployment depending on how much infrastructure and data control your team needs.
  • Use permission levels to limit how much power the assistant has before it touches live systems or sensitive workflows.
  • Access the assistant across web, macOS, iOS, and CLI surfaces rather than locking it into one interface.

Technical details

Platform: Web app, macOS app, iOS app, CLI
Deployment: Managed platform or self-hosted
API available: No public API center surfaced in captured pages


Key Questions

Is Vellum mainly a chat app or a broader assistant platform?
It is much broader than a normal chat app. The public pages show memory, channels, workflows, browser use, desktop and mobile apps, and deployment options rather than a single conversational surface.
Can teams choose where Vellum runs and how much access it gets?
Yes. The public trust and security docs explicitly separate managed and self-hosted deployment and describe multiple permission levels for assistant behavior.
What platforms can you use Vellum on?
The captured public pages show web, macOS, iOS, and CLI access. That gives the assistant a much wider surface than a browser-only tool.
What is the public paid entry point right now?
The captured pricing docs show paid tiers starting at $20 per month, with a public free starter path available before that.