
MCP Server

CameraGraph includes a built-in MCP (Model Context Protocol) server that lets AI assistants like Claude control the app for you. Create scenes, add layers, apply filters, start recording, and more, just by asking.

Enabling the server

  1. Open Settings in CameraGraph.
  2. Navigate to MCP Server.
  3. Toggle the server on.

The server runs on port 5225 by default.
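If you want to confirm the server is listening before wiring up a client, a quick TCP check works. This is a sketch, not part of CameraGraph; adjust the port if you have changed it in Settings:

```python
import socket

def mcp_server_reachable(host: str = "localhost", port: int = 5225,
                         timeout: float = 1.0) -> bool:
    """Return True if something is accepting connections on the MCP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Nothing listening (or the host is unreachable within the timeout).
        return False
```

Note this only confirms a listener on the port, not that it actually speaks MCP.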

Claude Desktop

Add the following to your Claude Desktop configuration file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

```json
{
  "mcpServers": {
    "cameragraph": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:5225/mcp"
      ]
    }
  }
}
```

Save the file and restart Claude Desktop. You should see CameraGraph listed as an available MCP connection.

Claude Code

Run this command in your terminal:

```shell
claude mcp add cameragraph --transport http http://localhost:5225/mcp
```

Claude Code will now be able to interact with CameraGraph during your sessions.

Once connected, you can ask Claude to control CameraGraph using natural language. Here’s a summary of what’s possible:

  • Create, rename, and delete scenes
  • Switch between scenes
  • Reorder scenes
  • Add camera feeds, screen shares, images, text, and shapes
  • Position, resize, and layer elements on the canvas
  • Apply shape masks to camera elements
  • Apply color filters and adjustments
  • Add blur, sharpen, or other visual effects
  • Remove or modify existing filters
  • Start and stop recording
  • Begin and end a live stream
  • Check current recording or streaming status
  • Change the canvas resolution and background
  • Arrange and align elements

Virtual camera for meetings

Select CameraGraph’s virtual camera in Zoom, Meet, or Discord. Instead of the default webcam, you’re running through CameraGraph, so you can have a name/title lower-third, a subtle film look on your camera, or a branded frame. Pre-build a few scenes (camera + screen-share PiP, full screen share, BRB card) and switch between them from CameraGraph, or have an agent cycle through them on a schedule.
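The scheduled cycling can be as simple as a loop that fires a scene switch at a fixed interval. The sketch below is transport-agnostic: `switch` stands in for whatever actually sends the MCP scene-switch request, since the exact tool names are not documented here.

```python
import itertools
import time
from typing import Callable, Sequence

def cycle_scenes(switch: Callable[[str], None],
                 scenes: Sequence[str],
                 interval: float = 30.0,
                 rounds: int = 1) -> None:
    """Switch through `scenes` in order, `rounds` times, pausing `interval`
    seconds between switches. `switch` performs the actual MCP call."""
    for name in itertools.islice(itertools.cycle(scenes), len(scenes) * rounds):
        switch(name)
        time.sleep(interval)
```

An agent would run this with `switch` bound to the relevant CameraGraph tool call.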

Agent-driven presentations

Each scene is a “slide.” Your agent walks through a script: title card, camera with talking points as text overlays, screen share for the demo, back to camera for Q&A. No more fumbling with Keynote or sharing your screen manually. You just talk, the agent drives.

Tutorial recordings

Tell the agent what you want to teach. It sets up the scenes: intro card, screen recording of your IDE with camera PiP, browser showing the result, outro. Hit record, walk through each scene on a timer, stop recording. An edited-looking video with zero post-production.

Live streaming

Streaming a game or reacting to something? The agent manages scene switching, applies filters for emphasis (glitch on a fail, bloom on a win), and updates text overlays with scores or commentary. One-person production crew.

Live dashboards

Use browser and text elements to build a live dashboard: pull data from an API, update text overlays in real-time, pipe it through the virtual camera into a meeting or stream. A self-updating “TV screen” for your team.
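The moving parts here are just fetch, format, push. A sketch of the formatting step (the overlay layout and field names are assumptions; the push would be an MCP text-update call):

```python
def format_overlay(metrics: dict) -> str:
    """Flatten API metrics into a single overlay line, ticker-style."""
    return "  |  ".join(f"{key}: {value}" for key, value in metrics.items())

# A polling loop would fetch metrics from your API on a timer, then send
# format_overlay(metrics) to a text element through the MCP server.
```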

Repeatable promo clips

Script a 30-second clip: logo intro, camera with lower-third, screen demo, outro with CTA text. The agent builds it, records it, done. A repeatable template you can re-run whenever you ship something new.

FAQ

Can I use CameraGraph without a camera?

Yes. You can use CameraGraph with screen shares, images, text, and shapes without any camera.

Do I need to be recording or streaming to use the MCP server?

No. The MCP server controls the app regardless of whether you’re actively recording or streaming.

Can I use it with other AI assistants besides Claude?


Any AI assistant or tool that supports the Model Context Protocol can connect to CameraGraph’s MCP server using the same endpoint.
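At the wire level, the endpoint speaks JSON-RPC 2.0, so any MCP-capable client begins by POSTing an `initialize` request to `http://localhost:5225/mcp`. A minimal payload looks roughly like this sketch (the protocol version string and client name are illustrative; clients negotiate the version they support):

```python
import json

initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # illustrative; negotiated with the server
        "capabilities": {},
        "clientInfo": {"name": "my-client", "version": "0.1"},
    },
}

# This is the body an MCP client would POST to http://localhost:5225/mcp.
body = json.dumps(initialize_request)
```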

Is the MCP server accessible over the network?


By default, the server only listens on localhost and is not exposed to your network.