Introducing Cursor Automations

Blog · Docs · Try it out!


We’re excited to introduce Cursor Automations: always-on cloud agents that run on a schedule or in response to events!

Imagine an agent that reviews every PR for security vulnerabilities before you even open Slack. Or one that triages bug reports overnight, finds the root cause, and has a fix ready by morning. Or a weekly digest that summarizes everything your team shipped, all automatically. That’s what automations enable.

Define a trigger, write a prompt, and choose which tools the agent can use. When triggered, a cloud agent spins up, follows your instructions, and verifies its own output. Agents can open PRs, comment on code, send Slack messages, call MCP servers, and use Memories to learn across runs.
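As a rough sketch of those three pieces (the field names and tool identifiers below are hypothetical, not Cursor's actual configuration schema):

```python
# Hypothetical sketch of an automation's three parts: a trigger,
# a prompt, and a tool allowlist. Field names are illustrative,
# not Cursor's actual configuration schema.
automation = {
    "trigger": {"type": "pull_request.opened"},
    "prompt": "Review this PR for security vulnerabilities and comment on findings.",
    "tools": ["github_comment", "slack_message"],
}

def allowed(tool: str) -> bool:
    # A run would only be permitted to call allowlisted tools.
    return tool in automation["tools"]
```

When the trigger fires, a cloud agent would receive the prompt and be restricted to the listed tools.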

Create a new automation at https://cursor.com/automations/new, or start from a template in the marketplace. Learn more in our docs!

We’d love your feedback!

  • What workflows are you automating?
  • How is the memory tool working across repeated runs?
  • What triggers or integrations would you like to see added?

If you’ve found a bug, please post it in Bug Reports instead, so we can track and address it properly, but also feel free to drop a link to it in this thread for visibility.


Can we create an automation that sends a Slack message when a PR hasn’t received any reviews for 24 hours?

Hey, what exactly is the reason we cannot use Composer 1.5 for Automations anymore?


There are a ton of issues when attempting to use Automations with a GitHub Enterprise connection. Nothing can connect to the existing GitHub integration.

Automations need an event to trigger on, and the absence of an event isn’t enough to fire one.

You could, however, run a daily automation to collect stale PRs and send a Slack message! Not exactly what you proposed, but maybe close enough!
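The selection logic for such a daily run is simple; here is a minimal sketch in Python, where the PR fields (`opened_at`, `review_count`) are assumptions rather than any specific API's schema:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=24)

def stale_prs(open_prs, now=None):
    """Return PRs open for more than 24 hours with no reviews.

    Each PR is a dict with hypothetical fields: 'title',
    'opened_at' (an aware datetime), and 'review_count'.
    """
    now = now or datetime.now(timezone.utc)
    return [
        pr for pr in open_prs
        if pr["review_count"] == 0 and now - pr["opened_at"] > STALE_AFTER
    ]
```

The automation's prompt would then ask the agent to post the resulting list to a Slack channel.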

Please can you support multiple repos in a single automation? I want to write a weekly changelog of all the work the team has done, to share with customer success and GTM, but right now I somehow need to do this across every repo and then hope the runs don’t all overlap with each other. I’m loving the direction, but to me multi-repo support is key to keeping things manageable and avoiding duplicate alerts in Slack, etc.

I am also surprised I can’t:

  • create automations natively in Cursor as a view, but I imagine that is coming
  • use the Composer 1.5 model for the tasks
  • send Slack messages to myself rather than a channel (is that possible with the rules?)
  • have more logic in the triggers, e.g. and/or. This would be helpful, especially if/once multiple repos are allowed; if I want to have a daily and a weekly summary in the same automation rule, I can’t seem to manage that easily or clearly
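The and/or trigger logic asked for in the last point could be modeled as small predicate combinators over the trigger event; a sketch, with hypothetical event fields:

```python
# Minimal and/or combinators over a trigger event.
# The event fields ("schedule", "repo") are hypothetical.
def all_of(*conds):
    return lambda event: all(c(event) for c in conds)

def any_of(*conds):
    return lambda event: any(c(event) for c in conds)

is_daily = lambda e: e.get("schedule") == "daily"
is_weekly = lambda e: e.get("schedule") == "weekly"
def in_repo(name):
    return lambda e: e.get("repo") == name

# One automation covering both a daily and a weekly summary:
trigger = all_of(any_of(is_daily, is_weekly), in_repo("acme/backend"))
```

This is just to show the shape of the feature; Cursor would presumably express it declaratively in the trigger configuration.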

This has seriously been needed, so big kudos for taking the first step of something very important. This is the type of thing that will build a moat around using Cursor if your team’s automations are all on the platform too.

What is the pricing approach, or will that be revealed in due time? Bugbot was way too expensive per person IMO, and Automations is vastly more powerful but seems to be a team feature, so please figure out how to incorporate it into the existing pricing, or make it easy to get into, to encourage usage and evolution of the product.


Is it necessary to set up a cloud environment for a codebase before using an Automations workflow?

This is exciting!

Two thoughts on what AI automation needs as it scales:

1. Intelligent model routing — tooling to maximize user budget and build better products

2. Security guardrails — prevent secrets from slipping through as automation increases

WORKFLOW I’M AUTOMATING: INTELLIGENT MODEL SELECTION

I built Model Matchmaker (GitHub link in my forum profile), an open-source hook that classifies prompts and routes them to the right model. Users get 50-70% more prompts within the same budget by, for example, not burning Opus on git commits and not using Haiku for architecture decisions.
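To illustrate the idea (the keyword heuristics and model tiers below are illustrative assumptions, not Model Matchmaker's actual classification rules):

```python
# Sketch of heuristic prompt classification for model routing.
# Keywords and model names are illustrative, not the real tool's logic.
SIMPLE_HINTS = ("git commit", "rename", "format", "typo")
COMPLEX_HINTS = ("architecture", "design", "trade-off")

def route_model(prompt: str) -> str:
    text = prompt.lower()
    if any(h in text for h in COMPLEX_HINTS):
        return "opus"    # expensive, strongest reasoning
    if any(h in text for h in SIMPLE_HINTS):
        return "haiku"   # cheap and fast for mechanical tasks
    return "sonnet"      # balanced default
```

A real classifier would be more robust than keyword matching, but even this shape avoids the two failure modes above.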

Proven demand: 120+ stars and 12 forks in 48 hours

Proof it works:

- Retroactive analysis of my own prompts: 70% were overpaying (simple tasks on Opus)

- I’m seeing 3-5x faster iteration on simple tasks (Haiku vs Opus/Sonnet)

- I’m building more within the same budget → better projects → better Cursor showcases

AUTOMATIONS + INTELLIGENT MODEL ROUTING

Automations would be perfect for intelligent model selection. If a cloud agent spins up to triage bug reports overnight, it should automatically use the cheapest model that can handle the task, not default to the most expensive one.

Users who maximize their budget build more impressive projects, creating better word-of-mouth and showcases. That outcome gets even better with automations running unattended work efficiently.

BLOCKER: MODE AND MODEL METADATA

For intelligent model routing to work in automations (or hooks), we need mode and model metadata in payloads. I submitted a feature request for this (see my other forum posts). Short version: hooks currently can’t see which mode the user is in or which models are available, forcing fragile workarounds.

With that metadata, automations could intelligently route: “This is a triage task, use Haiku. This is architecture, use Opus.” That’s the same problem Model Matchmaker solves for interactive sessions, now applied to unattended cloud agents.

INTEGRATION I’D LOVE TO SEE

Model selection as a first-class automation capability. Not just “use this model” but “use the best model that meets these criteria” (complexity score, code generation vs analysis, etc.).

The classification logic is MIT-licensed and open source. Happy to collaborate on making intelligent model routing a native Cursor capability.

CURSOR RULES FOR SECURITY (OUT OF THE BOX)

A model hallucinated during a credential rotation task and displayed a production secret in the chat UI, directly contradicting the security practices it was implementing. I caught it before damage, but this shouldn’t be possible.

Bug report in my other forum posts.

WHAT I’M PROPOSING

Include secrets safety protections as default Cursor rules that:

1. Block dangerous commands (cat .env, echo $SECRET) with clear error messages

2. Warn on risky patterns (hardcoded keys, fallback values like process.env.KEY || 'sk_test_…')

3. Force .gitignore verification before git add to prevent accidental commits

4. Provide an escape hatch (cursor.secretsProtection: false) for users who need it
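A first cut of rules 1 and 2 could be a pre-execution check over commands and source changes; a sketch, with deliberately incomplete example patterns:

```python
import re

# Illustrative patterns only, not a complete blocklist.
BLOCKED_COMMANDS = [
    r"\bcat\s+\.env\b",          # reading secret files into the AI context
    r"\becho\s+\$\w*SECRET\w*",  # printing secret environment variables
]
RISKY_SOURCE_PATTERNS = [
    r"process\.env\.\w+\s*\|\|\s*['\"]",  # hardcoded fallback for a secret
]

def check_command(cmd: str):
    """Return (allowed, reason) for a shell command an agent wants to run."""
    for pat in BLOCKED_COMMANDS:
        if re.search(pat, cmd):
            return False, f"blocked: matches {pat!r}"
    return True, "ok"

def warn_on_source(code: str):
    """Return the risky secret-handling patterns found in source code."""
    return [pat for pat in RISKY_SOURCE_PATTERNS if re.search(pat, code)]
```

A shipped version would need a far richer pattern set and an allowlist mechanism, but the enforcement point (check before the Shell tool runs) is the important part.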

WHY THIS BENEFITS CURSOR

Zero-day protection for new users — prevents embarrassing security incidents before they happen

Competitive positioning — Cursor becomes known as security-conscious by default (users appreciate this, competitors don’t have it)

Prevents negative word-of-mouth — better to be proactive than reactive to “I accidentally exposed my API key to an AI”

Reduces support burden — stops the “oops I exposed secrets” support tickets

HOW I BUILT THIS

Secrets Safety Rules (in my .cursorrules):

- NEVER run: cat .env, echo $SECRET, secret-viewing commands

- NEVER run secret-generating commands via Shell tool (output enters AI context)

- NEVER hardcode secrets or use fallback values

- ALWAYS add secret files to .gitignore BEFORE creating them

Public Content Security (in .cursor/rules/public-content-security.mdc):

- Prevents accidental exposure in open-source repos, docs, READMEs

- Blocks Firebase IDs, storage buckets, Cloud Function URLs, collection names

- Forces .gitignore verification before git add

PROOF IT WORKS

These rules have caught several near-misses:

- Agent tried to cat .env to check a variable name → blocked

- Agent tried to log Firebase secret to verify it loaded → blocked

- Agent created a repo and tried git add . before .gitignore was complete → blocked

OPEN SOURCE RELEASE

I’m releasing these as part of a broader Cursor Toolkit repo (rules, skills, and workflows for production-grade AI-assisted development). The secrets safety rules are one component, alongside git workflow best practices, model selection guidance, cross-platform development patterns, and proposal writing frameworks.

Feel free to incorporate these rules into the default agent system prompt or ship as default .cursorrules templates. Happy to collaborate on adapting them for broader use!

Love the direction. Some things needed:

  1. Multi-repo is desperately needed. Anything that isn’t a monolith can only partially use this.
  2. In the same vein: automations being tied to a single repo means I need to create the same generic “bug fixer agent from a webhook” for every repository, and each one gets a unique webhook, etc.
  3. Triggers - rather than implementing a bunch of first party triggers, give us a webhook request body mapper. Some tools dump massive JSON bodies and that can cause context rot in the agents. I’ve gotten around this for now by leveraging n8n.
  4. Another commenter mentioned this, but some better Boolean logic in the existing triggers would be great too. We’re an open source project, so have to protect from adversarial pull request creation. It’d be nice to filter down to groups (core committers) instead of explicit allow lists of people.
  5. Let me add skills
  6. Cloud Agent screen recording didn’t make it into Automations. Unless you open up the API you have for Slack/Linear → Cloud Agents (beyond what’s publicly available, like the repository routing), Automations are the only way for Jira and other chat users (like Mattermost 🙂) to do similar tasks.
  7. Automation runs are buried underneath the automation. I understand these are supposed to be ephemeral runs and not touched, but sometimes it’d be nice to go in and steer the agent.
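The body mapper from point 3 could be as simple as a declarative path-to-field mapping applied before the payload reaches the agent; a sketch (the mapping format is invented for illustration, not a Cursor feature):

```python
def map_body(body: dict, mapping: dict) -> dict:
    """Extract only the mapped fields from a large webhook JSON body.

    `mapping` maps output field names to dotted paths into the body,
    keeping huge third-party payloads from bloating agent context.
    """
    def get_path(obj, path):
        for key in path.split("."):
            obj = obj[key]
        return obj
    return {name: get_path(body, path) for name, path in mapping.items()}

# Example: trim a verbose issue-tracker webhook down to three fields.
mapping = {
    "title": "issue.summary",
    "reporter": "issue.reporter.name",
    "url": "issue.link",
}
```

This is essentially what the n8n workaround does today, moved into the trigger itself.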

One last thing: when you trigger an automation via a webhook, the response contains a UUID. If you could inject that UUID into the sandbox via an environment variable and make the agent aware of it, that would be great. I’m exposing a lot of tools to the automation through an n8n MCP server (wrapping internal APIs), and having the automation send this ID as part of the tool request body would be helpful.
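If that UUID were injected as an environment variable (the name AUTOMATION_RUN_ID below is hypothetical; Cursor does not currently inject it), attaching it to every MCP tool request would be a one-liner:

```python
import os

def with_run_id(tool_args: dict) -> dict:
    """Attach the automation run's UUID to an MCP tool request body.

    AUTOMATION_RUN_ID is a hypothetical env var name, assumed to be
    injected into the sandbox by the automation runtime.
    """
    run_id = os.environ.get("AUTOMATION_RUN_ID")
    return {**tool_args, "run_id": run_id} if run_id else tool_args
```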