Background Agent does not recognize Cursor Commands

Where does the bug appear (feature/product)?

Background Agent (GitHub, Slack, Web, Linear)

Describe the Bug

The Background Agent does not recognize or execute Cursor Commands (e.g., “/request-qa”) even when the Cursor UI highlights them in yellow as if they are recognized. When prompted with a command-oriented instruction, the agent treats it as plain text and proceeds with an alternate flow. It also selected the wrong PR number despite an explicit link in the prompt.

Steps to Reproduce

  1. Open Cursor and start a new Background Agent.
  2. In the prompt, reference a /example command that exists in your repo at .cursor/commands/example.md.
  3. Observe that the agent does not execute the /example command directly.

Expected Behavior

Background Agent should load and honor Cursor Commands context so that slash commands (e.g., “/request-qa”) are executed or at least routed to the appropriate command handler.

Screenshots / Screen Recordings

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.7.52 (user setup)
VSCode Version: 1.99.3
Commit: 9675251a06b1314d50ff34b0cbe5109b78f848c0
Date: 2025-10-17T01:41:03.967Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.26200

For AI issues: which model did you use?

gpt5-high

For AI issues: add Request ID with privacy disabled

My agent is nowhere to be found now :(. A separate issue I’m having is that some agent conversations disappear from my history.

Additional Information

The UI highlighting of “/request-qa” suggests the environment recognizes the command syntax, but the agent runtime does not load or bind the command set.

Does this stop you from using Cursor?

No - Cursor works, but with this issue

Hey, thanks for the report. This confirms the issue: the Background Agent is treating /request-qa as plain text in its reasoning instead of executing the command, even though the UI highlights it correctly.

It looks like a bug where the Background Agent runtime isn’t loading custom commands from .cursor/commands/. A couple of quick questions to complete the picture:

  • Do these commands work correctly in regular Agent mode (not Background Agent)?
  • What’s the content of your .cursor/commands/request-qa.md file?
  • Does this happen with all models or specifically with gpt-5-high?

The mismatch between UI recognition (yellow highlight) and actual execution suggests the frontend recognizes the command, but the backend agent environment doesn’t have access to it.

Hey @deanrie,

  • Yes they do
  • Here it is:

Draft QA Testing Message

You are tasked with creating a comprehensive, non-technical testing plan for our QA engineer @astral.b. This message will be posted to our #testing Slack channel to guide manual testing of PR changes.

Continuous Execution Requirement

  • Execute this workflow end-to-end in a single continuous session.
  • Do not stop after each step; proceed immediately to the next.
  • Only pause if blocked by missing information (use Interactive MCP tools to ask the user for what you need).
  • Before yielding, verify the Definition of Done below.

Workflow

1. Gather PR Context

First, determine the PR number. If the user invoked this command with a PR number argument, use that. Otherwise, get the current branch and fetch the associated PR number:

# Get current branch
current_branch=$(git rev-parse --abbrev-ref HEAD)

# Get PR number for current branch
pr_number=$(gh pr list --head "$current_branch" --json number --jq '.[0].number')

Then collect the following information:

  • PR Diff: Use run_terminal_cmd to execute gh pr diff <PR_NUMBER> | cat to get the files changed
  • PR Description: Use run_terminal_cmd to execute gh pr view <PR_NUMBER> --json body --jq .body | cat
  • CodeRabbitAI Walkthrough: Use run_terminal_cmd to execute gh pr view <PR_NUMBER> --json comments --jq '.comments[] | select(.author.login == "coderabbitai") | .body' | cat | head -n 100 to get the first walkthrough comment
  • Associated GitHub Issues: Use run_terminal_cmd to execute gh pr view <PR_NUMBER> --json body --jq .body | cat and extract any issue references (e.g., #123, closes #456), then fetch each issue with gh issue view <ISSUE_NUMBER> --json title,body --jq '{title, body}' | cat
  • PR Merge Status and Target: Check if PR is merged and what branch it targets with gh pr view <PR_NUMBER> --json state,merged,baseRefName --jq '{state, merged, baseRefName}' | cat
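The issue-reference extraction described above is plain text processing and can be sketched as a small helper. This is an illustrative sketch, not part of the command file; the function name is ours, and the real PR body would come from `gh pr view <PR_NUMBER> --json body --jq .body`:

```shell
# Hypothetical sketch of the issue-reference extraction described above:
# given a PR body on stdin, print each referenced issue number once so it
# can then be fed to `gh issue view <ISSUE_NUMBER>`.
extract_issue_numbers() {
  grep -oE '#[0-9]+' | tr -d '#' | sort -un
}

# Example with an inline body (in the real workflow this would be piped
# from `gh pr view <PR_NUMBER> --json body --jq .body`):
printf 'Fixes #123, closes #456 (see also #123)' | extract_issue_numbers
# → 123
# → 456
```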

2. Analyze Changes and Impact

Based on the gathered context, perform a deep analysis:

  • Identify Changed Systems: Map file changes to the major systems in our codebase (e.g., tournaments, notifications, authentication, profiles, teams, brackets, livestreaming, WYSIWYG editing, etc.)
  • Categorize Change Depth:
    • Deep/Core Changes: Changes to core utilities, propagation systems, shared contexts, or widely-used components that could cause regressions anywhere
    • System-Specific Changes: Changes isolated to specific features (e.g., only streaming, only notifications)
    • UI-Only Changes: Pure frontend/styling changes with minimal logic
  • Identify Deployment Requirements: Check if the changes require:
    • Manual Cloud Function deployment (backend changes to functions/src/)
    • Special emulator setup
    • Staging deployment vs Vercel preview deployment
  • Map User Flows: Identify the complete user flows that could be affected, not just the code that changed

3. Determine Testing Environment

Based on the PR status and changes:

  • If PR is merged to develop:

    • Testing should be on staging.blumint.io
    • Cloud Functions are already deployed automatically - no manual deployment needed
  • If PR is not merged (temporary Vercel deployment):

    • Construct the Vercel preview URL from the branch name using this pattern:
      • Format: agora-git-{branch-name}-blumint.vercel.app
      • Example: For branch feat/improve-username, the URL is agora-git-feat-improve-username-blumint.vercel.app
      • Note: Replace slashes (/) with hyphens (-) in the branch name
    • You can verify by running: gh pr view <PR_NUMBER> --json headRefName --jq .headRefName to get the branch name
    • If backend changes exist: List the exact Cloud Functions that need to be deployed locally
    • Specify deployment commands (e.g., firebase deploy --only functions:firestore/user)
  • If local testing is needed:

    • Specify the exact setup steps (branch to checkout, functions to deploy, special URLs like 127.0.0.1:3000, etc.)
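The preview-URL construction above amounts to one string substitution. A minimal sketch under the pattern described in step 3 (the helper name is ours; the agora-git-{branch-name}-blumint.vercel.app format comes from the command file):

```shell
# Hypothetical helper: build the Vercel preview URL from a branch name,
# following the agora-git-{branch-name}-blumint.vercel.app pattern above.
preview_url() {
  local branch="$1"
  # Replace slashes with hyphens, as the naming convention requires
  echo "agora-git-${branch//\//-}-blumint.vercel.app"
}

# In the real workflow the branch name would come from:
#   gh pr view <PR_NUMBER> --json headRefName --jq .headRefName
preview_url "feat/improve-username"
# → agora-git-feat-improve-username-blumint.vercel.app
```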

4. Draft the Testing Message

Create a clear, actionable Slack message following this structure:

Opening Line:

  • Tag @astral.b
  • State where to test (staging.blumint.io, Vercel URL, or local setup)
  • Brief summary of what changed

Setup Steps (if needed):

  • Branch to checkout in Cursor (if local testing required)
  • Cloud Functions to deploy (be specific with paths like firestore/user, auth/onUserCreate)
  • Special URLs or configurations needed
  • Any test data or accounts needed

Regression Testing (if deep changes):

  • List the major user flows that could have regressions
  • Be specific about actions to test (e.g., “bracket progression (scoring, reverting, etc)”, “team operations (invite, delete, join, leave)”)
  • Focus on the systems that were changed or depend on changed code

New Functionality Testing (always include):

  • List each new feature or fix
  • For each item, provide:
    • Clear description of what should now work
    • Specific steps to verify it works
    • Expected behavior/outcome
  • Be granular - break down complex features into testable steps

Edge Cases (if applicable):

  • Test different user roles (admin, regular user, non-signed-in)
  • Test different states (with/without phone number, with/without teams, etc.)
  • Test real-time updates if applicable
  • Test responsive behavior if UI changes were made

Closing:

  • Ask @astral.b to report any issues they find
  • If it’s a critical change, emphasize thoroughness

5. Format and Present

Present the drafted message in a code block formatted for Slack (using Slack markdown):

  • Use *bold* for section headers
  • Use • for bullet points
  • Use `code` for function names, URLs, or technical terms
  • Use numbered lists for sequential steps
  • Keep paragraphs concise and scannable

After the Slack message, provide a brief explanation of:

  • What systems are at highest risk for regressions
  • Why you chose the specific test cases
  • Any additional context the developer should know

Example Output

Here’s an example of a high-quality QA testing message for a PR that added username slug resolution:

@astral.b Please test the changes made by PR #35521 on staging.blumint.io (PR #35521 has already been merged to develop)

What Changed:
This PR adds friendly username URLs (e.g., blumint.io/@telmo or blumint.io/telmo) that automatically redirect to the corresponding group page (blumint.io/utc-*/[groupId]). Certain usernames that would otherwise be problematic are reserved/forbidden and consequently blocked.

---

Regression Testing — Username & Profile Pages:

Since this modifies core routing middleware and username handling:

1. Regular Routing Access:
   • Visit /docs — should properly open the documentation page (NOT redirect)
   • Visit /tournament/TOURNAMENT_ID — should properly open tournament page (NOT redirect)
   • Visit other reserved paths like /tournaments, /games, /guilds — verify no unwanted redirects

2. Group Pages (Users, Guilds, Games):
   • Navigate to various user profiles via their base62 IDs (e.g., /utc-*/2vWjhz4rNHtUVfb7SRZR26)
   • Verify pages load correctly without redirect loops
   • Check that group pages with valid base62 IDs in the URL work normally

3. Static Assets & API Routes:
   • Check that static files like /robots.txt, /sitemap.xml, /favicon.ico still load
   • Verify API routes like /api/* are not affected

4. Clipboard Share URL (Short Links):
   • On any group page, click the share button to copy a short link
   • Expected: Should generate a short link and copy to clipboard
   • Paste the short link and visit it
   • Expected: Should redirect to the correct page

---

New Functionality Testing:

1. Username Slug Redirects:
   • Find a user with a username (e.g., "Telmo")
   • Visit staging.blumint.io/telmo (lowercase)
   • Expected: Should redirect (308) to staging.blumint.io/utc-*/[groupId] where [groupId] is Telmo's user ID
   • Try with @ prefix: staging.blumint.io/@telmo
   • Expected: Same redirect behavior
   • Try with mixed case: staging.blumint.io/TeLmO
   • Expected: Should still redirect (username matching is case-insensitive)

2. Guild & Game Username Redirects:
   • Find a guild with a username (e.g., a guild named "Champions")
   • Visit staging.blumint.io/champions
   • Expected: Redirects to the guild's page
   • Find a game with a username
   • Visit staging.blumint.io/[game-username]
   • Expected: Redirects to the game's page

3. Reserved Username Blocking:
   • Try to change your username to a reserved word: profile, settings, tournaments, games, guilds, api, _next, etc.
   • Expected: Should show "This username is reserved" error and prevent the change
   • Try common forbidden names like admin, root, moderator
   • Expected: Same blocking behavior

4. Non-existent Usernames:
   • Visit staging.blumint.io/nonexistentuserxyz123
   • Expected: Should show 404 page (no redirect loop, not 500 error)

5. Invalid Group ID Handling:
   • Visit staging.blumint.io/utc-7/invalidgroupid123
   • Expected: Should show 404 page

6. Real-time Username Sync:
   • Create a new user account (sign up)
   • Immediately try visiting staging.blumint.io/[your-username]
   • Expected: Should redirect to your new profile (username should be in KV immediately)
   • Change your username
   • Wait a few seconds, then try both old and new username URLs
   • Expected: New username works, old username returns 404

7. Username Changes:
   • Change your username in profile settings
   • Verify the old username slug no longer works
   • Verify the new username slug redirects correctly to your profile

---

Edge Cases:

Different User Roles:
• Test username redirects while signed in vs signed out
• Test as different user types (regular user, admin if you have access)

Special Characters:
• Try usernames with numbers: /user123
• Try usernames with underscores: /user_name (if allowed by username validation)
• Try usernames with @ characters in them (if allowed by username validation)

Redirect Loop Prevention:
• Verify that visiting a page with a username slug that equals the groupId doesn't cause a redirect loop (e.g., if someone's groupId happens to match their username)

---

Please report any issues you find, especially:
• Any unexpected redirects or redirect loops
• Reserved names that aren't being blocked
• Username changes not reflecting in real-time
• Any 404s on URLs that should be working
• Performance issues with page loads

This is a core routing change, so thoroughness is important. Thanks! :pray:

Notice how this example:

  • Clearly states where to test and PR merge status
  • Provides context about what changed
  • Separates regression testing from new functionality
  • Includes specific, actionable steps with expected outcomes
  • Covers edge cases systematically
  • Emphasizes critical areas that need thorough testing
  • Uses clear Slack formatting with bullets and sections

Definition of Done

  • All PR context has been successfully gathered (diff, description, CodeRabbitAI comment, associated issues)
  • Changes have been analyzed and categorized by impact
  • Testing environment and setup steps are clearly specified
  • Testing message includes both regression testing and new functionality testing
  • Message is formatted for Slack and ready to copy/paste
  • User knows how to send the message to Slack

Quality Guidelines

Your testing message should be:

  • Actionable: Every item should be something @astral.b can directly test
  • Specific: Avoid vague instructions like “test the feature” - instead say “click X, verify Y appears, ensure Z happens”
  • Complete: Cover both happy paths and edge cases
  • Prioritized: Put the most critical tests first
  • Clear: Use simple language, avoid technical jargon where possible
  • Realistic: Don’t ask for testing that would take hours unless it’s truly necessary

Remember: The goal is to catch bugs before production. A good testing plan helps @astral.b efficiently verify that:

  1. New features work as intended
  2. Existing features haven’t regressed
  3. Edge cases are handled properly

  • It seems to happen with all models

Thanks for the details. That’s exactly what we needed.

It does look like a bug: the Background Agent runtime isn’t loading custom commands from .cursor/commands/, even though the frontend highlights them correctly. Since your commands work in regular Agent mode and the issue affects all models, it’s clear the backend environment needs an update to support custom commands.

I’ll pass this to the team. Thanks for the thorough report.


Hey @deanrie, any updates on this situation?

Hey, the team is still investigating this.


This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.