ManagePullRequest will no longer open PRs in Github for submodules in cloud agents

Where does the bug appear (feature/product)?

Somewhere else…

Describe the Bug

Today all of our workflows started failing because we trigger cloud agents in a repo that has submodules. The agents are prompted to open PRs in any number of the submodules as one of their steps. The submodule PRs fail with a message like “the built-in PR tool only targeted the workspace root repo”.

As far as I can tell, this is a behavior change in Cursor’s GitHub integration, not a config issue on our end.

Steps to Reproduce

  1. Run a cloud agent for a repo that has a submodule.
  2. Prompt the agent to create a PR for the submodule repo.
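The repo layout those steps assume can be reproduced locally with a minimal sketch (all paths here are hypothetical placeholders; a cloud agent would clone real remotes instead):

```shell
# Hedged sketch: build a minimal repo-with-submodule layout to reproduce
# against. Everything lives under a temp dir; paths are hypothetical.
set -e
tmp="$(mktemp -d)"

# A stand-in for the submodule's upstream repo.
git -C "$tmp" init -q sub
git -C "$tmp/sub" -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "init"

# The root workspace repo that embeds the submodule.
git -C "$tmp" init -q main
git -C "$tmp/main" -c protocol.file.allow=always \
  submodule add -q "$tmp/sub" vendor/sub
git -C "$tmp/main" -c user.email=ci@example.com -c user.name=ci \
  commit -q -m "add submodule"
```

`protocol.file.allow=always` is only needed because the sketch uses a local `file://`-style submodule URL, which newer git versions block by default.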

Expected Behavior

The PR is opened for the submodule.

Operating System

Linux

Version Information

This is in a cloud agent.

For AI issues: which model did you use?

GPT-5.4 High

For AI issues: add Request ID with privacy disabled

bc-30d5ae82-6ac5-4394-86c1-8087cef62866

Does this stop you from using Cursor

No

Hey, thanks for the report.

Right now, the ManagePullRequest tool in Cloud Agents only supports creating PRs for the workspace root repo. Submodule repos aren’t supported as PR targets. This is a known limitation, discussed here: Cloud Agents + Multiple Repositories (non-monorepo setup) - #3 by Colin.

If this used to work for you, the agent was probably using a workaround like the gh CLI to create PRs in submodule repos, and a recent change in how agents handle PR operations may have affected that behavior. Can you confirm if agents previously used the gh CLI or something similar to open PRs in submodule repos?

Multi-repo support for Cloud Agents isn’t available yet. I’ve shared this with the team, and your automation plus submodules case is a good reason to prioritize it.

As a possible workaround, you can try explicitly telling the agent in your prompt to use gh pr create for PRs in submodules instead of the built-in PR tool. No guarantee it’ll work reliably, but it’s worth a try.
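A minimal sketch of that workaround, assuming a hypothetical submodule path `vendor/sub` and an already-pushed branch: `gh` resolves the target repo from the git remote of whatever directory it runs in, so changing into the submodule targets the submodule’s repo rather than the workspace root.

```shell
# Hedged sketch: build the command an agent would run to open a PR from
# inside a submodule checkout. Paths, branch, and title are hypothetical.
build_submodule_pr_cmd() {
  local submodule_dir="$1" branch="$2" title="$3"
  # cd into the submodule so gh targets its repo, then create the PR.
  printf 'cd %q && gh pr create --head %q --title %q' \
    "$submodule_dir" "$branch" "$title"
}

build_submodule_pr_cmd vendor/sub fix/build "Fix build"
```

The function only assembles the command string, so you can inspect what the agent would execute before actually running it in an environment where `gh` is authenticated.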

Let me know if that helps, or if you have any extra details on how it worked before.

After more investigation: we had a gh CLI configuration that broke when a team member rebuilt the team environment incorrectly. This issue is resolved.