LLM web chat feature request

Hey Cursor Community and Team,

I’ve been playing around with Cursor and had an idea that I think could be super useful for those of us who juggle local IDEs and web-based LLM chats. Cursor already does an awesome job connecting to most LLMs via API, but there are times when the API isn’t the best (or even a possible) option:

  • API costs or limitations: Sometimes APIs are pricey or capped.
  • Unique or specialized models: Some web-chat-only models have no API available.
  • Ad-hoc brainstorming: Using web-based LLMs for exploratory coding and architecture design feels intuitive.

I think Cursor could shine by integrating features specifically designed to support working with an LLM through its web chat interface. Here’s how I’d see it working:

How Cursor Could Do It:

1. Quick context copying:
Have a built-in command (/copy-context <instructions>) to copy the current coding context straight from Cursor to your clipboard. It would include:

  • All files you’ve actively added to your session.
  • Any files you’ve marked as read-only.

You’d just paste this into the web chat interface of the LLM to quickly set the stage for it to act as your high-level “big brain architect.”
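To make the idea concrete, here’s a rough sketch of what such a context-copying step could look like. This is purely illustrative: the function name `build_context`, the prompt layout, and the file labels are all my own assumptions, not anything Cursor actually does.

```python
# Hypothetical sketch of a /copy-context command (not Cursor's real
# implementation): gather session files into one clipboard-ready prompt.
from pathlib import Path


def build_context(editable: list[str], read_only: list[str],
                  instructions: str = "") -> str:
    """Concatenate session files into a single prompt string."""
    parts = []
    if instructions:
        parts.append(instructions)
    # Label each file so the web-chat LLM knows what it may change.
    for label, paths in (("editable", editable), ("read-only", read_only)):
        for p in paths:
            text = Path(p).read_text()
            parts.append(f"### {p} ({label})\n```\n{text}\n```")
    return "\n\n".join(parts)
```

Pushing the result to the clipboard would then be a one-liner with something like pyperclip: `pyperclip.copy(build_context(files, ro_files, "Act as my architect"))`.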

2. LLM feedback integration:
After interacting with the web chat, simply copy the LLM’s generated response (via a handy “copy response” button) and then use a new command like /paste in Cursor to instantly apply those suggestions to your files.

3. Enhanced Copy/Paste mode:
Cursor could introduce a streamlined --copy-paste mode:
    • Automatically copy updated code contexts whenever you add or read files.
    • Clearly notify when the context is copied (e.g., “✅ Copied code context to clipboard!”).
    • Seamlessly paste and apply the LLM’s replies directly within Cursor, significantly reducing friction between brainstorming externally and implementing locally.
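The /paste side could be sketched like this. The convention that each fenced block in the LLM’s reply is preceded by a “File: <path>” line is an assumption I made up for the example; any marker the LLM is asked to use would work.

```python
# Hypothetical sketch of a /paste command: parse fenced blocks out of an
# LLM reply and write each one to its target file. The "File: <path>"
# marker convention is an assumption, not an existing Cursor feature.
import re
from pathlib import Path

BLOCK_RE = re.compile(
    r"File:\s*(?P<path>\S+)\s*\n```[^\n]*\n(?P<code>.*?)\n```",
    re.DOTALL,
)


def parse_reply(reply: str) -> dict[str, str]:
    """Map each referenced file path to its replacement contents."""
    return {m["path"]: m["code"] for m in BLOCK_RE.finditer(reply)}


def apply_reply(reply: str, root: str = ".") -> list[str]:
    """Write each parsed block to disk and return the touched paths."""
    touched = []
    for rel, code in parse_reply(reply).items():
        target = Path(root) / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(code + "\n")
        touched.append(str(target))
    return touched
```

In practice you’d probably want a diff preview before writing, but the core loop (parse the reply, map blocks to files, apply) is this simple.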

This would make Cursor a powerhouse tool, allowing you to leverage any web-based LLM for architecture or brainstorming (the “big brain”) and then quickly bring that intelligence back into your local workflow using Cursor’s regular LLM integration for fine-grained edits.

What do you all think? Would this enhance your workflow as much as mine?
It would also make it possible to use ChatGPT o1 pro with Cursor.