Best way to use Composer for a WhatsApp automation project without exceeding $20 budget?

Where does the bug appear (feature/product)?

Somewhere else…

Describe the Bug

Hi everyone,

I’m currently building a WhatsApp automation system (customer support + AI responses), and I previously developed a large part of it using Claude. However, costs became too high, so I’m now considering fully switching to Cursor (Composer) to make it sustainable.

My goal is to keep everything within the $20/month Pro plan.

I have a few questions:

Can Composer alone (Auto + Composer 2) handle most of the development workflow without relying heavily on API usage (Claude/GPT)?
What are best practices to minimize token usage when working on a medium-large codebase?
How should I structure my project so Composer doesn’t constantly re-read large contexts?
Is it better to split the backend (webhooks, WhatsApp logic, AI flows) into smaller modules to reduce cost?
Any real-world strategies to avoid hitting limits quickly when iterating fast?

Context:

Project includes WhatsApp webhook handling, AI responses, and conversation state logic
Already partially built, now optimizing for cost
Goal is long-term scalability without unpredictable expenses

Would really appreciate practical advice from anyone using Composer in production

Thanks!

Steps to Reproduce

Create or open a medium-sized codebase (backend with webhooks, WhatsApp logic, and AI flows).
Use Composer (Auto / Composer 2) inside Cursor to iterate on features (refactoring, adding logic, debugging).
Perform multiple consecutive prompts that require understanding of different parts of the codebase.
Observe how often Composer re-reads or reprocesses large portions of the context.
Continue iterating until hitting usage limits or noticing performance degradation.

Expected Behavior

Composer should efficiently work with large codebases without repeatedly consuming excessive context.
Minimal unnecessary token usage when iterating on small changes.
Stable performance across multiple iterations within the $20/month plan.
Clear strategies or controls to manage context size and cost.

Operating System

macOS

Version Information

Editor: Cursor (latest version)
Plan: Pro ($20/month)
Environment: macOS (development machine)
Project type: Node.js backend (webhooks + WhatsApp integration + AI processing)

For AI issues: which model did you use?

Composer Auto mode (switching between models automatically)
Composer 2 (primary focus for development workflow)

For AI issues: add Request ID with privacy disabled

Not available / Not captured yet (can provide if needed after testing with logging enabled)

Additional Information

Previously used Anthropic’s Claude via API, but costs scaled too quickly.
Exploring whether Cursor Composer can replace most API-dependent workflows.
Key concern: maintaining scalability while keeping costs predictable.
Project includes:
WhatsApp webhook handling (via providers like Twilio)
AI-generated responses
Conversation state management
Main goal: optimize development workflow and architecture to stay within fixed monthly cost.
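
To make the module-split question concrete, here is a minimal sketch of the separation I have in mind, with each section intended to live in its own file so the agent only has to read one small module per task. All names here (`ConversationStore`, `generateReply`, `handleInbound`) and the simplified Twilio-style payload are illustrative, not taken from any real SDK:

```javascript
// state.js — conversation state, isolated from transport and AI logic
class ConversationStore {
  constructor() {
    this.sessions = new Map();
  }
  get(userId) {
    return this.sessions.get(userId) ?? { history: [] };
  }
  append(userId, role, text) {
    const session = this.get(userId);
    session.history.push({ role, text });
    this.sessions.set(userId, session);
    return session;
  }
}

// ai.js — reply generation behind a single function, so the provider
// (Claude, GPT, etc.) can be swapped without touching webhook code
async function generateReply(history) {
  // Placeholder: a real implementation would call an LLM API here.
  const last = history[history.length - 1];
  return `Auto-reply to: ${last.text}`;
}

// webhook.js — transport layer: parse the provider payload, update state,
// and return the outbound message. No AI or storage details leak in here.
async function handleInbound(store, payload) {
  const { from, body } = payload; // simplified Twilio-style webhook body
  const session = store.append(from, "user", body);
  const reply = await generateReply(session.history);
  store.append(from, "assistant", reply);
  return { to: from, body: reply };
}
```

The point of the split is that a prompt like "add retry logic to the webhook handler" only needs `webhook.js` in context, not the state store or the AI layer.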

Does this stop you from using Cursor

No - Cursor works, but with this issue

You may still hit limits, since Auto/Composer is not unlimited. Use gpt-5-mini for as much as you can, since it does not consume any usage, and fall back to Auto/Composer or a higher model only when gpt-5-mini fails. If your goal is truly to stay under $20/mo, you will need to do more manual coding with tab completion and lean on free models like gpt-5-mini, which requires more babysitting and smaller requests. Once you get something working, gpt-5-mini is good at duplicating patterns, so a prompt like “create this new module/feature based on how this other feature works, and make these changes” will be more effective than telling it to build something out of the blue and fully integrate it.

hi @Jerry_Frias, here are a few tips to stay as token-efficient as possible in Cursor:

  • Prefer Composer 2 (Standard) over Fast when you do not need the lowest latency. In the model selector you will either see both Composer 2 and Composer 2 Fast, or an edit icon to set model details.
  • Keep each chat focused on a single task or feature, and start a new chat when you switch topics.
  • Let the agent search your repo instead of manually attaching large files.
  • Only include screenshots when they are really necessary, because images cost far more tokens than text.
  • Use Plan mode to outline the task clearly before implementation.
  • Use Debug mode when fixing errors or failing tests.
  • If the agent starts going in the wrong direction, go back to your earlier prompt in the same chat and correct it there instead of continuing from the mistake. This helps avoid dragging the wrong code and extra context forward, which often makes long chats less reliable.

Hey! Thanks for the tips

Quick question — I’m trying to use Composer 2 (Standard) instead of Fast, but I don’t see that option anywhere in the model selector.

Right now I only see “Composer 2 Fast” and other models like GPT-5.x, Sonnet, etc. Is there a way to enable or access the non-fast version, or is it not available in some plans/regions?

Appreciate the help!

Go to Settings > Models and search for Composer; you should see both options.

Thanks, I checked Settings → Models and searched for Composer, but I still only see “Composer 2 Fast” — the standard one doesn’t show up.

Could you share a screenshot and which Cursor version you are on? (full version details)