I’ve been considering the best ways to help Cursor handle large projects more effectively.
For instance, I have a large React Native project that includes our own design system, and I was thinking of these options:
A) Create a comprehensive documentation file that gives an overview of the codebase and documents the design system and how to use it. This would help Cursor understand the project better and use the design system more accurately.
B) Set up a default prompt that includes this information in every request.
Currently, Cursor struggles when asked to create new screens. It tends to invent components, add props that don’t exist, miss imports, and run into issues with the components and the design system.
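To make the problem concrete, here is roughly the shape of a component in our design system (a simplified, illustrative sketch; the component and prop names are made up for this post):

```tsx
// DSButton.tsx — illustrative only; the real design system is much larger,
// which is exactly why Cursor starts guessing props that don't exist.
import React from 'react';
import { Pressable, Text, StyleSheet } from 'react-native';

type ButtonVariant = 'primary' | 'secondary' | 'ghost';

export interface DSButtonProps {
  label: string;
  variant?: ButtonVariant;
  disabled?: boolean;
  onPress: () => void;
}

export function DSButton({
  label,
  variant = 'primary',
  disabled = false,
  onPress,
}: DSButtonProps) {
  return (
    <Pressable
      accessibilityRole="button"
      disabled={disabled}
      onPress={onPress}
      style={[styles.base, styles[variant], disabled && styles.disabled]}
    >
      <Text style={styles.label}>{label}</Text>
    </Pressable>
  );
}

const styles = StyleSheet.create({
  base: { paddingVertical: 12, paddingHorizontal: 16, borderRadius: 8 },
  primary: { backgroundColor: '#1a1a1a' },
  secondary: { backgroundColor: '#e5e5e5' },
  ghost: { backgroundColor: 'transparent' },
  disabled: { opacity: 0.5 },
  label: { textAlign: 'center' },
});
```

Without files like this in its context, Cursor happily invents props or imports that don’t exist, which is what options A and B are meant to prevent.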
Use Cursor as a tool, not as a replacement for writing code. Use it the way you normally program: in small, modular parts. I’ve been using Cursor for over 8 months now at a production level for the fashion company I work for, whose online shop has a codebase of over 10,000 files.
The key is to give the LLM the context it needs to solve the task, not the entire codebase; otherwise, it won’t work. I first consider how I would approach the task without Cursor, look through the files, and gather the relevant context. Then I either add the files with @, or keep them open and reference them with / (Reference Open Editors).
Next, I write a detailed prompt outlining what needs to be done, apply it, and review the changes Cursor made. If I’m not satisfied or it isn’t working, I do follow-up prompts until it does. As an experienced software engineer you already know how to solve the task; Cursor is not replacing your job, and you still need to understand your code and the code the LLM is producing. In the end, I always prompt something like:
“This solution works, now refactor it to aim for simpler, more readable, and idiomatic code.”
Why? Because complex code is bad. Period. As a software engineer, you should always strive for simpler code in your codebase. There’s no need for complicated code when the same thing can be done more simply, making it easier for your colleagues to read and understand.
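As a made-up illustration of what that final refactor prompt tends to do (the names and logic here are invented for this example):

```ts
// Before: the kind of working-but-noisy code a first pass often produces.
function getDiscount(user: { orders: number; isVip: boolean }): number {
  let discount = 0;
  if (user.isVip === true) {
    if (user.orders > 10) {
      discount = 0.2;
    } else {
      discount = 0.1;
    }
  } else {
    if (user.orders > 10) {
      discount = 0.05;
    } else {
      discount = 0;
    }
  }
  return discount;
}

// After the "simpler, more readable, idiomatic" follow-up prompt
// (renamed here only so both versions can sit in one snippet).
function getDiscountSimplified(user: { orders: number; isVip: boolean }): number {
  const loyal = user.orders > 10;
  if (user.isVip) return loyal ? 0.2 : 0.1;
  return loyal ? 0.05 : 0;
}
```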
The key to using Cursor in bigger projects is context. We are not yet at the point where we can feed an entire codebase to an LLM. Cursor has a limit of about 20,000 tokens, so use them wisely and precisely, and you will see that it works well.
@Dreams, your approach and explanation is gold-level wisdom. I’ve been learning and relearning to approach development in Cursor in smaller chunks, despite the promise of bigger models with more tokens. I’m also paying very close attention to making well-documented commits and trying not to go too far down the rabbit hole before reverting and trying again with more instructions for the AI. Over the coming months we’ll likely get a bit more flexibility with larger files and token limits, but keeping the perspective that we are the developers, and not outsourcing all of the coding, or at least the problem-solving, to an AI makes a lot of sense.
Yes, I can fully attest to everything you wrote. One question, though: did you experiment with prompting the model to add a lot of comments to the pieces of code? I found that it helps.
No. When you aim for simple and readable code, you don’t need comments, in my opinion. The same goes for the LLM; I just describe what needs to be done in detail. For example:
“Move the function getActiveCart into checkout.ts and refactor it so I can pass the parameters env and envPublic. Import it in page.server.ts and use it there inside the load function.”
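Roughly, this is the shape of the result I expect from a prompt like that. The file and function names come from the prompt itself; the Env types, the cart lookup, and the way env/envPublic are provided to load are placeholders, since that depends on the project:

```ts
// checkout.ts : sketch only; the Env shapes and the cart lookup are invented
// for illustration.
export interface Env { CART_API_URL: string }
export interface EnvPublic { STORE_ID: string }

export async function getActiveCart(env: Env, envPublic: EnvPublic) {
  const res = await fetch(
    `${env.CART_API_URL}/carts/active?store=${envPublic.STORE_ID}`,
  );
  if (!res.ok) throw new Error(`Cart lookup failed: ${res.status}`);
  return res.json();
}
```

```ts
// page.server.ts : sketch; however env/envPublic are actually wired up in the
// real project (locals, platform, $env, ...) is left out here.
import { getActiveCart, type Env, type EnvPublic } from './checkout';

export async function load({ locals }: { locals: { env: Env; envPublic: EnvPublic } }) {
  const cart = await getActiveCart(locals.env, locals.envPublic);
  return { cart };
}
```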
Never. I know my codebase and where things happen. If you feed it too much context, it starts losing track of the important context. I also only use Chat, not Composer. Composer is too buggy at the moment.
I agree. Writing prompts the way you would instruct a junior dev is the best way to code with LLMs. If people don’t have enough experience, i.e. they are not even at junior-dev level, they should simply ask ChatGPT to help them structure such a prompt based on what they want to achieve. That way they will also learn the best approach and gradually understand coding, even if they haven’t coded much themselves.
I’d like to add to this great advice on effective prompting that you can also ask the AI to make this plan for you. Then you can tweak it by adding the things you were already thinking of in your own plan (if any), and use that as the prompt to start your development.
You don’t need to do this in Cursor (in fact, I recommend you don’t, and keep your credits for the development work); use ChatGPT, Copilot, Bard, etc. to build out your plan. Once you have everything, summarize it into a prompt (the LLMs will write a better prompt than I can any day) and feed that to a Cursor session, but do it in two shots: first let it know what is about to happen, then provide the dev plan you turned into a prompt with the other LLM’s assistance.