If you’re not already an experienced coder, you can certainly learn a fair bit about a particular technology using an AI assistant. But the caveat I’d add is that you kinda have to force yourself to get in there with the code. Be bold and confident. Make mistakes and then blame the AI for them.
One of the worst things you can do is to assume that the AI is better than you’re capable of becoming. Your ability to see the bigger picture, and your vision for how it should operate, is still beyond even the best models. As a human being (which I boldly assume you are), your attention and focus are able to pick up on logical flaws that the AI will miss. Don’t be afraid to challenge it.
Get in deep with the code all the way along, too. If you don’t, you’ll reach a point of diminishing returns, where the AI can’t figure out what’s wrong, and you don’t understand what it’s built for you well enough to be capable of the manual debugging. At that point, diving in yourself is a huge task and you’ll wish you knew your own codebase better (been there…).
Oh, and another tip (which I’ll add to the doc) in case you’re not already doing this: when things are being weird, make the AI insert copious amounts of debug logging, and then paste those logs back at it in bulk. Something they really do excel at is reading far more lines of logs than we can be bothered to, and seeing where the output is mismatched with their assumptions.
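In case it helps, here’s roughly what I mean by copious debug logging, as a minimal Python sketch (the function and values are made up for illustration, not from any real project): log the inputs, intermediate state, and outputs so that mismatches with the AI’s assumptions show up in the pasted logs.

```python
import logging

# Verbose format so the pasted logs carry timestamp, file, line, and function context
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(filename)s:%(lineno)d %(funcName)s - %(message)s",
)
log = logging.getLogger(__name__)

def apply_discount(order_total, discount_rate):
    # Hypothetical function: log inputs, intermediate values, and the result
    log.debug("apply_discount called: order_total=%r discount_rate=%r", order_total, discount_rate)
    discount = order_total * discount_rate
    log.debug("computed discount=%r", discount)
    result = order_total - discount
    log.debug("returning result=%r", result)
    return result
```

Then just copy the whole log output back into the chat rather than summarising it yourself.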
Best of luck with your coding journey @ianjh, you’ll be an expert in no time. And, y’know… with hair.
Can’t fucking thank you enough for making this - excited to go through the rest of the comments too when I get the chance. Just got started a week ago and have been going hard with it. Had figured out half this stuff through trial and error, but I also had no clue if what I was doing was correct. The other half (such as using the Composer) is news to me and I’m psyched to try it out!! Quick question, and I’m sure someone already mentioned this, but what are your thoughts on the shadow workspace option? Seems like people have a positive opinion of it, no?
But ya, thanks again, can’t thank you enough!! Absolutely insane what I’ve been able to do with this, all with no coding knowledge; I just know computers from my PC gaming days, and that’s just the basics. Adderall, Cursor & Mushrooms is all you need baby!!! You know what I’m talking bout!! Ya you do!
You are most welcome!! Hahah sounds like you’re having a lot of fun with it
It’s pretty amazing what you can achieve with some AI assistance, and you can gain a ton of experience very quickly.
Honestly I don’t have a good answer for you on the shadow workspace yet. I only tried enabling it myself a few days ago (mostly because I couldn’t find the setting in the usual place and assumed it wasn’t available for macOS yet!). I haven’t really noticed a significant difference so far, but I also haven’t given it a fair test either. If you have the spare RAM for it, I don’t see any reason not to give it a go. Hopefully I’ll be able to form a more useful opinion over the next couple of weeks!
Very well written mate, thanks for taking the time to share this. It is clear, concise, and an easy read with some humor sprinkled in. Will keep these tips handy as I explore Cursor.
Thanks so much @ed1432, very kind of you! I hope you have lots of fun with your explorations. If you do discover any new tips or tricks along the way do feel free to drop them in here and I’ll update the guide too.
Relative newb to Cursor but loving it. This was exactly the guide I needed.
One HUGE question. How exactly do I create a fresh session with composer? I tried restarting but it picks up exactly where it was. Do I just tell it to start a new session?
And for those new to this discussion, well worth going back through all the comments and answers on this post. @three seems to answer quite often.
You can also click the sorta speedometer-looking thingy, and that will open your Composer control panel. At the bottom left it has a “Create New” option.
Note: if you open the control panel when you have a really long Composer session going, be prepared for it to take a bit (or a lot) of time to open.
That control panel is also where you can create Notepads (from ‘Create New’), which are great for frequently used context such as a description of your project. You can add them into conversation context just like with files.
Feels like that would be great as a prompt (or rather, condensing the gist of it into a prompt) to give GPT-4o or Sonnet outside of Cursor, which would then write the boilerplate at the beginning and a summary of what was done at the end.
I tried doing something similar by making Composer write to and append to a .txt file recording what prompts I’d given it, what we did, and which files we added; after each prompt it should append a new entry.
But it neither creates the file nor edits it until you shout at it after each prompt, and then it just overwrites the entire file, even when verifying step by step whether it did it right: “oh, I didn’t do it right, let me try again”, and then it ignores you and overwrites it again.
Frustrating, especially since a few months ago it actually did follow those instructions.
The inescapable truism for success with big code projects, and I have been working on them since the 1980s, is modularity. Keep the logic modular, the repository modular, the testing modular, and everything works out a lot better and faster.
I read here about noobs with 7000-line files, and of course the AI is going to puke on shit like that. I keep my file modules to about 500 lines of code, with plenty of error trapping and custom JSON-format error logging with levels of error, warning, and info.
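For what it’s worth, here’s a rough Python sketch of the kind of JSON-format logging with error/warning/info levels described above; the field names are just my guess at a sensible shape, not the poster’s actual format.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as a single JSON object with a lowercase level field."""
    def format(self, record):
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname.lower(),  # "error", "warning", "info"
            "module": record.module,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("myapp")  # hypothetical app name
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("module loaded")
log.warning("config value missing, using default")
log.error("failed to connect to database")
```

Structured logs like this are also easy to paste back into the AI in bulk when something misbehaves.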
Nice attempt…
Composer remains a source of frustration for many users. Its performance is inconsistent, seemingly failing or producing suboptimal results nearly 50% of the time. Despite its central role, Composer is not even mentioned or explained in the documentation, leaving users to figure out its quirks through trial and error. Is it RAG or another methodology? Understanding this is essential for users who wish to optimize workflows for large codebases. Contextual awareness is an advertised strength, yet there is minimal guidance on structuring projects.
Cursor needs to open up and provide evidence-based guidance on best practices. The forum is getting filled with “Joe Rogan science”.
At this point, I doubt the Cursor team has systematically tested various use cases as the latest updates are pushed; the focus has been speed and bug fixes, and Composer remains Russian roulette with your well-organised code.
Many sections of the documentation feel incomplete or oversimplified. For example, the .cursorignore feature is only briefly described, with no insights into its optimal usage or scenarios where it might be beneficial. Worse, users are encouraged to consult external resources like Google or Stack Overflow for clarification, hardly a professional solution.
Composer is barely covered in the official documentation. Searching for “Composer” in the provided documentation just results in “You have reached your chat limit. Please try again later.” This lack of coverage undermines the credibility of the platform.
I feel your frustrations, and I definitely agree that the documentation is far, far behind where it should be (and in many places materially outdated). It’s definitely been a sore point for me on a number of significant occasions.
I couldn’t work out whether your “Nice attempt…” comment was intended sincerely, or whether you were writing off the whole thread as “Joe Rogan science”. Entirely your prerogative if it’s the latter. I’ve been pretty clear that it’s compiled empirically (in the absence of docs, as you rightly say) and that your mileage may vary. Feel free to ignore it and do your own thing if you prefer, just don’t shoot the messenger.
Beyond that all I can tell you is that I work on some pretty large codebases. I follow the habits I mentioned in my post every time, and while the experience is rarely perfect (much like any AI code assistance):
“failing or producing suboptimal results nearly 50% of the time”
sounds way, way worse than I’m now accustomed to.
But it’s entirely up to you. I have no skin in the game, I don’t work for Cursor, nor do I have any privileged information, I’m just volunteering what’s worked really well for me as an end user.
PS: Agreed on the .cursorrules docs too; in fact I made reference to that partway through the thread when someone brought it up. There are some useful insights shared (not by me) in another recent thread, though, if you’re interested.
I’d also say don’t write off the other huge benefit of splitting out your files, which is the ability to leave stuff out of your queries. If it’s one 7kloc file, the IDE has to cram a precis of that whole thing into the context every time, burning through your context token limit. If it’s 20 x 350 line files, you can include just the one (or few) you’re interested in at that moment. Makes for much more comfortable digestion I think.
Ctrl+Shift+F is definitely a thing I’m using most of the time, together with adding comments on recently changed files noting what they were changed for.
Recently I had to pack all my CSS, JS, etc. into a single PHP file because my shared webhost provider refused to accept some assets, despite all the unit tests saying they were there and the local implementation working flawlessly.
Guess that’s what you get with cheap hosting solutions ^^
I want to clarify that my “Nice attempt…” comment was genuine, not sarcastic. Although the 50% failure rate I mentioned was exaggerated, success with AI tools like Composer largely depends on your project’s specifics. As the comments in your guide show, we’ve all faced Composer sessions that start failing and crash, risking script corruption. In such cases, reverting to a previous Git commit might save you, but often it leads to hours wasted from relying too much on Cursor.
My thoughts are that reliability depends on project scope:
Composer: Ideal as a kickoff tool for smaller projects like prototypes. Define your project, write pseudocode, set up your .cursorrules, and prepare your notebook before starting. Here, functionality often takes precedence over code readability.
Chat and Supermaven: Better suited to larger, well-structured projects. They offer better contextual awareness, making them more reliable for managing complex, large-scale developments. I find myself reading the code more, which improves understanding and moves the project forward efficiently.
Improving AI Coding Assistants: Beyond Monthly Updates
Instead of minor monthly updates, it might be beneficial if Cursor handled the many concerns brought up here by you and other users and updated the documentation accordingly. Currently, it feels like AI assistant companies depend on YouTube creators for superficial walk-throughs and shocked faces. This approach is understandable, given that a new AI assistant is introduced every month. I can imagine that every team member’s time is primarily spent implementing features as quickly as possible, rather than documenting the complexities of real-world applications.
Real-World Project Evaluation
Too many AI coding assistant evaluations take the form of zero-shot prompts like building a Snake game, which don’t reflect real-world challenges that require nuanced understanding and robust contextual awareness.
Framework for Evaluating AI Code Assistants
Such frameworks are in development. However, we could have some standardized evaluations such as:
Indicators of Contextual Awareness: Develop a metric that quantifies how accurately Cursor understands and references the relevant parts of your codebase.
Context Utilization Rate: Measure the percentage of interactions where Cursor effectively uses the provided context to generate accurate responses.
Implementation: Track instances where Cursor’s suggestions directly reference the selected files or modules. A higher utilization rate indicates better contextual awareness. (A rough sketch of how this could be computed follows after this list.)
Context Coverage Index:
Definition: Assess how comprehensively Cursor covers all relevant aspects of the project’s context.
Implementation: Analyze the breadth of context Cursor accesses during different phases of the project. Ensure that all critical modules, dependencies, and documentation are consistently referenced.
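To make the Context Utilization Rate concrete, here is a rough Python sketch of how it could be computed, assuming you have hand-labelled a set of interactions for whether each suggestion actually referenced the attached files (the Interaction structure is entirely hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    # Hypothetical record of one Composer/Chat exchange
    attached_files: list[str]    # files explicitly added as context
    referenced_files: list[str]  # files the suggestion actually drew on

def context_utilization_rate(interactions: list[Interaction]) -> float:
    """Fraction of interactions whose suggestion used at least one attached file."""
    if not interactions:
        return 0.0
    used = sum(
        1 for i in interactions
        if set(i.attached_files) & set(i.referenced_files)
    )
    return used / len(interactions)

# Example: 2 of 3 interactions made use of the provided context -> 0.67
sample = [
    Interaction(["auth.py"], ["auth.py"]),
    Interaction(["auth.py", "db.py"], ["utils.py"]),
    Interaction(["db.py"], ["db.py", "models.py"]),
]
print(round(context_utilization_rate(sample), 2))
```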
BTW, my previous reference was to .cursorignore, which is ever more “hush hush”…
But if you’re interested, there are a few projects that systematically evaluate AI assist features. I have a link somewhere to a gist that explains some guidelines for using it and for testing Cursor.
I am getting very good results using Cursor rules to remind the current model of, and enforce, programming styles: my preferred coding methods, debug statement insertions, code comment generation, math issues, error handling, console progress display, etc. Thus the good coding practices the OP refers to are actually rules, not just suggestions.
The advantage of rules is they are automatically applied to every prompt. Though starting every new session with a project overview and current development goals is critical as well. I generate documentation that is regularly updated for session initiation purposes too.
I was previously exposed to the value of rules being applied to every prompt in my OpenAI API account. In Cursor I like the rules to be technical specifications for the current language I am using.
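As an illustration of the kind of rules being described, here is a hypothetical .cursorrules snippet sketched for a Python project; it is not the poster’s actual file, just the general shape.

```
# .cursorrules (hypothetical example)
- Follow PEP 8; keep functions under ~40 lines and files under ~500 lines.
- Wrap external calls (network, file I/O) in try/except and log failures as JSON with level "error".
- When adding or changing behaviour, insert debug log statements for inputs and outputs.
- Write a docstring for every new function and keep module-level comments up to date.
- Show progress for long-running console tasks (e.g. a counter or percentage).
```

Because rules ride along with every prompt automatically, they do not need repeating, though as noted above a short project overview at the start of each session still helps.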
I would love a shadow workspace that was solely responsible for creating and maintaining documentation.