anyone have tips for getting started and getting more accurate code? so far I've added rules (maybe someone has a really good set?) and that helped a ton. next I'll be adding MCP today. are there any other things I should research that helped you get more accurate code? I think I've exhausted all the videos.
Make simple rules… add MCP and tell it to read and update the MCP memory at every message… That should do it.
Here are my main rules if you want:
Main (User) Rules
IMPORTANT RULES TO FOLLOW
- Explore thoroughly: Avoid rushing to conclusions and keep investigating until a natural solution emerges.
- Think deeply: Engage in extensive contemplation, breaking complex thoughts into simple steps.
- Express naturally: Use conversational internal monologue with short, simple sentences that mirror natural thought patterns.
- Embrace uncertainty: Acknowledge doubts, revise previous thoughts, and explore multiple possibilities.
- Show your work: Demonstrate work-in-progress thinking, including dead ends and backtracking.
- Persist and review: Value thorough exploration over quick resolution, and implement multi-stage reviews for all solutions.
- Ensure compatibility: When changing patterns or functions, update associated files to maintain consistency.
- Learn and improve: Monitor solution effectiveness, identify patterns, and create new rules based on insights.
- Manage memory: Start tasks by retrieving relevant information and update the knowledge base after each task.
- Seek feedback: Evaluate solutions against quality criteria and iterate based on user or expert input.
You should do these steps at every new message:
- Read/Update MCP Memory
- Update MCP Memory with what was learned during the learning phase.
- Update MCP Memory with any changes made.
Conversation History
Can be accessed at this location: /.specstory/history/
Remember: The goal is not just to reach a conclusion, but to explore thoroughly and let conclusions emerge naturally from exhaustive contemplation. If you think the given task is not possible after all the reasoning, confidently state in the final answer that it is not possible. Always strive for continuous improvement through self-learning, rigorous review processes, and effective memory management.
EDIT: You can remove the Conversation History part if you don't use SpecStory (in that case you won't have a history of convos in a folder...).
Charles.
I recommend setting up Claude Projects, or another system you're comfortable using that has full context, not RAG. Load your Rules, prompts, agent prompts, and information from Cursor's documentation on how the prompt system works. The agent inside Cursor does not have access to edit the rules for you, so it's best to manage this externally in a platform you trust and that can see the complete picture. Make any necessary changes, then move the updated rules back to Cursor.
Check out @robotlovehuman's RIPPER-5 prompt. It's really good out of the box. I drove it daily for a week before I started customizing it for my needs.
Be sure you add any documentation you need for the project you're working on to Cursor Settings > Features > Docs section. You should only add the index page for the documentation, and Cursor will handle the rest.
MCPs!!! I can't stress this enough. Add MCPs and instruct your agent to use them. They will save you a lot of time. If you work with npm packages a lot, add package-versions. Add extra abilities for the LLM with the sequential-thinking MCP. Be sure you use Browser-Tools; it will save you HOURS. Magic-MCP is excellent if you do a lot of front-end UI stuff; they have a ton of pre-built items your agent can query and use. I also highly recommend Brave Search since it's free or very cheap, depending on how much you use it (I've never exhausted the credits), and adding Perplexity Deep Research alongside it. Your agent needs to be able to research when it gets stuck and find updated packages when needed, and deep research is invaluable. I regularly ask my agent to craft all the questions it's having trouble with and query Perplexity Deep Research, let it go out and find everything, and then return a report the agent can use. Save these reports if you need them; they are valuable, and you paid for them with their API, but the agent WILL forget.
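If you want to script that "send my stuck questions to Deep Research and save the report" step outside the agent, here is a rough sketch. It assumes Perplexity's OpenAI-compatible chat completions endpoint and a deep-research model name, and the report folder is made up; verify both against their current docs before relying on it.

```python
"""Rough sketch: send a batch of questions to Perplexity and save the report.

Assumptions (verify against Perplexity's current docs): the OpenAI-compatible
endpoint at https://api.perplexity.ai/chat/completions and the model name
"sonar-deep-research". The reports folder is a placeholder.
"""
import os
from datetime import datetime
from pathlib import Path

import requests

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint
MODEL = "sonar-deep-research"                            # assumed model name
REPORTS_DIR = Path("docs/research-reports")              # hypothetical folder


def run_deep_research(questions: list[str]) -> str:
    """Send the agent's open questions as one prompt and return the report text."""
    prompt = "Research the following and return a detailed report:\n" + "\n".join(
        f"- {q}" for q in questions
    )
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=300,
    )
    resp.raise_for_status()
    # OpenAI-style response shape is assumed here as well.
    return resp.json()["choices"][0]["message"]["content"]


def save_report(report: str) -> Path:
    """Persist the report so the agent can re-read it later instead of forgetting."""
    REPORTS_DIR.mkdir(parents=True, exist_ok=True)
    path = REPORTS_DIR / f"report-{datetime.now():%Y%m%d-%H%M%S}.md"
    path.write_text(report, encoding="utf-8")
    return path


if __name__ == "__main__":
    report = run_deep_research(["Which npm package replaces the deprecated request?"])
    print(f"Saved to {save_report(report)}")
```

Dropping the saved reports into the repo (and pointing your rules or memory bank at them) is what keeps the agent from forgetting research you already paid for.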
Set up a memory bank. Cline has a good one in their documentation. Repurpose it to use with Cursor.
I'll be releasing my complete system soon. Be on the lookout for that.
never used specstory. will have to look into it. is this the one for cursor you are talking about? SpecStory (Cursor Extension) - Visual Studio Marketplace
will look into this more.
I find it's useful if you need to reference your chats or anything the LLM spit out. If you implement a memory system, it sorta becomes redundant, but it's a good backup. Just be sure to add it to your ignore lists, especially .cursorignore; otherwise it'll cause issues with the codebase index when it gets too large.
what would I look up to set up Claude Projects specifically with Cursor? Google and AI aren't much help; they just tell me to set up Claude in general. is it just the model you use that sets up Projects, or do you explicitly need to define that? thanks for these suggestions, they are great!
You would need a paid Claude account to use Projects. I'm not a fan, but it's been the best system to manage such items so far.
I'm working on a way to create or alter Open WebUI to do something similar to Projects, but it will be a bit before I get there, if ever.
What platforms are you currently using? You may already have one that can handle this. What you want is something that will let you enable full-context with Sonnet 3.5 or 3.7 and then add all of your rules as messages or physical files, pre-loading the context so you can ask, edit, add, etc. I prefer Claude Projects for this because of how the system is laid out and how it interacts with the files, but Poe.com or any other chat system should also work.
Feel free to DM if needed.
honestly I have tried most of them; Sonnet definitely gives me the best results from testing so far. before this I was using Manus and OpenManus and they were pretty hit or miss: good for research and fast apps but not good for complex ones. I will need another one (maybe Google Gemini, but that AI is really bad from what I've tested compared to others) when I run out of credits on my sub; learning makes me burn through them. I'm just looking for the best solution; there's a lot of things I miss when trying them, like Projects isn't even something I thought about using.
thanks for all the quick replies and the offer, really appreciate it. Hey, love the content. You mind reposts? With credit, of course.
No worries, and yeah, have at it.
I would recommend OpenRouter coupled with Open WebUI. It'll take a bit to set it up, but it's been the absolute best setup for me. It does 95% of what I need it to, and the devs are on point with updates and adding features everyone wants.
OpenRouter gives you plenty of free options when credits are exhausted. I've had the best luck with Sonnet 3.5 for things such as rules and prompts. 3.7 is hit or miss; it depends on what Anthropic has done or turned off or whatever they're doing on the backend. Some days it's excellent, and some days it's trash.
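For what it's worth, if you want to hit OpenRouter from a script rather than through Open WebUI, it exposes an OpenAI-compatible API. A minimal sketch follows; the base URL is OpenRouter's documented endpoint, but the model slug and the example messages are assumptions you should check against their model list.

```python
"""Minimal sketch: calling Sonnet through OpenRouter's OpenAI-compatible API.

The model slug "anthropic/claude-3.5-sonnet" is an assumption; verify it on
openrouter.ai before using.
"""
import os

from openai import OpenAI

# OpenRouter speaks the OpenAI protocol, so the standard client works.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # assumed slug; check the model list
    messages=[
        {"role": "system", "content": "You review Cursor rule files."},
        {"role": "user", "content": "Critique these rules: ..."},
    ],
)
print(response.choices[0].message.content)
```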
Poe.com is also a very cheap alternative. At $20 a month for 1M credits, it's a great way to access essentially any model you want, with most of the same features as the other chat platforms. I create my own bots with RAG knowledge, custom prompts, temperatures, etc. I disable context management so that it's always full bore. You can burn through credits very fast that way, so be careful.
For instance, I use my Meta Prompt Engineer for almost all of my prompts. It uses my collection of whitepapers on prompt engineering as knowledge, plus a custom prompt that I've crafted over the last year to hone in and create killer prompts.
I wouldn't recommend it for Cursor rules unless you want to work on one at a time. The prompt isn't structured to look at the big picture; it's designed for single-prompt generation. I am working on one that will do this, and I'll release it here once I do.
do you think local MCP servers are too slow? I am looking into them, and you can use someone else's or build your own. typically I just see speed gains, e.g. when using browser-use with the Google API (which is free) vs. locally through Llama, Llama is waaaaaay slower and less accurate. trying to keep costs down, but not at the cost of extremely slow responses.
this prompt engineer looks really awesome, will definitely take a look. do you use this for your initial MVP prompt or just for the single ones you talked about?
one last question: it asks me for a global MCP server first and creates a file, then nothing happens. every tutorial I see shows a pop-up to add the MCP server; it looks very different. did this recently change?
A few suggestions to get you started.
- this is an AI, not a human and not a developer!
- you are the boss!
- an AI can do one and the same thing in 1,000 variants; without clear rules and specifications, an AI implements it as it sees fit, not as you see fit.
- the AI is the instrument that you have to control.
- "program this and that" only works for simple things, not for complex things.
- a developer doesn't just start programming without defining things beforehand.
- just as developers have a workflow, e.g. for when tests or refactoring are done, Cursor needs the same. You can use GitHub - bmadcode/cursor-custom-agents-rules-generator: Maximize the potential of Cursor best practices for Automatic Rule and Custom Agent Generation and Agile Workflows as a template for this.
- you must specify how Cursor should write code, e.g. data structures
- Cursor itself is not interested in reading comments or docstrings in the code, but you should still have Cursor write them into the generated Python code.
- do not try to write rules in human language; the AI is still learning to understand our language and contexts. Describing code rules that way only works to a limited extent, and only as Cursor pleases.
- for rules, use a language that an AI understands better as a machine and that triggers the agent.
I am currently using the following:
- Pattern-based systems (like ESLint rules, ruleset.yaml, regex signatures)
- AI-readable symbolism (→, ≠, emojis)
- Semantic mapping instead of narrative prompting
- Mini DSLs like in Linter-Config or rule engines
- Efficient thinking for AI agents (→ little text, lots of meaning)
Here is one of my rule files, implemented in exactly this way:
# Python Data Processor Standards
# ── Structure & Return ──────────────────────────────────────────────
RULE: ✓val # Always return explicit values
RULE: ✓docstring # Every function must have a docstring
RULE: 1func=1job # One function should do one job
RULE: def→:typed # Use type hints for params and return values
RULE: CONST→CAPS # Use UPPERCASE for constants
# ── Logging & Execution ─────────────────────────────────────────────
RULE: log≠print # Use logger, never print
RULE: log@level # Use appropriate levels: debug, info, error
RULE: __main__→main() # Use `if __name__ == '__main__'`
# ── I/O & Separation ────────────────────────────────────────────────
RULE: config≠logic # Keep config separate from logic
RULE: IO≠logic # Separate input/output from data processing
RULE: 📁→check # Ensure dirs exist before file ops
RULE: res→with # Use context managers (with, async with)
# ── Processing Strategy ─────────────────────────────────────────────
RULE: proc→batch|item # Separate batch from single-element logic
RULE: data→immutable # Prefer immutability in data flows
# ── Symbol Map ──────────────────────────────────────────────────────
RULE: ✓ # Must have
RULE: ✗ # Forbidden
RULE: →: # Type-related
RULE: 1x=1y # One job per rule
RULE: X→Y # Transform
RULE: X≠Y # Separation
RULE: @X # Context
RULE: X↑ # Prefer X
RULE: len<X # Limit length
RULE: def→X # Naming pattern
RULE: [TAG] # Categorization
RULE: CAPS # Constant/config
RULE: 📁 # File/path
RULE: 🧹 # Design pattern
RULE: 📐 # Principle
RULE: 📄 # Docs/files
Ask ChatGPT; it will create the Cursor rules for you.
Once you have your rules ready, like in the picture, tell Cursor to develop a small project or 2-3 Python scripts that exercise all the rules. Put it somewhere Cursor has access to and use it as a codebase.
Have a clean codebase!
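For a concrete picture, here is a small sketch (my own illustration, not Cursor output) of what a script that satisfies the rules above might look like; the directory names and CSV layout are made up.

```python
"""Example data processor following the rules above: typed, documented,
logging instead of print, config separated from logic, and context managers.
The input/output paths and CSV layout are placeholders for illustration.
"""
import csv
import logging
from pathlib import Path

# CONST→CAPS, config≠logic: configuration lives up here, not inside functions.
INPUT_DIR = Path("data/incoming")
OUTPUT_DIR = Path("data/processed")

logger = logging.getLogger(__name__)  # log≠print


def parse_row(row: dict[str, str]) -> tuple[str, float]:
    """Parse one CSV row into a (name, value) tuple. (1func=1job, def→:typed)"""
    return row["name"], float(row["value"])


def load_rows(path: Path) -> list[tuple[str, float]]:
    """Read a CSV file and return parsed rows. (IO≠logic, res→with)"""
    with path.open(newline="", encoding="utf-8") as handle:
        return [parse_row(row) for row in csv.DictReader(handle)]


def process_batch(rows: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """Batch step: keep only positive values without mutating the input. (data→immutable)"""
    return [(name, value) for name, value in rows if value > 0]


def write_rows(rows: list[tuple[str, float]], path: Path) -> None:
    """Write processed rows back out as CSV."""
    path.parent.mkdir(parents=True, exist_ok=True)  # 📁→check
    with path.open("w", newline="", encoding="utf-8") as handle:
        csv.writer(handle).writerows(rows)


def main() -> None:
    """Entry point: wire config, I/O, and processing together."""
    logging.basicConfig(level=logging.INFO)  # log@level
    for src in INPUT_DIR.glob("*.csv"):
        rows = process_batch(load_rows(src))
        write_rows(rows, OUTPUT_DIR / src.name)
        logger.info("Processed %s (%d rows kept)", src.name, len(rows))


if __name__ == "__main__":  # __main__→main()
    main()
```

Pointing Cursor at a small, clean codebase like this gives the agent concrete patterns to imitate instead of only abstract rules.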
These rules are currently working, but I'm still in the testing phase.
Excellent!
Let other AIs evaluate your code and rules; most of the time the results are good.
If you have no idea about programming or development, it will never work; you need a little basic knowledge.