I want Cursor to create the documentation of my platform and maintain it. I believe this is one of the critical steps that is missing for handling complex codebases or complex/new projects.
By default today the LLM leaves small, one-liner comments automatically. The issue here is that LLMs don’t understand the structure of the whole application (even if you feed them the whole codebase), which leads to bugs. Even if you vibe code everything, you still need to understand the structure. That should be covered in the documentation, along with the structure/sections of each file. If you have proper documentation, your vibe coding experience should improve drastically, because there is less “guessing” the LLM needs to do when completing a task. This means we should send/use part of the initial context with this mapping of the site for each call. I believe there are many smart ways to improve it.
I’ve been working on a small project called mcp-shrimp-task-manager, which actually tackles the kind of problem you’re describing.
It has this feature called “Project Rules” — basically a way to give the LLM a clear sense of the project’s internal logic: structure, naming, boundaries, etc. The idea isn’t to overload the code with comments, but to give the model something solid to anchor its decisions to when handling tasks.
It’s made a noticeable difference in how effective vibe coding can be — way less guesswork, more accurate completions.
Do you have some demo prompts showing what you can do with Shrimp? It looks promising, but after taking a look at the docs I’m not sure how I can use it. Thanks!
I have the same problem. The LLM doesn’t follow my coding standards even though I tell it which rules to consider; when it comes to a broader context, it simply ignores the rules and starts producing completely non-standard code.
Shrimp Task Manager is essentially a prompt template system. It uses a series of structured prompts to help guide the Agent to better understand and align with your specific project.
After installing MCP in your project, simply start by telling the Agent: “init project rules”. This will guide the Agent to generate a set of project-specific rules that help it better understand your context.
When you’re ready to add a new feature or make updates, just type “plan task your description…”. Shrimp will reference the existing rules, analyze your project context, identify relevant code blocks, and guide the Agent to plan accordingly.
This process involves multiple reasoning steps, which you can review. If you notice the Agent going in a direction you didn’t intend, you can interrupt at any point and provide feedback. The Agent will incorporate your input and continue from there.
Once you’re satisfied with the planning phase, you can run the task using “execute task [task name or ID]”. If no task name is provided, it will automatically choose the highest-priority task to execute.
If you’d like all tasks to be processed automatically, you can enable “continuous mode”.
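Putting the steps above together, a typical session might look like the sequence below. The feature description is just an illustration, and the exact task name/ID would come from the plan Shrimp generates:

```
init project rules
plan task add input validation to the user signup form
execute task <task name or ID from the plan>
```

If you skip the task name in the last step, the highest-priority task is picked automatically, as described above.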
Note: Due to token limitations of the LLM, long conversations may sometimes cause context loss. In that case, simply open a new chat and re-run the task. Shrimp will resume where it left off without needing the full task history repeated.
Hi, Siage! I tried your project and was impressed with how well it works! The most remarkable feature is its capability to review previous tasks after interruptions and analyze dependencies before resuming execution.
However, I occasionally encountered an error when using the system. It would display: “Unexpected token ‘E’ - ‘Executing’… is not valid JSON” in cursor settings, which required multiple attempts and frequent refreshing of the MCP tool to resolve. Could you provide guidance on how to address this persistent JSON parsing issue?
Hi, thank you so much for trying out the project and for your encouraging feedback!
The JSON parsing issue you encountered (e.g., “Unexpected token ‘E’ - ‘Executing’… is not valid JSON”) is due to how Cursor currently handles console.log output within the MCP tool. This can sometimes lead to non-JSON logs being misinterpreted.
I’m aware of the issue and will be addressing it in the next version to improve compatibility and stability. Thanks again for reporting it — your input is very helpful!
Thank you for the quick response - I’m truly impressed by your efficiency! The new version feels much more stable, which is why I enthusiastically recommended your project to a colleague.
I’m wondering if your colleague installed MCP using Smithery? I’ve recently received similar reports from other users experiencing occasional disconnections.
He might try running MCP locally via npx, which could be more stable. Here’s a sample configuration:
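A minimal sketch of what that configuration could look like in Cursor’s MCP settings file. The package name comes from the project mentioned above; the server key and the `-y` flag (auto-confirm the npx install) are assumptions based on the usual Cursor MCP config shape:

```json
{
  "mcpServers": {
    "shrimp-task-manager": {
      "command": "npx",
      "args": ["-y", "mcp-shrimp-task-manager"]
    }
  }
}
```

Running via npx this way starts the server locally instead of going through a hosted connection, which is what makes it more resilient to the disconnections described above.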
Just wanted to circle back and thank you for that great suggestion! I passed it along to my colleague, and he’s already got it running smoothly on his dev environment.
Really appreciate you taking the time to help us out with this.