I think yesterday was the last time it happened to me. I still hadn’t updated to v1.2. It happened with Claude Sonnet 4 (probably Thinking).
I haven’t tried again today, because I suddenly reached the rate limit with Claude Sonnet.
I know this is off topic, but just know there’s one more person here who needs transparency regarding rate limits. A simple bar that fills up with each prompt, plus a timer, would be so great.
Also, an even simpler thing that would help is adding an icon next to each model in the dropdown, so we know which models we’ve hit the rate limit with, instead of having to send a prompt just to see if we get the rate-limit message.
There were some adjustments today, I think after a report, and yes, it would really be important to know if this still occurs on 1.2.
Happened to me today at least twice.
Pro+ is my subscription. I guess I don’t understand what v1.2 is, or what Cursor 1.2 is? Can you explain where and what to post and ask? I am really tired of having no transparency about what I use and what I can use. One day I can use Sonnet 4, one day o3, one day Opus, all Max versions; then it says I hit limits and should change agents; then the day after I can use everything again; then it asks me to upgrade to Ultra… I would upgrade if I had any kind of transparency about what I have and what I am paying for. So $20 was not enough for Pro, I upgraded to Pro+, and that is not enough either (besides all that, I spent around $300 at the end of May and in June on pay-per-use). OK, so if I upgrade to Ultra and that is not enough again, what then??? Or make an unlimited package, so I know I will not be interrupted 24/7 when I want to work and use Opus or Sonnet Max nonstop; no issue, I would pay. But have transparency, how hard is that? Tell me how much I get for what I subscribe to.
It was maybe 20? The problem is my app includes code for a Rails dashboard UI app, a WordPress theme (WP serves most content, including the Rails header/footer and “assets”), and a distributed-app thingy that acts like a service mesh via RabbitMQ fanout queues. LLM misunderstanding/simplification in memories resulted both in inappropriately general memories AND in LLMs reliably applying memories “out of context” (ha!). The feature is there now; earlier it was visible, possibly because I’d chosen to try beta stuff, but my experience with memories convinced me not only to turn it off but also to go with defaults vs. beta in general… so I thought the feature had been removed entirely, but from your response I’m guessing it’s actually been there all along for those who opted in. I have enough trouble getting any LLM to actually take app architecture into account, especially for the distributed bit, even when working on it by itself. “Memories” might be useful for some other app; I dunno.
Please create a separate topic for pricing/plans or use any other existing thread on that topic. This thread is about Cursor app version 1.2 specific changes and not about plans or pricing. Thank you for your understanding.
Yeah, I do use memories for some simple rules that apply in the context of a project.
This is a standard VS Code feature. It periodically fetches git changes so the UI can show you when you may want to pull. You can see Source: Git.
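For what it’s worth, that periodic fetch can be tuned (or switched off) in VS Code’s settings.json. `git.autofetch` and `git.autofetchPeriod` are standard VS Code settings; the particular values below are just an illustration, not a recommendation:

```json
{
  // Disable the periodic background "git fetch" entirely...
  "git.autofetch": false

  // ...or keep it on and fetch every 5 minutes instead of the default:
  // "git.autofetch": true,
  // "git.autofetchPeriod": 300
}
```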
@Arni_Silnet you can create/generate a task list and keep it in, e.g., a project_mgn folder in your repo, together with the overall architecture design document. Each time you start a new feature that requires a complex design, you just add a single task to the maintaskList.md and refer to a separate detailed task list - it works really well. Make sure you mention it in your Cursor rules as well.
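To make the idea concrete, here is a minimal sketch of such a structure (the folder name and maintaskList.md come from the post above; the detailed task file names and contents are made up):

```markdown
<!-- project_mgn/maintaskList.md -->
# Main task list
- [x] Feature A — done, details in project_mgn/tasks/feature-a.md
- [ ] Feature B — detailed design in project_mgn/tasks/feature-b.md

<!-- project_mgn/tasks/feature-b.md -->
# Feature B — detailed tasks
- [ ] 1. Extend the data model
- [ ] 2. Add the API endpoint
- [ ] 3. Update the UI
```

A Cursor rule can then point the agent at maintaskList.md so it knows where to look before starting work.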
I know. I’ve been working that way with .md or .mdc files (Cursor rules). But it would be much simpler if Cursor had these tools more structured, and the new todo tasks are a step in the right direction; I just wanted to push this further. I’m sure the development team is heading there already.
Do those work better than rules for the purpose? Frankly, I get LLMs (mostly Claude) to “discuss” architecture and then write rules, which I use to confirm current understanding and also refer to later for specific types of development. The memories thing strikes me as either the logical equivalent, if I generate them myself, or fundamentally broken if I let the LLM generate them, because that way they never seem to include sufficient context to tell whether they do or don’t currently apply. But again, that might just be my specific project’s issue.
Memories are not better or worse than rules.
They serve a different purpose:
- Memories: short but important details, e.g. which framework or library to use overall, if the model doesn’t figure it out from other project data.
- Project Rules: more detailed requirements, e.g. when using framework X, which guidelines or approach to apply.
For feature discussion, I use .md files where I write requirements and ask the AI to create an implementation-plan .md file. I read this one and give the AI feedback on the details.
You can control whether the AI creates memories under Settings > Rules & Memories (switch it off if you don’t want automatic memories). You can always ask the AI to remember something, e.g. “Remember to use only API X for purpose Y.”
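As a rough illustration of the split described above: a project rule lives in its own file and carries scoping metadata, while a memory is just the short fact itself. The file contents here are invented, and the `.mdc` front-matter fields shown should be treated as an assumption about the format rather than a definitive reference:

```markdown
<!-- .cursor/rules/framework-x.mdc (hypothetical example) -->
---
description: Guidelines for working with framework X
globs: ["src/**/*.ts"]
alwaysApply: false
---
- Prefer framework X's built-in router; do not add a second routing library.
- Follow the repository's existing error-handling approach.
```

The equivalent memory, by contrast, would simply be the one-liner: “Use only API X for purpose Y.”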
@Will_Fedder could you post a separate full bug report so this issue can be analyzed and fixed?
Thanks! But I have to say they only “listen” occasionally regardless, and tend to misinterpret. One of my most important issues is trying to stop them from modifying code prior to discussion/permission. All sorts of rules referring to each other, all kinds of “or this bad thing happens, including context corruption when I have to revert via chat because I was nowhere near a git checkpoint”, etc. … yet it makes no difference. As for APIs, even simple stuff like “avoid all DSL use when possible in favor of direct SQL and native Ruby” gets ignored. Trained to be “helpful”, I suppose. I already use a custom mode; I suppose I should just switch back and forth between modes when I’m actually ready for code. Anyway, that’s all irrelevant to memories per se.
Hi team, after a few days working with the new todo lists, I can really say I love this new feature. It is awesome. Now I can give many commands and all of them get executed. This is a really good improvement. The next level would be having multiple chats instead of using background agents, which are really difficult to work with because the behavior is different and the workflow is not very smooth. I’m excited for future improvements and very happy; keep up the good work.
Why can’t I use the C/C++ compilation environment in Cursor? My C language project files cannot be compiled properly in Cursor.
Since updating to v1.2, our company security software (WatchGuard) prevents me from opening Cursor at all, with an “untrusted program blocked” message. The issue seems to stem from vscode-policy-watcher.node. I asked ChatGPT, and apparently this executable is missing a digital signature?
Dear danperks,
After updating Cursor from 1.1.7 to 1.2, I can no longer SSH to Ubuntu 18.04, because the bundled VS Code version was updated to 1.99? Could you fix this problem?
@T1000 - Any updates on the VS Code / alternative Marketplace issue? I just updated to 1.2.2 and the issue has not been fixed. Setting these values in the product.json file worked in previous versions. Using these values now, a Marketplace search returns no results.
"extensionsGallery": {
  "galleryId": "cursor",
  "serviceUrl": "https://marketplace.visualstudio.com/_apis/public/gallery",
  "itemUrl": "https://marketplace.visualstudio.com/items",
  "resourceUrlTemplate": "https://{publisher}.vscode-unpkg.net/{publisher}/{name}/{version}/{path}",
  "controlUrl": "https://main.vscode-cdn.net/extensions/marketplace.json",
  "recommendationsUrl": "",
  "nlsBaseUrl": "",
  "publisherUrl": ""
},
I will check again, thanks for flagging it