I’ll throw my hat into the ring on this one as well: the responses being returned in Chat and Composer feel more “stupid,” for lack of a better term.
Hello! One of the devs here. Thanks for reporting; we certainly want to fix whatever’s going on.
If you have any situations where you think the AI has performed worse on a specific chat or composer session, please report it to us.
If you ensure Privacy Mode is off, and then come across a bad AI suggestion, send me a DM with screenshots of the issue, alongside the date and rough time of your interaction and your Cursor account email!
While we may not give feedback on each response, we’ll be looking into them to find what the AI was missing to better answer your query. To keep things manageable, please only report 1-2 bad occurrences.
Thanks!
@danperks What if we do not wish to have Privacy Mode disabled? This was working wonderfully for the last several months until this change.
For the below, I’ve tried both Claude 3.5 Sonnet and GPT-4o. I’ve also removed my custom Cursor instructions in settings to see if it would help, but to no avail. I also tried downgrading back to Cursor 0.42.4 and I still get the same results, so maybe something changed on Cursor’s backend as well, for the long-context beta.
It’s certainly difficult to explain something this subjective, but I’ll give it a shot. I have a shell script (see below) that exports Postgres tables to CSV. Some of the tables are partitioned and some are not. For the partitioned tables we have to do a SELECT inside the \COPY, but the logic isn’t right and returns 0 rows. So I gave Cursor a very simple prompt that it would normally parse very well if it remembered all the previous iterations in our current session creating this shell script. Instead, it added a WHERE clause that has nothing to do with the prompt (even if the prompt isn’t the best prompt in the world). In the past, even these simple prompts have worked very well for debugging, but now we have to be extremely verbose in our ask, and even then it produces inconsistent results. Additionally, the more complex the file is, the worse it performs. I have another example, a React component of about 120 lines, where it will consistently delete at least 50 lines on every prompt, even if I say “do not change existing functionality.”
=== cursor suggestion
now it seems like nothing is getting exported for partition tables?
[DEBUG] Exporting table: rpt.transaction_search
COPY 0
[DEBUG] Successfully exported rpt.transaction_search to CSV
=== a few other prompts that just don’t make sense
This one just added debug console lines and didn’t even try to fix the issue.
=== original script
#!/bin/bash

# Connection settings
DBNAME=pruned_data_store_api

# Output directory for CSV files
OUTPUT_DIR="/tmp/csv_exports"

echo "[DEBUG] Starting CSV export process..."
echo "[DEBUG] Database name: $DBNAME"
echo "[DEBUG] Output directory: $OUTPUT_DIR"

# Create output directory if it doesn't exist
sudo mkdir -p $OUTPUT_DIR
sudo chown postgres:postgres $OUTPUT_DIR
sudo chmod 777 $OUTPUT_DIR

# Remove all files in the output directory
sudo rm -f $OUTPUT_DIR/*.csv

# Get list of all tables from all schemas
echo "[DEBUG] Getting list of tables from all schemas..."
tables=$(sudo -u postgres psql -d $DBNAME -t -c "SELECT schemaname, tablename FROM pg_tables WHERE schemaname NOT IN ('pg_catalog', 'information_schema');")

# Export each table to CSV
while IFS="|" read -r schema table; do
    schema=$(echo $schema | tr -d ' ')
    table=$(echo $table | tr -d ' ')

    echo "[DEBUG] Exporting table: $schema.$table"

    # Export to CSV with double quotes around fields and comma delimiter
    if [[ "$table" == *"_search" ]]; then
        # Handle partitioned tables with SELECT
        sudo -u postgres psql \
            -d $DBNAME \
            -c "\COPY (SELECT * FROM $schema.$table) TO '$OUTPUT_DIR/${schema}_${table}.csv' WITH (FORMAT CSV, HEADER, FORCE_QUOTE *);" \
            2> >(grep -v "NOTICE" >&2)
    else
        # Regular table export
        sudo -u postgres psql \
            -d $DBNAME \
            -c "\COPY $schema.$table TO '$OUTPUT_DIR/${schema}_${table}.csv' WITH (FORMAT CSV, HEADER, FORCE_QUOTE *);" \
            2> >(grep -v "NOTICE" >&2)
    fi

    if [ $? -eq 0 ]; then
        echo "[DEBUG] Successfully exported $schema.$table to CSV"
    else
        echo "[ERROR] Failed to export $schema.$table to CSV"
    fi
done <<< "$tables"

echo "[DEBUG] CSV export process completed"
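For anyone trying to reproduce the loop behavior without a database, the table-list parsing can be exercised in isolation. This is a standalone sketch with made-up sample input; `psql -t` pads each `|`-separated field with spaces, which is what the `tr` calls are stripping:

```shell
#!/bin/bash
# Standalone sketch of the while-read parsing above, using fake
# "psql -t" output (real output pads each field with spaces).
tables=" rpt | transaction_search
 public | users"

while IFS="|" read -r schema table; do
    # Unquoted echo plus tr strips the padding, as in the original script
    schema=$(echo $schema | tr -d ' ')
    table=$(echo $table | tr -d ' ')
    echo "${schema}.${table}"
done <<< "$tables"
# prints:
# rpt.transaction_search
# public.users
```

Running this confirms the parsing itself is sound, which points the 0-rows problem at the `\COPY (SELECT ...)` branch rather than at the loop.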
I’m experiencing exactly this. Not sure why Cursor has gotten so dumb lately. I’m thinking maybe there’s some kind of behind-the-scenes usage throttling of the premium models: Cursor can show that I’m using 3.5 Sonnet while actually using GPT-4 or some other dumber model just to reduce usage costs, because the responses I’m getting are obviously not Sonnet’s.
I used the system in demo and it was doing a great job. Purchased the annual plan and then tried another project and it has been a headache.
THIS. I am sure of this. I had Sonnet 3.5 selected, but I am SURE the answer was being given by a model like GPT-4, based on the speed of the answer and how dumb it was. It sounds like they have not only updated Cursor for the worse but also the backend, mixing models depending on load or to cut costs. This has happened to me several times; as I say, I am sure that when Cursor goes wrong, it is because it is not using the model it claims to be. I hope this gets solved.
Every time I downgrade, I magically somehow end up back on 0.43 lol tf
This support is so lazy. Did you even see his reply? lol
Honestly, I think it’s just a glitch: with this update, Cursor suddenly can’t submit entire files. You have to manually “Select All” but NOT include the first line of the file (all other lines can remain selected), then hit the keyboard shortcut to send the contents to the Chat window, and it’s back to working.
I noticed the horror show of incredibly poor responses suddenly too, so I took a methodical approach and finally just ASKED the AI if it saw my code. It said “no” only when the entire file was selected; I narrowed it down bit by bit until I found the glitch above.
Hope this helps! This is clearly an urgent bug and I hope Cursor attends to it ASAP.
Thanks!
Yes, same here. Exactly the same issues.
I even tried switching from Claude to ChatGPT; same issues after two chats.
Rules all ignored
I’m going to try just using the API directly and see if it’s Cursor.
Found part of the problem. After harassing “claude-3-5-sonnet-20241022” (I have my reservations that it’s actually using the latest Sonnet, at least at first pass), it turns out it decides to read only a small number of files, and in chunks of 50 lines:
- It found 57 files.
- It decided to read only 4 of them, in chunks of 50 lines, and skipped lines 50-200, 250-400, and 450-500 in the largest.
Of course the model cannot respond adequately when the proper context isn’t provided. @truell20 @danperks, did you guys recently introduce a triage/router model to select semantically relevant files and cap the chunk size? Or did you change the RAG retriever/reranker? It seems there was an overall push in 0.43 to limit the context input to the foundation model.
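If those numbers are right, the arithmetic is grim. A quick back-of-the-envelope sketch (assuming the largest file is roughly 500 lines, as the skipped ranges suggest):

```shell
#!/bin/bash
# Rough coverage estimate for the largest file: only three ~50-line
# windows kept (roughly 1-50, 200-250, 400-450); the rest was skipped.
kept=$(( 50 + 50 + 50 ))
total=500
echo "$(( 100 * kept / total ))% of the file reached the model"
# prints: 30% of the file reached the model
```

With roughly two-thirds of the file never shown to the model, inconsistent answers are exactly what you’d expect.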
I’m not able to use Composer even though my free trial hasn’t expired yet. Please tell me what to do.
I’ve encountered an inconvenience:
When I need the composer to rewrite code using a specific Flutter package,
it doesn’t understand what I mean, so I need to add documentation.
However, sometimes the packages are published on GitHub without proper documentation, requiring me to have the composer thoroughly read through the public repository, especially the code itself.
This makes me think the current documentation feature may not be adequate when handling GitHub repositories as documentation sources, since it usually only gets a rough picture of the target repository package.
Completely agree too. I am a supporter and have used Cursor for over 6 months; it has become a LOT less intelligent, especially after the last ‘upgrade’.
If I enable my own models with my own paid API from Anthropic or OpenAI, it becomes much smarter.
It feels like they are using much less intelligent models even when ‘Claude-Sonnet’ is selected.
I ran a test with a basic code block that had simple errors: the new version of Cursor vs. my own paid Anthropic API.
No surprises what happened. My own Anthropic call caught the three errors immediately; Cursor took four attempts, with the third attempt linking out to the web to figure it out.
The current model update has killed any semblance of memory. It is significantly less intelligent in its replies and seemingly ignores my custom instructions, to the point that for a while I was back to copy-pasting from a browser-based AI into Cursor instead of using the built-in models.
I get that the money men want value from their investment, but using less intelligent models in the pipeline to save costs will kill this app.
Your users are your income. It is so inefficient to use now that I have downgraded, yet that has killed my workflow, so I am sort of in limbo.
tl;dr
Cursor was brilliant, efficient, and intelligent; now it is back in school learning, with no memory, makes a LOT of mistakes, and is highly inefficient.
How many tokens are taken up over a month by it offering to answer further questions? Just stop! I’m here asking questions, so I will naturally ask more; why keep offering and wasting tokens for everyone?
How do we turn off auto-update? Cursor is only usable if we can keep it on 0.42, but it keeps auto-updating to 0.43.
Currently the only option is to uninstall and then reinstall, and don’t quit the app, otherwise it will update. So just leave your computer on, or repeat the process every time.
Same here. It’s just been sitting on “Generating…” for, I don’t know, 5 MINUTES.
I’m paying for this crap service: $20 a month, and it doesn’t even work half of the time.
And those 500 requests go so fast it’s not even worth it.
Would be nice to have an unlimited-request plan instead of this non-working stuff.
Honestly, paying $20 a month just isn’t worth it for me anymore. It worked just fine when I bought it some months ago, but it’s been on a steep decline over the past few months.
I’m seriously considering ditching Cursor and just using some VS Code extension like Continue with an API instead.
Same here; not sure if it’s Claude or Cursor. I ask Composer to auto-debug and test, but after each code change it doesn’t generate a test command unless I tell it to, which wasn’t the case the last time I used it (about 4 days ago).
I think @Eduard and @daniel347x nailed it: I’m not having issues when I explicitly give it the context of the file I’m working on with the @file_name.py syntax.
To prevent updates in 0.42.2, add
"update.enableWindowsBackgroundUpdates": false
to your settings, and it will not update in the background or on restart.
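For reference, here is what that fragment might look like in settings.json (JSONC, so comments are allowed). Note the `update.mode` key is my assumption: it is the standard VS Code setting that Cursor, as a VS Code fork, likely inherits; verify both keys in your build.

```json
{
    // Hypothetical settings.json fragment for staying on 0.42.x.
    // "update.mode" is the VS Code-style master switch (assumption);
    // the background-updates key is the one reported in this thread.
    "update.mode": "none",
    "update.enableWindowsBackgroundUpdates": false
}
```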
I’ve noticed it’s been stupid for a few days.
Today I even had the disaster of it deleting all the files in a directory.
Luckily I had a backup and restored it.
Also, Composer doesn’t seem to modify files unless I explicitly tell it to.
I’m seriously considering switching to Windsurf.