When Cursor starts for the first time, it opens a Cursor tutorial project with examples of how to use the editor.
However, the project doesn't include any human-written reference solutions, so there's no easy way to check whether Cursor's answers are actually correct or reasonable. I'm referring to these examples:
bug_finding.rs
/* Simply, ask the chat (Cmd+L/Ctrl+L) where the bugs in the code are. */
explaining_code.c
/* Simply, ask the chat (Cmd+L/Ctrl+L) what this code does. */
finding_code.ts
/* Simply, ask the chat (Cmd+L/Ctrl+L) which method lets you find the children of a folder. */
language_translation.rs
/* Simply, copy the following python code, hit Cmd+K, ask it to translate some code to rust, paste in the code, and hit enter. */
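To be clear about what I mean by a reference solution: for language_translation.rs it could be as small as a Python snippet paired with the Rust a human would expect back. The example below is just my own toy sketch, not taken from the tutorial files:

Python input:
def sum_list(xs):
    return sum(xs)

Expected Rust output:
fn sum_list(xs: &[i32]) -> i32 {
    // same behaviour as the Python version: sum the elements of a slice
    xs.iter().sum()
}

Even a small expected answer like this per example would make it much easier to judge whether a tool got it right.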
In particular, I'm running an evaluation comparing Cursor and GitHub Copilot, and they give different answers to the questions above. If I can show that Copilot gets them wrong while Cursor gets them right, it would help promote Cursor adoption in our company.