Reflection on how I use Cursor

Hello,

Long-time Cursor user here (nearing 15 months, I think). I initially started writing this in another thread where Composer problems were discussed, but decided to start a new topic in case someone finds this helpful and wants to discuss how they use it.

Languages and frameworks I have used Cursor with:

  • JavaScript/TypeScript backends and frontends (Express, React, SvelteKit, Svelte 4/5)
  • Python (Jupyter notebooks, simple scripts, FastAPI etc.)

A bit of reflection on my usage and the current heuristics I have for Cursor:

  • Cursor Tab works nicely pretty much everywhere. I use it 100% of the time and like it; it is usually very helpful and reduces cognitive load a lot.
  • Cmd-K inline chat: very nice for working on smaller pieces of code, such as quick modifications to functions. I almost always work with the automatic scope instead of selecting manually.
  • Chat
    • I use Claude 3.5 Sonnet about 80% of the time. It is capable of simple refactorings, creating single-file boilerplate, etc.
    • In my experience, o1-preview has handled more complex single-file refactorings better than Sonnet. If Sonnet does not deliver, I try o1-preview next.
  • Composer:
    • I have tried more complex refactorings spanning multiple files, with a success rate near zero. I am quite sure that more complex tasks are simply overwhelming for current frontier language models.
    • Perhaps useful if you need to create something new and simple(ish) that spans multiple files.
  • Codebase and documentation RAG
    • I work on medium-sized codebases that I am familiar with, so I am not a heavy user here. I have sometimes used it on unfamiliar codebases to find things. However, I have a nagging feeling about RAG in general: it might miss things.
    • Docs: I have added some docs to help with generating new code, and sometimes it yields nice initial boilerplate. However, I don't feel comfortable without internalising the library's core functionality myself, since LLM hallucinations bite you pretty hard and you simply need to know the material to some degree.

We need to adjust expectations a bit amid the growing "programmers are not needed anymore" hype. Models are not quite there yet for robustly working on existing codebases that are not toy programs. Composer is an ambitious take, but I think we'll need to wait for more intelligent models for it to work more robustly. And to be honest, I am a bit skeptical of this working unless we can connect the AIs to a feedback loop so that they can correct their own actions (write and run test cases, etc.).
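To make the feedback-loop idea concrete, here is a minimal sketch in Python. Everything here is hypothetical: `fake_model` is a stand-in for a real LLM call (no actual API is used), and the "tests" are a single in-process assertion rather than a real test runner. The point is only the shape of the loop: generate code, run tests, feed failures back until the tests pass.

```python
from typing import Optional


def fake_model(spec: str, feedback: Optional[str]) -> str:
    """Stand-in for an LLM call (hypothetical). First draft is buggy;
    given test feedback, it returns a corrected draft."""
    if feedback is None:
        return "def add(a, b):\n    return a + b + 1"  # buggy first attempt
    return "def add(a, b):\n    return a + b"          # corrected attempt


def run_tests(source: str) -> Optional[str]:
    """Run a tiny test against the generated code.
    Return None on success, or an error message to feed back."""
    namespace: dict = {}
    exec(source, namespace)
    try:
        assert namespace["add"](2, 3) == 5
        return None
    except AssertionError:
        return "test failed: add(2, 3) should be 5"


def feedback_loop(spec: str, max_rounds: int = 3) -> str:
    """Generate -> test -> feed failures back, until tests pass."""
    feedback: Optional[str] = None
    for _ in range(max_rounds):
        source = fake_model(spec, feedback)
        feedback = run_tests(source)
        if feedback is None:
            return source  # tests pass, accept this version
    raise RuntimeError("model never converged within max_rounds")


print(feedback_loop("write add(a, b)"))
```

With a real model and a real test runner (e.g. pytest in a subprocess) in place of the stubs, this is the kind of loop that would let the model notice and correct its own mistakes instead of the user doing it.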

Also, since LLMs cannot know what they are unable to do, they try to deliver even when the input is garbage (vague specs) or when they lack the context to realistically deliver the feature (hallucinated APIs and whatnot). They still try, and you can end up in a loop of the LLM "being sorry" all the time.