What's Your Go-To AI Model for Coding and Any Cool Tips?

Hey everyone,

I’m Alex, and I’m curious to hear what AI models you all are using for coding. I’ve seen options like Claude Sonnet 3.5/3.7, Gemini 2.5 exp, DeepSeek r1, etc.—what’s been working for you? Also, do you think AI models that “think” (using chain-of-thought) actually make a difference when coding, or is it more just theory?

I’m especially interested in tips for handling big Python projects (around 10k lines of code). What models or settings do you find most effective, and are there any must-use configurations you’ve discovered? Whether it’s a setting tweak or a neat workflow tip that has saved you time, I’d love to hear about it.

Thanks a lot for sharing your experiences, and I’m looking forward to picking up some new ideas!

Cheers,
Alex

2 Likes

I work on a game I wrote most of the code for, in Lua, using an open-source C++ engine, so your mileage may vary.
3.7 is OK for new things, but in Agent mode it’s way too trigger-happy about breaking things that are only remotely related to the task it was given. I stopped using it, apart from Ask mode. You can work around this by adding requirements not to touch anything that isn’t 100% necessary, but even then it can screw up.
3.5 is good, but less capable.

Gemini 2.5-pro-exp-03-25 still breaks a few too many things, but not as many as 3.7. It has been my go-to model lately.

I put in rules not to touch things I didn’t ask for, but to no avail. What works after I’ve made changes is to spend one request having it re-analyze my code without changing anything (something like: “parse the updated code and tell me how…”). You can also fold that into a request to change something, as long as you insist strongly enough that it doesn’t revert your changes.

One thing I like a lot is to ask it: “I want to do X, how is it done in file Y?” Being the only developer, I didn’t write much documentation, and rediscovering how my own code works years later would otherwise be long and thankless.

2 Likes

Thank you for the great advice!

For most of my time with Cursor I only used Claude 3.5. Once 3.7 came out, I mostly switched to that. I experimented with “auto” mode, which decides on the fly which LLM to use. I got good results until I found out it was charging me for Max models, so now I just stick with Claude 3.7.

I’m thinking about trying some new models though. This benchmark is handy for seeing which models are best at what. I haven’t checked it in a while, but it looks like o3-mini and gemini-2.5-pro-exp are topping the charts now, so I’m gonna give them a spin today.

I haven’t used the thinking models much yet. I think they can be useful if you need help brainstorming solutions to complicated problems… but in my experience, the AI gets less useful when the problem or task gets more complex. I get more consistent quality by keeping individual tasks & chats small and focused.

I also work on a huge & ancient codebase. Here’s what has helped me:

  • Set up a good .cursorrules file. I use mine to specify stack technology versions, code style and naming conventions, core models and relationships, and “business terminology” for the application (rough sketch after this list). NOTE: the AI doesn’t always follow rules perfectly; sadly this is a shortcoming of LLMs in general. Sometimes you have to remind it :joy:
  • Have the AI work on one specific task at a time. When moving to a new task, start a new chat.
  • Prompt quality is critically important. Can’t stress that point enough. Take the time to write out clear prompt messages that outline exactly what you want to accomplish, set expectations, and provide as much context as possible.
  • Agent mode has a lot of power and can make a lot of changes quickly, as well as run terminal commands. However, I typically just use chat in Ask mode. It can still suggest edits to files, but you’ll have to select and apply the suggestions manually. This can be slower, but provides more control; sometimes Agent gets carried away and breaks a bunch of stuff before you realize what it’s doing.
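
For reference, here’s a rough sketch of the kind of content my .cursorrules covers. Every specific below (versions, naming rules, terminology) is a placeholder, so swap in whatever fits your own project:

```text
# Stack
- Python 3.11, FastAPI, PostgreSQL 15  (placeholders: list your real versions)

# Style
- Follow PEP 8; add type hints to new functions
- snake_case for functions and variables, PascalCase for classes

# Domain terminology
- "Order" = a customer purchase; "Fulfillment" = the shipping workflow

# Behavior
- Only touch files directly related to the current task
- Ask before changing public function signatures
```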

I’m also a big fan of “plan first, execute later”, especially for bigger tasks. Use Ask mode, work with the AI to create an implementation plan, and have it write out exactly which files it’s going to change and why. Then, if you’re happy with the plan, switch to Agent mode and tell it “ok, execute the planned changes, don’t make any other edits” and watch the magic happen.
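
If it helps, the two prompts usually look something like this (the task itself is just an invented example):

```text
# Step 1, Ask mode (planning only)
"I want to add input validation to the CSV import. Don't write any code yet.
List each file you would modify, what you would change in it, and any risks."

# Step 2, Agent mode (execution)
"Execute the plan above exactly as written. Don't make any other edits."
```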

1 Like

Thank you so much for taking the time to help me. It is greatly appreciated!

1 Like

This is the way

You don’t need to use Ask mode; I never have (though Ask mode might be the best way to do it!)

You can also just create a Markdown file in your project and ask Cursor to write the plan into it for you.

You can send your code to Gemini’s AI Studio and get it to create a plan.

Lots of ways to do it.

I would also add that getting Cursor to write tests is a fantastic use case. Then, when it makes changes, run your tests.
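
To make that concrete, here’s a minimal sketch of the kind of test Cursor might generate for a Python project like Alex’s; the module and function names are made up for illustration:

```python
# tests/test_pricing.py
# Hypothetical example: myapp.pricing.apply_discount is a placeholder for your own code.
import pytest

from myapp.pricing import apply_discount


def test_apply_discount_basic():
    # 10% off a 100.00 order should come to 90.00
    assert apply_discount(100.00, 0.10) == pytest.approx(90.00)


def test_apply_discount_rejects_negative_rate():
    # A negative discount rate should be refused outright
    with pytest.raises(ValueError):
        apply_discount(100.00, -0.05)
```

Run pytest after every batch of AI edits; if something unrelated breaks, you’ll know straight away.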

1 Like

Another technique I have found very useful is to create small memory banks for Cursor to use. This is a little like .cursorrules, but I just create a separate directory called ‘cursor-files’.

In this folder, I have files such as:

  • application_overview.md (highlevel overview of the entire application)
  • testing_guide.md (explicit directions on how I want cursor to write tests)
  • individual_service_overview.md (for important large services, I explicitly describe how a service works)
  • database_schema.txt (my whole database schema)

Whenever I have Cursor doing a specific task, testing for example, I send testing_guide.md along with the prompt.
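
As a rough illustration, a testing_guide.md might contain something like the following (the conventions here are invented placeholders; yours will differ):

```text
# testing_guide.md
- Use pytest; put tests under tests/, mirroring the source layout
- One test file per module, named test_<module>.py
- Mock external services; never hit the real database in unit tests
- Every bug fix gets a regression test that reproduces the original issue
```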

Whenever Cursor misunderstands a specific area of my application, I’ll create a new file explaining the part it got wrong. Then I have that explanation on hand for future use whenever I need it.

The most used file is application_overview.md, and I’ll always tell Cursor:

‘Implement the project defined in project_spec.md, refer to the application_overview.md when you need to’

The above is an amazingly powerful prompt.

Yes, I’m sure you could do this with .cursorrules, but I don’t want the application_overview to be sent with every single prompt…

If anyone has improvement suggestions, let me know!

2 Likes

That’s an amazing idea, I’m going to try it out!