Just kidding, you don't even need to hire me, I will literally just do it for free.
It’s very rare to see a company skyrocket to such heights so quickly, like Cursor, and then immediately come crashing down. This chapter of Cursor’s story, I’m naming Icarus. I want to explain very clearly how, as a business, you can navigate your way out of these issues and escape the same fate as Icarus.
First, why should you listen to me? Because I was one of the earliest contributors and testers of RooCode, and I helped bring major features that exist now, like “Orchestrator” mode, to fruition. I have used earlier coding models to punch way above their weight class; my use of even Claude 3.5 Sonnet could easily go toe to toe with the average user on Claude 4. I know that sounds crazy, but it’s not as hard as you’d think if you correctly understand how to use these models to the fullest of their potential.
My offer is simple, and I will list my plan here. If you hire me, I will save your company millions, make Cursor at least 2x as good, improve the pricing model and its marketing to bring back customers, and finally compose a campaign to redeem yourselves.
Here is my plan →
(Don’t faint at the numbers; I will explain how this is possible.)
Revert the pricing changes to 1,000 requests a month with an unlimited slow queue, at the same $20/mo price.
Instead of asking users to start a new chat, simply increase the number of requests consumed based on the context used. Remove Max Mode entirely, make those models the default for everyone, and notify users when a chat is about to receive a bump in request usage due to context length.
This puts the user in control: if they want to keep the chat going, they can; if they want to send long messages, they can; and if they want to attach a lot, they can.
The beauty of this is that you could even show the user roughly how much “estimated” context their files and message length will contribute to their current “request tier” before they even send the message. Then, as the user sends longer or more messages with more files, a progress bar shows how close they are to the next tier before that chat starts consuming more requests. This prevents users from willy-nilly sending the “dumb vibe” questions, which waste the customer’s money.
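To make the tier mechanic concrete, here is a minimal sketch of the pre-send estimate. The tier thresholds and token counts are made-up illustrative numbers, not Cursor internals:

```python
# Hypothetical sketch: context-tier progress for a pre-send estimate.
# TIER_THRESHOLDS values are invented for illustration only.

TIER_THRESHOLDS = [40_000, 100_000, 200_000]  # context tokens per tier bump

def estimate_tier_progress(context_tokens: int) -> tuple[int, float]:
    """Return (current tier, progress toward the next tier as 0..1)."""
    prev = 0
    for tier, limit in enumerate(TIER_THRESHOLDS):
        if context_tokens < limit:
            return tier, (context_tokens - prev) / (limit - prev)
        prev = limit
    return len(TIER_THRESHOLDS), 1.0  # past the last threshold: max tier

tier, progress = estimate_tier_progress(70_000)
print(f"tier {tier}, {progress:.0%} toward the next bump")  # tier 1, 50% ...
```

The UI would just render that progress value as the bar, updating it live as the user attaches files or types.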
Part of Cursor’s problem with spend is what I’m calling the “dumb vibe” phenomenon, which costs your company millions in losses. What’s going on is that users start many new chats trying to solve a problem that could have been solved much more efficiently the first time with a better model. So by withholding the better models from the user, you effectively lose more money, because the user runs a dumb model against a hard question more times than it would have taken to run the smart model once. This creates a negative feedback loop where the user ends up costing Cursor more money, still isn’t able to complete the task, and leaves frustrated; at the end of this process, they may even cancel their subscription, which costs Cursor even more.
—
If a model like Opus or o3 costs more to use, this still works, because all you have to do is change how fast users move through the context “usage” tiers based on the model’s API pricing. If a model has a high API cost, make the request level climb faster than it does for models whose API is cheap. For example, if o3 is 3x as expensive as Kimi K2, make it so 100k of context costs 3 requests with o3, while Kimi K2 at 100k context is still at the level where it’s 1 request.
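The arithmetic above can be sketched in a few lines. The multipliers and the 100k-token granularity are assumptions taken from the example, not real pricing:

```python
# Hypothetical sketch of cost-scaled request charging. The per-model
# multipliers are invented (the example assumes o3 costs ~3x Kimi K2).

COST_MULTIPLIER = {"kimi-k2": 1, "o3": 3}  # relative API price

def requests_charged(model: str, context_tokens: int) -> int:
    # One base request per 100k tokens of context (made-up granularity),
    # scaled by how expensive the model's API pricing is.
    base = max(1, -(-context_tokens // 100_000))  # ceiling division
    return base * COST_MULTIPLIER[model]

print(requests_charged("kimi-k2", 100_000))  # 1 request
print(requests_charged("o3", 100_000))       # 3 requests
```

Adding a new model is then just one entry in the multiplier table, derived from its published API price.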
This encourages people to use their first message in a chat as the “planning” or efficient prompt right off the bat with a good model, and then naturally guides them toward choosing cheaper models to continue the chat. This is literally what you want people to do with Auto, just reframed in a way that puts the user in complete control.
The reason this saves you so much money is that users won’t be sending nearly as many requests through Cursor.
Make sense? Here’s the next idea →
Let me rewrite Cursor’s system prompt. I’ll make it better, and I will prove that I made it better with verifiable results. I’m confident you’re “lobotomizing” AI models and making them stupid, so I will fix this issue and make Cursor perform even when new models come out. You won’t have to sit on the sidelines waiting to optimize Cursor for each new model; I’ll make it “just work” out of the box at a much higher success rate, with less specialization needed on a per-model basis.
Next idea →
Launch an ad campaign where users send the same prompt to a normal model without Cursor, and then send one to agent mode after I make everything better; the ad simply shows how much better the results are from Cursor than from the base model. Then announce the reverted pricing changes and the doubled usage. Congratulations, you are going to get 1M+ users immediately from the press and YouTube for how amazing you are, and the “Cursor redemption arc” will give you guys amazing exposure and completely smash everyone’s expectations.
Finally, show a new benchmark task for “Cursor 2.0 (Back to Basics update)” demonstrating that Cursor performs twice as well as 1.2. You can even make fun of yourselves with a skit: “Sorry guys, we were using Cursor to improve itself, and it ended catastrophically, so we hired a real software engineer, and now we’ve doubled the number of requests you can use and made Cursor 2x as smart lol.”
Right now, Cursor makes the base models stupider, and I’m back to copy-pasting from Google AI Studio because it’s better than Cursor, which is ridiculous.
I have so many more ideas, but I’m getting tired of manually writing this out; I might need AI to finish the rest of this post lol.
My only condition is that as soon as Cursor ships 2.0, you guys just say “thanks Tristan,” and I will play the engineer who fixed your business for free. Maybe you can even give me some equity in the company, since you will actually be valuable again.