[Solved] Add Claude 3 models

I don’t think this was what the Cursor team intended. The main reason is that Claude’s pricing is so high. If they didn’t limit usage, they’d have to raise the price, and that’s hard to accept.

So, we either accept that it’s slow, or we accept that it’s expensive. There’s no third option.

3 Likes

I want to make sure the Cursor team gets the pricing right the first time…

…or the second…

4 Likes

This seems like an issue on Poe’s side, rather than an issue with Opus being expensive.
They obviously just didn’t do the maths.

Getting it wrong once, and then doubling the price! Amazing.

I see you weren’t saying that, though. There’s more to consider, right? Like how gym memberships assume that not everyone is gonna show up at once.

It seems odd though, because you’d think Cursor’s cost could be estimated from standard GPT-4 usage multiplied by however much more Claude costs, which surely would leave room for more than 10 uses/day.

I wonder if more users than expected went to Poe because the Claude web app isn’t great and people were keen to use Claude 3. Possibly that was the issue.

5 Likes

Thinking about it further, that’s actually around 300/month, right? So not far off the GPT-4 usage. Cursor is just expensive compared to the competition.

Phind - 500 uses per day for Claude 3 / GPT-4
Cody - unlimited uses for both

Technically Cursor is unlimited too, but nobody wants slow usage: sitting in a queue waiting on a response while trying to get work done.

3 Likes

To be fair to the Cursor devs, they’re making improvements to the IDE experience itself. Maintaining a fork of VS Code with additional features adds a lot of dev work.

Maybe it’s just random chance, but I found that references work a lot better in Cursor compared to VS Code.

2 Likes

To be fair, I think Cursor just works better. For me, Cody’s codebase indexing didn’t work as well. They also don’t have @Docs, which I love in Cursor. One of the most irritating parts of Cody is that you can’t pin the chat to the sidebar, so it lives alongside your open files. The output of its chat is nothing special. I liked Cursor and Codeium the most, and I still use both. Copilot++ is pretty weak (at least in Elixir), but Codeium autocomplete is very good and it’s free.

My guess is a lot of companies are burning VC money, which is why they don’t mind charging $9/month and letting you use whatever you want on their API. But at some point the music will stop.

2 Likes

Thanks for reviving my deleted message… not sure how that was possible.
The indexing doesn’t work any better in Cursor in my experience. In fact, I’ve had the best experience with Cody, but none of the options are quite there yet. You’d think Cody would lead here, given that it’s owned by Sourcegraph and that’s pretty much their business.

The fact that the chat is in the editor bothered me greatly, but after a day or so I realised it was actually great - just not something I’m used to. It means I can Ctrl+P and access current and historical chats easily. The advantage isn’t immediately obvious.

We have no idea what deals they or Cursor have with Anthropic and/or OpenAI. It seems likely they’re operating at a loss… but according to the media, so is Microsoft with Copilot. It’s pretty standard for new businesses or products to run at a loss.

You can see it as ‘burning VC money’, but in my opinion they’re a well-established company. They want to succeed. They won’t be burning money for fun. There will be a business plan in play.

FWIW I think Cursor is a great product. I just really want Opus. I think it’s significantly more enjoyable to work with and helps me more.

I appreciate it’s expensive, and I have no idea what the costs associated with the business are, so maybe 10 a day is a fair amount for $20/month. However, compared with Cody or Phind it’s a long way from competitive pricing. Particularly Cody, as it offers all of Cursor’s main features and has features that Cursor doesn’t, as I mentioned previously.

You think you need Claude-3, but you don’t.

That’s just because nothing better has come out yet.

[image: napkin-math table of per-request costs for Claude 3 Opus vs GPT-4]

Here’s some napkin math (which it seems Poe was unable to do).

I’m assuming people don’t normally clear their chat history, so the 10k context window is maxed out most of the time (averaging around 8,000 input tokens). For output I assume 500 tokens on average (that’s roughly the length I get back).

As you can see, Opus costs much less per request than GPT-4, which they were using until very recently (and nobody was complaining about its cost). Which brings me to the agreement they have with OpenAI and the “slow” generation. My guess is they get reserved inference capacity at a discounted price, so when the pipeline is full, requests start queuing up (hence slow vs. fast). I personally think that’s a very good approach to maximize usage and minimize cost.

With Anthropic I guess they don’t have anything similar, so they really do pay per request. At 10 requests per day, that gives you about 300 requests per month (assuming someone uses the maximum every day, which won’t happen). But the table above helps explain why they’re complaining about the cost.
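In case the table image doesn’t load, here’s that napkin math as a quick Python sketch. The token counts are the assumptions above (8,000 in / 500 out), and the prices are the per-1M-token list prices as I understand them (Opus at $15 in / $75 out, the original 8k GPT-4 at $30 / $60), so treat the exact figures as approximate:

```python
# Rough per-request cost under the assumptions above:
# 8,000 input tokens and 500 output tokens per request.
# Prices are assumed per-1M-token list prices: (input $, output $).
PRICES = {
    "claude-3-opus": (15.00, 75.00),
    "gpt-4 (8k)":    (30.00, 60.00),
}

INPUT_TOKENS = 8_000
OUTPUT_TOKENS = 500

for model, (in_price, out_price) in PRICES.items():
    cost = (INPUT_TOKENS * in_price + OUTPUT_TOKENS * out_price) / 1_000_000
    print(f"{model}: ~${cost:.2f} per request")

# claude-3-opus: ~$0.16 per request
# gpt-4 (8k): ~$0.27 per request
```

At roughly $0.16 per request, 300 Opus requests a month is already around $47 in raw API cost, which is more than the $20 subscription.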

And then you can work out how many requests you can get on a budget of $9 or $20 using the API:
[image: table of requests per month on a $9 and a $20 API budget]

The above is mainly for chat. Cmd+K would probably use less, since its context window is likely much smaller on average.
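A sketch of that budget calculation too, with the same assumed token counts and prices (Cmd+K with a smaller context would stretch considerably further):

```python
# How many requests fit in a monthly API budget, keeping the same
# assumed 8,000 input / 500 output tokens per request and the same
# assumed per-1M-token prices as the sketch above?
def requests_per_budget(budget, in_price, out_price,
                        in_tokens=8_000, out_tokens=500):
    per_request = (in_tokens * in_price + out_tokens * out_price) / 1_000_000
    return int(budget / per_request)

for budget in (9, 20):
    opus = requests_per_budget(budget, 15.00, 75.00)   # Claude 3 Opus
    gpt4 = requests_per_budget(budget, 30.00, 60.00)   # GPT-4 (8k)
    print(f"${budget}/month: ~{opus} Opus requests vs ~{gpt4} GPT-4 (8k) requests")

# $9/month: ~57 Opus requests vs ~33 GPT-4 (8k) requests
# $20/month: ~126 Opus requests vs ~74 GPT-4 (8k) requests
```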

Following the same logic, you’d think GitHub Copilot would lead as well, since it’s owned by Microsoft, which owns OpenAI (well, kind of), yet it’s lagging behind by a lot. It’s still stuck on GPT-4 with knowledge up to 2021. On top of that, they’re GitHub… so they have access to all your repos and everything. They also had the first-mover advantage, since they were the first doing this (with Codex).

To me, all this just shows how a small team like Cursor, hyper-focused on delivering and in direct contact with its community, is incredible and should not be underestimated.

2 Likes

This is true

I’ve been working with Claude 3 Opus for a week now.

It isn’t strictly better than GPT-4 Turbo, especially when given the same context size. For Python programming, its knowledge is often a couple of years out of date (especially with SQLAlchemy). It also doesn’t know nearly as much about Python libraries as 4-turbo does.

Great in a lot of other areas, but I won’t get into those.

3 Likes

You should watch this

1 Like

What time zone is the limit set to? SF time (PDT)? That way I can better schedule my prompts throughout the day.

Show me gpt-4.5-turbo and I’m ready to rock :wink:

1 Like

+100.

“if you aren’t paying for the product, you are the product.”

Let’s support Cursor with subscriptions so that they don’t need to sell our data down the road. Honestly, they’ve changed the game and in my opinion deserve to make some margin on top of the API costs.

Or, if we don’t want to pay money but would rather contribute our time, then help aider or one of the dozen “open devin” projects by contributing to their open-source codebases.

1 Like

It’s great to see that Claude access has rolled out today! I’m really looking forward to long context mode - for me, this would be a huge plus.

2 Likes

The original point was that they were burning VC money, not making users the product.

Cody:

Sourcegraph Partner LLMs will not retain any input or output from the model, including embeddings, beyond the time it takes to generate the output (“Zero Retention” ).

Cursor:

By default it does store users’ prompts and code, right? The user needs to know to uncheck the box in settings.

Let’s talk facts, please, rather than just bashing the competition for offering a cheaper product and blindly saying it’s because they’re burning VC money or making us the product.

I think all the companies discussed here are reputable. I don’t have any issues with any of them.

It’s not fair to make claims like these without any basis.

3 Likes

ok

https://www.crunchbase.com/funding_round/sourcegraph-series-d--b1dceff0