V0.30 -- Faster Copilot++, Claude

Just rolled out 0.30 to all users. It comes with:

  • Faster Copilot++: We’ve made Copilot++ ~2x faster! This speed bump comes from a new model / faster inference. ~50% of users are already on this model, and it will roll out to everyone over a few days. If you’d like to enable the model immediately, you can control your model in the bottom bar of the editor.
  • Stable Claude Support: All the newest Claude models are available for Pro and API key users. Head to Settings > Models to toggle them on. Pro users get 10 requests / day for free and can keep using Claude at API-key prices for subsequent requests.

Does that mean we need an API key entered to continue using Claude, or do you bill us from your end? If it's the latter, how can we track usage and know when the 10 requests have been used?

If you use more than 10 requests, you'll see that you're using slow requests. Or you can enable usage-based mode on this page:
usage dashboard


Thanks! I haven’t seen the extra usage payments yet! That makes sense.

No need to add an API Key.

You just need to hit one button to opt-in to usage based pricing once you hit that limit. You’ll get charged the same amount that Anthropic would have charged you for those subsequent requests.

Let us know if you have feedback. We thought granting 10 Claude requests / day (i.e. 300 requests / mo), with usage-based pricing after that, struck the right balance of sustainably covering costs while giving Pro users freedom.


This broke WSL and remote servers again; it's the 404 on the code-server download issue. I think I filed an issue on GitHub for it, and I may have found a fix last time. The application is currently unusable.

Could you try 0.30.3? Just released a new update that should fix this.

@truell20 … perhaps instead of charging 20 USD for pro users… you could charge 30 and give faster and better access to models like claude opus… users are always willing to pay a little more provided that they get access to much better models. i think the added 10 USD will easily cover your difference in cost…

i would most certainly pay a little more if it means… making my life easier

so the pricing plan would be:

  • free
  • pro
  • pro plus
  • pro plus plus
  • pro plus plus plus

    :sweat_smile: :joy:

good joke tangjun… there are only 3 plans now… free… pro and business. a plan name like advanced could bridge the gap between pro and business in price point. so Free - Pro - Advanced - Business does not seem all that bad…

Thank you so much for the update! I’m getting an “invalid API key” error for Anthropic. Cursor version is 0.30.3. I checked that the same key works in another API call. I tried other keys and they all returned the same error.

Followed the steps laid out in the thread below and it’s working now.

I am having the same problem as @sangmin, but the steps mentioned to correct it did not work for me. I keep getting a message informing me my API key is invalid, even though I have generated 3 different keys :frowning:

Per here:

@truell20 can we please also have support for using our AWS creds? So we can access Claude via AWS Bedrock (claude-3)?

from anthropic import AnthropicBedrock

client = AnthropicBedrock(
    # Authenticate by either providing the keys below or using the default AWS credential
    # providers, such as ~/.aws/credentials or the "AWS_SECRET_ACCESS_KEY" and
    # "AWS_ACCESS_KEY_ID" environment variables.
    # Temporary credentials can be used with aws_session_token.
    # Read more at "Temporary security credentials in IAM" - AWS Identity and Access Management.
    # aws_region changes the AWS region to which the request is made. By default, we read
    # AWS_REGION, and if that's not present, we default to us-east-1. Note that we do not
    # read ~/.aws/config for the region.
)
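For anyone who wants to hit Bedrock directly rather than through the SDK, the `InvokeModel` request body for a Claude 3 model is plain JSON. A minimal sketch, assuming the Anthropic-on-Bedrock message format that AWS documents (the `anthropic_version` string and field names should be checked against current AWS docs):

```python
import json

# Example request body for Bedrock's InvokeModel with a Claude 3 model.
# "bedrock-2023-05-31" is the documented anthropic_version for this format.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Hello, Claude"},
    ],
})
```

A body like this is what the SDK builds for you under the hood; it should also work with the `aws bedrock-runtime invoke-model` CLI once the model is enabled in your account.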

AWS grants claude-3 access almost immediately, whereas I am still waiting (i.e. weeks/months) on Anthropic for API access @truell20

Has anybody tested whether the Amazon API is faster?

Issue was fixed!

Cursor is useless at the moment. Why are you nerfing the responses? Claude will not respond with more than 1200 tokens, and there's no continue button anymore. I'm back to using Visual Studio after supporting you all for many, many months, since the original Discord. Why not remove the max_tokens value on the API and let it do its thing? 1200 tokens is nothing.