Feature request for product/service
Cursor IDE
Describe the request
The Cursor Compose model with 4x speed is superb and a step in the right direction, well done all. It has already increased Cursor's value, but we should now go further and increase model inference speed by 200%-400%. Here is how: https://www.cerebras.ai/ is key. Cerebras can run Cursor Compose model inference faster than anything available today. A partnership between Cursor and Cerebras to run the Compose model on Cerebras hardware would be unmatched in the industry.
The result is that Cursor could offer blisteringly fast inference, a game changer for the industry. This new service should be priced by speed: each user accrues a bucket of speed tokens every hour based on their plan, up to a maximum capacity. The user can then spend those tokens as needed to get faster inference at the rate their Cursor plan allows.
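The accrual scheme described above is essentially a token bucket. A minimal sketch of how it could work, assuming hypothetical plan parameters (`tokens_per_hour`, `max_capacity`) that Cursor would choose:

```python
import time


class SpeedTokenBucket:
    """Illustrative sketch: speed tokens accrue hourly up to a plan cap."""

    def __init__(self, tokens_per_hour: float, max_capacity: float):
        self.rate = tokens_per_hour / 3600.0  # accrual in tokens per second
        self.capacity = max_capacity
        self.tokens = max_capacity            # assume a full bucket at signup
        self.last_refill = time.monotonic()

    def _refill(self) -> None:
        # Credit tokens for elapsed time, never exceeding the plan's cap.
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last_refill = now

    def spend(self, amount: float) -> bool:
        """Spend tokens for a fast-inference request; False if balance is short."""
        self._refill()
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False
```

A request for accelerated inference would call `spend()` and fall back to standard-speed inference when it returns `False`, so users are throttled gracefully rather than blocked.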
Please consider this idea and take it up the chain at Cursor.