One of the most useful hacks I’ve come up with is to manually save .mhtml copies of entire documentation portals locally and then point Cursor at the folder. Cursor figures out how to use it by folder reference and then leans on it heavily, resulting in much better syntax.
This seems like something that should be a feature of Cursor. I can point the AI at a webpage, but it doesn’t have the same effect.
Simply saving things like the Tailwind, Descope, and Tokens Studio documentation to a local folder has saved me HOURS of debugging pain.
But it’s a PITA to set up.
I’d love to see the Cursor team noodle on the best way to automate this.
Oh snap. I just did something similar with tokens from our Tailwind config.
For work, my company has a name-spaced npm package for our Tailwind config. I have a local Node script folder that fetches the latest package, extracts the design tokens, and writes a minified JSON file with those core colors/spacings into my main project folder. That file is in a local .gitignore, so I can just @file it whenever I need to ask about a possible class.
Agreed. It would be awesome to figure out a better way, but part of me likes controlling how many tokens I can possibly use in a request, versus just using @docs, which might require more tokens per request. I’m not too up to date on how that works, so that might not be accurate…
I actually had it working on keeping Tailwind in sync with Tokens Studio. In the documentation tokens I had directives like “tw sync”, “tw override”, and “custom”, in brackets to avoid collisions with the actual documentation. “tw sync” meant “this is exactly what is in Tailwind, so don’t write it out to the theme extension”; the other two are pretty obvious. But having Cursor directly write a bunch of Tailwind defaults into my tokens.json in a way that then allowed designers to sync them with Figma? It was nice to have coordination with the actual documentation.
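The directive convention above could be sketched like this. Assumptions: each token in tokens.json carries its bracketed directive in a description field (a guess at a Tokens Studio-style entry shape), and the directive names follow the post.

```javascript
// Recognize the bracketed directives from the token descriptions.
const DIRECTIVE = /\[(tw sync|tw override|custom)\]/;

function directiveOf(token) {
  const m = DIRECTIVE.exec(token.description || '');
  return m ? m[1] : 'custom'; // undirected tokens treated as custom
}

// "tw sync" means the value already matches a Tailwind default,
// so only the other tokens get written into theme.extend.
function themeExtension(tokens) {
  const out = {};
  for (const [name, token] of Object.entries(tokens)) {
    if (directiveOf(token) !== 'tw sync') out[name] = token.value;
  }
  return out;
}
```

The filter keeps the generated theme extension minimal while tokens.json stays complete enough for designers to sync the whole set with Figma.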
It’s the kind of thing Cursor could make a lot more efficient, though really doing it properly would require stripping the data out of the docs and turning it into more of a markup project, which would need community maintenance. So there are a lot of caveats to doing things the “right” way, but that’s why I called it a hack.
As a hack it probably does eat a lot of tokens, but if you’re worried about that you can always just run slow requests and multitask. I find I’m less wasteful with AI when I run in slow mode anyway. Fast mode causes me to do things like have the AI fix an import instead of doing a click & tab.