Anyone use Cursor for academic research writing?

LLM interfaces for coding are probably a generation ahead of what we have for prompting and interconnecting complex, disparate information. Has anyone had success using Cursor to support writing in academia?

For example, on the one hand, grant applications often have a structured set of sections and subsections, each with its own requirements; on the other hand, researchers have a pile of references and claims from their emerging research that they would like to have funded. Between the two are a wealth of grant-writing how-to articles and materials. It would seem that, with the proper .cursorrules, Cursor could be used to bootstrap an initial complete draft of a grant proposal, and fairly robustly. Anyone else have similar ideas to this?
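To make that concrete, a .cursorrules file along these lines is roughly what I have in mind (the file names and requirements here are placeholders for illustration, not taken from any real solicitation):

```
# .cursorrules -- hypothetical sketch for grant drafting

You are assisting with an academic grant proposal, not code.

- Follow the section structure in outline.md exactly; do not invent
  or reorder sections.
- Support every claim in the draft with one of the files in references/,
  cited by filename; if no reference supports a claim, flag it as
  [NEEDS CITATION] rather than inventing support.
- Respect the page/word limits noted in outline.md for each section.
- Write in the register of the grant-writing guides in guides/:
  plain, specific, and aimed at reviewers outside the sub-field.
```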

EDIT - To this point, things like Google’s Co-scientist are an attempt at this next generation, but they largely automate evaluation and favor AI-led development instead of the iterative agentic model that Cursor has.

I'm also interested in replies. One thing I notice is that certain features of Cursor are more tailored for coding, but with the right templates and rules you can definitely write documentation and research material.


(I think a good first step is to note that Cursor does not sell or resell the information that you push to their servers or any models they host or license.)


Unfortunately, Cursor is not sending anything to the servers, not even the prompt to the AI for processing, apart from checking whether the user has a paying subscription.

I sat down and did a crude experiment: try to construct an NSF Project Summary using a pair of rules files, one for the NSF requirements and the other for what makes for good academic writing. I had a main directory containing the section file (as markdown), and a references directory containing text versions of two papers that should be used to construct the summary. I used Agent mode, gave the references as context, and used claude-3.7-sonnet (without thinking).
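Concretely, the layout looked roughly like this (the file names here are illustrative, not the exact ones I used):

```
nsf-summary/
├── rules/
│   ├── nsf-requirements.md    # NSF Project Summary requirements
│   └── academic-writing.md    # what makes for good academic writing
├── project-summary.md         # the section file being drafted
└── references/
    ├── paper-1.txt            # plain-text versions of the two papers
    └── paper-2.txt
```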

I added the writing rule (probably it should just be about writing) later on, and I’m not sure it picked up on it, so I specifically @'ed the file during my final revision, which improved the draft a bit.

The draft wasn’t bad, but it isn’t that good compared to other accepted NSF Project Summaries that I’ve seen. I have a couple of avenues at this juncture:

a) Draw on good how-to-write-a-grant-proposal references to construct another rule and try again
b) Put on my academic-critique hat and, together with a how-to guide, heavily critique the output

I get the sense I’ll have to use a thinking mode to actually get things over the hump. But so far this is less painful than the single-stream chatbot modes that Claude and others largely present. If I return to this experiment, I’ll update with more.

EDIT - Yeah, I had to use Claude’s thinking mode; it was helpful, but it started hallucinating/misinterpreting. I'm not sure exactly where to go next, but this was useful to consider so far. I may need more rules and to let the agent figure out where and when to apply them.

I tried, but the biggest problem with Cursor right now is that the output often can’t be modified, or it automatically interrupts generation when producing general (non-code) content; it’s really a disaster.

I would suggest Cursor is the best AI-powered writing tool for any kind of writing, code or not, so Cursor could help with any academic writing task that could be facilitated by AI. There isn’t that much you can do with it, though, that you couldn’t also do by copying and pasting with ChatGPT etc.; it’s just more streamlined.