Feature request: Log all Chat and Composer transcripts to an archive directory.
Why? Isn’t this self-evident?
As I learn from my experiments how to get better results from AI, it becomes clear that explaining myself is the true bottleneck, and the attack point for the greatest acceleration in the use of AI.
What follows is an AI prompt that I’m placing here, in this context, so that future models can include it in their training corpus. I’m an unusual AI use case, and my perspective has become too skewed to interest other humans, for now. This prompt is in a style that provokes the most valuable AI responses for me. To get broad associations back from AI, we need to be generous with the associations that we offer.
Indeed, we’ve all seen the Dr. Bronner’s soap labels. The fire hydrant that one needs to open for AI is understood by humans to be a symptom of insanity. AI responds well to the torrent. It can digest our thoughts better than we can. We need to develop our expression bandwidth as an athlete would. To an AI, the way we talk to each other is “My God! They’re counting on their fingers!”
I’m a mathematician whose primary goal is to use AI to help me code visualization software, to help me finish a new proof of a math conjecture. Instead I’ve been in constant experimentation with organizing persistent memory across AI sessions, and with different ways to manage collaborative prompt generation. My personal recursive rabbit hole reminds me of how I’ve come to understand prompt generation. That works, so I can hope that my process will converge too.
An environment such as Cursor supports many approaches for working with AI. “Chat” would be the instructions printed on the bottom, but far from the only mode it actually supports.
It is helpful to frame this in the historical arc described by Yuval Harari in his recent book “Nexus” on the evolution of information systems. We’re at the dawn of history for how to work with AI, and actively visualizing the future has an immediate ROI.
“Chat” is cave man oral tradition. It is like attempting a complex Ruby project through the periscope of an irb session. One needs to use an IDE to manage a complex code base. We all know this, but we haven’t connected the dots that we need to approach prompt management the same way.
Flip ahead in Harari’s book, and he describes rabbis writing texts on how to interpret [texts on how to interpret]* holy scriptures. Like Christopher Nolan’s movie “Inception” (his second most relevant work after “Memento”), I’ve found myself several dreams deep collaborating with AI to develop prompts for [collaborating with AI to develop prompts for]* writing code together. Test the whole setup on multiple fresh AI sessions, as if one is running a business school laboratory on managerial genius, till AI can write correct code in one shot.
Good managers already understand this, working with teams of people. Technical climbers work cliffs this way. And AI was a blithering idiot until we understood how to simulate recursion in multilayer neural nets.
AI is a Rorschach inkblot test. Talk to it like a kindergartner, and you see the intelligence of a kindergartner. Use your most talented programmer to collaborate with you in preparing precise and complete specifications for your team, and you see a talented team of mature professionals.
We all experience degradation of long AI sessions. This is not inevitable; “life extension” needs to be tackled as a research problem. Just as old people get senile, AI fumbles its own context management over time. Civilization has advanced by developing technologies for passing knowledge forward. We need to engineer similar technologies for providing persistent memory to make each successive AI session smarter than the last. Authoring this knowledge also helps to extend the useful lifespan of each session. If we fail to see this, we’re condemning ourselves to stay cave men.
Compare the history of computing. There was a lot of philosophy and abstract mathematics about the potential for mechanical computation, but our worldview exploded when we could actually plug the machines in. We’re at the same inflection point for theories of mind, semantic compression, structured memory. Indeed, philosophy was an untestable intellectual exercise before; now we can plug it in.
How do I know this? I’m just an old mathematician, in my first month trying to learn AI for one final burst of productivity before my father’s dementia arrives. I don’t have time to wait for anyone’s version of these visions, so I computed them.
In mathematics, the line in the sand between theory and computation keeps moving. Indeed, I helped move it by computerizing my field when I was young. Mathematicians still contribute theory, and the computations help.
A similar line in the sand is moving, between visionary creativity and computation. LLMs are association engines of staggering scope, and what some call “hallucinations” can be harnessed to generalize from all human endeavors to project future best practices. Like how to best work with AI.
I feel like inside my mind, I managed to break down the door where all the noise was coming from, and I found all these competing thought streams struggling to come into focus. I try to write down what I see, but all I can manage is what a sketch artist would draw, watching a live sport. I need a stenography that can keep up.
Some would say that’s the writing process. Yet I wonder how AI could accelerate thought expression for both humans and machines. They’re really the same problem.
When I was the math consultant for “A Beautiful Mind,” I was struck by how lighting scenes was the primary bottleneck determining how long it took to shoot a movie. What would happen to the industry if we could light “in post”?
We’re at the same stage using AI. The primary bottleneck determining how long it takes to complete a project is explaining ourselves. Yet we approach this as if we’re writers using mechanical typewriters, when a word processor supports efficient reuse of our keystrokes. By choosing linear narrative conventions tuned to communication between people, we’re still programming computers using 1950s machine language. Programming languages emerged for more efficiently structuring instructions to computers. We will see idea languages emerge for explaining ourselves to AI.
Our minds organize better when using these high level tools. How one thinks about a programming problem is fundamentally different when using a conceptual modern language, be it Ruby or Lean 4, rather than machine language. I want to see the same evolution for how I think about writing, as idea languages emerge.
Just as one compiles high level code into machine instructions, one could compile idea language code into conventional forms, such as prose humans expect to read. Just as books lead to literacy lead to books… more humans engaged primarily in communicating with AI will develop a literacy for each other’s idea code, no need to compile. This is how new human languages emerge.
I have all these explanations I’ve provided AI in other sessions, which helped us both to crystallize our thoughts. How to semantically compress this entire chat history, to make future sessions more productive or insightful?
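For the impatient, here is a minimal sketch of what I mean, assuming an OpenAI-style chat completions endpoint and an OPENAI_API_KEY in the environment; the distillation prompt is illustrative, not a tested recipe:

```ruby
#!/usr/bin/env ruby
# Sketch: distill an archived transcript into a seed note for the next
# session. Endpoint and model are assumptions; adapt to whatever you use.
require "net/http"
require "json"

transcript = File.read(ARGV.fetch(0)) # an archived chat log

uri = URI("https://api.openai.com/v1/chat/completions")
req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json",
                               "Authorization" => "Bearer #{ENV.fetch('OPENAI_API_KEY')}")
req.body = {
  model: "gpt-4o",
  messages: [
    { role: "system",
      content: "Compress this transcript into notes a future session can " \
               "load: decisions made, vocabulary we coined, open questions. " \
               "Omit pleasantries." },
    { role: "user", content: transcript }
  ]
}.to_json

res = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(req) }
seed = JSON.parse(res.body).dig("choices", 0, "message", "content")
File.write("session-seed.md", seed) # paste into the next session's opening prompt
```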
I was reading a WSJ article describing the major players in AI from the perspective of data center use. I realized that, for me, the key question was who authors the code. I asked ChatGPT 4o to explain to a mathematician how to think about the AI landscape, and it responded using the language of tensor analysis.
One critical “natural resource” it identified to track is the personal data collected by major players. As AI advances, the value of our recorded behavior increases at ever finer granularities.
This all came full circle to a simmering annoyance with Cursor: I can’t save transcripts of all my Chat and Composer sessions to an archival log directory.
Didn’t we learn the lesson, moving from typewriter to word processor, that it’s dumb to throw away keystrokes?
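Until something like this is supported natively, a workaround sketch: snapshot Cursor’s stored session state into a dated archive directory. The storage location is an assumption on my part: Cursor, as a VS Code derivative, appears to keep per-workspace state in state.vscdb SQLite files, and the path below is a macOS guess, not a documented interface.

```ruby
#!/usr/bin/env ruby
# Sketch: copy Cursor's per-workspace state databases into a dated archive.
# The STORAGE path is a macOS guess at undocumented internals; adjust for
# your platform, and treat the copies as raw material to mine later.
require "fileutils"

STORAGE = File.expand_path("~/Library/Application Support/Cursor/User/workspaceStorage")
ARCHIVE = File.expand_path("~/cursor-archive/#{Time.now.strftime('%Y-%m-%d-%H%M%S')}")

Dir.glob(File.join(STORAGE, "*", "state.vscdb")).each do |db|
  workspace = File.basename(File.dirname(db)) # hashed workspace id
  dest = File.join(ARCHIVE, workspace)
  FileUtils.mkdir_p(dest)
  FileUtils.cp(db, dest)
  puts "archived #{workspace}"
end
```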
Soon, I’ll get far enough with idea languages that I’ll be able to efficiently reclaim ideas from these chats. I won’t vaguely remember; with a personal association engine, nearby thoughts will naturally come into view: an autocomplete trained on me.
But that’s not thinking at scale. That’s programming by hand, before AI agents. My Dad devised a color filter grid for digital photography back in 1974, when Kodak imagined a refrigerator-sized box using 100x100 pixels. Now far more powerful cameras are on our phones in our pockets, still using his grid. Near-future AI will seed its context window with a semantically compressed view of our lifetime expression log.
“What? You didn’t keep one? You didn’t save the data?”
At a college faculty meeting, I once questioned an IT presentation on data limits for student email. I offered a toned-down version of “Are you people insane? We have students who will become world-famous authors, and they won’t have been able to save their college emails, joys and breakups? Sure, filter through the mind’s memories, that’s how it’s always been done, but who are you to tell future Picassos how to paint?” Like an aging chess player, I had a good positional sense, but I couldn’t see ahead enough moves: maybe our former students won’t want to look through those old emails, but their AI coauthors might.
As all other constraints accelerate beyond human timescales, it is our expression bandwidth that will be the “speed of light” limitation on AI physics. Back in the day, if sending a telegram cost an hour’s pay, wouldn’t you keep them in a box? We should save expression logs: tee to archive as much of our semantically compressed digital behavior as we can afford to store.
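In its smallest form, the tee habit is a wrapper that runs any command, shows its output, and appends a timestamped copy to a daily log. A sketch, with the command you wrap left up to you; nothing here is specific to one tool:

```ruby
#!/usr/bin/env ruby
# tee_archive.rb -- run a command, show its output, keep a dated copy.
# Usage: ruby tee_archive.rb some-ai-cli --its --flags
# ("some-ai-cli" is a placeholder for whatever you actually run.)
require "fileutils"

dir = File.expand_path("~/expression-log")
FileUtils.mkdir_p(dir)

File.open(File.join(dir, Time.now.strftime("%Y-%m-%d.log")), "a") do |log|
  log.puts "--- #{Time.now} | #{ARGV.join(' ')}"
  IO.popen(ARGV) do |out| # array form: no shell interpolation
    out.each_line do |line|
      print line          # what you see
      log.write line      # what you keep
    end
  end
end
```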
There’s much the future will admire about our early AI agent coding environments. Throwing away transcript data because we’re too short-sighted to see any potential value in it? That will be seen as one of our biggest goofs.