How to structure logging so Cursor can actually understand and use it?

I’m working on a project where the backend is mostly Python, frontend is React / Next.js with ShadCN, and I’m using Supabase for auth + database.

Right now, my logs are very detailed, but Cursor doesn’t seem to understand them well when I ask it to debug issues. It often misses the relevant parts or doesn’t follow the trace logically.

So I have a few questions:

  • Is there a preferred logging format or style that makes Cursor’s AI understand logs better?

  • Should I be using structured logging (e.g., JSON logs) instead of plain print logs?

  • Are there any recommended libraries for Python logging that AI agents parse more clearly (e.g., loguru, structlog, or standard logging with custom formatters)?

  • Do people pipe logs to something like Logtail, Sentry, or Supabase logs and then reference them to Cursor? Or keep everything local?

My goal is simply:

When I paste logs into Cursor, I want the AI to quickly identify where the failure is happening and why.
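For what it's worth, one-JSON-object-per-line logs tend to be easy for an agent to scan, because each line is self-describing and machine-parseable. Here's a minimal sketch using only Python's standard `logging` module with a custom formatter (no third-party library assumed; the logger name `"app"` and field names are just illustrative choices):

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    """Emit each log record as a single JSON line."""
    def format(self, record):
        entry = {
            "ts": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        }
        # Include the full traceback as a field so it stays attached
        # to the event instead of being interleaved with other lines.
        if record.exc_info:
            entry["exc"] = self.formatException(record.exc_info)
        return json.dumps(entry)

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("user login ok")
try:
    1 / 0
except ZeroDivisionError:
    logger.exception("division failed")
```

`loguru` and `structlog` can produce the same shape with less boilerplate (`structlog` is built around structured output), but the point is the format, not the library: one event per line, with level, timestamp, and message in fixed keys.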

I just wrote a command for it to use browser mode and log itself. It seemed to have problems loading the terminal logs, so I asked it how to log the server side, and it switched to appending to a file instead; that works like a charm now. It can see console + terminal output without me having to do anything, and it uses the browser to reproduce the exact issue very reliably.
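The "log to a file" part of this can be a one-time setup in the backend. A minimal sketch, assuming the file path `debug.log` (the path and logger names here are placeholders; point it wherever your command tells the agent to look):

```python
import logging

# Mirror server-side logs to a file the agent can read directly,
# instead of relying on it to capture terminal output.
file_handler = logging.FileHandler("debug.log", mode="a", encoding="utf-8")
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
)

root = logging.getLogger()
root.addHandler(file_handler)
root.setLevel(logging.DEBUG)

# Any logger in the app now also writes to debug.log via the root logger.
logging.getLogger("api").error("payment webhook failed: missing signature header")
```

Since the frontend console can be dumped to the same file (e.g. via an endpoint that the browser posts its console output to), the agent ends up with one file covering both sides.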

Occasionally I’ll need to give it some specifics for testing, but my command includes everything it needs, including where to find the test user info. Whenever a normal command flow can’t resolve the bug, I just add the debug command and let it run for a few minutes.

The final step in the command is a report in chat and cleanup of the logging changes.

This is great! I’ve wanted to build something similar. Do you want to create an extension? :slight_smile: