I’m using Cursor to prototype and reason through a small fintech-related project, specifically a balance inquiry flow modeled on an NBAD-style bank balance check. This is not about the bank itself, but about modeling a realistic user flow and data handling logic before any real integration happens.
The part I’m stuck on is how best to use Cursor when working through ambiguous or incomplete requirements. For example, real banking apps often expose multiple balance states: available balance, current balance, pending debits, and delayed updates from the core system. When I try to sketch the logic and UI behavior together (mock API responses, edge cases, fallback states), the conversation with the agent sometimes drifts into assumptions that wouldn’t hold in an actual banking environment.
I’m currently keeping lightweight JSON mocks and some pseudo-backend logic in the repo and using Cursor to iterate on the flow, but I’m not sure if there’s a cleaner way to guide the agent so it stays grounded in realistic constraints. Things like stale balances, timeout states, or user frustration when numbers don’t match are important to capture early, even at the prototype stage.
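For context, here’s roughly the shape of the pseudo-backend state I’m iterating on. This is a hedged sketch: the names (`BalanceSnapshot`, `BalanceFreshness`) and the staleness threshold are placeholders I made up for the prototype, not anything from a real core banking system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum


class BalanceFreshness(Enum):
    FRESH = "fresh"
    STALE = "stale"              # core system hasn't confirmed recently
    UNAVAILABLE = "unavailable"  # timeout and no cached value either


@dataclass
class BalanceSnapshot:
    available: float        # spendable now (current minus pending debits)
    current: float          # ledger balance from the core system
    pending_debits: float
    fetched_at: datetime    # when this snapshot was last confirmed

    def freshness(self, now: datetime, max_age: timedelta) -> BalanceFreshness:
        """Classify the snapshot so the UI can show a stale indicator."""
        if now - self.fetched_at <= max_age:
            return BalanceFreshness.FRESH
        return BalanceFreshness.STALE
```

Having this as a real file in the repo (rather than describing it in chat) already seems to keep the agent more honest about the available-vs-current distinction.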
For people using Cursor on product or workflow design problems rather than pure coding tasks, how do you structure your prompts, files, or context so the agent stays aligned with real-world behavior? I’m trying to treat Cursor as a thinking partner here, not just a code generator, and I’m still dialing that in.
Hey, interesting use case! For this kind of task, there are a few approaches that can help:
**Project Rules for domain constraints**

Create `.cursor/rules/banking-constraints.mdc` with realistic constraints for your domain:

```
---
description: "Banking domain constraints for balance inquiry flow"
alwaysApply: true
---

When working with balances, keep in mind:
- Available balance != current balance (pending transactions)
- Core banking systems have a 5 to 30 sec update delay
- If the API times out, show a "stale" indicator, not an error
- The user may see different numbers in different places; that is normal, explain why

Edge cases to cover:
- @mocks/timeout-response.json
- @mocks/stale-balance.json
```
**Reference files instead of descriptions**

Keep mock responses and edge cases in separate files and reference them via `@filename` in rules or in chat. That way the agent sees concrete examples, not abstract descriptions.
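To make that concrete, a hypothetical `stale-balance` mock might look like this. The field names here are invented for illustration, not any real banking API, and I’m generating the JSON from Python just to keep the example self-contained:

```python
import json

# Hypothetical contents of a stale-balance mock (field names invented).
# The invariant worth encoding: available + pending_debits == current.
stale_balance_mock = {
    "available_balance": 1240.50,
    "current_balance": 1315.75,
    "pending_debits": 75.25,
    "fetched_at": "2024-01-15T09:30:00Z",
    "source": "cache",  # served from cache, not the core system
    "stale": True,      # UI should show a "stale" indicator, not an error
}

# Write it out so it can be referenced as a file from rules or chat.
with open("stale-balance.json", "w") as f:
    json.dump(stale_balance_mock, f, indent=2)
```

The point is that the mock encodes the domain invariants (cache source, stale flag, balances that reconcile), so the agent has something concrete to stay consistent with.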
**If it drifts, correct it explicitly**

When the agent starts making unrealistic assumptions, don’t just continue the conversation. Call out the wrong assumption and add that constraint to the rule file. Over time you’ll build up a set of guardrails for your domain.
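To show what those timeout/stale rules imply in code, here’s a hedged sketch of the fallback logic. The state names and the 30-second threshold are assumptions for the prototype, not Cursor features or a real banking API:

```python
from datetime import datetime, timedelta
from typing import Optional


def resolve_display_state(
    fresh_response: Optional[dict],
    cached_response: Optional[dict],
    now: datetime,
    stale_after: timedelta = timedelta(seconds=30),  # assumed threshold
) -> dict:
    """Decide what the balance UI should show, per the rules above:
    a timeout falls back to the cached value with a stale indicator,
    and only a total miss (no cache either) becomes an error state."""
    if fresh_response is not None:
        return {"state": "fresh", "balance": fresh_response}
    if cached_response is not None:
        fetched_at = datetime.fromisoformat(cached_response["fetched_at"])
        label = "stale" if now - fetched_at > stale_after else "fresh"
        return {"state": label, "balance": cached_response}
    return {"state": "unavailable", "balance": None}
```

Keeping a function like this in the repo, next to the mocks, gives the agent an executable statement of the “timeout is not an error” rule instead of a prose description it can drift away from.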
For prototyping a realistic bank balance flow in Cursor, keep it simple and grounded. Use separate JSON mocks for available balance, current balance, pending transactions, and stale updates. Set domain rules in a `.cursor/rules` file so the agent knows the constraints, like short update delays or timeouts. Reference these mock files rather than describing them in chat. Correct the agent immediately when it drifts, adding new constraints as needed. This way, your prototype stays realistic, handles edge cases, and reflects what users would actually experience with FAB or NBAD-style balances.
Thanks Martina, this helps clarify a lot of the modeling side. I’m still trying to ground the prototype in something close to how NBAD-style balance checks work in practice, especially around explaining balance differences and update delays to users.
If you know of any useful online guides, docs, or public references that explain these balance inquiry patterns (even at a high level), I’d really appreciate it if you could share them. I want to make sure the flow reflects real-world behavior and not just assumptions.