Personas within rules

On teams, we often have developers with varying levels of experience working in the same codebase. Currently, the AI assistant provides the same level of detail and context to all users, which leads to two issues:

  1. Junior developers may miss crucial learning opportunities when solutions are too terse
  2. Senior developers waste time reading through explanations they don’t need

It would be great if there were a profile-based system, built on top of rules, that adjusts AI interactions based on a developer's experience level and preferences.

Core Components

1. Developer Profiles

```yaml
# .cursor/profiles/developer.yaml
name: "Jane Smith"
experience_level: "junior_mle"
interaction_preferences:
  explanation_depth: "detailed"  # detailed, standard, minimal
  include_context: true
  show_alternatives: true
  learning_focus: ["MLflow", "yyy infrastructure"]
```

2. Team Configuration

```yaml
# .cursor/config/team.yaml
default_level: "mid_mle"
team_requirements:
  documentation: "required"
  testing: "required"
  code_review: "required"
learning_paths:
  junior_mle:
    required_context: ["architecture", "best_practices", "failure_modes"]
  senior_mle:
    required_context: ["system_impact", "performance"]
```

Interaction Examples

Junior MLE Experience

# User query: "How do I set up model tracking?"

# AI Response:
"""
Let me walk you through setting up model tracking:

1. Architecture Context:
   - MLflow tracks experiments in a central database
   - Artifacts are stored in configured storage ...

2. Implementation Steps:
   [detailed step-by-step guide]

3. Best Practices:
   - Why we log specific metrics
   - How to structure experiments
   - Common pitfalls to avoid

4. Example Implementation:
   [fully documented code example]

5. Verification Steps:
   [testing and validation steps]
"""

Senior MLE Experience

# Same query: "How do I set up model tracking?"

# AI Response:
"""
Here's the setup for our stack:

```python
import mlflow

# Point the client at the team tracking server and experiment
mlflow.set_tracking_uri(TRACKING_URI)
mlflow.set_experiment(experiment_name)

# Log parameters, metrics, and the trained model under one run
with mlflow.start_run():
    mlflow.log_params(params)
    mlflow.log_metrics(metrics)
    mlflow.pytorch.log_model(model, "model")
```
"""

Benefits

  1. Personalized Learning
    • Junior developers receive comprehensive context and explanations
    • Mid-level developers get balanced information
    • Senior developers get straight-to-the-point solutions
  2. Team Efficiency
    • Reduces time spent reading unnecessary explanations
    • Ensures consistent knowledge sharing
    • Maintains code quality across experience levels
  3. Standardized Onboarding
    • Built-in learning paths for new team members
    • Consistent explanation of team practices
    • Gradual reduction in explanation depth as developers progress
  4. Project Management
    • Teams can define required knowledge areas
    • Track developer progression through reduced context needs
    • Ensure critical information is always shared

Great ideas here - you can do this with custom agents! Soon Cursor will have a file format to configure these, but for now it's through a GUI - in the agent selector there is a custom option, and alongside the options there is an advanced panel that lets you give it custom instructions, exactly like what you are describing!

Until there is a project-standard file to define these, you could just share the instructions and description to paste into the GUI when setting them up - I have a file format I use for now, while waiting for the official version from Cursor.

See the docs here: Cursor – Custom modes

And here is a sample of the file format I currently use to share these modes -
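
(Field names below are just my own convention while waiting for the official format, not an official Cursor schema - treat it as a sketch:)

```yaml
# custom-modes.yaml (my own convention - not an official Cursor format)
modes:
  - name: "Junior MLE Mentor"
    description: "Detailed, teaching-oriented answers for newer team members"
    instructions: |
      Explain architecture context before implementation steps.
      Include best practices, common pitfalls, and verification steps.
      End with fully documented code and validation steps.
  - name: "Senior MLE"
    description: "Terse, code-first answers"
    instructions: |
      Skip background explanations and lead with a minimal working snippet.
```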