Repeated dotnet build commands in a large solution cause an unrecoverable serialization error

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

(yes cursor wrote this summary)
Summary
Chat sessions experience unrecoverable connection failures with ConnectError: [internal] Serialization error in aiserver.v1.StreamUnifiedChatRequestWithTools when processing output from .NET build commands in large multi-project solutions.

Steps to Reproduce

Environment

  • OS: Windows 10.0.22631
  • Shell: PowerShell
  • Workspace: Large .NET solution with 33+ C# projects
  • Solution Structure: *.slnx with projects across Database, Service, Worker, Queue, etc.

Reproduction Steps

  1. Open a large .NET solution with 25+ projects in Cursor

  2. Start a chat session requesting systematic build verification or error fixing

  3. AI executes multiple sequential dotnet build commands like:

    dotnet build src/Project1/Project1.csproj
    dotnet build src/Project2/Project2.csproj
    dotnet build src/Project3/Project3.csproj
    # ... continues for 15-20+ projects
    
  4. Each build outputs 50-200 lines of MSBuild diagnostics including:

    • NuGet package restoration messages
    • Compiler diagnostics
    • Dependency resolution chains
    • Asset compilation logs
    • File paths and references
    • Build time/success messages
  5. After 10-15 build commands, the chat becomes unresponsive

  6. Connection drops permanently with the serialization error

  7. Session cannot be recovered without full restart

Actual Behavior

  • Silent accumulation until catastrophic failure
  • No warning or indication of impending issue
  • Complete session loss requiring restart
  • All conversation context lost

Root Cause Analysis

Volume Calculations

  • Per-project build output: ~100-150 lines
  • 20 sequential builds: ~2,000-3,000 lines of output
  • Average line length: 80-120 characters
  • Total text accumulation: ~200KB-400KB in conversation context
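
The figures above can be sanity-checked with quick arithmetic (it lands slightly under the stated 200KB-400KB range, but in the same order of magnitude):

```python
# Back-of-envelope check of context growth, using the numbers reported above.
lines_per_build = (100, 150)   # per-project build output (low, high)
builds = 20                    # sequential builds
chars_per_line = (80, 120)     # average line length (low, high)

low = builds * lines_per_build[0] * chars_per_line[0]    # 160,000 chars
high = builds * lines_per_build[1] * chars_per_line[1]   # 360,000 chars
print(f"accumulated output: {low / 1024:.0f}KB - {high / 1024:.0f}KB")
```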

Serialization Issue

The StreamUnifiedChatRequestWithTools protobuf message appears to have an implicit size limit. When conversation history containing massive build outputs is serialized:

  1. Message size exceeds protobuf limits (likely 4MB or similar)
  2. Serialization fails before transmission
  3. Connection drops with internal error
  4. No graceful fallback or recovery mechanism

Why Build Output is Problematic

MSBuild output contains:

  • Extremely long file paths (150+ chars)
  • Repeated diagnostic patterns across projects
  • JSON-formatted dependency graphs
  • XML namespace references
  • Dense compiler metadata

This produces a large volume of dense, repetitive text that rapidly balloons the conversation context.

Expected Behavior

One of:

  1. Automatic summarization of tool outputs exceeding size thresholds
  2. Truncation of large outputs with user notification
  3. Graceful degradation - drop oldest messages to stay under limits
  4. Warning when approaching serialization limits
  5. Context streaming instead of full context serialization
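
Option 3 could be as simple as evicting from the front of the history until the payload fits a budget. A minimal sketch, assuming messages are plain strings and a hypothetical 4MB limit:

```python
# Sketch of graceful degradation: drop the oldest messages until the total
# UTF-8 size of the history fits under an assumed byte budget.
MAX_CONTEXT_BYTES = 4 * 1024 * 1024  # assumed limit; the real one is unknown

def trim_history(messages: list[str], limit: int = MAX_CONTEXT_BYTES) -> list[str]:
    """Return a copy of the history trimmed to fit the byte budget."""
    kept = list(messages)
    while kept and sum(len(m.encode("utf-8")) for m in kept) > limit:
        kept.pop(0)  # discard the oldest message first
    return kept
```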

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.7.38 (user setup)
VSCode Version: 1.99.3
Commit: fe5d1728063e86edeeda5bebd2c8e14bf4d0f960
Date: 2025-10-06T18:18:58.523Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.22631

For AI issues: which model did you use?

sonnet-4.5 thinking

For AI issues: add Request ID with privacy disabled

Request ID: 14169781-7f5e-4742-b5c5-632b5c8caef7

Additional Information

Workaround Discovered

Use the read_lints tool instead of dotnet build:

// Instead of:
run_terminal_cmd("dotnet build project.csproj")

// Use:
read_lints(["src/ProjectDirectory"])

Benefits:

  • Returns only actual compiler errors (10-50 lines)
  • Pre-parsed, structured output
  • No MSBuild noise
  • No NuGet restore spam
  • 95% reduction in output volume

Additional Context

Frequency

  • 100% reproducible in large .NET solutions
  • Occurs within 10-15 minutes of systematic debugging
  • More likely with solutions containing 20+ projects

Impact

  • Severe: Complete session loss
  • Forces restart and context loss
  • Disrupts debugging workflows
  • Makes systematic multi-project fixes impossible

Related Issues

This likely affects any scenario involving:

  • Large multi-project builds (Java/Maven, Node/monorepos, etc.)
  • Verbose CLI tools (npm install, terraform apply, etc.)
  • Log file analysis
  • Any tool generating 100+ lines per invocation when called repeatedly

Suggested Fixes

Short Term

  1. Add output truncation - Limit tool results to 500 lines with “… X additional lines truncated”
  2. Add conversation size monitoring - Warn when approaching limits
  3. Implement automatic summarization - Collapse large tool outputs after processing
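
Fix 1 is a few lines of logic; a sketch using the 500-line cap suggested above (function name and message format are illustrative, not Cursor's actual implementation):

```python
# Sketch of output truncation: cap tool results at a line limit and tell the
# user how many lines were dropped.
def truncate_output(text: str, max_lines: int = 500) -> str:
    lines = text.splitlines()
    if len(lines) <= max_lines:
        return text
    hidden = len(lines) - max_lines
    return "\n".join(lines[:max_lines] + [f"... {hidden} additional lines truncated"])
```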

Long Term

  1. Streaming context windows - Don’t serialize entire history
  2. Semantic compression - Store tool outputs in compressed/indexed form
  3. Increase serialization limits - If technically feasible
  4. Tool output pagination - Return large outputs in chunks with continuation tokens
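
The pagination idea (fix 4) could look like the sketch below, where the continuation token is just the next line offset; a real token would presumably be opaque:

```python
# Sketch of tool output pagination: return large outputs in fixed-size pages
# with a continuation token (None when exhausted).
def paginate(lines: list[str], token: int = 0, page_size: int = 200):
    page = lines[token : token + page_size]
    next_token = token + page_size if token + page_size < len(lines) else None
    return page, next_token
```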

Testing Recommendations

  1. Create test with chat session executing 20 commands producing 100 lines each
  2. Monitor message sizes during serialization
  3. Identify exact size threshold where failure occurs
  4. Implement limits 20% below failure threshold
  5. Add telemetry for conversations approaching limits
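
Recommendations 1-2 can be prototyped without any real builds by simulating the tool calls and measuring accumulated bytes (the 100-character line is an assumption for the simulation):

```python
# Sketch of the suggested test: simulate N tool calls of M lines each and
# report the accumulated context size in bytes.
def simulate_context(commands: int = 20, lines_per_cmd: int = 100,
                     line: str = "x" * 100) -> int:
    history = []
    for _ in range(commands):
        history.append("\n".join(line for _ in range(lines_per_cmd)))
    return sum(len(h.encode("utf-8")) for h in history)
```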

Files/Logs Available

  • Full conversation history up to disconnection
  • Request ID: 14169781-7f5e-4742-b5c5-632b5c8caef7
  • Reproducible test case available

Severity: High - Complete feature failure
Frequency: Common in enterprise development scenarios
Workaround Available: Yes (use alternative tools)
Data Loss: Yes (entire conversation context)

Does this stop you from using Cursor?

Sometimes - I can sometimes use Cursor