How we built a 1st Place AI "Digital Guardian" in 48 hours at Cursor Hackathon Hamburg 🏆

Hi everyone!

Our team recently competed in the Cursor 2-Day AI Hackathon in Hamburg, Germany’s largest stage for AI innovation. Out of 400+ builders, we were honored to take home both the 1st Place Grand Prize and the Fan Favorite Award.

The Mission: Proactive Protection for the Elderly

Elderly phone scams are a global epidemic, but current solutions are reactive—the damage is usually done by the time anyone finds out.

We built an autonomous AI agent that acts as a proactive shield. It screens and intercepts suspicious calls in real-time, neutralizing threats before they ever reach the vulnerable family member. Our motto for the weekend: “Proactive protection, not reactive regret.”

The Build Experience with Cursor

Shipping a high-stakes safety product in 48 hours meant we couldn’t afford any friction in our workflow. Cursor was our “secret weapon” for:

  • Rapid Iteration: We used Agent mode to handle complex agent logic and quickly pivot when we needed to refine our interception engine.

  • API Integration: Integrating OpenAI, Google Gemini, and ElevenLabs was seamless. We spent less time looking at docs and more time shipping logic.

  • Clean Code under Pressure: We were able to strip away “nice-to-have” features to ensure our core safety engine was bulletproof, all while Cursor helped us maintain high code quality.

Beyond the coding, the energy at Bucerius Law School was incredible. We were privileged to receive feedback from an expert judging panel representing a wide range of industry leaders.

A huge thanks to the hosts Alex, Vlady, and Ramin for bringing that Silicon Valley intensity to Hamburg, and to the Cursor team for building a tool that truly lets builders move at the speed of thought.

We’re now looking to take this prototype further and build a real company to protect families worldwide.

Team: myself (Amal Mohan K), Rita Barbosa, Sripal Udyavar, and Sharvari Bhagwat.

Check out some event photos: Link.


Man, running an LLM in a safety product means latency and reliability are everything.

We can’t just let the model ramble. That’s why the core interception engine forces structured output: we use Pydantic and Instructor to make the LLM return every critical decision as a clean, validated JSON object.

This is huge for stabilizing the agent. It guarantees the model can only pick one of the defined actions: neutralize, monitor, or escalate.

When we get clean JSON back, we avoid parsing messy natural language in the main reaction loop, which keeps the whole pipeline fast.
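For anyone curious what that pattern looks like, here is a minimal sketch. The model name, field names, and prompt are illustrative placeholders, not the actual production code; the Instructor call is shown in comments since it needs an API key, while the Pydantic validation itself runs standalone:

```python
from typing import Literal

from pydantic import BaseModel, Field


class CallDecision(BaseModel):
    """Schema the LLM must fill in for every screened call (illustrative)."""

    # Literal restricts the model to exactly the three defined actions.
    action: Literal["neutralize", "monitor", "escalate"]
    confidence: float = Field(ge=0.0, le=1.0)
    reason: str


# With Instructor, the OpenAI client is patched so responses are parsed
# and validated straight into CallDecision (sketch only, not runnable here):
#
#   import instructor
#   from openai import OpenAI
#
#   client = instructor.from_openai(OpenAI())
#   decision = client.chat.completions.create(
#       model="gpt-4o",              # placeholder model name
#       response_model=CallDecision,
#       messages=[{"role": "user", "content": call_transcript}],
#   )

# The same validation applies to any raw JSON the engine receives:
raw = '{"action": "escalate", "confidence": 0.92, "reason": "caller requested a wire transfer"}'
decision = CallDecision.model_validate_json(raw)
print(decision.action)  # escalate
```

If the model emits anything outside the schema (an unknown action, a confidence above 1.0, missing fields), validation raises immediately instead of letting a malformed decision reach the reaction loop.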


Hi @Amal_Mohan_K, thank you so much for sharing this with all of us here.

It’s an incredible story, partly because of your hackathon win, but even more because of the important problem your app addresses.

Congratulations on your win and excited to see where you take it from here.
