

AI Hackathon: Beyond Feedback, Transforming Customer Engagement

AI Hackathon: 8 Hours to a Winning Project
The energy at the AI Hackathon MFV was electric. The air buzzed with the hum of laptops, the rapid-fire typing of code, and the passionate discussions of teams trying to build something magical in just eight hours. My team, “Bante,” was right in the thick of it, fueled by coffee and a shared vision. By the end of the day, our vision—the User Engagement Assistant—not only came to life but also earned us one of the three coveted winning awards.
This post is the story of that incredible 8-hour journey. I’ll take you behind the scenes of how our five-person team designed, built, and deployed an AI-powered platform to decode customer feedback. We'll dive into our tech stack, explore some code, and share the challenges we overcame. Most importantly, I hope this story inspires you to jump into your next hackathon and see what you can create under pressure.

The Problem: Drowning in a Sea of Customer Feedback
Every business wants to be customer-centric, but what does that really mean? It means listening. The problem is, modern businesses get feedback from a dozen channels at once: support emails, survey responses, chat logs, app reviews, and social media comments. It’s a torrent of unstructured data.
We identified several core challenges that businesses face:
- Information Overload: Teams spend countless hours manually sifting through thousands of comments, trying to connect the dots. It’s inefficient and prone to human bias. Important insights get buried.
- Reactive Support: By the time a customer complains loudly enough to get noticed, they’re already unhappy. Businesses are stuck playing catch-up, fixing problems after they’ve caused damage.
- Unexpected Customer Churn: A customer who seems perfectly happy one day might cancel their subscription the next. The warning signs were probably there, hidden in their feedback, but no one saw them in time.
- Guesswork-Based Decisions: Without a clear, data-driven picture of customer sentiment, product roadmaps and business strategies are often based on gut feelings rather than actual customer needs.
We wanted to build a tool that acts like an intelligent filter and a translator, turning that noisy flood of feedback into a clear, actionable signal.
Our Solution: The User Engagement Assistant
Our answer was the User Engagement Assistant: an AI-powered analytics platform that ingests raw customer feedback and automatically surfaces critical insights. It identifies customer pain points, predicts churn risk, and even suggests concrete actions to improve the user experience.
```
┌─────────────────┐       ┌──────────────────┐       ┌─────────────────┐
│   Next.js Web   │ ◄───► │ FastAPI Backend  │ ◄───► │  Azure OpenAI   │
│   UI (React)    │       │     (Python)     │       │   GPT-4o/Mini   │
└─────────────────┘       └──────────────────┘       └─────────────────┘
                                    ▲
                                    │
                                    ▼
                           ┌────────────────┐
                           │  SQLite/Agent  │
                           │ Storage (Agno) │
                           └────────────────┘
```
We designed the system with a modern, microservices-oriented architecture to keep things modular and allow our team to work in parallel.
Our Tech Stack Rationale
With only 8 hours, every technology choice had to count. We needed speed, power, and familiarity.
- Backend (FastAPI & Python): FastAPI was a no-brainer. Its asynchronous capabilities are perfect for handling I/O-bound tasks like calling external APIs (hello, OpenAI!). Python's rich ecosystem, with libraries like Pandas and SQLAlchemy, gave us powerful data manipulation tools right out of the box.
- AI Layer (Azure OpenAI & Agno): We used Azure OpenAI to access the powerful new GPT-4o and GPT-4o-mini models. To orchestrate our AI agents, we used the Agno framework, which simplifies state management, prompt engineering, and interaction with LLMs, letting us build specialized agents quickly.
- Frontend (Next.js 14 & React 18): Next.js provides a fantastic developer experience and allowed us to build a dynamic, responsive UI incredibly fast. We paired it with Tailwind CSS for rapid styling, Radix UI for accessible components, TanStack React Query for elegant server-state management, and Recharts for beautiful, interactive data visualizations.
- Database (SQLite): For a hackathon, SQLite is the perfect choice. It's serverless, zero-configuration, and integrated directly into our application via SQLAlchemy. It was more than enough to handle agent memory and state for our demo.
This stack gave us the perfect blend of rapid development and high performance, allowing us to focus on building features instead of fighting with configurations.
Getting Started is Simple
We didn't just want our tool to be powerful; we wanted it to be incredibly easy to use. We designed the user experience to be as intuitive as possible. In just a few clicks, anyone can turn raw feedback into actionable intelligence.

Step 1: Upload Your Data

Drag and drop your customer feedback files (emails, surveys, support tickets) directly into the application. We support common formats like Excel, CSV, and text files, and all data is handled with enterprise-grade privacy protection.
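Under the hood, an upload handler has to normalize those formats into plain feedback entries before analysis. Here is a minimal sketch of that idea (the function name and the "feedback lives in the last column" assumption are ours for illustration; Excel support would need a reader like openpyxl, omitted here):

```python
# Hypothetical loader: normalize supported upload formats (CSV or plain
# text) into a flat list of feedback strings. Illustrative only.
import csv
import io

def load_feedback(filename: str, raw: bytes) -> list[str]:
    """Return one feedback entry per CSV row or text line."""
    if filename.endswith(".csv"):
        reader = csv.reader(io.StringIO(raw.decode("utf-8")))
        rows = list(reader)
        # Assume the last column holds the free-text feedback; skip the header
        return [row[-1] for row in rows[1:] if row]
    # Fall back to plain text: one entry per non-empty line
    return [line.strip() for line in raw.decode("utf-8").splitlines() if line.strip()]
```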
Step 2: Let the AI Work Its Magic

Our system automatically analyzes your data using advanced AI. This processing step, which handles everything from text cleaning to pattern recognition, typically takes just a few minutes for a typical upload. No technical setup or configuration is required from the user.
Step 3: Review Actionable Insights
Once the analysis is complete, you receive clear, actionable reports in plain English. Our visual dashboards are designed to show trends, patterns, and priorities at a glance, with specific recommendations for improving the customer experience. Some sample analysis results:




Step 4: Take Confident Action
Armed with these data-driven insights, your team can implement the suggested improvements, monitor progress with ongoing analysis of new feedback, and continuously optimize your customer experience strategy based on what your customers are actually saying.
How It Works: Building the AI Pipeline
The magic of the User Engagement Assistant happens in the backend. When a user uploads a file (like a CSV of survey responses), it kicks off a multi-stage, asynchronous pipeline.
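At a high level, that pipeline is just a chain of async stages. The toy sketch below shows the shape (stage names and bodies are illustrative placeholders, not our production code); FastAPI's async endpoints make it natural to await a chain like this from an upload handler:

```python
# Toy sketch of a multi-stage async pipeline: clean, then analyze.
import asyncio

async def clean_stage(rows: list[str]) -> list[str]:
    # Normalize whitespace and drop empty entries
    return [" ".join(r.split()) for r in rows if r.strip()]

async def analyze_stage(rows: list[str]) -> dict:
    # Stand-in for the AI analysis stage; the real one calls our agents
    return {"total_feedback": len(rows), "entries": rows}

async def run_pipeline(rows: list[str]) -> dict:
    cleaned = await clean_stage(rows)
    return await analyze_stage(cleaned)

# Example run on a couple of raw survey responses
result = asyncio.run(run_pipeline([" too  slow ", "", "love it"]))
```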
Step 1: Document Processing and Pain Point Detection
First, we had to clean and understand the raw text. We built a DocumentProcessor that normalizes text, extracts keywords, and, most importantly, identifies "pain points."
For the hackathon, we started with a simple but effective pattern-matching algorithm using regex to find phrases indicating frustration.
```python
# A simplified look at our pain point detection
import re

def identify_pain_points(text: str) -> list[str]:
    pain_point_patterns = [
        r'\b(frustrated|annoying|angry|upset|disappointed)\b',
        r'\b(problem|issue|bug|error|fail|broken)\b',
        r'\b(slow|sluggish|unresponsive|laggy)\b',
        # ... and many more patterns
    ]
    identified_points = []
    text_lower = text.lower()
    for pattern in pain_point_patterns:
        # Find all matches for the pattern
        matches = re.finditer(pattern, text_lower)
        for match in matches:
            # Extract a window of context around the match
            start = max(0, match.start() - 50)
            end = min(len(text), match.end() + 50)
            context = text[start:end].strip()
            identified_points.append(context)
    return list(set(identified_points))  # Return unique pain points
```
This approach gave us a quick way to flag problematic feedback before passing it to the more sophisticated AI agents for deeper analysis.
Step 2: AI Agent Orchestration with Agno
This is where the real intelligence comes in. We designed a system of specialized AI agents built on the Agno framework. Each agent has a specific job, from analyzing sentiment to generating actionable recommendations.
Here’s how we configured our base agent to communicate with Azure OpenAI and use a SQLite database for memory:
```python
# app/agents/base_agent.py
from agno import Agent, AzureOpenAI
from agno.storage import SqliteStorage

class UserEngagementAgent:
    def __init__(self, agent_name: str, azure_settings, model_name: str = "gpt-4o"):
        # azure_settings carries our api_key, endpoint, and api_version
        self.azure_settings = azure_settings
        self.agent = Agent(
            name=agent_name,
            model=AzureOpenAI(
                id=model_name,
                azure_deployment=model_name,
                api_key=self.azure_settings.api_key,
                azure_endpoint=self.azure_settings.endpoint,
                api_version=self.azure_settings.api_version
            ),
            storage=SqliteStorage(
                table_name="user_engagement_analysis",
                db_file="user_engagement_assistant.db"
            ),
            instructions=[
                "You are an expert analyst specializing in Voice of Customer feedback.",
                "Provide comprehensive, data-driven insights.",
                "Always structure your responses in JSON format."
            ],
            markdown=False
        )
```
We created a hierarchy of these agents: one for general Voice of Customer (VoC) analysis, another for Customer Support (CS) inquiries, and a special agent dedicated to suggesting solutions for the identified pain points.
Step 3: From Analysis to Actionable Insights
Once the pain points were identified, we aggregated them and calculated severity and frequency scores. But we didn't stop there. We used a PainPointActionsAgent to generate concrete, actionable suggestions for each major issue.
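As an illustration of that aggregation step, a naive scorer might weight each pain point's frequency by a keyword-based severity. The keywords and weights below are invented for the sketch; our real scoring leaned on the agents' category output:

```python
# Illustrative sketch: rank pain points by frequency x severity.
from collections import Counter

def score_pain_points(pain_points: list[str]) -> list[dict]:
    """Rank pain points, weighting frequency by a naive keyword severity."""
    # Invented severity weights for the sketch, not our production values
    severity_keywords = {"broken": 3, "error": 3, "slow": 2, "annoying": 1}
    counts = Counter(pain_points)
    scored = []
    for text, freq in counts.items():
        severity = max(
            (w for kw, w in severity_keywords.items() if kw in text.lower()),
            default=1,
        )
        scored.append({"pain_point": text, "frequency": freq, "score": freq * severity})
    # Highest combined score first
    return sorted(scored, key=lambda d: d["score"], reverse=True)
```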
To optimize performance and reduce API calls (a critical hackathon concern!), we batched the pain points into a single prompt for the LLM.
```python
# A simplified view of the batch action generation
import json

def generate_batch_actions(self, pain_points_data: list[dict]) -> dict:
    # Format all pain points into a single, large prompt
    formatted_pain_points = []
    for pp_data in pain_points_data:
        formatted_pain_points.append(
            f"Pain Point ID: {pp_data['id']}\n"
            f"- Category: {pp_data['category']}\n"
            f"- Description: {pp_data['pain_point']}\n"
            f"- Frequency: {pp_data['frequency']}"
        )
    joined_pain_points = "\n\n".join(formatted_pain_points)
    # A single, powerful LLM call for all pain points
    response = self.agent.run(
        "Generate 3 actionable recommendations for each of the following pain points. "
        "Return a single JSON object where keys are Pain Point IDs. "
        f"Pain Points:\n{joined_pain_points}"
    )
    # Parse the JSON response
    # ... (code to handle potential markdown wrappers like ```json)
    batch_actions = json.loads(response.content)
    return batch_actions
```
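That elided parsing step matters more than it looks: models sometimes wrap their JSON in a markdown fence. A small helper in the spirit of our approach handles both cases:

```python
# Sketch of a parser for LLM output that may be fenced in ```json blocks.
import json
import re

def parse_llm_json(raw: str) -> dict:
    """Parse JSON that may arrive wrapped in a markdown code fence."""
    text = raw.strip()
    # Strip an optional ``` or ```json fence around the payload
    match = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)
```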
Hackathon Challenges: Thriving Under Pressure
Building all this in 8 hours was a frantic race against time. We faced several hurdles:
- Time, The Ultimate Boss: Our biggest enemy was the clock. We had to be ruthless with our scope. We started by building the absolute core pipeline: upload -> process -> display text. Only after that worked did we add the AI analysis, charts, and churn prediction. This iterative approach saved us from having a half-finished, non-working product at the end.
- Team Coordination: With five people, clear communication was vital. We split into sub-teams: two on the FastAPI backend and AI integration, two on the Next.js frontend, and one person acting as a "floater" for DevOps, documentation, and putting out fires. We had quick sync-ups every hour to ensure the frontend and backend contracts were aligned.
- The Inevitable Bugs: Around hour six, we hit a wall. Our frontend couldn't communicate with our backend due to a classic hackathon demon: CORS errors. Another tricky bug was parsing the JSON output from the LLM, which sometimes came wrapped in markdown code blocks (```json ... ```). Debugging under pressure is intense, but staying calm and working the problem systematically got us through it.
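For reference, a CORS fix like ours boils down to a few lines of FastAPI middleware; the allowed origin below assumes a local Next.js dev server and should be adjusted for your setup:

```python
# FastAPI CORS configuration sketch for a local Next.js frontend.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # Next.js dev server
    allow_methods=["*"],
    allow_headers=["*"],
)
```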
Our biggest takeaway? A solid plan and clear roles are more important than frantic coding.
Results and Impact: From Demo to Winner
When our turn came to demo, everything clicked. We uploaded a sample CSV of customer feedback, and within seconds, our dashboard lit up with insights.
We showed the judges a clear breakdown of customer sentiment, a prioritized list of pain points with AI-generated solutions, and a list of users at high risk of churning. The feedback was overwhelmingly positive. They were impressed by the completeness of the solution, the polished UI, and the real-world business value it offered.
Hearing "Bante team" announced as one of the winners was an unforgettable moment. It was a validation of our teamwork, our technical choices, and our crazy idea to build a full-stack AI application in a single day.
Beyond the hackathon, the User Engagement Assistant has immense potential. By automating feedback analysis, it can help businesses:
- Increase Customer Retention by proactively addressing issues.
- Reduce Support Costs by identifying the root causes of problems.
- Make Data-Driven Decisions to build products customers truly love.
Conclusion: Lessons and Next Steps
The AI Hackathon MFV was more than just a competition; it was a crucible that forged our team and our idea into something real.
Our key takeaways:
- Modern tools are superpowers: Frameworks like FastAPI, Next.js, and Agno drastically accelerate development. What used to take weeks can now be prototyped in hours.
- Teamwork is the secret sauce: A diverse team with clear roles can accomplish incredible things. Trust and communication are your most valuable assets.
- Start small, build fast: Focus on a walking skeleton (a bare-bones, end-to-end version of your product) and then layer on features. It's the best way to ensure you have something to show at the end.
This project was an amazing sprint, and we're just getting started. We plan to clean up the code, add more data source integrations, and refine our AI models.
If you're an aspiring developer, I can't recommend hackathons enough. They are the ultimate learning experience. You'll code, collaborate, problem-solve, and push your limits in ways you never thought possible.
Thank you for reading!

