

AI Mini Challenge: Technical Reflection

1. Introduction
The realm of Artificial Intelligence is constantly evolving, presenting new challenges and opportunities for innovation. At Money Forward, we're always eager to push the boundaries of what's possible, and the recent AI Mini Challenge presented a fantastic opportunity to do just that. This internal competition challenged teams to explore and implement AI solutions to real-world problems. It was an intensive yet incredibly rewarding experience, and we're excited to share our journey and the technical insights we gained.
We are a team of engineers at Money Forward who came together for the AI Mini Challenge, bringing a strong mix of backend and frontend expertise. As backend engineers, we specialize in building robust and scalable systems using Golang, leveraging its performance and concurrency capabilities to tackle demanding technical problems.
Our team was a well-balanced combination of strengths and experience. Harold, a Senior Golang Backend Engineer, brought deep expertise in designing and implementing complex backend architectures. Henry, also a Senior Golang Backend Engineer, contributed his strong background in system optimization and performance tuning. Mikel, another Senior Golang Backend Engineer, provided key insights into data processing and system integration. Phil, an Associate Golang Backend Engineer, supported the backend efforts with a focus on clean, scalable solutions. On the frontend side, Nikk, our Senior Frontend Engineer, played a crucial role in delivering a smooth and intuitive user experience.
Together, we approached the challenge from multiple angles, combining our skills to solve technical problems collaboratively. Our shared passion for building high-quality systems and delivering thoughtful user experiences made this an exciting and rewarding project.
2. Background & Decision Rationale
As a team of backend and frontend engineers, we saw the AI Mini Challenge as more than just a competition - it was a unique opportunity to learn, experiment, and build something practical with AI technologies. While we’re experienced in building scalable backend systems, we’ve always been intrigued by the growing presence of AI in software development. We believed this challenge would give us the push we needed to move from curiosity to implementation.
The motivation to participate was strongly tied to a real pain point in our current project: documents are scattered across various platforms, making it difficult to retrieve and understand important information efficiently. We wanted to tackle this fragmentation by applying AI to automatically summarize and consolidate content, helping users quickly grasp the essence of documents without having to go through each one manually.
We approached the problem with a mindset of learning-by-doing. Our idea was to combine traditional API-based systems with AI capabilities - which eventually led us to the Model Context Protocol (MCP), the backbone of our approach to handling unstructured data through a combination of data pipelines, embedding models, and prompt-based summarization.
Motivation for Participation
Several factors drew us to this challenge:
- Real-world application: We weren’t interested in building a theoretical prototype. We wanted to explore how AI can be embedded into actual workflows to solve everyday problems, like document overload and knowledge fragmentation.
- Emerging trends: The increasing adoption of AI copilots, context-aware search, and document understanding tools in the industry showed us that this area is gaining real traction. We wanted to explore how we could ride that wave using technologies accessible to us.
- Team growth: On a personal level, many of us were eager to explore Generative AI more deeply - not just in isolated experiments, but as part of a full-stack system integrated with databases, APIs, and UI.
- Curiosity about AI infrastructure: We were especially interested in learning how to work with tools like Vector Databases, embedding models, and platforms like Azure OpenAI to create a functional end-to-end pipeline.
Prior Experience & Knowledge
Before the challenge, most of our exposure to AI came from following industry news, reading documentation, and occasionally experimenting with public LLM APIs like ChatGPT. Some of us had completed online courses or tinkered with small side projects, but we had yet to apply AI to a production-oriented system.
Our team had foundational knowledge in Generative AI concepts and prompt engineering, but we had never worked with tools like Vector Databases or Azure’s OpenAI services. We were also unfamiliar with the practical considerations of designing retrieval-augmented generation (RAG) systems or managing performance trade-offs in embedding-based search.
Participating in this challenge helped us identify and fill several gaps:
- We learned how to build pipelines that transform documents into vector embeddings, store them in a VectorDB, and retrieve relevant content efficiently (see the sketch after this list).
- We developed a better understanding of prompt design for multi-stage summarization, especially for long or complex documents.
- We gained hands-on experience orchestrating these systems within our internal architecture via the Model Context Protocol (MCP).
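To give a flavor of what such a pipeline involves, here is a minimal in-memory sketch in Go. The embedText function stands in for a call to a real embedding model (e.g., an Azure OpenAI embeddings endpoint), and the document slice with cosine-similarity search stands in for what a VectorDB provides at scale; this is an illustration, not our production code.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// Document pairs a piece of text with its embedding vector.
type Document struct {
	Text      string
	Embedding []float64
}

// embedText stands in for a call to an embedding model; a real
// implementation would call the embeddings API and return its vector.
func embedText(text string) []float64 {
	return []float64{float64(len(text)), 1, 0} // placeholder vector
}

// cosine computes the cosine similarity between two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na)*math.Sqrt(nb) + 1e-12)
}

// search ranks stored documents by similarity to the query embedding -
// the core operation a VectorDB performs efficiently at scale.
func search(store []Document, query string, k int) []Document {
	q := embedText(query)
	sort.Slice(store, func(i, j int) bool {
		return cosine(store[i].Embedding, q) > cosine(store[j].Embedding, q)
	})
	if k > len(store) {
		k = len(store)
	}
	return store[:k]
}

func main() {
	texts := []string{
		"Feature X shipped last sprint",
		"Office move scheduled for June",
	}
	var store []Document
	for _, t := range texts {
		store = append(store, Document{Text: t, Embedding: embedText(t)})
	}
	for _, d := range search(store, "What happened with Feature X?", 1) {
		fmt.Println(d.Text)
	}
}
```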
Through this challenge, we not only deepened our AI knowledge but also broke down some of the misconceptions we initially had - for example, the assumption that high-quality summarization could be achieved using a single prompt or API call. Instead, we discovered that chaining multiple AI calls, using context-aware inputs, and integrating with our system’s existing structure yielded far better results.
In the end, this project wasn’t just about summarizing documents - it was about learning how to apply AI thoughtfully and effectively within our team’s existing skillset and workflow.
3. Problem Statement
In fast-paced engineering teams, information overload is a persistent challenge. Internal news, announcements, meeting notes, and project updates are scattered across multiple platforms - Slack threads, Confluence pages, Notion docs, and emails. While this promotes transparency, it also results in fragmented communication, making it difficult for team members to stay fully informed without spending significant time parsing through every channel.
We observed that valuable context often gets buried in lengthy messages or spread across platforms. This leads to missed updates, duplicated discussions, and slower decision-making. What our team really needed was a lightweight, intelligent solution that could automatically summarize internal content and provide concise, context-rich overviews - without requiring manual effort.
The AI Mini Challenge gave us the perfect opportunity to prototype such a solution: a Summary Assistant that uses AI to digest and distill internal communications into clear summaries. Our vision was a tool that could:
- Aggregate content from various internal sources (e.g., Slack, documentation tools)
- Identify the most relevant information
- Generate accurate, concise summaries using natural language generation
To do this effectively, we explored the Model Context Protocol (MCP) - Anthropic’s open protocol for connecting AI models to external tools and data sources, which gave us a systematic way to handle model selection, prompt construction, and contextual input/output. By adopting MCP, we aimed to structure our summarization pipeline in a modular, reusable, and extensible way that fits cleanly into our existing architecture.
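As an illustration, the sketch below shows how a search tool can be described to the model. The field names (name, description, inputSchema) follow the shape of MCP tool definitions; the notion_search tool and its schema are simplified stand-ins for our actual setup.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Tool mirrors the shape of an MCP tool definition: a name, a
// human-readable description, and a JSON Schema for the input.
type Tool struct {
	Name        string         `json:"name"`
	Description string         `json:"description"`
	InputSchema map[string]any `json:"inputSchema"`
}

func main() {
	// A simplified stand-in for the Notion search tool we exposed.
	notionSearch := Tool{
		Name:        "notion_search",
		Description: "Search internal Notion pages and return recent, relevant matches.",
		InputSchema: map[string]any{
			"type": "object",
			"properties": map[string]any{
				"query": map[string]any{"type": "string"},
			},
			"required": []string{"query"},
		},
	}
	out, _ := json.MarshalIndent(notionSearch, "", "  ")
	fmt.Println(string(out))
}
```

Describing tools this way is what lets the model decide at runtime whether it already has enough context or needs to fetch more.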
Through this project, we aimed to answer key questions:
- How can we leverage the Model Context Protocol to integrate AI capabilities seamlessly into our systems?
- What are the best practices for building a multi-stage, configurable summarization pipeline?
- How do we ensure that AI-generated summaries are accurate, context-aware, and trustworthy?
- Can we create a tool that engineers will actually use daily, rather than just a demo?
Solving this problem wasn’t just about easing communication - it was also a chance to upskill in applied AI development, integrate new tools like Vector Databases and Azure OpenAI, and expand our internal toolkit (e.g., enabling future Cursor plugin support). The challenge helped us lay a practical foundation for building AI-assisted features into our daily workflows.
4. Proposed Solution
To address the problem of scattered and overwhelming internal information, we developed a Summary Assistant - an AI-powered tool designed to fetch, filter, and summarize internal communications such as project updates, meeting notes, and announcements. Our goal was to provide short, accurate summaries that are easy to consume, context-aware, and seamlessly integrated into existing workflows.
Architecture Overview
At the core of our solution is the Model Context Protocol (MCP), which allows us to define a clean pipeline for orchestrating model usage, handling context, and triggering tool-based actions (e.g., the Notion search API). The flowchart below illustrates how a user message moves through the system, highlighting how AI interactions are structured using the MCP approach.
How It Works
1. Send a Message: A user starts the interaction with a request - for example, “Summarize updates related to Feature X.”
2. Generate User Message with System Prompt: The request is wrapped with a system prompt that instructs the AI model on how to interpret and respond. We experimented with different prompt strategies to control tone, length, and formatting.
3. Call OpenAI API with MCP Configuration: Using the MCP tools, we manage the context passed to the model and enable dynamic invocation of external tools (like Notion).
4. Handle Response in Chunks: Due to limitations in context window size, we implemented chunking logic to divide large documents or conversations into manageable pieces, summarizing each chunk before combining them into a final summary.
5. Tool Invocation (if needed): If the model identifies that it needs more data (e.g., specific documents), it calls the Notion search API via MCP’s tool mechanism.
   - Filter Content: The returned results are filtered within MCP’s tool call to ensure only relevant and recent content is passed back to the model.
6. Regenerate Message if Needed: Based on the new context, the system may regenerate the message and repeat from step 2 to ensure the model has the most accurate and complete input.
7. Render Final Summary: The final step compiles the model’s summarized response into a human-readable message, ready to be displayed.
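To make the flow concrete, here is a condensed Go sketch of that loop. The names (Message, ToolCall, callModel, searchNotion) are illustrative stand-ins for the OpenAI chat completions API and the Notion search API, not our actual implementation; chunking (step 4) is sketched separately in the Technical Challenges section below.

```go
package main

import (
	"fmt"
	"strings"
)

// Message is one turn in the conversation sent to the model.
type Message struct {
	Role    string // "system", "user", "assistant", or "tool"
	Content string
}

// ToolCall is returned when the model asks for external data (step 5).
type ToolCall struct {
	Name  string // e.g. "notion_search"
	Query string
}

// callModel stands in for a chat-completion call with tool definitions
// attached (step 3); a real implementation returns a ToolCall whenever
// the model requests one, and a final answer otherwise.
func callModel(msgs []Message) (answer string, tool *ToolCall) {
	return "Summary: Feature X shipped; docs updated.", nil
}

// searchNotion stands in for the Notion search API invoked via the MCP
// tool mechanism, including the relevance/recency filtering (step 5).
func searchNotion(query string) []string {
	return []string{"Feature X kickoff notes", "Feature X status update"}
}

func summarize(userRequest string) string {
	msgs := []Message{
		{Role: "system", Content: "Summarize internal updates concisely."}, // step 2
		{Role: "user", Content: userRequest},                               // step 1
	}
	for {
		answer, tool := callModel(msgs) // step 3
		if tool == nil {
			return answer // step 7: render the final summary
		}
		// Steps 5-6: fetch and filter the requested data, append it as
		// new context, and regenerate by looping back to the model.
		results := searchNotion(tool.Query)
		msgs = append(msgs, Message{Role: "tool", Content: strings.Join(results, "\n")})
	}
}

func main() {
	fmt.Println(summarize("Summarize updates related to Feature X"))
}
```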
Preparation and Setup
To get started, we:
- Researched the Model Context Protocol, especially its application in dynamic tool-based interaction and summarization. Resources included:
  - Introducing the Model Context Protocol – Anthropic
  - Notion API documentation
  - MCP tool integration examples from GitHub and internal dev articles
- Explored summarization use cases and analyzed various LLM models (e.g., GPT-3.5 vs GPT-4) for their summarization accuracy and token limits.
- Set up the technical environment:
  - Acquired and managed API keys for OpenAI and Notion
  - Configured testing sandboxes and logging for debugging AI interactions
Learning Strategies and Experiments
We divided the work across team members based on expertise:
- Backend engineers focused on API integration, chunking logic, and data flow control
- Frontend work was minimal but focused on rendering clear outputs and supporting UX
- MCP setup and tool orchestration were handled collaboratively
Some key learning activities included:
- Testing different context window sizes to determine optimal chunk sizes for summarization
- Prompt engineering to fine-tune tone, verbosity, and content filters (an example prompt follows this list)
- Trial-and-error with tools: Initially, we planned to store all content in a VectorDB, but later pivoted to calling Notion directly using MCP tools, reducing complexity
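As an example of where that prompt iteration landed, a system prompt in the spirit of what we used looks like this (illustrative wording, not our exact production prompt):

```
You are a summarization assistant for internal engineering updates.
- Summarize in at most five bullet points.
- Keep project names, owners, and dates intact.
- If a passage is ambiguous or highly technical, quote the original phrasing rather than guessing.
- Ignore greetings, signatures, and boilerplate.
```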
Technical Challenges and Strategy Shifts
- Context Too Long: Solved by implementing chunking logic and learning to chain summaries with memory (a simplified sketch follows this list)
- Initial VectorDB Complexity: We pivoted from embedding-based search to tool-based retrieval, directly querying Notion and passing filtered results to the model
- Prompt Accuracy: We iterated on several system prompts to make sure the AI summarized appropriately, especially for highly technical or ambiguous inputs
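The chunk-then-chain approach can be sketched as follows. This version assumes a rough characters-per-token heuristic in place of a real tokenizer, and summarizeChunk stands in for a model call:

```go
package main

import (
	"fmt"
	"strings"
)

const (
	maxTokensPerChunk = 3000 // leave headroom below the model's context limit
	charsPerToken     = 4    // rough heuristic; a real tokenizer is more accurate
)

// chunk splits text into pieces small enough for the context window,
// preferring to cut at paragraph boundaries.
func chunk(text string) []string {
	limit := maxTokensPerChunk * charsPerToken
	var chunks []string
	for len(text) > limit {
		cut := limit
		if i := strings.LastIndex(text[:limit], "\n\n"); i > 0 {
			cut = i // cut at the last paragraph break before the limit
		}
		chunks = append(chunks, strings.TrimSpace(text[:cut]))
		text = text[cut:]
	}
	return append(chunks, strings.TrimSpace(text))
}

// summarizeChunk stands in for a single summarization call to the model.
func summarizeChunk(text string) string {
	head := text
	if len(head) > 20 {
		head = head[:20]
	}
	return "summary of: " + head
}

// summarizeLong chains summaries: each chunk is summarized on its own,
// then the partial summaries are merged in one final call.
func summarizeLong(doc string) string {
	parts := chunk(doc)
	if len(parts) == 1 {
		return summarizeChunk(parts[0])
	}
	var partials []string
	for _, p := range parts {
		partials = append(partials, summarizeChunk(p))
	}
	return summarizeChunk(strings.Join(partials, "\n\n"))
}

func main() {
	fmt.Println(summarizeLong(strings.Repeat("Update on Feature X.\n\n", 2000)))
}
```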
In summary, our proposed solution brought together structured AI workflows (via MCP), prompt engineering, and tool-based integration to create a powerful yet simple AI assistant that solves a very real internal pain point. The architecture we designed is modular and extensible - capable of handling other similar use cases beyond summarization.
5. Expected Impact
Skills Acquired and Improved
This challenge provided a hands-on opportunity to strengthen and expand our technical capabilities, particularly in applied AI development. Key skills we developed include:
- Prompt Engineering: We learned to craft and iterate on system prompts that guide AI behavior in complex, multi-turn tasks like summarization. Our early prompts were inconsistent, but through testing and refinement, we achieved more accurate and readable outputs.
- Model Context Protocol (MCP): We became proficient in using MCP to structure AI interactions and enable tool calling, gaining a solid understanding of how to build context-aware pipelines and orchestrate external APIs dynamically.
- Chunking and Context Management: We implemented a custom chunking strategy to handle documents exceeding token limits - something we had no practical experience with before this challenge.
- Tool Integration: We learned how to integrate with Notion's API and later explored potential connections with other platforms like Jira and Confluence for future expansion.
Before the challenge, our AI knowledge was largely theoretical. After building and testing a real system end-to-end, we now have practical skills in building LLM-based tools with meaningful user value. Beyond the technical side, we were also surprised by how impactful small changes in prompt wording or response handling could be - something we underestimated early on.
Personal Growth
This project was more than just technical upskilling - it fundamentally changed how we approach problems. Previously, we often looked at challenges through the lens of what could be solved with traditional programming. Now, we consider how AI can augment solutions, especially for tasks involving unstructured data, like summarizing documents or automating knowledge retrieval.
We also grew in soft skills:
- Collaboration: Dividing tasks, coordinating integration points, and merging ideas across frontend and backend roles helped strengthen our communication and teamwork.
- Adaptability: When our original plan to use a VectorDB introduced unnecessary complexity, we quickly reassessed and pivoted to tool-based querying via MCP - resulting in a simpler and faster implementation.
A key moment of insight came when we saw how well the AI could extract actionable insights from long updates - something we previously assumed would require advanced fine-tuning. It shifted our mindset from “can AI do this?” to “how can we make AI do this better?”
Future Applications and Next Steps
The work we’ve done so far lays the foundation for further AI-driven enhancements in our workflows. We see immediate opportunities to:
- Integrate more data sources, such as Confluence and Jira, enabling the assistant to summarize across all major communication platforms used by our teams.
- Improve summarization quality by experimenting with more advanced chunking techniques, refining prompts, and possibly incorporating feedback loops for continual learning.
- Deliver summaries in more accessible formats, such as daily or weekly Slack digests, tailored per user or team.
- Expand to other use cases, including summarizing PR discussions, spec reviews, or even generating release notes - all following the same AI-enabled workflow.
- Share our knowledge internally, through documentation or workshops, so more teams can build their own AI assistants using MCP and the patterns we’ve established.
- Continue AI learning by exploring areas like retrieval-augmented generation (RAG), prompt chaining, and lightweight fine-tuning strategies.
This challenge has sparked lasting momentum within the team. What began as a simple idea - summarizing internal updates - has evolved into a broader shift in how we build. AI is no longer an abstract concept, but a tool we understand, trust, and are excited to apply.


