

MFV AI Mini Challenge: Lessons Learned

The MFV AI Mini Challenge, which concluded on June 27th, marked not an end but a significant beginning in MFV's journey toward integrating Artificial Intelligence into its core operations. The challenge was designed to apply AI to the "human-hour" problems that consume the most manual effort, shifting MFV's approach from merely "using AI" to actively "building with it." The initiatives sparked by the winning submissions are envisioned as practical, real-world solutions that improve efficiency and put AI's capabilities to strategic use within the organization.
Guiding Innovation: Our Esteemed Judges and Coordinators
The success of the AI Mini Challenge was rooted in the vision and dedication of our technical leaders, who served as both mentors and evaluators. The event fostered crucial cross-office collaboration, uniting leaders from our Hanoi and Ho Chi Minh City offices, including Melvin, Squall, Bruce, Aldo, Theta, and Bobby. Beyond their team leadership roles, some also serve as Section Managers, which broadened the perspective of the judging panel. Their involvement was comprehensive, ranging from designing the challenge problems and scoring criteria in line with management's desired outcomes, to evaluating each solution for its real organizational value. This dedicated leadership ensured the challenge's integrity and gave us a clearer understanding of Forwardians' practical AI capabilities.
Our Comprehensive Evaluation Criteria
Each team approached the challenge with a clear focus on solving real, day-to-day problems in their workflows. For example, projects under the "Your Code and Docs Helper" theme aimed to reduce the burden on both newcomers and mentors by offering instant access to code explanations, documentation lookups, and impact analysis for code changes. Meanwhile, submissions for "Summary Assistant" tackled the equally pressing issue of information overload. By leveraging AI to summarize internal news, project updates, and announcements, these tools helped streamline internal communication and made key updates easier to digest, especially for busy teams.
For any AI-based tool to transition from concept to practical application, it must meet stringent criteria. During the AI Mini Challenge, our judging framework focused on key aspects crucial for real-world impact:
Functionality: We assessed whether the tool accurately analyzed code and documents, ensuring query results were relevant and supported by correct references.
Usability: A straightforward, intuitive web interface and an efficient query process were vital for new users.
Performance: Tools needed to respond quickly (within 5 seconds for basic queries) and demonstrate stable integration with platforms like GitHub and Kibela.
Scalability: We looked for potential to extend support to additional programming languages or document platforms.
Innovation: Creative features, such as product flow suggestions or external best practice recommendations, set submissions apart.
Plus Points: Teams earned additional credit for including valuable features like an AI-driven Product Flow Tour or Best Practice Suggestions drawn from external insights.
Judges’ Critiques and Challenges Faced
Overall, most projects demonstrated significant potential to solve real internal problems, particularly the time-consuming, repetitive tasks and the substantial effort required to onboard new or existing employees and to scale up project documentation (e.g., explaining code and business logic), work that can typically take weeks. The AI-based tools were expected to alleviate these issues. What truly stood out was the participants' mindset and spirit. Whether in teams of two or five, all members displayed remarkable ownership, adaptability, and teamwork under time pressure. It was evident that beyond merely "competing," participants were genuinely invested in building something useful, reflecting a strong culture of innovation and collaboration.
While some teams went above and beyond, integrating advanced Large Language Model (LLM) features or prompts fine-tuned to better fit the internal context, adherence to the challenge requirements remained a crucial aspect for competition purposes. Although some projects offered innovative solutions to other valuable organizational problems, strict adherence to the given brief was paramount for fair evaluation.
The technical review was rigorous. Judges scrutinized source code for readability and for appropriate use of programming languages in line with company standards. Adherence to security protocols was critical to prevent sensitive company documents from becoming public, underscoring the need for internal, secure AI tools. Submissions were also assessed for their scalability across multiple company projects.
The evaluation process itself was both exciting and demanding, posing several challenges for the judges. It was time-consuming, requiring them to watch demos, read slides carefully, compare approaches, and identify any deviations from the prompt. Functional correctness was consistently prioritized. The high quality of the code, the clarity of the documentation, and the completeness of the demo sessions made scoring exceptionally challenging, in a positive way. One of the biggest challenges was the time constraint: each team had limited demo time, yet their ideas were often rich in functionality and demanded deeper evaluation. Additionally, the diversity of technical stacks, from frontend-heavy implementations to backend-focused AI integration, required the judges to collectively cover a wide range of technical expertise. It was also challenging to evaluate not just technical correctness but also problem relevance, user experience, and scalability, all within a compressed timeframe.
Collaborative Spirit Among Judges
From the judging side, collaboration was absolutely key. Judges continuously exchanged observations, shared technical insights, and cross-validated their understanding of each project. Since the submissions varied significantly in scope and depth, they leveraged each other’s domain expertise to ensure a balanced and fair assessment. There was a strong spirit of constructive discussion among judges—not merely to decide a winner, but to recognize the unique strengths of each solution. The judging process, in that sense, also became a valuable space for shared learning and mutual respect.
Stay tuned for our upcoming AI Hackathon!


