MessageMatrix Scales AI-Powered Event Concierge with Amazon Bedrock

GOStack partnered with MessageMatrix to modernize the company's AI-powered event concierge platform for high-traffic conferences. By replacing a third-party RAG solution with a scalable, AWS-native architecture on Amazon Bedrock, MessageMatrix cut event onboarding time by 60% and onboarding costs by 45%. The new foundation enables self-service knowledge base management for organizers, supports thousands of attendee interactions during live events, and allows MessageMatrix to scale across verticals while introducing specialized AI agents.

OVERVIEW

Information

  • Client: MessageMatrix
  • Industry: Event Technology, Conversational AI
  • Project Type: Generative AI Platform Modernization
  • Services: Amazon EKS, Amazon Bedrock, Amazon Bedrock Knowledge Bases, Amazon Titan Text Embeddings, Amazon OpenSearch Serverless, Amazon S3, AWS Lambda, Amazon ECS, Amazon CloudWatch, Amazon CloudFront, Amazon Route 53, Elastic Load Balancing, FinOps, Infrastructure as Code, GitOps

Intro

MessageMatrix provides a self-serve, multi-agent AI engagement platform trusted by the UK’s leading event organisers. Their AI concierge handles every delegate question — from agenda and speakers to logistics and travel — via WhatsApp and SMS, allowing event teams to focus on high-value activities. The platform is designed to deliver flawless engagement for conferences, exhibitions, awards dinners and festivals, reaching virtually every attendee without requiring an app download.

As their client base grew, the initial architecture, which relied on an externally managed Retrieval-Augmented Generation (RAG) system, began to show its limitations. MessageMatrix engaged GOStack to build a new foundation for their AI platform on AWS, seeking greater control, scalability and operational efficiency.

The Challenge

The previous platform was holding back MessageMatrix’s ability to scale and innovate.

  • Limited Control Over Knowledge Retrieval: Key parts of the RAG pipeline, such as document chunking, embeddings and ranking, were controlled by a third-party provider. This limited the team’s ability to systematically improve answer quality, forcing them to rely on manual prompt tuning rather than structured knowledge management.
  • Operational Risk from Third-Party Dependency: The reliance on an external provider created significant operational risk. Outages or platform updates on the provider’s side could directly affect live chatbots during active events, where thousands of attendees depended on the assistant for real-time information.
  • Slow Onboarding of New Events: Preparing a new event was a manual, multi-week process involving document restructuring, spreadsheet-based content validation and expert prompt adjustments. This three-to-four-week onboarding cycle became a major bottleneck as event volume increased.
  • Lack of Scalability and Expansion: The architecture was tightly coupled to the third-party RAG provider, making it difficult to extend beyond the initial use case or expand into new verticals without significant manual workarounds.
  • Limited Operational Visibility: The system lacked observability into retrieval behaviour and model interactions, making it difficult to diagnose incorrect responses during live conferences.

Our Solution

GOStack redesigned the MessageMatrix product into a scalable, AWS-native Generative AI system powered by Amazon Bedrock. The new architecture provides automated knowledge management, reliable AI responses and rapid deployment across large-scale conference environments.

AWS-Native AI Architecture: The platform now implements RAG using Amazon Titan Text Embeddings and Amazon OpenSearch Serverless for scalable semantic retrieval, with foundation models hosted on Amazon Bedrock. This approach allows for the orchestration of multiple models to balance performance and cost, while keeping responses grounded in verified event knowledge and eliminating vendor lock-in.
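
To make the pattern concrete, the sketch below shows how an application layer could answer an attendee question against an Amazon Bedrock knowledge base using the boto3 retrieve_and_generate API, which performs vector retrieval and grounded generation in one call. The knowledge base ID, region and model ARN are placeholders for illustration, not values from the MessageMatrix deployment.

```python
"""Minimal RAG query sketch against Amazon Bedrock Knowledge Bases (boto3)."""
import boto3

# Placeholders -- real IDs/ARNs come from the deployed knowledge base and chosen model.
KNOWLEDGE_BASE_ID = "KB_ID_PLACEHOLDER"
MODEL_ARN = "arn:aws:bedrock:eu-west-2::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="eu-west-2")


def answer_attendee_question(question: str) -> dict:
    """Retrieve relevant event-knowledge chunks and generate a grounded answer."""
    response = bedrock_agent_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
                "retrievalConfiguration": {
                    "vectorSearchConfiguration": {"numberOfResults": 5}
                },
            },
        },
    )
    return {
        "answer": response["output"]["text"],
        # Citations point back to the S3 documents that grounded the answer.
        "citations": response.get("citations", []),
    }


if __name__ == "__main__":
    print(answer_attendee_question("What time does registration open on day one?"))
```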

Automated Knowledge Ingestion: Event documentation, including PDFs, spreadsheets and markdown content, is stored in Amazon S3 as the central knowledge repository. AWS Lambda functions process and standardize incoming content through metadata tagging and configurable chunking, creating a consistent ingestion pipeline that significantly reduces manual event preparation.
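
A simplified Lambda handler along these lines could react to new uploads, write the metadata sidecar file that Bedrock Knowledge Bases uses for retrieval-time filtering, and trigger a re-sync of the data source. The bucket layout, event-ID convention and environment variables are illustrative assumptions, not the actual MessageMatrix pipeline.

```python
"""Sketch: S3-triggered Lambda that tags new event documents and re-syncs the knowledge base."""
import json
import os
import urllib.parse

import boto3

s3 = boto3.client("s3")
bedrock_agent = boto3.client("bedrock-agent")

# Illustrative environment variables -- not the real configuration.
KNOWLEDGE_BASE_ID = os.environ["KNOWLEDGE_BASE_ID"]
DATA_SOURCE_ID = os.environ["DATA_SOURCE_ID"]


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        if key.endswith(".metadata.json"):
            continue  # avoid re-processing our own sidecar files

        # Assumption for the sketch: documents are uploaded under <event-id>/<filename>.
        event_id = key.split("/", 1)[0]

        # Bedrock Knowledge Bases reads a "<object>.metadata.json" sidecar and
        # exposes its attributes for filtering at retrieval time.
        metadata = {"metadataAttributes": {"event_id": event_id, "source_key": key}}
        s3.put_object(
            Bucket=bucket,
            Key=f"{key}.metadata.json",
            Body=json.dumps(metadata).encode("utf-8"),
            ContentType="application/json",
        )

    # Re-index the data source so the new content becomes retrievable.
    job = bedrock_agent.start_ingestion_job(
        knowledgeBaseId=KNOWLEDGE_BASE_ID,
        dataSourceId=DATA_SOURCE_ID,
    )
    return {"ingestionJobId": job["ingestionJob"]["ingestionJobId"]}
```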

Knowledge Quality Validation and Automated Testing: To maintain high-quality knowledge bases, the platform provides tools for event organizers to validate uploaded content and identify gaps before publishing. This includes automated checks for missing information, guidance on knowledge structure and automated test scenarios simulating attendee questions.
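
One way such a pre-publish check can be expressed is as a small test harness that replays attendee-style questions and flags answers missing expected facts, reusing a query helper like the one sketched earlier. The scenarios and module name below are invented examples, not MessageMatrix test data.

```python
"""Illustrative pre-publish harness: replay attendee-style questions and flag knowledge gaps."""
from dataclasses import dataclass


@dataclass
class Scenario:
    question: str
    expected_phrases: list[str]


# Hypothetical scenarios; a real event would derive these from the uploaded content.
SCENARIOS = [
    Scenario("What time does registration open?", ["08:00"]),
    Scenario("Where is the keynote held?", ["main hall"]),
    Scenario("Is there a dress code for the awards dinner?", ["black tie"]),
]


def run_validation(answer_fn) -> list[dict]:
    """Return the scenarios whose answers are missing expected facts."""
    gaps = []
    for scenario in SCENARIOS:
        answer = answer_fn(scenario.question)["answer"].lower()
        missing = [p for p in scenario.expected_phrases if p.lower() not in answer]
        if missing:
            gaps.append({"question": scenario.question, "missing": missing})
    return gaps


if __name__ == "__main__":
    # "rag_query" is a hypothetical module containing the earlier retrieve_and_generate sketch.
    from rag_query import answer_attendee_question

    for gap in run_validation(answer_attendee_question):
        print(f"Knowledge gap: {gap['question']} -> missing {gap['missing']}")
```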

Responsible AI, Governance, and Operational Visibility: The architecture follows AWS Responsible AI and generative AI best practices by enforcing grounded responses and controlled knowledge sources. Model interactions, retrieval outputs and system metrics are monitored through Amazon CloudWatch, enabling operational visibility, governance and rapid diagnostics during live events.
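
One straightforward way to surface that telemetry is to publish custom metrics per conversation turn, for example retrieval latency and citation counts, so dashboards and alarms can catch slow or ungrounded answers. The namespace and dimensions below are illustrative, not the production schema.

```python
"""Sketch: publish per-interaction RAG telemetry to Amazon CloudWatch."""
import time

import boto3

cloudwatch = boto3.client("cloudwatch")


def record_interaction_metrics(event_id: str, latency_ms: float, citation_count: int) -> None:
    """Emit custom metrics so operators can spot slow or ungrounded answers during a live event."""
    cloudwatch.put_metric_data(
        Namespace="MessageMatrix/Concierge",  # illustrative namespace
        MetricData=[
            {
                "MetricName": "RetrievalLatencyMs",
                "Dimensions": [{"Name": "EventId", "Value": event_id}],
                "Value": latency_ms,
                "Unit": "Milliseconds",
            },
            {
                "MetricName": "CitationCount",
                "Dimensions": [{"Name": "EventId", "Value": event_id}],
                "Value": float(citation_count),
                "Unit": "Count",
            },
        ],
    )


def timed_answer(answer_fn, event_id: str, question: str) -> dict:
    """Wrap a RAG call, time it, and record how many citations grounded the answer."""
    start = time.perf_counter()
    result = answer_fn(question)
    record_interaction_metrics(
        event_id=event_id,
        latency_ms=(time.perf_counter() - start) * 1000,
        citation_count=len(result.get("citations", [])),
    )
    return result
```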

Scalable and Resilient Infrastructure: The solution runs on a fully AWS-native architecture using Amazon ECS, Elastic Load Balancing, Amazon CloudFront and Amazon Route 53, designed for high availability, automatic scaling and secure operation during large conferences with thousands of simultaneous user interactions.
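
As a rough sketch of the scaling behaviour, the snippet below registers an ECS service with Application Auto Scaling and attaches a CPU-based target-tracking policy. The cluster and service names, capacity limits and thresholds are assumptions chosen for illustration rather than the production configuration.

```python
"""Sketch: target-tracking auto scaling for the concierge's ECS service (boto3)."""
import boto3

autoscaling = boto3.client("application-autoscaling")

# Illustrative resource identifier: service/<cluster-name>/<service-name>.
RESOURCE_ID = "service/messagematrix-cluster/concierge-api"

# Allow the service to scale between a small baseline and a peak-traffic ceiling.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

# Keep average CPU around 60%, scaling out quickly and scaling in more cautiously.
autoscaling.put_scaling_policy(
    PolicyName="concierge-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```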

Results and Benefits

The modernization delivered measurable improvements in operational efficiency and platform scalability.

  • 60% Reduction in Event Onboarding Time: Automation of knowledge ingestion and standardized content pipelines reduced event onboarding time from weeks to days.
  • 45% Reduction in Onboarding Cost Per Event: Standardized RAG pipelines significantly reduced the need for manual prompt tuning and expert intervention.
  • Faster AI Testing and Validation Cycles: Automated test scenarios and content validation checks accelerated QA processes and reduced manual testing effort.
  • Improved Reliability During Live Events: By removing dependency on the third-party RAG provider, the platform eliminated a key operational risk, providing greater stability and predictable AI behavior during high-traffic conferences.
  • Scalable Infrastructure for High-Traffic Environments: The platform can now support thousands of simultaneous attendee interactions during live conferences without performance degradation.

Transformation Impact

The transition to an AWS-native Generative AI architecture transformed MessageMatrix from an early-stage AI concierge into a scalable AI platform for event engagement. The company now has full control over its AI infrastructure, enabling self-service for event organizers and paving the way for expansion into new verticals with specialized AI agents for sales and sponsor engagement. By adopting AWS Generative AI best practices, MessageMatrix now operates a production-grade AI platform capable of supporting large conferences and enabling future AI-driven product capabilities.

About GOStack

GOStack is an AWS Advanced Tier Services Partner specialising in platform modernisation, DevOps, GitOps and data analytics on AWS. We help technology companies build and run modern, scalable and cost-efficient cloud platforms. We also embed the engineering practices that make those platforms sustainable long-term.

Why Partner with Us for Generative AI Platforms?

Building a production-grade Generative AI application requires more than just a model; it requires a deep understanding of data pipelines, RAG architectures, operational visibility and Responsible AI principles. We have a proven track record of building scalable, reliable and cost-efficient AI platforms on AWS. If you’re ready to move your AI application from prototype to production, let’s talk. Contact us to architect for success.