MessageMatrix Scales AI-Powered Event Concierge with Amazon Bedrock
GOStack partnered with MessageMatrix to modernize its AI-powered event concierge platform for high-traffic conferences. By replacing a third-party RAG solution with a scalable, AWS-native architecture on Amazon Bedrock, GOStack reduced event onboarding time by 60% and onboarding costs by 45%. The new foundation enables self-service knowledge base management for organizers, supports thousands of attendee interactions during live events, and allows MessageMatrix to scale across verticals while introducing specialized AI agents.

5 min read
GOStack
Key metrics achieved
45%
Reduction in Operational Costs
80%
Improvement in Data Processing Speed
3X
Reduction in AI Hallucinations
Intro
MessageMatrix provides a self-serve, multi-agent AI engagement platform trusted by the UK’s leading event organisers. Their AI concierge handles every delegate question — from agenda and speakers to logistics and travel — via WhatsApp and SMS, allowing event teams to focus on high-value activities. The platform is designed to deliver flawless engagement for conferences, exhibitions, awards dinners and festivals, reaching virtually every attendee without requiring an app download.
As their client base grew, the initial architecture, which relied on an externally managed Retrieval-Augmented Generation (RAG) system, began to show its limitations. MessageMatrix engaged GOStack to build a new foundation for their AI platform on AWS, seeking greater control, scalability and operational efficiency.
The Challenge
The previous platform was holding back MessageMatrix’s ability to scale and innovate.
Limited Control Over Knowledge Retrieval: Key parts of the RAG pipeline, such as document chunking, embeddings and ranking, were controlled by a third-party provider. This limited the team’s ability to systematically improve answer quality, forcing them to rely on manual prompt tuning rather than structured knowledge management.
Operational Risk from Third-Party Dependency: The reliance on an external provider created significant operational risk. Outages or platform updates on the provider’s side could directly affect live chatbots during active events, where thousands of attendees depended on the assistant for real-time information.
Slow Onboarding of New Events: Preparing a new event was a manual, multi-week process involving document restructuring, spreadsheet-based content validation and expert prompt adjustments. This three-to-four-week onboarding cycle became a major bottleneck as event volume increased.
Lack of Scalability and Expansion: The architecture was tightly coupled to the third-party RAG provider, making it difficult to extend beyond the initial use case or expand into new verticals without significant manual workarounds.
Limited Operational Visibility: The system lacked observability into retrieval behaviour and model interactions, making it difficult to diagnose incorrect responses during live conferences.
Our Solution
GOStack redesigned the MessageMatrix product into a scalable, AWS-native Generative AI system powered by Amazon Bedrock. The new architecture provides automated knowledge management, reliable AI responses and rapid deployment across large-scale conference environments.
AWS-Native AI Architecture: The platform now implements RAG using Amazon Titan Text Embeddings and Amazon OpenSearch Serverless for scalable semantic retrieval, with foundation models hosted on Amazon Bedrock. This approach allows for the orchestration of multiple models to balance performance and cost, while keeping responses grounded in verified event knowledge and eliminating vendor lock-in.
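The embed-then-retrieve flow described above can be sketched as follows. This is a minimal illustration, not MessageMatrix's actual code: the index name, field names and the Titan model ID are assumptions, and the `bedrock` and `opensearch` arguments stand in for a boto3 Bedrock Runtime client and an opensearch-py client.

```python
import json

def build_titan_request(text: str) -> str:
    """Request body for Amazon Titan Text Embeddings."""
    return json.dumps({"inputText": text})

def build_knn_query(embedding: list[float], k: int = 5) -> dict:
    """OpenSearch k-NN query over a vector field holding chunk embeddings
    (field and source names are illustrative)."""
    return {
        "size": k,
        "query": {"knn": {"embedding": {"vector": embedding, "k": k}}},
        "_source": ["text", "metadata"],
    }

def retrieve(bedrock, opensearch, question: str,
             index: str = "event-knowledge", k: int = 5) -> list[str]:
    """Embed the attendee question with Titan, then fetch the top-k
    knowledge chunks used to ground the model's answer."""
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",  # assumed model ID
        body=build_titan_request(question),
    )
    embedding = json.loads(resp["body"].read())["embedding"]
    hits = opensearch.search(index=index, body=build_knn_query(embedding, k))
    return [h["_source"]["text"] for h in hits["hits"]["hits"]]
```

The retrieved chunks would then be passed as context to a Bedrock-hosted foundation model, keeping every answer grounded in verified event knowledge.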
Automated Knowledge Ingestion: Event documentation, including PDFs, spreadsheets and markdown content, is stored in Amazon S3 as the central knowledge repository. AWS Lambda functions process and standardize incoming content through metadata tagging and configurable chunking, creating a consistent ingestion pipeline that significantly reduces manual event preparation.
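A Lambda-side ingestion step of this kind might look like the sketch below: split each document into overlapping chunks and tag every chunk with the metadata needed to scope retrieval to a single event. The chunk sizes and metadata fields are illustrative assumptions, not the platform's actual configuration.

```python
def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split a document into overlapping chunks; overlap preserves context
    that would otherwise be cut at chunk boundaries."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def tag_chunk(chunk: str, event_id: str, source_key: str, position: int) -> dict:
    """Attach metadata (hypothetical fields) so retrieval can be filtered
    to one event's knowledge base."""
    return {
        "text": chunk,
        "metadata": {"event_id": event_id, "source": source_key, "chunk": position},
    }
```

In the live pipeline, a function like this would be triggered by S3 upload events and write the tagged chunks to the vector index.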
Knowledge Quality Validation and Automated Testing: To maintain high-quality knowledge bases, the platform provides tools for event organizers to validate uploaded content and identify gaps before publishing. This includes automated checks for missing information, guidance on knowledge structure and automated test scenarios simulating attendee questions.
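A gap check of the kind described could be as simple as the sketch below: scan the uploaded chunks for coverage of topics attendees reliably ask about. The topic checklist here is an invented example, not the product's actual validation rules.

```python
# Illustrative checklist: topics every event knowledge base should cover.
REQUIRED_TOPICS = {
    "agenda": ["agenda", "schedule", "programme"],
    "venue": ["venue", "address", "directions"],
    "speakers": ["speaker", "keynote"],
}

def find_gaps(chunks: list[str]) -> list[str]:
    """Return the topics no uploaded chunk mentions, so organizers can
    fill the gaps before publishing."""
    text = " ".join(chunks).lower()
    return [topic for topic, keywords in REQUIRED_TOPICS.items()
            if not any(kw in text for kw in keywords)]
```

The same checklist can drive automated test scenarios: generate one simulated attendee question per topic and verify the assistant answers from the knowledge base rather than guessing.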
Responsible AI, Governance, and Operational Visibility: The architecture follows AWS Responsible AI and generative AI best practices by enforcing grounded responses and controlled knowledge sources. Model interactions, retrieval outputs and system metrics are monitored through Amazon CloudWatch, enabling operational visibility, governance and rapid diagnostics during live events.
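One way such monitoring can be wired up is to emit a custom CloudWatch metric per answered question, recording whether the response stayed grounded in retrieved knowledge. The metric and namespace names below are assumptions for illustration; `cloudwatch` stands in for a boto3 CloudWatch client.

```python
def grounding_metric(event_id: str, grounded: bool) -> dict:
    """One datum per answer: 1.0 if the response cited retrieved chunks,
    0.0 if it did not (metric and dimension names are illustrative)."""
    return {
        "MetricName": "GroundedResponse",
        "Dimensions": [{"Name": "EventId", "Value": event_id}],
        "Value": 1.0 if grounded else 0.0,
        "Unit": "Count",
    }

def publish(cloudwatch, datum: dict, namespace: str = "MessageMatrix/AI") -> None:
    """Ship the datum to CloudWatch so dashboards and alarms can track
    grounding rates during a live event."""
    cloudwatch.put_metric_data(Namespace=namespace, MetricData=[datum])
```

An alarm on the average of this metric would surface a hallucination regression within minutes of it appearing at a live conference.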
Scalable and Resilient Infrastructure: The solution runs on a fully AWS-native architecture using Amazon ECS, Elastic Load Balancing, Amazon CloudFront and Amazon Route 53, designed for high availability, automatic scaling and secure operation during large conferences with thousands of simultaneous user interactions.
Results and Benefits
60% Reduction in Event Onboarding Time: Automation of knowledge ingestion and standardized content pipelines reduced event onboarding time from weeks to days.
45% Reduction in Onboarding Cost Per Event: Standardized RAG pipelines significantly reduced the need for manual prompt tuning and expert intervention.
Faster AI Testing and Validation Cycles: Automated test scenarios and content validation checks accelerated QA processes and reduced manual testing effort.
Improved Reliability During Live Events: By removing dependency on the third-party RAG provider, the platform eliminated a key operational risk, providing greater stability and predictable AI behavior during high-traffic conferences.
Scalable Infrastructure for High-Traffic Environments: The platform can now support thousands of simultaneous attendee interactions during live conferences without performance degradation.
From Black Box to Full Control
We were using a third-party RAG. It was limited, nothing we could modify. GOStack built us a system we actually own and can grow with. This has opened up new possibilities and our clients can now self-serve.
Douglas Orr
CEO & Founder
MessageMatrix
Ready to get started?
Book a free, no-obligation call with one of our AWS-certified engineers. We'll listen to your challenges, share honest advice, and only recommend next steps if we genuinely think we can help.
