Transforming a Legacy Betting Infrastructure into a Scalable, Cloud-Native iGaming Platform
Transformed a fragmented legacy betting infrastructure of 30+ monolithic services across 80+ repositories into a unified, cloud-native microservices platform, reducing time-to-market by weeks and enabling rapid multi-brand expansion with operational agility.
Client Summary
A prominent global iGaming operator, managing multiple brands across diverse markets, faced serious technical and operational bottlenecks caused by an aging backend and fragmented infrastructure.
The challenge was to replace this brittle legacy ecosystem with a future-ready architecture capable of scaling rapidly while supporting multi-brand operations with minimal friction and technical debt.
Industry Context
Industry: iGaming & Sports Betting
Engagement: Platform Modernization & Architecture Transformation
Technology: Cloud-Native Microservices, Kubernetes, Multi-Tenant SaaS Backbone
Business Challenge
The client’s legacy ecosystem consisted of 30+ monolithic services scattered across 80+ repositories, where even minor updates required disproportionate effort and coordination.
Key Pain Points
Performance Issues:
- Performance degraded under peak traffic
- The system couldn’t handle sudden traffic spikes during major sporting events
Slow Market Expansion:
- Launching new brands or markets was slow and error-prone
- Each new brand required significant custom development
- Marketing teams were bottlenecked by engineering constraints
- Market opportunities were missed due to slow deployment
Technical Debt:
- Beyond isolated technical issues, the root challenge was a lack of momentum
- The platform couldn’t evolve at the pace business needs demanded
- Development teams spent more time maintaining legacy code than building new features
- Knowledge silos created by the fragmented codebase
Operational Overhead:
- Manual deployment processes were error-prone
- Difficult to track changes across 80+ repositories
- Testing was time-consuming and incomplete
- No standardized approach to monitoring or logging
Strategic & Technical Solution
Rather than executing a full rewrite, we partnered with the client to perform a guided architectural transformation by introducing BetSymphony as a modular, cloud-native backbone for the entire stack.
Core Transformation
The transformation was comprehensive, touching every layer of the platform while maintaining business continuity throughout the process. It included:
Cloud-Native, Multi-Tenant Architecture
Microservices Decomposition
The Approach:
Replaced legacy services with a modular set of independently deployable services, each handling a specific business domain:
- Account services - User management, authentication, KYC
- Payments - Deposits, withdrawals, payment gateway integration
- Betting logic - Odds calculation, bet placement, settlement
- Risk engines - Real-time risk management and exposure control
- Bonus engines - Promotions, free bets, loyalty rewards
- Content management - CMS for marketing pages and promotions
Architecture:
```
┌─────────────────────────────────────────┐
│      API Gateway + Load Balancer        │
└──────────────┬──────────────────────────┘
               │
    ┌──────────┴──────────┐
    │                     │
┌───▼─────────┐     ┌─────▼────────┐
│  Account    │     │   Betting    │
│  Service    │     │   Service    │
└───┬─────────┘     └─────┬────────┘
    │                     │
┌───▼─────────┐     ┌─────▼────────┐
│  Payment    │     │    Risk      │
│  Service    │     │   Service    │
└───┬─────────┘     └─────┬────────┘
    │                     │
    └──────────┬──────────┘
               │
        ┌──────▼──────┐
        │    Kafka    │
        │  Event Bus  │
        └─────────────┘
```
Benefits:
- Independent deployment and scaling
- Team autonomy per service
- Easier testing and maintenance
- Technology flexibility per service
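The diagram above shows the services decoupled through a Kafka event bus rather than direct synchronous calls. As a minimal sketch of that pattern, assuming the kafkajs client; the topic name, payload shape, and broker address are illustrative assumptions, not the platform's actual event schema:

```typescript
import { Kafka } from 'kafkajs';

const kafka = new Kafka({
  clientId: 'betting-service',
  brokers: ['kafka:9092'], // assumption: in-cluster broker address
});

// Producer side: the betting service publishes a domain event once a bet
// is accepted; "bet.placed" and the payload shape are illustrative.
export async function publishBetPlaced(bet: {
  id: string;
  tenantId: string;
  stake: number;
}): Promise<void> {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: 'bet.placed',
    // Keying by tenant keeps each brand's events ordered within a partition
    messages: [{ key: bet.tenantId, value: JSON.stringify(bet) }],
  });
  await producer.disconnect();
}

// Consumer side: the risk service reacts to the same event to update
// exposure; recalculateExposure stands in for the real risk logic.
export async function runRiskConsumer(): Promise<void> {
  const consumer = kafka.consumer({ groupId: 'risk-service' });
  await consumer.connect();
  await consumer.subscribe({ topic: 'bet.placed' });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const bet = JSON.parse(message.value?.toString() ?? '{}');
      await recalculateExposure(bet);
    },
  });
}

async function recalculateExposure(bet: unknown): Promise<void> {
  // placeholder for the risk engine's exposure calculation
}
```

Because consumers track their own offsets, a service like the risk engine can be scaled or redeployed independently without losing bet events.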
Kubernetes Orchestration
Implementation:
Deployed services on Kubernetes clusters with:
```yaml
# Example service deployment with auto-scaling
apiVersion: apps/v1
kind: Deployment
metadata:
  name: betting-service
  namespace: betsymphony
spec:
  replicas: 3
  selector:
    matchLabels:
      app: betting-service
  template:
    metadata:
      labels:
        app: betting-service
    spec:
      containers:
        - name: betting-service
          image: betting-service:latest
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: betting-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: betting-service
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
Key Features:
- Dynamic autoscaling - Automatically scales based on load
- Self-healing mechanisms - Automatic restart of failed pods
- Container-level resource isolation - Guaranteed resource allocation
- Rolling updates - Zero-downtime deployments
- Health checks - Automated liveness and readiness probes
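The liveness and readiness probes above assume each service exposes /health and /ready endpoints. A minimal sketch of those handlers, assuming an Express-based Node.js service (the case study does not name the HTTP framework; checkDependencies is a hypothetical helper):

```typescript
import express from 'express';

const app = express();

// Liveness: the process is up and the event loop is responsive
app.get('/health', (_req, res) => {
  res.status(200).send('OK');
});

// Readiness: only accept traffic once downstream dependencies respond
app.get('/ready', async (_req, res) => {
  const ok = await checkDependencies();
  res.status(ok ? 200 : 503).send(ok ? 'READY' : 'NOT READY');
});

async function checkDependencies(): Promise<boolean> {
  // placeholder: ping the database, cache, and message bus here
  return true;
}

// Matches the port targeted by the probes in the manifest above
app.listen(8080);
```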
Results:
- Platform reliably handles traffic surges
- Automatic recovery from failures
- Efficient resource utilization
- Predictable performance under load
Multi-Tenant Design
The Challenge:
Operate multiple brands in parallel without code duplication or environment fragmentation.
Implementation:
Built tenant-aware logic into every service so each request executes in the context of a specific brand:
```typescript
// Tenant context middleware
class TenantMiddleware {
  async resolveTenant(req: Request): Promise<Tenant> {
    // Extract tenant from subdomain, header, or token
    const tenantId = this.extractTenantId(req);

    // Load tenant configuration from cache
    const tenant = await this.tenantCache.get(tenantId);
    if (!tenant) {
      throw new TenantNotFoundError(tenantId);
    }

    // Attach to request context
    req.tenant = tenant;
    return tenant;
  }
}

// Service with tenant-aware data access
class BettingService {
  async placeBet(userId: string, betData: BetData, tenant: Tenant) {
    // Use tenant-specific configuration
    const config = tenant.bettingConfig;

    // Apply tenant-specific limits
    if (betData.stake > config.maxStake) {
      throw new StakeLimitError();
    }

    // Store in tenant-partitioned database
    const bet = await this.db.bets.create({
      tenantId: tenant.id,
      userId,
      ...betData
    });

    // Use tenant-specific odds provider
    const odds = await this.oddsProviders
      .get(tenant.oddsProvider)
      .getOdds(bet.selection);

    return this.processBet(bet, odds, config);
  }
}
```
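The extractTenantId call above is referenced but never shown. One plausible sketch, assuming an Express-style request and a hypothetical X-Tenant-Id header; the implementation of the resolution order (subdomain, header, or token) is not part of the case study:

```typescript
import type { Request } from 'express';

// Hypothetical tenant resolution: an explicit header wins (useful for
// internal service-to-service calls), otherwise fall back to the subdomain.
function extractTenantId(req: Request): string {
  const headerTenant = req.headers['x-tenant-id'];
  if (typeof headerTenant === 'string' && headerTenant.length > 0) {
    return headerTenant;
  }

  // e.g. brandx.example.com -> "brandx"
  const host = req.headers.host ?? '';
  return host.split('.')[0];
}
```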
Tenant Isolation:
- Separate database schemas per tenant (see the sketch after this list)
- Tenant-specific feature flags
- Brand-specific themes and configurations
- Isolated payment configurations
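For the schema-per-tenant isolation in the first bullet, a minimal sketch of how data access might switch schemas per request, assuming PostgreSQL with the pg client; the schema naming convention is an assumption:

```typescript
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Each tenant's tables live in a dedicated schema, e.g. tenant_brandx.
// tenantId is assumed to be validated upstream, since it is interpolated here.
async function queryForTenant<T>(
  tenantId: string,
  sql: string,
  params: unknown[]
): Promise<T[]> {
  const client = await pool.connect();
  try {
    await client.query(`SET search_path TO tenant_${tenantId}`);
    const result = await client.query(sql, params);
    return result.rows as T[];
  } finally {
    // Reset so the pooled connection is tenant-neutral when reused
    await client.query('SET search_path TO public');
    client.release();
  }
}
```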
Benefits:
- Brand autonomy - Each brand can customize without affecting others
- Faster market entry - New brands deployed in hours, not weeks
- Cost efficiency - Shared infrastructure across all brands
- Simplified operations - Single platform to maintain
DevOps, CI/CD & Automation
Shared CI/CD Pipelines
The Solution:
Centralized pipelines automated build, test, and deployment workflows across services, significantly reducing release cycle time and human error.
GitOps Workflow:
```yaml
# .gitlab-ci.yml - Shared pipeline template
stages:
  - test
  - build
  - deploy

variables:
  DOCKER_REGISTRY: registry.betsymphony.io
  KUBE_NAMESPACE: production

test:
  stage: test
  script:
    - npm ci
    - npm run test:unit
    - npm run test:integration
    - npm run lint
  coverage: '/Statements\s+:\s+(\d+\.\d+)%/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml

build:
  stage: build
  script:
    - docker build -t $DOCKER_REGISTRY/$CI_PROJECT_NAME:$CI_COMMIT_SHA .
    - docker tag $DOCKER_REGISTRY/$CI_PROJECT_NAME:$CI_COMMIT_SHA $DOCKER_REGISTRY/$CI_PROJECT_NAME:latest
    - docker push $DOCKER_REGISTRY/$CI_PROJECT_NAME:$CI_COMMIT_SHA
    - docker push $DOCKER_REGISTRY/$CI_PROJECT_NAME:latest
  only:
    - main
    - develop

deploy:
  stage: deploy
  script:
    - >
      helm upgrade --install $CI_PROJECT_NAME ./helm-chart
      --set image.tag=$CI_COMMIT_SHA
      --namespace $KUBE_NAMESPACE
      --wait
  environment:
    name: production
    url: https://$CI_PROJECT_NAME.betsymphony.io
  only:
    - main
```
Pipeline Features:
- Automated testing (unit, integration, E2E)
- Security scanning (SAST, dependency vulnerabilities)
- Container image building and scanning
- Automated deployment to Kubernetes
- Rollback capabilities
- Environment-specific configurations
Results:
- 80% reduction in deployment time
- Near-zero deployment failures
- Consistent process across all services
- Faster feedback loops for developers
Infrastructure as Code (IaC)
Implementation:
Turned manual deployments into repeatable templates using IaC tooling (Helm, Terraform), which ensured consistency across environments and faster provisioning for new brands.
Terraform Example:
```hcl
# Main infrastructure definition
module "brand_infrastructure" {
  source = "./modules/brand"

  brand_name  = var.brand_name
  environment = var.environment
  region      = var.aws_region

  # Database configuration
  db_instance_class    = "db.r5.xlarge"
  db_allocated_storage = 100
  db_multi_az          = true

  # Kubernetes configuration
  k8s_node_count = 5
  k8s_node_type  = "m5.xlarge"

  # Redis configuration
  redis_node_type       = "cache.r5.large"
  redis_num_cache_nodes = 3

  tags = {
    Project     = "BetSymphony"
    Environment = var.environment
    Brand       = var.brand_name
  }
}

# Output endpoints
output "api_endpoint" {
  value = module.brand_infrastructure.api_endpoint
}

output "database_endpoint" {
  value = module.brand_infrastructure.database_endpoint
}
```
Helm Chart for Service Deployment:
```yaml
# values.yaml - Configurable deployment parameters
replicaCount: 3

image:
  repository: betting-service
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: api.brand.com
      paths:
        - path: /
          pathType: Prefix

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
```
Benefits:
- Reproducible environments - Dev, staging, and prod are identical
- Version controlled - All infrastructure changes tracked
- Faster provisioning - New brands deployed in hours
- Reduced errors - Automated, tested configurations
- Easy rollback - Revert to previous infrastructure state
Service Mesh & Observability
Implementation:
Added telemetry, service-to-service routing controls, and centralized logging to maintain operational visibility and streamline debugging under load.
Observability Stack:
```yaml
# Prometheus monitoring configuration
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: betting-service
spec:
  selector:
    matchLabels:
      app: betting-service
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics
---
# Grafana dashboard for service metrics
apiVersion: v1
kind: ConfigMap
metadata:
  name: betting-service-dashboard
data:
  dashboard.json: |
    {
      "dashboard": {
        "title": "Betting Service Metrics",
        "panels": [
          {
            "title": "Request Rate",
            "targets": [
              { "expr": "rate(http_requests_total{service=\"betting-service\"}[5m])" }
            ]
          },
          {
            "title": "Error Rate",
            "targets": [
              { "expr": "rate(http_requests_total{service=\"betting-service\",status=~\"5..\"}[5m])" }
            ]
          },
          {
            "title": "Response Time p95",
            "targets": [
              { "expr": "histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))" }
            ]
          }
        ]
      }
    }
```
Distributed Tracing:
```typescript
// OpenTelemetry instrumentation
import { trace, SpanStatusCode } from '@opentelemetry/api';

class BettingService {
  async placeBet(betData: BetData): Promise<Bet> {
    const tracer = trace.getTracer('betting-service');

    return tracer.startActiveSpan('placeBet', async (span) => {
      try {
        span.setAttribute('bet.userId', betData.userId);
        span.setAttribute('bet.amount', betData.stake);

        // Validate bet
        await this.validateBet(betData);
        span.addEvent('bet_validated');

        // Check odds
        const odds = await this.getOdds(betData.selection);
        span.setAttribute('bet.odds', odds);
        span.addEvent('odds_retrieved');

        // Place bet
        const bet = await this.createBet(betData, odds);
        span.addEvent('bet_placed');

        span.setStatus({ code: SpanStatusCode.OK });
        return bet;
      } catch (error) {
        span.recordException(error);
        span.setStatus({
          code: SpanStatusCode.ERROR,
          message: error.message
        });
        throw error;
      } finally {
        span.end();
      }
    });
  }
}
```
Observability Features:
- Real-time metrics and dashboards
- Distributed tracing across services
- Centralized log aggregation
- Alert rules for anomalies
- Performance profiling
- Business KPI tracking
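As an example of the business KPI tracking in the last bullet, a minimal sketch using the prom-client library; the metric names, labels, and bucket boundaries are illustrative assumptions:

```typescript
import client from 'prom-client';

// Count accepted bets per tenant so dashboards can track volume by brand
const betsPlaced = new client.Counter({
  name: 'bets_placed_total',
  help: 'Total number of accepted bets',
  labelNames: ['tenant'],
});

// Track stake sizes to watch for shifts in betting behaviour
const stakeHistogram = new client.Histogram({
  name: 'bet_stake_eur',
  help: 'Stake per bet in EUR',
  buckets: [1, 5, 10, 50, 100, 500],
});

export function recordBet(tenantId: string, stake: number): void {
  betsPlaced.inc({ tenant: tenantId });
  stakeHistogram.observe(stake);
}

// Serialized registry output for the /metrics endpoint that the
// ServiceMonitor above scrapes
export function metricsText(): Promise<string> {
  return client.register.metrics();
}
```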
Operational & Feature Enablement
Third-Party Integrations
The Approach:
Normalized and abstracted integrations with external provider APIs (payment gateways, KYC services, odds feeds) using API gateways and connector layers to reduce coupling and simplify onboarding.
Integration Architecture:
```typescript
// Abstract payment provider interface
interface PaymentProvider {
  processDeposit(request: DepositRequest): Promise<DepositResult>;
  processWithdrawal(request: WithdrawalRequest): Promise<WithdrawalResult>;
  checkStatus(transactionId: string): Promise<TransactionStatus>;
}

// Concrete implementations for different providers
class StripeProvider implements PaymentProvider {
  async processDeposit(request: DepositRequest): Promise<DepositResult> {
    // Stripe-specific implementation
  }
  // processWithdrawal and checkStatus omitted for brevity
}

class PayPalProvider implements PaymentProvider {
  async processDeposit(request: DepositRequest): Promise<DepositResult> {
    // PayPal-specific implementation
  }
  // processWithdrawal and checkStatus omitted for brevity
}

// Payment service using the strategy pattern
class PaymentService {
  private providers: Map<string, PaymentProvider>;

  async processPayment(
    userId: string,
    amount: number,
    provider: string
  ): Promise<PaymentResult> {
    const paymentProvider = this.providers.get(provider);
    if (!paymentProvider) {
      throw new Error(`Provider ${provider} not configured`);
    }

    // Unified error handling and retry logic
    return this.withRetry(async () => {
      return await paymentProvider.processDeposit({
        userId,
        amount,
        currency: 'EUR'
      });
    });
  }
}
```
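The withRetry helper used above is not included in the case study. A plausible minimal sketch with exponential backoff and jitter, offered purely as an assumption about its shape:

```typescript
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === maxAttempts) break;
      // Exponential backoff with jitter to avoid thundering-herd retries
      const delay = baseDelayMs * 2 ** (attempt - 1) + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

In a payments context a wrapper like this is only safe around idempotent calls, so real implementations typically pair retries with provider idempotency keys.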
Benefits:
- Easy to add new providers
- Consistent error handling
- Simplified testing (mock providers)
- Reduced vendor lock-in
Brand Autonomy
Implementation:
Implemented theming and feature toggles that allowed separate brands to operate with custom UI/UX and promotions without branching the underlying code.
Feature Flag System:
```typescript
// Feature flag configuration per tenant
interface TenantConfig {
  tenantId: string;
  brandName: string;
  features: {
    liveBetting: boolean;
    cashout: boolean;
    virtualSports: boolean;
    cryptoPayments: boolean;
    socialFeatures: boolean;
  };
  theme: {
    primaryColor: string;
    secondaryColor: string;
    logo: string;
    favicon: string;
  };
  limits: {
    maxBetStake: number;
    maxDailyDeposit: number;
    withdrawalProcessingTime: number;
  };
}

// Feature-flag-aware service
class BettingService {
  async placeBet(betData: BetData, tenant: TenantConfig): Promise<Bet> {
    // Check if the feature is enabled for this brand
    if (betData.isLive && !tenant.features.liveBetting) {
      throw new FeatureNotEnabledError('Live betting not available');
    }

    // Apply tenant-specific limits
    if (betData.stake > tenant.limits.maxBetStake) {
      throw new LimitExceededError('Stake exceeds maximum');
    }

    // Process bet with tenant context
    return this.processBet(betData, tenant);
  }
}
```
Brand Customization:
- Independent themes and branding
- Feature toggles per brand
- Custom promotions and bonuses
- Localized content and languages
- Brand-specific payment methods
Scalability for Market Expansion
The Solution:
Enabled new markets to go live rapidly by provisioning brand instances and feature sets via automation instead of manual reconfiguration.
Automated Brand Provisioning:
```bash
# Single command to provision new brand
./scripts/provision-brand.sh \
  --brand-name "BetStar" \
  --region "eu-west-1" \
  --environment "production" \
  --features "liveBetting,cashout,crypto" \
  --theme "blue"

# Script automates:
# 1. Creates Kubernetes namespace
# 2. Deploys all microservices
# 3. Provisions databases and caches
# 4. Configures DNS and SSL certificates
# 5. Sets up monitoring and alerts
# 6. Runs smoke tests
# 7. Notifies team when ready
```
Quantitative & Qualitative Outcomes
After modernization, the platform delivered measurable improvements across all key metrics:
Performance & Reliability
The platform began reliably handling traffic surges with horizontal scaling and without service downtime.
Metrics:
- 99.99% uptime during peak events
- Sub-100ms API response times
- 10x improvement in concurrent user capacity
- Zero downtime deployments
- Automatic recovery from failures
Reduced Technical Debt
Legacy code was progressively retired in favor of a maintainable, modular codebase.
Achievements:
- 80+ repositories consolidated to 15 microservices
- 70% reduction in codebase size
- 90% test coverage for critical paths
- Standardized coding practices across teams
- Documented APIs and architecture
Faster Time-to-Market
With CI/CD and the templated architecture, new brands and features shipped weeks faster than under the previous setup, where comparable work took months.
Before vs After:
- New brand deployment: 3 months → 1 week
- Feature release cycle: 2 weeks → 2 days
- Bug fix deployment: 1 week → 2 hours
- Market entry: 6 months → 1 month
Operational Efficiency
Thanks to automated workflows and a cloud-ready stack, teams shifted from firefighting legacy issues to confidently building new capabilities.
Team Impact:
- 80% reduction in operational incidents
- 50% more time for feature development
- Developer satisfaction improved significantly
- Onboarding time reduced from weeks to days
Reusable, Flexible Design
Modules are plug-and-play across brands; themes and configuration changes no longer require code changes.
Business Benefits:
- Rapid brand launches for new markets
- Easy A/B testing of features
- Simplified compliance across jurisdictions
- Cost-effective scaling
Technical Highlights
Containerization & Orchestration
Technology: Kubernetes, Docker
Implementation:
- All services containerized for consistency
- Kubernetes for orchestration and scaling
- Helm charts for deployment management
- Auto-scaling based on metrics
Microservices & Domain-Driven Decomposition
Approach: Loosely coupled backend modules
Structure:
- Services organized by business domain
- Event-driven communication via Kafka
- API gateway for external access
- Service mesh for inter-service communication
Shared CI/CD Automation
Technology: GitOps workflows
Features:
- Automated testing and deployment
- Infrastructure as Code
- Canary and blue-green deployments
- Automated rollbacks on failure
Multi-Tenant Architecture
Design: Brand partitioning, tenant contexts
Capabilities:
- Isolated data per tenant
- Shared infrastructure
- Tenant-specific configurations
- Feature flags per brand
Observability & Resilience
Tools: Central logging, metrics, self-healing deployments
Monitoring:
- Prometheus for metrics collection
- Grafana for visualization
- ELK stack for log aggregation
- Distributed tracing with Jaeger
- Automated alerting
API Abstraction Layer
Purpose: Clean external service integrations
Benefits:
- Unified interface for third-party services
- Easy provider switching
- Consistent error handling
- Simplified testing
Conclusion
By replacing fragmented, legacy components with a unified, scalable, cloud-native architecture, the client achieved true operational agility—accelerating development cycles, improving system stability, and enabling new brand rollouts with confidence and speed.
This transformation illustrates how modern architectural patterns and automation can turn technical debt into a strategic advantage in the competitive iGaming industry.
Key Takeaways
✅ Modernization doesn’t require a full rewrite - Incremental transformation works
✅ Microservices enable team autonomy - Independent development and deployment
✅ Automation is crucial - CI/CD and IaC dramatically improve efficiency
✅ Multi-tenancy reduces costs - Shared infrastructure for multiple brands
✅ Observability enables confidence - Know what’s happening in production
Long-Term Impact
The modernized platform became a competitive advantage, enabling:
- Faster market entry in new jurisdictions
- Rapid feature development and testing
- Cost-effective scaling as business grows
- Improved developer experience and retention
- Better customer experience through reliability
Ready to modernize your legacy platform? Contact us to discuss how we can transform your infrastructure for the cloud-native era.