MongoDB Production Deployment: Complete Guide
By Mamun Rashid (@mmncit)
Welcome to Part 8 of our MongoDB Zero to Hero series. After learning Node.js integration, it's time to understand how to deploy MongoDB applications in production environments with high availability, security, and performance.
Production Deployment Checklist
Pre-Deployment Requirements
- Hardware/infrastructure planning
- Security configuration
- Backup and recovery strategy
- Monitoring and alerting setup
- Performance optimization
- High availability setup (replica sets)
- Horizontal scaling planning (sharding)
- Disaster recovery plan
Infrastructure Planning
Hardware Requirements
Minimum Production Specs
# Small Application (< 1M documents)
CPU: 4 cores
RAM: 8GB
Storage: 100GB SSD
Network: 1Gbps
# Medium Application (1M - 10M documents)
CPU: 8 cores
RAM: 16GB
Storage: 500GB SSD
Network: 1Gbps
# Large Application (> 10M documents)
CPU: 16+ cores
RAM: 32GB+
Storage: 1TB+ SSD (NVMe preferred)
Network: 10Gbps
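These tiers are starting points, not guarantees. One way to sanity-check them against a live deployment is to compare data-plus-index size to the WiredTiger cache (by default roughly 50% of RAM minus 1GB). A rough mongosh sketch, assuming you can run serverStatus:
// Compare the working set (data + indexes) to the configured WiredTiger cache
const stats = db.stats(); // stats for the current database
const ss = db.serverStatus();
const cacheBytes = ss.wiredTiger.cache['maximum bytes configured'];
const workingSetBytes = stats.dataSize + stats.indexSize;
print(`data+indexes: ${(workingSetBytes / 1e9).toFixed(2)} GB`);
print(`WiredTiger cache: ${(cacheBytes / 1e9).toFixed(2)} GB`);
if (workingSetBytes > cacheBytes) {
  print('Working set exceeds cache; expect disk reads under load.');
}
If the working set is much larger than the cache, move up a tier or plan for sharding.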
Storage Considerations
# Recommended filesystem: XFS or EXT4
# Mount options for performance
/dev/sdb1 /data/db xfs defaults,noatime,logbufs=8,logbsize=256k 0 0
# Disable transparent huge pages
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
# Set readahead value
blockdev --setra 32 /dev/sdb1
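mongod records a startup warning when it detects problematic kernel settings such as enabled transparent huge pages, so you can verify the tuning took effect from mongosh:
// Print the kernel/configuration warnings mongod recorded at startup
const result = db.adminCommand({ getLog: 'startupWarnings' });
result.log.forEach((line) => print(line)); // empty output means no complaints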
Cloud Deployment Options
AWS Deployment
# EC2 Instance Types
# General Purpose: m5.large, m5.xlarge, m5.2xlarge
# Memory Optimized: r5.large, r5.xlarge, r5.2xlarge
# Storage Optimized: i3.large, i3.xlarge for high IOPS
# EBS Volume Types
# gp3: General purpose SSD (recommended)
# io2: Provisioned IOPS SSD (high performance)
# Example Terraform configuration
resource "aws_instance" "mongodb_primary" {
ami = "ami-0abcdef1234567890"
instance_type = "m5.xlarge"
vpc_security_group_ids = [aws_security_group.mongodb.id]
subnet_id = aws_subnet.private.id
ebs_block_device {
device_name = "/dev/sdb"
volume_type = "gp3"
volume_size = 500
throughput = 125
iops = 3000
}
tags = {
Name = "MongoDB-Primary"
Environment = "production"
}
}
MongoDB Atlas (Managed Service)
// Connection string for Atlas
const uri =
'mongodb+srv://username:password@cluster0.abcde.mongodb.net/myapp?retryWrites=true&w=majority';
// Atlas features:
// - Automated scaling
// - Built-in monitoring
// - Automated backups
// - Security features
// - Global clusters
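A minimal Node.js connectivity check against Atlas, assuming the URI above is exported as ATLAS_URI:
const { MongoClient } = require('mongodb');

async function verifyAtlasConnection() {
  const client = new MongoClient(process.env.ATLAS_URI);
  try {
    await client.connect();
    // ping is a cheap round-trip that exercises DNS SRV lookup, TLS, and auth
    await client.db('admin').command({ ping: 1 });
    console.log('Atlas connection OK');
  } finally {
    await client.close();
  }
}

verifyAtlasConnection().catch(console.error);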
Security Configuration
Authentication and Authorization
Enable Authentication
# mongod.conf
security:
authorization: enabled
keyFile: /etc/mongodb/keyfile
// Create admin user
use admin
db.createUser({
user: "admin",
pwd: "securePassword123!",
roles: [
{ role: "userAdminAnyDatabase", db: "admin" },
{ role: "readWriteAnyDatabase", db: "admin" },
{ role: "dbAdminAnyDatabase", db: "admin" },
{ role: "clusterAdmin", db: "admin" }
]
})
// Create application user
use myapp
db.createUser({
user: "appuser",
pwd: "appPassword123!",
roles: [
{ role: "readWrite", db: "myapp" }
]
})
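The application should then authenticate as the low-privilege user, not as admin. A sketch of the connection string (the mongodb-primary hostname is illustrative):
const { MongoClient } = require('mongodb');

// authSource names the database where the user was created; being explicit
// avoids surprises when the URI path and the auth database differ
const uri =
  'mongodb://appuser:appPassword123!@mongodb-primary:27017/myapp?authSource=myapp';
const client = new MongoClient(uri);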
Role-Based Access Control (RBAC)
// Custom roles for fine-grained access
db.createRole({
role: 'readOnlyAnalytics',
privileges: [
{
resource: { db: 'analytics', collection: '' },
actions: ['find', 'listCollections', 'listIndexes'],
},
],
roles: [],
});
// Grant role to user
db.grantRolesToUser('analyticsUser', ['readOnlyAnalytics']);
// Application-specific roles
db.createRole({
role: 'orderManager',
privileges: [
{
resource: { db: 'ecommerce', collection: 'orders' },
actions: ['find', 'insert', 'update', 'remove'],
},
{
resource: { db: 'ecommerce', collection: 'products' },
actions: ['find'],
},
],
roles: [],
});
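Before granting a custom role, it is worth verifying what it actually allows:
// Show the role's resolved privileges
db.getRole('orderManager', { showPrivileges: true });
// Confirm which roles a given user currently holds
db.getUser('analyticsUser');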
Network Security
Firewall Configuration
# UFW (Ubuntu)
ufw allow from 10.0.0.0/8 to any port 27017
ufw allow from 192.168.0.0/16 to any port 27017
ufw deny 27017
# iptables
iptables -A INPUT -p tcp --dport 27017 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 27017 -j DROP
SSL/TLS Encryption
# mongod.conf
net:
port: 27017
bindIp: 0.0.0.0 # prefer binding to specific interfaces in production
tls:
mode: requireTLS
certificateKeyFile: /etc/ssl/mongodb/mongodb.pem
CAFile: /etc/ssl/mongodb/ca.pem
allowConnectionsWithoutCertificates: false
allowInvalidHostnames: false
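The Node.js driver needs matching client-side options; a sketch, with illustrative certificate paths:
const { MongoClient } = require('mongodb');

// Client-side counterpart to requireTLS: present a client certificate and
// validate the server against the same CA
const client = new MongoClient('mongodb://mongodb-primary:27017/myapp', {
  tls: true,
  tlsCAFile: '/etc/ssl/mongodb/ca.pem',
  tlsCertificateKeyFile: '/etc/ssl/mongodb/client.pem',
});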
Generate SSL Certificates
# Generate CA private key
openssl genrsa -out ca-key.pem 2048
# Generate CA certificate
openssl req -new -x509 -days 365 -key ca-key.pem -out ca.pem
# Generate server private key
openssl genrsa -out server-key.pem 2048
# Generate server certificate request
openssl req -new -key server-key.pem -out server.csr
# Sign server certificate with CA
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem -days 365
# Combine server key and certificate
cat server-key.pem server.pem > mongodb.pem
Encryption at Rest
# mongod.conf - Enterprise feature
security:
enableEncryption: true
encryptionKeyFile: /etc/mongodb/encryption-key
# Community alternative: LUKS encryption
cryptsetup luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb mongodb-encrypted
mkfs.ext4 /dev/mapper/mongodb-encrypted
High Availability with Replica Sets
Replica Set Architecture
Primary Node      Secondary Node      Secondary Node
     |                  |                   |
     +------------------+-------------------+
                   Replica Set
Setting Up Replica Sets
Configuration Files
# Primary node (rs-primary.conf)
systemLog:
destination: file
path: /var/log/mongodb/mongod.log
logAppend: true
storage:
dbPath: /data/db
journal:
enabled: true # MongoDB 6.1+ always journals and removed this option; omit it there
net:
port: 27017
bindIp: 0.0.0.0
replication:
replSetName: 'myapp-replica'
security:
authorization: enabled
keyFile: /etc/mongodb/keyfile
# Secondary nodes use similar config with different paths
Initialize Replica Set
// Connect to primary and initialize
rs.initiate({
_id: 'myapp-replica',
members: [
{
_id: 0,
host: 'mongodb-primary:27017',
priority: 2,
},
{
_id: 1,
host: 'mongodb-secondary1:27017',
priority: 1,
},
{
_id: 2,
host: 'mongodb-secondary2:27017',
priority: 1,
},
],
});
// Check replica set status
rs.status();
// Check replica set configuration
rs.conf();
Advanced Replica Set Configuration
// Add arbiter (lightweight voting member)
rs.addArb('mongodb-arbiter:27017');
// Add secondary with specific options
rs.add({
host: 'mongodb-secondary3:27017',
priority: 0, // Cannot become primary
hidden: true, // Hidden from application
tags: { usage: 'backup' },
});
// Configure read preferences in application
const client = new MongoClient(uri, {
readPreference: 'secondaryPreferred',
readConcern: { level: 'majority' },
writeConcern: { w: 'majority', j: true },
});
Failover and Recovery
Automatic Failover
// Monitor replica set health
while (true) {
try {
const status = rs.status();
const primary = status.members.find((m) => m.state === 1);
console.log(`Primary: ${primary ? primary.name : 'none (election in progress)'}`);
// Check member health
status.members.forEach((member) => {
if (member.health !== 1) {
console.warn(`Member ${member.name} is unhealthy: ${member.stateStr}`);
}
});
} catch (error) {
console.error('Error checking replica set:', error);
}
sleep(30000); // Check every 30 seconds
}
Manual Failover
// Step down the current primary and trigger an election
rs.stepDown(120); // Current primary is ineligible to be re-elected for 120 seconds
// Freeze a member to prevent it from becoming primary
rs.freeze(300); // Freeze for 300 seconds
// Unfreeze a member
rs.freeze(0);
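These shell helpers wrap admin commands, so the same step-down can be scripted from a driver during planned maintenance. A sketch that connects directly to the current primary:
const { MongoClient } = require('mongodb');

async function stepDownPrimary(primaryUri) {
  const client = new MongoClient(primaryUri, { directConnection: true });
  await client.connect();
  try {
    // Equivalent of rs.stepDown(120); waits up to 10s for a secondary to catch up.
    // On MongoDB versions before 4.2 the connection may be dropped when the
    // primary steps down, so treat a network error here as expected.
    await client.db('admin').command({
      replSetStepDown: 120,
      secondaryCatchUpPeriodSecs: 10,
    });
  } finally {
    await client.close();
  }
}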
Horizontal Scaling with Sharding
Sharding Architecture
             Application
                  |
            Mongos Router
                  |
   Config Servers (Replica Set)
                  |
     +------------+------------+
     |            |            |
  Shard 1      Shard 2      Shard 3
 (RepSet)     (RepSet)     (RepSet)
Setting Up Sharded Cluster
1. Config Server Replica Set
# Start config servers
mongod --configsvr --replSet configReplSet --port 27019 --dbpath /data/configdb1
mongod --configsvr --replSet configReplSet --port 27020 --dbpath /data/configdb2
mongod --configsvr --replSet configReplSet --port 27021 --dbpath /data/configdb3
# Initialize config replica set
mongosh --port 27019
rs.initiate({
_id: "configReplSet",
members: [
{ _id: 0, host: "config1:27019" },
{ _id: 1, host: "config2:27020" },
{ _id: 2, host: "config3:27021" }
]
})
2. Shard Replica Sets
# Shard 1
mongod --shardsvr --replSet shard1ReplSet --port 27018 --dbpath /data/shard1
# Initialize shard1 replica set...
# Shard 2
mongod --shardsvr --replSet shard2ReplSet --port 27028 --dbpath /data/shard2
# Initialize shard2 replica set...
3. Mongos Router
# Start mongos
mongos --configdb configReplSet/config1:27019,config2:27020,config3:27021 --port 27017
# Connect to mongos and add shards
mongosh --port 27017
sh.addShard("shard1ReplSet/shard1-a:27018,shard1-b:27018,shard1-c:27018")
sh.addShard("shard2ReplSet/shard2-a:27028,shard2-b:27028,shard2-c:27028")
4. Enable Sharding
// Enable sharding for database
sh.enableSharding('ecommerce');
// Create index for shard key
db.orders.createIndex({ customerId: 1, orderDate: 1 });
// Shard collection
sh.shardCollection('ecommerce.orders', { customerId: 1, orderDate: 1 });
// Check sharding status
sh.status();
// Monitor data distribution
db.orders.getShardDistribution();
Choosing Shard Keys
Good Shard Key Characteristics
// Note: shardCollection takes a full "db.collection" namespace
// 1. High Cardinality
sh.shardCollection('myapp.users', { userId: 1 }); // Good: many unique values
// 2. Even Distribution
sh.shardCollection('ecommerce.orders', { customerId: 1, timestamp: 1 }); // Good: spreads data
// 3. Query Isolation
sh.shardCollection('myapp.logs', { date: 1, appId: 1 }); // Good: queries target specific shards
// ❌ Bad shard keys
sh.shardCollection('ecommerce.orders', { status: 1 }); // Low cardinality
sh.shardCollection('myapp.logs', { timestamp: 1 }); // Monotonically increasing
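A common remedy for a monotonically increasing key is to hash it, trading range-query targeting for even write distribution. A minimal sketch (the myapp.events namespace is illustrative):
// Hashed sharding spreads inserts across shards even when _id grows monotonically
sh.shardCollection('myapp.events', { _id: 'hashed' });
// Range queries on the hashed field become scatter-gather, so prefer hashed
// keys for write-heavy, point-read workloads rather than range scans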
Compound Shard Keys
// Effective compound shard key
sh.shardCollection('ecommerce.products', {
category: 1, // Medium cardinality, good for queries
productId: 1, // High cardinality, ensures distribution
});
// Zone sharding for geographic distribution
// (sh.addShardTag/sh.addTagRange are the older aliases for these helpers)
sh.addShardToZone('shard1', 'US-EAST');
sh.addShardToZone('shard2', 'US-WEST');
sh.updateZoneKeyRange(
  'ecommerce.users',
  { region: 'US-EAST', userId: MinKey },
  { region: 'US-EAST', userId: MaxKey },
  'US-EAST',
);
Monitoring and Alerting
Built-in Monitoring Tools
Database Profiler
// Enable profiling for slow operations (>100ms)
db.setProfilingLevel(1, { slowms: 100 });
// Profile all operations
db.setProfilingLevel(2);
// Check profiling status
db.getProfilingStatus();
// Query profile collection
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty();
// Analyze slow queries
db.system.profile.aggregate([
{ $match: { millis: { $gt: 1000 } } },
{
$group: {
_id: '$command.find',
avgTime: { $avg: '$millis' },
count: { $sum: 1 },
},
},
{ $sort: { avgTime: -1 } },
]);
MongoDB Compass Monitoring
// Real-time performance metrics
// - Query performance
// - Index usage
// - Document validation
// - Schema analysis
// Create a read-only user for connecting Compass to production
db.createUser({
  user: 'compass-readonly',
  pwd: 'compassPassword',
  roles: [
    { role: 'read', db: 'myapp' },
    { role: 'clusterMonitor', db: 'admin' },
  ],
});
External Monitoring Solutions
Prometheus + Grafana
# docker-compose.yml
version: '3.8'
services:
mongodb-exporter:
image: percona/mongodb_exporter:0.32
ports:
- '9216:9216'
environment:
- MONGODB_URI=mongodb://monitor:password@mongodb:27017
command:
- '--mongodb.uri=mongodb://monitor:password@mongodb:27017'
- '--collect-all'
- '--compatible-mode'
prometheus:
image: prom/prometheus
ports:
- '9090:9090'
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
grafana:
image: grafana/grafana
ports:
- '3000:3000'
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
Prometheus Configuration
# prometheus.yml
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'mongodb'
static_configs:
- targets: ['mongodb-exporter:9216']
scrape_interval: 30s
Key Metrics to Monitor
// Database metrics
const metrics = {
// Performance
operationsPerSecond: 'mongodb_op_counters_total',
queryExecutionTime: 'mongodb_mongod_op_latencies_latency_total',
cacheBytesInCache: 'mongodb_mongod_wiredtiger_cache_bytes_currently_in_the_cache', // bytes, not a ratio
// Resources
memoryUsage: 'mongodb_memory',
diskUsage: 'mongodb_mongod_storage_engine_persistent_cache_bytes',
connectionCount: 'mongodb_connections',
// Replication
replicationLag: 'mongodb_mongod_replset_member_replication_lag',
replicationOplogWindow: 'mongodb_mongod_replset_oplog_tail_timestamp',
// Sharding
chunkCount: 'mongodb_mongos_sharding_chunks_total',
balancerState: 'mongodb_mongos_sharding_balancer_enabled',
};
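Most of these exporter series are derived from serverStatus, so you can spot-check the raw values in mongosh:
// Raw counters behind the exporter metrics above
const ss = db.serverStatus();
printjson({
  opcounters: ss.opcounters, // lifetime insert/query/update/delete counts
  connections: ss.connections, // current vs. available
  cacheBytesInUse: ss.wiredTiger.cache['bytes currently in the cache'],
});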
Alerting Rules
Critical Alerts
# alerting-rules.yml
groups:
- name: mongodb-critical
rules:
- alert: MongoDBDown
expr: up{job="mongodb"} == 0
for: 1m
labels:
severity: critical
annotations:
summary: 'MongoDB instance is down'
- alert: MongoDBHighMemoryUsage
expr: (mongodb_memory{type="resident"} / mongodb_memory{type="virtual"}) > 0.9
for: 5m
labels:
severity: critical
annotations:
summary: 'MongoDB memory usage is high'
- alert: MongoDBReplicationLag
expr: mongodb_mongod_replset_member_replication_lag > 10
for: 2m
labels:
severity: warning
annotations:
summary: 'MongoDB replication lag is high'
- alert: MongoDBSlowQueries
expr: rate(mongodb_mongod_op_latencies_latency_total[5m]) > 1000
for: 5m
labels:
severity: warning
annotations:
summary: 'MongoDB has slow queries'
Backup and Recovery
Backup Strategies
1. Mongodump/Mongorestore
# Full database backup
mongodump --host replica-set/primary:27017,secondary1:27017,secondary2:27017 \
--username backup_user \
--password backup_password \
--authenticationDatabase admin \
--out /backup/$(date +%Y%m%d)
# Compress backup
tar -czf mongodb-backup-$(date +%Y%m%d).tar.gz /backup/$(date +%Y%m%d)
# Collection-specific backup
mongodump --db myapp --collection users --out /backup/users-$(date +%Y%m%d)
# Restore from backup
mongorestore --host mongodb:27017 \
--username restore_user \
--password restore_password \
--authenticationDatabase admin \
/backup/20240120
2. Filesystem Snapshots
# LVM snapshot backup (size the snapshot to absorb writes during the copy)
lvcreate -L1G -s -n mongodb-snapshot /dev/vg0/mongodb-data
# Mount snapshot
mkdir /mnt/mongodb-backup
mount /dev/vg0/mongodb-snapshot /mnt/mongodb-backup
# Copy data
rsync -av /mnt/mongodb-backup/ /backup/filesystem-$(date +%Y%m%d)/
# Cleanup
umount /mnt/mongodb-backup
lvremove /dev/vg0/mongodb-snapshot
3. Cloud Backups
// AWS S3 backup script
const AWS = require('aws-sdk');
const { exec } = require('child_process');
const fs = require('fs');
class MongoBackupService {
constructor() {
this.s3 = new AWS.S3();
this.bucket = process.env.BACKUP_BUCKET;
}
async createBackup() {
const timestamp = new Date().toISOString().slice(0, 10);
const backupPath = `/tmp/mongodb-backup-${timestamp}`;
try {
// Create mongodump (single line: exec treats embedded newlines as separate commands)
await this.executeCommand(
  `mongodump --uri="${process.env.MONGODB_URI}" --out ${backupPath} --gzip`
);
// Compress backup
const archivePath = `${backupPath}.tar.gz`;
await this.executeCommand(`tar -czf ${archivePath} -C ${backupPath} .`);
// Upload to S3
const fileStream = fs.createReadStream(archivePath);
const uploadParams = {
Bucket: this.bucket,
Key: `mongodb-backups/backup-${timestamp}.tar.gz`,
Body: fileStream,
};
await this.s3.upload(uploadParams).promise();
// Cleanup local files
await this.executeCommand(`rm -rf ${backupPath} ${archivePath}`);
console.log(`Backup completed: backup-${timestamp}.tar.gz`);
} catch (error) {
console.error('Backup failed:', error);
throw error;
}
}
async restoreBackup(backupKey) {
const tempPath = `/tmp/restore-${Date.now()}`;
try {
// Download from S3
const downloadParams = {
Bucket: this.bucket,
Key: backupKey,
};
const data = await this.s3.getObject(downloadParams).promise();
fs.writeFileSync(`${tempPath}.tar.gz`, data.Body);
// Extract backup
await this.executeCommand(`
mkdir -p ${tempPath} &&
tar -xzf ${tempPath}.tar.gz -C ${tempPath}
`);
// Restore with mongorestore (single line for exec)
await this.executeCommand(
  `mongorestore --uri="${process.env.MONGODB_URI}" --drop ${tempPath}`
);
// Cleanup
await this.executeCommand(`rm -rf ${tempPath} ${tempPath}.tar.gz`);
console.log('Restore completed successfully');
} catch (error) {
console.error('Restore failed:', error);
throw error;
}
}
executeCommand(command) {
return new Promise((resolve, reject) => {
exec(command, (error, stdout, stderr) => {
if (error) {
reject(error);
} else {
resolve(stdout);
}
});
});
}
}
// Schedule automated backups
const cron = require('node-cron');
const backupService = new MongoBackupService();
// Daily backup at 2 AM
cron.schedule('0 2 * * *', async () => {
try {
await backupService.createBackup();
} catch (error) {
console.error('Scheduled backup failed:', error);
// Send alert notification
}
});
Point-in-Time Recovery
Oplog Backup
# Continuous incremental oplog backup
# Dump entries newer than the previous run; querying for entries newer than
# "now" would capture nothing, so persist the last timestamp between runs.
LAST_TS=$(cat /backup/oplog/last_ts 2>/dev/null || echo 0)
while true; do
NOW=$(date +%s)
mongodump --host replica-set/primary:27017 \
--db local \
--collection oplog.rs \
--query '{"ts":{"$gt":{"$timestamp":{"t":'$LAST_TS',"i":1}}}}' \
--out /backup/oplog/$(date +%Y%m%d_%H%M%S)
echo $NOW > /backup/oplog/last_ts
LAST_TS=$NOW
sleep 300 # Every 5 minutes
done
Recovery Process
# 1. Restore from full backup
mongorestore --drop /backup/full/20240120
# 2. Apply oplog entries up to specific point
mongorestore --oplogReplay \
--oplogLimit "1642680000:1" \
/backup/oplog/
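--oplogLimit takes a BSON timestamp as <unix-seconds>:<increment>. A small sketch for deriving it from a human-readable recovery point:
// Convert a target recovery moment into the seconds:increment form
const target = new Date('2024-01-20T12:00:00Z');
const oplogLimit = `${Math.floor(target.getTime() / 1000)}:1`;
console.log(oplogLimit); // "1705752000:1"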
Performance Optimization
Query Optimization
// Use explain() to analyze queries
db.orders.find({ customerId: ObjectId('...'), status: 'pending' }).explain('executionStats');
// Create optimal indexes
db.orders.createIndex({ customerId: 1, status: 1, createdAt: -1 });
// Use aggregation pipeline for complex queries
db.orders.aggregate([
{ $match: { status: 'completed', createdAt: { $gte: new Date('2024-01-01') } } },
{ $group: { _id: '$customerId', totalSpent: { $sum: '$total' } } },
{ $sort: { totalSpent: -1 } },
{ $limit: 10 },
]);
Connection Pooling
// Optimized connection configuration
const client = new MongoClient(uri, {
maxPoolSize: 100, // Maximum connections
minPoolSize: 10, // Minimum connections
maxIdleTimeMS: 300000, // Close connections after 5 minutes idle
serverSelectionTimeoutMS: 5000, // How long to try selecting a server
socketTimeoutMS: 45000, // How long a send or receive on a socket can take
heartbeatFrequencyMS: 10000, // How often to check server status
// Note: bufferCommands/bufferMaxEntries are Mongoose settings, not
// MongoClient options; the 4.x+ driver rejects unknown options.
});
Write Concern Optimization
// Balance consistency and performance
// Production default: writes acknowledged by a majority of the replica set
const durableWrite = {
  writeConcern: {
    w: 'majority',
    j: true, // Wait for journal
    wtimeout: 5000, // Timeout after 5 seconds
  },
};
// High-throughput scenarios: reduce durability for speed
const fastWrite = {
  writeConcern: {
    w: 1, // Only primary acknowledgment
    j: false, // Don't wait for journal
    wtimeout: 1000,
  },
};
// Use appropriate write concern per operation
await db.orders.insertOne(order, { writeConcern: { w: 'majority', j: true } });
await db.logs.insertOne(logEntry, { writeConcern: { w: 1 } });
What's Next?
You've learned how to deploy MongoDB in production environments. Next, explore Monitoring and Maintenance to learn about ongoing operational practices, or dive into Advanced Topics for expert-level MongoDB features.
Series Navigation
- Previous: MongoDB with Node.js Integration
- Next: MongoDB Monitoring and Maintenance
- Hub: MongoDB Zero to Hero - Complete Guide
This is Part 8 of the MongoDB Zero to Hero series. Production deployment is critical for real-world applications - follow these practices to ensure reliability, security, and performance.