Network Optimization - Bits on the Wire
Learning Objectives
Analyze network overhead in XRPL transaction propagation and confirmation
Optimize connection management for high-throughput applications
Design geographic deployment strategies that minimize propagation latency
Implement transaction compression and encoding optimizations
Configure network parameters for different deployment scenarios
In Lesson 2, we established that consensus takes ~2,500ms—a fixed cost of deterministic finality. But the remaining ~1,400ms of a typical 3,900ms transaction includes:
- Network submission: 50-300ms
- Transaction propagation: 200-500ms
- Confirmation broadcast: 200-500ms
This is the latency you can actually influence.
For ODL payments where every second matters, or DEX trading where speed provides advantage, optimizing these network components provides real competitive value.
XRPL uses a gossip protocol for transaction propagation:
Transaction Propagation Flow:

```
Time 0ms: Client submits to Server A

          ┌─────────┐
          │Server A │
          └────┬────┘
               │
Time 50ms: Server A broadcasts to peers

          ┌────┴────┬──────────────┐
          │         │              │
     ┌────▼────┐ ┌──▼───┐    ┌─────▼────┐
     │Server B │ │Srv C │    │Server D  │
     └────┬────┘ └──┬───┘    └─────┬────┘
          │         │              │
Time 100ms: Each server broadcasts to its peers

          │         │              │
     ┌────▼────┬────▼──────────────▼────┐
     │       Network-wide spread        │
     └──────────────────────────────────┘

Time 200-400ms: Transaction reaches all validators
```
Propagation Characteristics:
Factor | Impact on Latency | Notes
--------------------|-------------------|------------------
Number of hops | +50-100ms/hop | Typically 2-4 hops
Geographic distance | +50-150ms | Speed of light
Server connectivity | ±100ms | Well-connected = faster
Network congestion | +0-500ms | Variable
Message size        | +0-50ms           | Larger = slower

Current Protocol:

Message flow for one transaction:
1. Client → Server (submit): 1 message
2. Server → Peers (relay): N messages (N = peer count)
3. Each peer relays to its peers: up to N² messages in the worst case
4. Relay continues until every node has the transaction

Redundancy: each node may receive the same transaction
from multiple peers, so deduplication is required.

Efficiency: with deduplication, roughly O(N log N) messages for N nodes.
Better than O(N²) naive broadcast; worse than O(N) optimal routing.
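The effect of relay deduplication can be seen in a toy simulation. This is a sketch, not real XRPL topology: the node count, peer degree, and random graph model are all illustrative assumptions. Each node relays a transaction to its peers only on first receipt, and we count total messages against the naive N² upper bound.

```javascript
// Sketch: count gossip messages on a random peer graph with relay
// deduplication. Node count and peer degree are illustrative only.
function buildPeerGraph(nodeCount, peersPerNode) {
  const peers = Array.from({ length: nodeCount }, () => new Set());
  for (let i = 0; i < nodeCount; i++) {
    while (peers[i].size < peersPerNode) {
      const j = Math.floor(Math.random() * nodeCount);
      if (j !== i) { peers[i].add(j); peers[j].add(i); } // undirected link
    }
  }
  return peers;
}

// Flood with dedup: a node relays to its peers only on FIRST receipt.
function floodMessages(peers, origin = 0) {
  const seen = new Set([origin]);
  let queue = [origin];
  let messages = 0;
  while (queue.length > 0) {
    const next = [];
    for (const node of queue) {
      for (const peer of peers[node]) {
        messages++; // every relay is one network message
        if (!seen.has(peer)) { seen.add(peer); next.push(peer); }
      }
    }
    queue = next;
  }
  return { messages, reached: seen.size };
}

const peers = buildPeerGraph(200, 10);
const { messages, reached } = floodMessages(peers);
console.log(`reached ${reached}/200 nodes with ${messages} messages (naive N² bound: ${200 * 200})`);
```

With dedup, message count tracks the number of links rather than N², which is why deduplication before relay saves bandwidth without adding latency.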
Optimization Opportunities:
Strategy | Bandwidth Saved | Latency Impact
----------------------|-----------------|----------------
Dedup before relay | 30-50% | None
Bloom filter announce | 20-30% | +10-20ms
Compact encoding | 40-60% | -5-10ms
Direct validator conn | N/A | -50-100ms
How to Measure:

```javascript
// Submit the same transaction to one server, subscribe on several others,
// and observe when each reports the transaction as validated.
async function measurePropagation(txBlob) {
  const servers = [
    'wss://s1.ripple.com',    // US East
    'wss://s2.ripple.com',    // US West
    'wss://xrpl.ws',          // Europe
    'wss://xrplcluster.com',  // Multi-region
  ];
  const startTime = Date.now();
  const results = await Promise.all(
    servers.map(async server => {
      const client = new xrpl.Client(server);
      await client.connect();
      // Subscribe to the validated-transaction stream
      await client.request({
        command: 'subscribe',
        streams: ['transactions'],
      });
      // Submit txBlob only to the first server, then record
      // when each subscribed server sees it in its stream
      return {
        server,
        propagationTime: Date.now() - startTime,
      };
    })
  );
  return results;
}
```
WebSocket:
- Persistent connection
- Bidirectional communication
- Low overhead after handshake
- Supports subscriptions (real-time updates)

Typical costs:
- Handshake: 50-150ms (one-time)
- Message latency: 1-5ms (after connecting)
- Reconnection: ~100-300ms

Best for: applications needing real-time updates or high-frequency submission.
HTTP (JSON-RPC):
- Request-response only
- New connection per request (or pooled)
- Higher overhead
- Simpler implementation

Typical costs:
- Per-request overhead: 50-100ms
- No subscription capability; polling required for updates

Best for: simple integrations and infrequent queries.
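The trade-off above is easy to quantify with a rough cost model. The per-request and handshake figures below are assumed averages taken from the ranges quoted in this section, not measurements.

```javascript
// Sketch: rough total-latency model for N requests over per-request HTTP
// vs a single persistent WebSocket connection.
function totalLatencyMs(requests, { setupMs, perRequestMs }) {
  return setupMs + requests * perRequestMs;
}

const http = totalLatencyMs(100, { setupMs: 0, perRequestMs: 75 });  // ~50-100ms each
const ws = totalLatencyMs(100, { setupMs: 100, perRequestMs: 3 });   // one handshake, then 1-5ms each

console.log(`100 requests: HTTP ${http}ms, WebSocket ${ws}ms`);
```

At 100 requests, the one-time WebSocket handshake is amortized away almost immediately, which is why persistent connections dominate for anything beyond occasional queries.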
High-Throughput Connection Pool:

```javascript
class XRPLConnectionPool {
  constructor(servers, connectionsPerServer = 5) {
    this.pools = new Map();
    for (const server of servers) {
      this.pools.set(server, {
        connections: [],
        roundRobin: 0,
      });
      // Pre-establish connections (fire-and-forget here; await these
      // calls before first use if you need the pool warm)
      for (let i = 0; i < connectionsPerServer; i++) {
        this.addConnection(server);
      }
    }
  }

  async addConnection(server) {
    const client = new xrpl.Client(server);
    await client.connect();
    client.on('disconnected', () => {
      this.reconnect(server, client);
    });
    this.pools.get(server).connections.push(client);
  }

  getConnection(preferredServer = null) {
    const pool = preferredServer
      ? this.pools.get(preferredServer)
      : this.selectBestPool();
    // Round-robin selection
    const idx = pool.roundRobin % pool.connections.length;
    pool.roundRobin++;
    return pool.connections[idx];
  }

  selectBestPool() {
    // Select the pool with lowest latency / highest availability;
    // implementation depends on your monitoring data
  }
}
```
Pool Sizing Guidelines:
Use Case | Connections/Server | Servers | Total
----------------------|--------------------|---------|---------
Low volume (<10 TPS) | 2 | 2 | 4
Medium (10-100 TPS) | 5 | 3 | 15
High (100-500 TPS) | 10 | 4 | 40
Very High (500+ TPS) | 20 | 5+ | 100+
Each connection can handle ~50-100 TPS sustainably.
More connections = more concurrent requests.
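A simple heuristic captures the sizing table above. The 50 TPS-per-connection figure is the conservative end of the quoted range, and the 2x headroom factor is an assumption; the table's larger totals come from spreading connections across multiple servers for redundancy.

```javascript
// Sketch: minimum pool size for a target throughput, assuming ~50 TPS
// sustained per connection (conservative end of the 50-100 TPS range)
// and a 2x headroom multiplier for bursts.
function connectionsNeeded(targetTps, { tpsPerConnection = 50, headroom = 2 } = {}) {
  return Math.ceil(targetTps / tpsPerConnection) * headroom;
}

console.log(connectionsNeeded(100)); // 4: a floor, before multi-server redundancy
```

Treat the result as a floor, not a target: add servers (and their connections) for failover, per the table.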
Geographic Selection:

```
User Location → Optimal Servers

US East: s1.ripple.com, xrplcluster.com (us-east)
US West: s2.ripple.com, xrplcluster.com (us-west)
Europe:  xrpl.ws, xrplcluster.com (eu-west)
Asia:    xrplcluster.com (ap-east), self-hosted
```

Latency reduction: 50-200ms by using geographically close servers.
Load Balancing:

```javascript
class ServerSelector {
  constructor(servers) {
    this.servers = servers.map(s => ({
      url: s.url,
      region: s.region,
      latency: 0,
      errorRate: 0,
      lastCheck: 0,
    }));
  }

  async healthCheck() {
    for (const server of this.servers) {
      const start = Date.now();
      try {
        // ping: any cheap request, e.g. server_info
        await this.ping(server.url);
        server.latency = Date.now() - start;
        server.errorRate *= 0.9; // decay old errors
      } catch (e) {
        server.errorRate = Math.min(1, server.errorRate + 0.2);
      }
      server.lastCheck = Date.now();
    }
  }

  selectServer(userRegion) {
    // Prefer servers in the user's region with a low error rate
    const regional = this.servers.filter(
      s => s.region === userRegion && s.errorRate < 0.3
    );
    const candidates = regional.length > 0 ? regional : this.servers;
    // Pick the lowest latency with an acceptable error rate
    return candidates
      .filter(s => s.errorRate < 0.5)
      .sort((a, b) => a.latency - b.latency)[0];
  }
}
```
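The selection policy can be exercised standalone. The server entries below are illustrative placeholders (example.net hostnames), not real endpoints; the function mirrors the region-then-latency logic described above.

```javascript
// Self-contained sketch of the selection policy: prefer same-region servers
// with a low error rate, then pick the lowest-latency healthy candidate.
function selectServer(servers, userRegion) {
  const healthy = servers.filter(s => s.errorRate < 0.5);
  const regional = healthy.filter(s => s.region === userRegion && s.errorRate < 0.3);
  const candidates = regional.length > 0 ? regional : healthy;
  return candidates.sort((a, b) => a.latency - b.latency)[0];
}

const servers = [
  { url: 'wss://s1.example.net', region: 'us-east', latency: 30, errorRate: 0.0 },
  { url: 'wss://eu.example.net', region: 'eu-west', latency: 110, errorRate: 0.0 },
  { url: 'wss://bad.example.net', region: 'us-east', latency: 5, errorRate: 0.9 },
];
console.log(selectServer(servers, 'us-east').url); // → wss://s1.example.net
```

Note the unhealthy us-east server is skipped despite its lower latency: error rate gates the candidate set before latency is compared.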
Current XRPL Encoding:
Transaction formats:
JSON (human readable):

```
{
  "Account": "rN7n3473SaZBCG4dFL83w7a1RXtXtbk2D9",
  "Amount": "1000000",
  "Destination": "rfkE1aSy9G8Upk4JssnwBxhEv5p4mn2KTy",
  "Fee": "12",
  "Flags": 0,
  "Sequence": 1,
  "TransactionType": "Payment",
  "SigningPubKey": "...",
  "TxnSignature": "..."
}
```

Size: ~450 bytes

Binary (canonical):
Size: ~200 bytes (56% smaller)
Always use binary for submission - JSON is for display only.
Transaction Compression:
Technique | Compression | CPU Cost | When to Use
---------------------|-------------|----------|-------------
None | 0% | None | Single transactions
Gzip | 60-70% | Low | Batch submissions
Brotli | 70-80% | Medium | High latency networks
Custom (field-aware) | 50-60% | Low | Protocol level
For single transactions: Don't compress (overhead > savings)
For batches (10+ tx): Gzip provides good trade-off
Batch Submission:

```javascript
// Instead of 100 individual submissions:
for (const tx of transactions) {
  await client.submit(tx); // 100 round trips
}

// Batch submission (if your infrastructure supports it):
const batch = transactions.map(tx => tx.tx_blob);
const compressed = gzip(JSON.stringify(batch)); // e.g. zlib.gzipSync
await client.submitBatch(compressed); // 1 round trip
```

Results:
- Network round trips: 100 → 1
- Bandwidth: ~45KB → ~15KB (with compression)
- Total latency: ~5,000ms → ~100ms
Minimal Response Requests:

```javascript
// Full response (default):
const full = await client.request({
  command: 'account_info',
  account: 'r...',
});
// Returns ~2KB of JSON

// Leaner request: pin the ledger and ask only for what you need
const minimal = await client.request({
  command: 'account_info',
  account: 'r...',
  ledger_index: 'validated',
  // Specify only the fields you need (where the API supports it)
});

// Cache aggressively for read-heavy workloads
```
Multi-Region Deployment:

```
┌─────────────────────────────────────────────┐
│            Global Load Balancer             │
│         (Route by user geography)           │
└──────┬───────────────┬───────────────┬──────┘
       │               │               │
┌──────▼─────┐  ┌──────▼─────┐  ┌──────▼─────┐
│  US-EAST   │  │   EUROPE   │  │    ASIA    │
│            │  │            │  │            │
│ ┌────────┐ │  │ ┌────────┐ │  │ ┌────────┐ │
│ │App Srv │ │  │ │App Srv │ │  │ │App Srv │ │
│ └───┬────┘ │  │ └───┬────┘ │  │ └───┬────┘ │
│     │      │  │     │      │  │     │      │
│ ┌───▼────┐ │  │ ┌───▼────┐ │  │ ┌───▼────┐ │
│ │ XRPL   │ │  │ │ XRPL   │ │  │ │ XRPL   │ │
│ │ Server │ │  │ │ Server │ │  │ │ Server │ │
│ └────────┘ │  │ └────────┘ │  │ └────────┘ │
└──────┬─────┘  └──────┬─────┘  └──────┬─────┘
       │               │               │
       └───────────────┼───────────────┘
                       │
             ┌─────────▼─────────┐
             │   XRPL Network    │
             │   (Validators)    │
             └───────────────────┘
```

Example for a US user:
- US user → EU server → XRPL: 150ms + 3,900ms = 4,050ms
- US user → US server → XRPL: 30ms + 3,900ms = 3,930ms

Savings: 120ms, consistently, on every transaction.
When to Run Your Own:
Scenario | Own Server? | Reason
-------------------------|-------------|-------------------------
Low volume (<10 TPS) | No | Public servers sufficient
Medium volume (10-100) | Maybe | Depends on latency needs
High volume (100+) | Yes | Avoid rate limits, control
Latency critical | Yes | Eliminate external hops
Regulatory requirements | Yes | Data residency
High availability needed | Yes | Control uptime
XRPL Server Configuration for Performance:

```
# rippled.cfg optimizations
[server]
port_rpc_admin_local
port_peer
port_ws_public
[port_ws_public]
ip=0.0.0.0
port=6006
protocol=wss
admin=
# Increase peer connections for better propagation
[peers_max]
50
# Larger transaction queue
[transaction_queue]
minimum_txn_in_ledger_standalone=1000
minimum_txn_in_ledger=1000
target_txn_in_ledger=10000
# Memory settings for performance
[node_size]
huge
[ledger_history]
256 # Keep recent ledgers in memory
```
Safe to cache:
- account_info (with a specific ledger_index)
- ledger (historical)
- transaction (by hash)
- server_info

Never cache:
- submit (transactions)
- subscribe (streams)
- account_info (current)
- book_offers (live order book)
Cache Strategy:

```javascript
class XRPLCache {
  constructor(client, ttlSeconds = 5) {
    this.client = client; // connected xrpl.Client used for upstream fetches
    this.cache = new Map();
    this.ttl = ttlSeconds * 1000;
  }

  async get(key, fetchFn) {
    const cached = this.cache.get(key);
    if (cached && Date.now() - cached.timestamp < this.ttl) {
      return cached.value;
    }
    const value = await fetchFn();
    this.cache.set(key, { value, timestamp: Date.now() });
    return value;
  }

  // For account_info at a specific ledger (immutable, safe to cache)
  async getHistoricalAccount(account, ledgerIndex) {
    const key = `account:${account}:${ledgerIndex}`;
    return this.get(key, () =>
      this.client.request({
        command: 'account_info',
        account,
        ledger_index: ledgerIndex,
      })
    );
  }
}
```
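The cache-hit path is worth verifying in isolation. This is a stripped-down sketch of the same get(key, fetchFn) pattern; fetchCount and the fake fetcher are test scaffolding, not part of the pattern itself.

```javascript
// Self-contained sketch: two reads inside the TTL window trigger only one
// upstream fetch; the second is served from the in-memory cache.
class TtlCache {
  constructor(ttlMs = 5000) {
    this.cache = new Map();
    this.ttlMs = ttlMs;
  }
  async get(key, fetchFn) {
    const hit = this.cache.get(key);
    if (hit && Date.now() - hit.timestamp < this.ttlMs) return hit.value;
    const value = await fetchFn();
    this.cache.set(key, { value, timestamp: Date.now() });
    return value;
  }
}

async function demo() {
  let fetchCount = 0;
  const cache = new TtlCache(5000);
  const fetchBalance = async () => { fetchCount++; return '1000000'; };
  await cache.get('account:rEXAMPLE', fetchBalance);
  await cache.get('account:rEXAMPLE', fetchBalance);
  return fetchCount; // 1: second read served from cache
}
demo().then(n => console.log(`upstream fetches: ${n}`));
```

Keep the TTL short (seconds, not minutes) for anything that can change between ledgers; see the "never cache" list above.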
Message Overhead Analysis:

Single Payment Transaction:

Submit request:
- WebSocket frame: 6 bytes
- JSON wrapper: ~50 bytes
- Transaction blob: ~200 bytes
- Total: ~256 bytes

Submit response:
- WebSocket frame: 6 bytes
- JSON response: ~300 bytes
- Total: ~306 bytes

Validation notification:
- WebSocket frame: 6 bytes
- JSON notification: ~500 bytes
- Total: ~506 bytes

Total bandwidth per transaction: ~1,068 bytes
At 1,500 TPS: ~1.6 MB/s per client connection
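The headline figure follows directly from the per-message byte counts. A small helper makes the arithmetic explicit and reusable for other throughput targets:

```javascript
// Sketch: client-side bandwidth estimate from per-transaction byte counts.
function bandwidthMBps(bytesPerTx, tps) {
  return (bytesPerTx * tps) / 1e6; // decimal megabytes per second
}

const bytesPerTx = 256 + 306 + 506; // request + response + validation notification
console.log(`${bytesPerTx} B/tx at 1500 TPS ≈ ${bandwidthMBps(bytesPerTx, 1500).toFixed(1)} MB/s`);
```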
Binary Protocol (Potential Amendment):

```
Current JSON:
{"command":"submit","tx_blob":"12000022..."}
~100 bytes overhead

Binary equivalent:
[0x01][2-byte length][blob]
~4 bytes overhead

Savings: 96 bytes per message
At 1,500 TPS: ~144 KB/s saved

Status: not currently implemented; would require protocol changes.
```
Transaction Relay Optimization:

```
Current: full transaction relayed to all peers
Proposed: transaction hash announced first

- Server A receives transaction
- Server A broadcasts hash to peers (32 bytes)
- Peers check whether they already have it
- Only peers without it request the full transaction

Savings: eliminates redundant full-transaction relays
(~50% bandwidth reduction in relay traffic)
```
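A back-of-envelope model shows where the savings come from. The 200-byte transaction and 32-byte hash match the figures in this lesson; the fraction of peers that already hold the transaction is an assumption (it depends on topology and timing).

```javascript
// Sketch: relay bytes with and without hash-first announcement, for one
// transaction at a node with `peerCount` peers. alreadyHaveFraction is the
// assumed share of peers that received the tx from another path.
function relayBytes(peerCount, { txBytes = 200, hashBytes = 32, alreadyHaveFraction = 0.5 } = {}) {
  const fullRelay = peerCount * txBytes; // push the full tx to every peer
  const needFull = Math.round(peerCount * (1 - alreadyHaveFraction));
  const hashFirst = peerCount * hashBytes + needFull * txBytes; // announce, then serve requests
  return { fullRelay, hashFirst };
}

const { fullRelay, hashFirst } = relayBytes(10);
console.log(`full relay: ${fullRelay} B, hash-first: ${hashFirst} B`);
```

The announcement round adds a small latency cost (one extra request for peers that lack the transaction), which is the same trade-off the Bloom-filter row in the optimization table describes.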
Connection Parameters:

```javascript
const client = new xrpl.Client('wss://s1.ripple.com', {
  // Increase timeouts for stability
  connectionTimeout: 10000,
  // Enable compression if supported
  perMessageDeflate: true,
  // Keep-alive for long-lived connections
  keepAlive: true,
  keepAliveInterval: 30000,
});

// Handle reconnection gracefully
client.on('disconnected', (code) => {
  if (code !== 1000) { // abnormal close
    setTimeout(() => client.connect(), 1000);
  }
});
```
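The fixed 1,000ms retry above can hammer a server that is struggling; exponential backoff with jitter is gentler and avoids reconnect stampedes. The base delay and cap below are illustrative choices, not prescribed values.

```javascript
// Sketch: exponential backoff with "equal jitter" for reconnect attempts.
// Delay doubles per attempt up to a cap; the random half spreads out
// simultaneous reconnects from many clients.
function backoffDelayMs(attempt, { baseMs = 500, capMs = 30000 } = {}) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return exp / 2 + Math.random() * (exp / 2);
}

for (let attempt = 0; attempt < 5; attempt++) {
  console.log(`attempt ${attempt}: wait ~${Math.round(backoffDelayMs(attempt))}ms`);
}
```

To use it, replace the `setTimeout(() => client.connect(), 1000)` call with `setTimeout(() => client.connect(), backoffDelayMs(attempt++))` and reset the attempt counter on a successful connect.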
✅ Connection pooling eliminates handshake overhead—persistent connections are faster
✅ Binary encoding is 50%+ smaller than JSON—always use binary for submission
✅ Running your own server eliminates external dependencies—better control and latency
⚠️ Public server rate limits under high load—not well documented
⚠️ Long-term WebSocket connection stability—varies by provider
📌 Relying solely on public servers for production—single point of failure
📌 Aggressive caching of live data—can cause stale reads
📌 Complex multi-region without proper failover—increases failure modes
Network optimization can save 200-400ms per transaction—meaningful but not transformational. The biggest wins come from geographic deployment (50-200ms) and connection management (50-100ms). For most applications, using persistent WebSocket connections to geographically close public servers is sufficient. Running your own XRPL server is worthwhile for high-volume or latency-critical applications.
Assignment: Design a network architecture for a high-throughput XRPL application.
Requirements:
1. Measure latency to 5+ public XRPL servers from your location
2. Document connection establishment time
3. Calculate bandwidth requirements for your target TPS
4. Design geographic deployment for global users
5. Specify connection pool configurations
6. Plan server selection and failover strategy
7. Document WebSocket configuration parameters
8. Design a caching strategy for your use case
9. Specify monitoring and alerting
10. Calculate the latency improvement from each optimization
11. Estimate infrastructure costs
12. Recommend a prioritized implementation order

Grading:
- Accurate latency measurements (25%)
- Sound architecture design (25%)
- Practical implementation details (25%)
- Realistic cost-benefit analysis (25%)

Time investment: 3 hours
1. What is the typical latency savings from geographic deployment?
A) 5-10ms
B) 50-200ms
C) 500-1000ms
D) 2000ms+
Correct Answer: B
2. Why should transactions always be submitted in binary format?
A) Binary is more secure
B) Binary is ~50% smaller than JSON, reducing bandwidth and latency
C) JSON is not supported for submission
D) Binary enables compression
Correct Answer: B
3. How many connections per server are typically sufficient for 100 TPS?
A) 1-2 connections
B) 5-10 connections
C) 50+ connections
D) Connections don't affect throughput
Correct Answer: B
4. When does running your own XRPL server become worthwhile?
A) Always
B) Never - public servers are better
C) Above ~100 TPS or when latency/control is critical
D) Only for validators
Correct Answer: C
5. What percentage of transaction latency can network optimization realistically reduce?
A) 5-10% (200-400ms of ~4,000ms)
B) 50% (half of total time)
C) 90% (almost all latency is network)
D) 0% (network can't be optimized)
Correct Answer: A
- XRPL server configuration documentation
- rippled peer protocol specification
- WebSocket API reference
- High Performance Browser Networking (O'Reilly)
- CloudFlare networking blog posts
- AWS/GCP/Azure global infrastructure documentation
For Next Lesson:
Lesson 8 covers Application-Level Optimization—writing efficient XRPL application code.
End of Lesson 7
Total words: ~6,000
Estimated completion time: 55 minutes reading + 3 hours for deliverable
Key Takeaways

Geographic deployment provides the biggest network win: 50-200ms saved by colocating with nearby XRPL servers. Route users to regional deployments.

Persistent WebSocket connections eliminate handshake overhead: each new connection costs 50-150ms. Pool and reuse connections.

Binary encoding cuts transaction size in half: always submit transactions in binary format, not JSON. JSON is for display only.

Connection pools should match throughput needs: size pools for your expected TPS, with headroom for bursts.

Running your own server makes sense above 100 TPS: it eliminates rate limits, reduces latency, and provides operational control.

---