Application-Level Optimization - Writing Fast XRPL Code
Learning Objectives
- Implement efficient transaction construction that minimizes CPU and memory overhead
- Design batching strategies that maximize throughput while meeting latency requirements
- Build pre-signing pipelines that eliminate signing latency from the critical path
- Apply async programming patterns specific to XRPL's request-response model
- Architect caching systems appropriate for XRPL's consistency guarantees
You can deploy the fastest servers, optimize every network path, and run dedicated XRPL nodes—but if your application code is inefficient, you'll waste most of that capacity. The most common application-level mistakes are:
- Sequential submissions when parallel is possible
- Rebuilding transactions instead of cloning
- Blocking on confirmations unnecessarily
- Querying chain state that could be cached
- Ignoring async patterns that hide latency
This lesson provides the patterns to avoid these mistakes.
Inefficient Pattern:
```javascript
// Creating new objects for each transaction
async function sendPayments(destinations) {
for (const dest of destinations) {
const tx = {
TransactionType: 'Payment',
Account: wallet.address,
Amount: '1000000',
Destination: dest,
Fee: '12',
Sequence: await getSequence(wallet.address), // Network call!
};
await client.submit(tx);
}
}
// Problem: getSequence() called N times
// Problem: Fee hardcoded (may be wrong)
// Problem: Sequential submission
```
Efficient Pattern:
```javascript
// Reuse base transaction, manage sequence locally
class TransactionFactory {
constructor(wallet, client) {
this.wallet = wallet;
this.client = client;
this.sequence = null;
this.baseFee = null;
}
async initialize() {
// Single network call for initial state
const [accountInfo, serverInfo] = await Promise.all([
this.client.request({
command: 'account_info',
account: this.wallet.address,
}),
this.client.request({ command: 'server_info' }),
]);
this.sequence = accountInfo.result.account_data.Sequence;
this.baseFee = serverInfo.result.info.validated_ledger.base_fee_xrp;
}
createPayment(destination, amount) {
const tx = {
TransactionType: 'Payment',
Account: this.wallet.address,
Amount: String(amount),
Destination: destination,
Fee: this.calculateFee(),
Sequence: this.sequence++, // Local increment
};
return tx;
}
calculateFee() {
// Apply multiplier for reliable inclusion
return String(Math.ceil(this.baseFee * 1.2 * 1000000));
}
}
// Usage
const factory = new TransactionFactory(wallet, client);
await factory.initialize(); // One network call
const transactions = destinations.map(dest =>
factory.createPayment(dest, 1000000)
);
// No network calls during construction
```
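The fee math in `calculateFee` works in XRP and converts to drops at the end, which invites floating-point rounding surprises. A variant that works in drops throughout is easier to reason about (`feeWithHeadroom` is an illustrative name, not part of xrpl.js; 10 drops is the long-standing XRPL reference transaction cost):

```javascript
// Illustrative variant of the fee calculation: scale a reference fee,
// already expressed in drops, by a 1.2x headroom multiplier.
function feeWithHeadroom(baseFeeDrops, multiplier = 1.2) {
  return String(Math.ceil(baseFeeDrops * multiplier));
}

console.log(feeWithHeadroom(10)); // "12", matching the hardcoded Fee: '12' earlier
```

Either form works at typical fee levels; the drops-first version simply avoids multiplying two small floats before rounding.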
Per-transaction time breakdown (typical figures):
- Build transaction: ~1ms
- Sign transaction: ~5ms (ECDSA) or ~1ms (Ed25519)
- Submit: ~100ms
- Wait for confirmation: ~3,900ms

At 100 transactions per second with ECDSA:
- Signing alone consumes 500ms of CPU every second
- That leaves only 500ms for everything else
- Signing becomes the bottleneck!
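The bottleneck arithmetic, written out (the per-signature figures are the illustrative estimates from the breakdown above, not measurements):

```javascript
// Back-of-envelope signing cost at a target throughput
const targetTPS = 100;      // transactions per second
const ecdsaMsPerSign = 5;   // ~5 ms per ECDSA signature
const ed25519MsPerSign = 1; // ~1 ms per Ed25519 signature

const ecdsaCpuMsPerSec = targetTPS * ecdsaMsPerSign;     // CPU ms spent signing, per second
const ed25519CpuMsPerSec = targetTPS * ed25519MsPerSign;

console.log(ecdsaCpuMsPerSec, ed25519CpuMsPerSec); // 500 100
```

At ~5ms per signature, a single thread tops out around 200 ECDSA signatures per second; pre-signing moves that cost off the critical path entirely.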
Pre-Signing Solution:
```javascript
class PreSigningPipeline {
constructor(wallet, client, bufferSize = 100) {
this.wallet = wallet;
this.client = client;
this.signedQueue = [];
this.bufferSize = bufferSize;
this.running = false;
}
async start() {
this.running = true;
this.refillLoop();
}
async refillLoop() {
while (this.running) {
if (this.signedQueue.length < this.bufferSize) {
await this.prepareTransaction();
} else {
await sleep(10); // Queue full, wait
}
}
}
async prepareTransaction() {
// Build generic transaction template
const tx = this.createTemplate();
// Sign in background (expensive operation)
const signed = this.wallet.sign(tx);
this.signedQueue.push({
template: tx,
signed: signed,
createdAt: Date.now(),
});
}
getPreSigned() {
return this.signedQueue.shift();
}
}
// Usage: Signing happens in background
const pipeline = new PreSigningPipeline(wallet, client);
await pipeline.start();
// When ready to send, signing is already done
const presigned = pipeline.getPreSigned();
await client.submit(presigned.signed); // No signing delay
```
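Note that `refillLoop` above (and the result streamer later in this lesson) calls `sleep`, which is not a JavaScript built-in; a minimal Promise-based helper:

```javascript
// Promise-based delay helper used by the polling loops in this lesson
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Example: pause roughly 10 ms inside any async function
async function demo() {
  const start = Date.now();
  await sleep(10);
  return Date.now() - start;
}
```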
The Challenge:
Each account has a sequence number that must increment.
If you submit sequence N before N-1 confirms, N may fail.
If you wait for each confirmation, throughput is limited.
Optimistic Sequence Management:
```javascript
class SequenceManager {
constructor(client, account) {
this.client = client;
this.account = account;
this.localSequence = null;
this.confirmedSequence = null;
this.pendingTransactions = new Map();
}
async initialize() {
const info = await this.client.request({
command: 'account_info',
account: this.account,
});
this.localSequence = info.result.account_data.Sequence;
this.confirmedSequence = this.localSequence;
}
getNextSequence() {
return this.localSequence++;
}
async trackTransaction(txHash, sequence) {
this.pendingTransactions.set(sequence, txHash);
// Subscribe to transaction result
// NOTE: this registers a new listener per transaction and never removes it;
// production code should use one shared listener keyed by hash
this.client.on('transaction', (event) => {
if (event.transaction.hash === txHash) {
this.confirmedSequence = Math.max(
this.confirmedSequence,
sequence + 1
);
this.pendingTransactions.delete(sequence);
}
});
}
// Handle failed transactions
async handleFailure(sequence, error) {
if (error.result === 'tefPAST_SEQ') {
// Transaction already confirmed, update tracking
this.confirmedSequence = Math.max(
this.confirmedSequence,
sequence + 1
);
} else if (error.result === 'terPRE_SEQ') {
// Previous transaction not confirmed yet, retry later
return 'RETRY';
} else {
// Permanent failure, may need sequence reset
await this.initialize();
}
}
}
```
Batching Trade-offs:
Batch Size | Throughput | Latency | When to Use
-----------|------------|--------------|-------------------
1 | Low | Lowest | Real-time, single tx
10 | Medium | Low | Near-real-time
100 | High | Medium | Batch processing
1000 | Highest | High | Bulk operations
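A rough model behind this table, assuming batches accumulate for `maxWait` milliseconds, are submitted in parallel, and confirm in about 4 seconds (all figures illustrative):

```javascript
// Rough throughput/latency model for batching (illustrative figures only)
function batchModel(batchSize, maxWaitMs, confirmMs = 4000) {
  // Worst case: a transaction arrives just as a window opens,
  // waits the full window, then waits for confirmation.
  const worstLatencyMs = maxWaitMs + confirmMs;
  // Sustained throughput: one full batch per accumulation window.
  const throughputTPS = batchSize / (maxWaitMs / 1000);
  return { worstLatencyMs, throughputTPS };
}

console.log(batchModel(10, 1000));  // { worstLatencyMs: 5000, throughputTPS: 10 }
console.log(batchModel(100, 5000)); // { worstLatencyMs: 9000, throughputTPS: 20 }
```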
Decision Framework:
```javascript
function determineBatchStrategy(requirements) {
const { maxLatency, targetTPS, transactionType } = requirements;
if (maxLatency < 5000) {
// Real-time: minimal batching
return { batchSize: 1, maxWait: 0 };
}
if (maxLatency < 10000) {
// Near-real-time: small batches
return { batchSize: 10, maxWait: 1000 };
}
if (targetTPS > 100) {
// High throughput: larger batches
return {
batchSize: Math.min(1000, Math.ceil(targetTPS / 10)),
maxWait: 5000
};
}
// Default: balanced
return { batchSize: 50, maxWait: 2000 };
}
```
Accumulate transactions and flush on whichever comes first: a full batch or a timeout.

```javascript
class BatchAccumulator {
constructor(options = {}) {
this.batchSize = options.batchSize || 100;
this.maxWait = options.maxWait || 5000;
this.onBatch = options.onBatch || (() => {});
this.pending = [];
this.timer = null;
}
add(transaction) {
return new Promise((resolve, reject) => {
this.pending.push({ transaction, resolve, reject });
if (this.pending.length >= this.batchSize) {
this.flush();
} else if (!this.timer) {
this.timer = setTimeout(() => this.flush(), this.maxWait);
}
});
}
async flush() {
if (this.timer) {
clearTimeout(this.timer);
this.timer = null;
}
if (this.pending.length === 0) return;
const batch = this.pending.splice(0, this.batchSize);
try {
const results = await this.onBatch(
batch.map(b => b.transaction)
);
batch.forEach((item, i) => {
if (results[i].success) {
item.resolve(results[i]);
} else {
item.reject(results[i].error);
}
});
} catch (error) {
batch.forEach(item => item.reject(error));
}
}
}
// Usage
const batcher = new BatchAccumulator({
batchSize: 50,
maxWait: 2000,
onBatch: async (transactions) => {
// Submit all transactions in parallel
return Promise.all(transactions.map(tx =>
client.submit(tx).catch(e => ({ success: false, error: e }))
));
}
});
// Add transactions - they batch automatically
await batcher.add(tx1); // May wait up to 2s for batch
await batcher.add(tx2);
```
Submit many transactions in parallel under a concurrency cap, retrying transient failures:

```javascript
class ParallelSubmitter {
constructor(client, options = {}) {
this.client = client;
this.concurrency = options.concurrency || 50;
this.retries = options.retries || 3;
this.active = 0;
this.queue = [];
}
async submit(transaction) {
return new Promise((resolve, reject) => {
this.queue.push({ transaction, resolve, reject, attempts: 0 });
this.processQueue();
});
}
async processQueue() {
while (this.active < this.concurrency && this.queue.length > 0) {
const item = this.queue.shift();
this.active++;
this.executeSubmission(item)
.then(result => {
this.active--;
item.resolve(result);
this.processQueue(); // Process next
})
.catch(error => {
this.active--;
if (item.attempts < this.retries && this.isRetryable(error)) {
item.attempts++;
this.queue.unshift(item); // Retry at front
} else {
item.reject(error);
}
this.processQueue();
});
}
}
async executeSubmission(item) {
const result = await this.client.submit(item.transaction);
if (result.result.engine_result !== 'tesSUCCESS') {
throw new Error(result.result.engine_result);
}
return result;
}
isRetryable(error) {
const retryableCodes = ['terQUEUED', 'telCAN_NOT_QUEUE'];
return retryableCodes.includes(error.message);
}
}
```
A staged pipeline keeps validation, preparation, signing, and submission composable:

```javascript
class TransactionPipeline {
constructor(client, wallet) {
this.client = client;
this.wallet = wallet;
this.stages = [];
}
addStage(name, processor) {
this.stages.push({ name, processor });
return this;
}
async process(input) {
let result = input;
for (const stage of this.stages) {
result = await stage.processor(result);
}
return result;
}
}
// Build reusable pipeline
const pipeline = new TransactionPipeline(client, wallet)
.addStage('validate', async (tx) => {
if (!tx.Account || !tx.Destination) {
throw new Error('Invalid transaction');
}
return tx;
})
.addStage('prepare', async (tx) => {
return await client.autofill(tx);
})
.addStage('sign', async (tx) => {
return wallet.sign(tx);
})
.addStage('submit', async (signed) => {
return await client.submitAndWait(signed.tx_blob);
});
// Use pipeline
const result = await pipeline.process({
TransactionType: 'Payment',
Account: wallet.address,
Destination: 'r...',
Amount: '1000000',
});
```
An async generator can stream confirmation results as they arrive instead of waiting for the whole batch:

```javascript
async function* streamTransactionResults(client, txHashes) {
const pending = new Map();
// Subscribe to transactions
for (const hash of txHashes) {
pending.set(hash, { status: 'pending', result: null });
}
client.on('transaction', (event) => {
const hash = event.transaction.hash;
if (pending.has(hash)) {
pending.get(hash).status = 'complete';
pending.get(hash).result = event;
}
});
// Yield results as they arrive
while (pending.size > 0) {
for (const [hash, data] of pending) {
if (data.status === 'complete') {
pending.delete(hash);
yield { hash, result: data.result };
}
}
if (pending.size > 0) {
await sleep(100); // Check again
}
}
}
// Usage
const hashes = await submitBatch(transactions);
for await (const result of streamTransactionResults(client, hashes)) {
console.log(`Transaction ${result.hash} confirmed`);
// Process each result as it arrives
}
```
Fire-and-forget done safely: submit without blocking, but record every hash and time out stragglers:

```javascript
class FireAndForgetSubmitter {
constructor(client) {
this.client = client;
this.submitted = new Map();
this.callbacks = new Map();
this.startResultListener();
}
startResultListener() {
this.client.on('transaction', (event) => {
const hash = event.transaction.hash;
if (this.callbacks.has(hash)) {
const callback = this.callbacks.get(hash);
callback(null, event);
this.callbacks.delete(hash);
this.submitted.delete(hash);
}
});
}
submit(transaction, callback) {
// Don't await - fire and continue
this.client.submit(transaction.tx_blob)
.then(result => {
const hash = result.result.tx_json.hash;
this.submitted.set(hash, {
submittedAt: Date.now(),
transaction
});
this.callbacks.set(hash, callback);
})
.catch(error => {
callback(error, null);
});
}
// Check for timeouts periodically
checkTimeouts(maxAge = 60000) {
const now = Date.now();
for (const [hash, data] of this.submitted) {
if (now - data.submittedAt > maxAge) {
const callback = this.callbacks.get(hash);
if (callback) {
callback(new Error('Transaction timeout'), null);
}
this.submitted.delete(hash);
this.callbacks.delete(hash);
}
}
}
}
```
Caching Decision Matrix:
Data Type | Cacheable? | TTL | Invalidation
---------------------|------------|-----------|---------------
Account info (seq) | Short | 1 ledger | On transaction
Account info (bal) | Short | 1 ledger | On transaction
Server info | Medium | 5-10s | Timer
Historical ledger | Forever | Never | None (immutable)
Historical tx | Forever | Never | None (immutable)
Current book offers | Short | 1 ledger | On order book change
Fee estimate | Medium | 5-10s | Timer

```javascript
class LedgerAwareCache {
constructor(client) {
this.client = client;
this.cache = new Map();
this.currentLedger = 0;
this.subscribeToLedgers();
}
subscribeToLedgers() {
this.client.on('ledgerClosed', (ledger) => {
this.currentLedger = ledger.ledger_index;
this.invalidateStale();
});
}
set(key, value, options = {}) {
this.cache.set(key, {
value,
ledger: this.currentLedger,
expiresAt: options.ttl
? Date.now() + options.ttl
: null,
immutable: options.immutable || false,
});
}
get(key) {
const entry = this.cache.get(key);
if (!entry) return null;
// Immutable entries never expire
if (entry.immutable) return entry.value;
// Check TTL
if (entry.expiresAt && Date.now() > entry.expiresAt) {
this.cache.delete(key);
return null;
}
// Check ledger staleness (1 ledger grace period)
if (entry.ledger < this.currentLedger - 1) {
this.cache.delete(key);
return null;
}
return entry.value;
}
invalidateStale() {
for (const [key, entry] of this.cache) {
if (!entry.immutable && entry.ledger < this.currentLedger - 1) {
this.cache.delete(key);
}
}
}
// Cache historical data forever
cacheHistorical(key, value) {
this.set(key, value, { immutable: true });
}
}
```
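The staleness rule shared by `get` and `invalidateStale` is easy to verify in isolation: an entry survives the ledger it was written in plus one more.

```javascript
// The ledger-staleness rule from LedgerAwareCache, extracted for clarity:
// an entry written at entryLedger is stale once more than one ledger has closed.
function isStale(entryLedger, currentLedger) {
  return entryLedger < currentLedger - 1;
}

console.log(isStale(100, 100)); // false: written this ledger
console.log(isStale(100, 101)); // false: one-ledger grace period
console.log(isStale(100, 102)); // true: two ledgers behind, invalidate
```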
Multiple concurrent callers asking for the same data should share one network round trip:

```javascript
class RequestDeduplicator {
constructor(client) {
this.client = client;
this.inflightRequests = new Map();
}
async request(params) {
const key = JSON.stringify(params);
// If identical request is in-flight, wait for it
if (this.inflightRequests.has(key)) {
return this.inflightRequests.get(key);
}
// Create new request
const promise = this.executeRequest(params);
this.inflightRequests.set(key, promise);
try {
const result = await promise;
return result;
} finally {
this.inflightRequests.delete(key);
}
}
async executeRequest(params) {
return this.client.request(params);
}
}
// Usage: Multiple calls to same endpoint share one request
const dedup = new RequestDeduplicator(client);
// These two calls share one actual network request
const [result1, result2] = await Promise.all([
dedup.request({ command: 'server_info' }),
dedup.request({ command: 'server_info' }),
]);
```
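To see the deduplication effect without a live server, a stub client can count how many requests it actually serves (`FakeClient` and `Dedup` below are illustrative stand-ins; `Dedup` condenses the same pattern as `RequestDeduplicator` above):

```javascript
// Stub client that counts how many requests it really serves
class FakeClient {
  constructor() { this.calls = 0; }
  request(params) {
    this.calls++;
    return new Promise((resolve) => setTimeout(() => resolve({ ok: true }), 10));
  }
}

// Same in-flight sharing pattern as RequestDeduplicator, condensed
class Dedup {
  constructor(client) { this.client = client; this.inflight = new Map(); }
  request(params) {
    const key = JSON.stringify(params);
    if (!this.inflight.has(key)) {
      const p = this.client.request(params)
        .finally(() => this.inflight.delete(key));
      this.inflight.set(key, p);
    }
    return this.inflight.get(key);
  }
}

async function demo() {
  const fake = new FakeClient();
  const dedup = new Dedup(fake);
  await Promise.all([
    dedup.request({ command: 'server_info' }),
    dedup.request({ command: 'server_info' }),
  ]);
  return fake.calls; // both callers shared one request
}

demo().then((calls) => console.log(calls)); // 1
```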
```javascript
// BAD: Sequential submission
async function sendPayments(payments) {
const results = [];
for (const payment of payments) {
const result = await submitAndWait(payment); // Waits ~4s each
results.push(result);
}
return results; // 100 payments = 400 seconds!
}
// GOOD: Parallel submission
async function sendPayments(payments) {
return Promise.all(payments.map(payment =>
submitAndWait(payment)
)); // 100 payments = ~4 seconds
}
```
```javascript
// BAD: Fire and forget without tracking
async function sendPayments(payments) {
for (const payment of payments) {
client.submit(payment); // No await, no tracking
}
return 'done'; // No idea what actually happened
}
// GOOD: Track everything
async function sendPayments(payments) {
const results = await Promise.allSettled(
payments.map(payment => submitAndWait(payment))
);
const succeeded = results.filter(r => r.status === 'fulfilled');
const failed = results.filter(r => r.status === 'rejected');
if (failed.length > 0) {
console.error(`${failed.length} payments failed:`, failed);
// Handle failures appropriately
}
return { succeeded: succeeded.length, failed: failed.length };
}
```
```javascript
// BAD: Query chain for every transaction
async function sendPayment(destination, amount) {
const accountInfo = await client.request({
command: 'account_info',
account: wallet.address
}); // Network call
const serverInfo = await client.request({
command: 'server_info'
}); // Another network call
const tx = {
TransactionType: 'Payment',
Account: wallet.address,
Sequence: accountInfo.result.account_data.Sequence,
Fee: calculateFee(serverInfo.result),
// ...
};
return client.submit(tx);
}
// 3 network calls per transaction!
// GOOD: Cache and reuse
class OptimizedSubmitter {
constructor(client, wallet) {
this.client = client;
this.wallet = wallet;
this.sequence = null;
this.fee = null;
this.lastRefresh = 0;
}
async ensureFresh() {
if (Date.now() - this.lastRefresh < 5000) return;
const [accountInfo, serverInfo] = await Promise.all([
this.client.request({ command: 'account_info', account: this.wallet.address }),
this.client.request({ command: 'server_info' }),
]);
this.sequence = accountInfo.result.account_data.Sequence;
this.fee = calculateFee(serverInfo.result);
this.lastRefresh = Date.now();
}
async sendPayment(destination, amount) {
await this.ensureFresh(); // Only queries if stale
const tx = {
TransactionType: 'Payment',
Account: this.wallet.address,
Sequence: this.sequence++,
Fee: this.fee,
// ...
};
return this.client.submit(tx);
}
}
```
✅ Pre-signing eliminates signing from critical path—useful for ECDSA accounts
✅ Caching reduces unnecessary network calls—significant for read-heavy applications
✅ Batching improves throughput at latency cost—trade-off is application-specific
⚠️ Cache invalidation timing—too aggressive = stale data, too conservative = high latency
⚠️ Sequence number management edge cases—complex failure scenarios
📌 Optimistic sequence management causing failures—needs robust error handling
📌 Caching mutable data too long—can cause inconsistencies
📌 Fire-and-forget without tracking—silent failures
Application-level optimization can provide 2-10× improvement in effective throughput without infrastructure changes. The biggest wins come from parallelization (waiting less) and caching (querying less). These are standard software engineering practices applied to XRPL's specific patterns—nothing exotic, just disciplined implementation.
Assignment: Build a payment processing application demonstrating optimization techniques.
Requirements:
- Transaction factory with sequence management
- Parallel submitter with backpressure
- Ledger-aware cache
- Batching with configurable parameters
- Pre-signing pipeline (optional background signing)
- Request deduplication
- Retry logic for transient failures
- Sequence recovery on failures
- Timeout handling

Benchmarking:
- Compare sequential vs. parallel submission
- Measure throughput at different batch sizes
- Document optimization impact

Grading:
- Correct implementation of patterns (25%)
- Robust error handling (25%)
- Measurable performance improvement (25%)
- Clean, maintainable code (25%)
Time investment: 4-5 hours
1. What is the primary benefit of parallel transaction submission?
A) Lower fees
B) Higher throughput by not waiting for each confirmation
C) Better security
D) Simpler code
Correct Answer: B
2. When should you invalidate cached account_info data?
A) Never - it's immutable
B) Every second
C) When a new ledger closes or the account transacts
D) Only on application restart
Correct Answer: C
3. What's the risk of optimistic sequence number management?
A) Higher fees
B) Transactions may fail if previous transactions don't confirm in expected order
C) Security vulnerabilities
D) Network congestion
Correct Answer: B
4. Why is fire-and-forget submission an antipattern?
A) It's slower than waiting
B) It uses more memory
C) Silent failures - you don't know if transactions succeeded
D) It's not supported by XRPL
Correct Answer: C
5. What's the optimal batching strategy for a real-time payment application with <5s latency requirement?
A) Batch size 1000, wait 10 seconds
B) Batch size 1, no waiting (real-time)
C) Batch size 100, wait 5 seconds
D) Batching is always better
Correct Answer: B
- "Designing Data-Intensive Applications" (O'Reilly)
- Node.js async patterns documentation
- xrpl.js library source code and examples
- JavaScript performance optimization guides
- Memory management in Node.js
- Connection pooling best practices
For Next Lesson:
Lesson 9 covers Validator & Node Optimization—infrastructure-level tuning for XRPL servers.
End of Lesson 8
Estimated completion time: 55 minutes reading + 4-5 hours for deliverable
Key Takeaways
- Parallel submission is the biggest win: Don't wait for one transaction to confirm before sending the next. Submit in parallel and track results asynchronously.
- Cache aggressively, invalidate correctly: Historical data is cacheable forever. Live data needs ledger-aware invalidation.
- Manage sequences locally: Don't query the network for sequence numbers. Track them locally with proper error handling.
- Batch when latency tolerance allows: Larger batches mean higher throughput but higher latency. Match the batch size to your requirements.
- Track everything: Fire-and-forget leads to silent failures. Always track transactions to completion.