Performance Considerations
This guide covers performance considerations and optimization strategies for TrustWeave applications.
Overview
TrustWeave is designed for high performance with:
- Async Operations – all I/O uses Kotlin coroutines
- Efficient Serialization – Kotlinx Serialization for JSON handling
- Minimal Dependencies – only essential dependencies
- Optimized Algorithms – efficient cryptographic operations
Performance Best Practices
Coroutine Usage
TrustWeave operations are async by default:
import kotlinx.coroutines.*

// Use coroutines for concurrent operations
suspend fun processMultiple() = coroutineScope {
    val results = listOf(
        async { TrustWeave.dids.create() },
        async { TrustWeave.dids.create() },
        async { TrustWeave.dids.create() }
    )
    results.awaitAll()
}
What this does: Launches the three DID creations concurrently and waits for all of them to complete.
Outcome: Lower total latency for I/O-bound operations.
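When a batch is large, unbounded concurrency can overwhelm downstream services (KMS, resolvers, RPC endpoints). The sketch below bounds concurrency with a kotlinx.coroutines Semaphore; the limit of 10 is an illustrative value, not a TrustWeave default.
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

// Bound concurrency so large batches do not overwhelm downstream services.
// The limit of 10 is an illustrative choice, not a TrustWeave default.
suspend fun createMany(count: Int) = coroutineScope {
    val limit = Semaphore(10)
    (1..count).map {
        async { limit.withPermit { TrustWeave.dids.create() } }
    }.awaitAll()
}
Outcome: Keeps throughput high without flooding the services behind TrustWeave.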
Batch Operations
Batch operations when possible:
// Batch credential issuance concurrently
val credentials = coroutineScope {
    subjects.map { subject ->
        async {
            TrustWeave.issueCredential(
                issuerDid = issuerDid.id,
                issuerKeyId = issuerKeyId,
                credentialSubject = subject
            )
        }
    }.awaitAll() // Wait for all credentials
}
Outcome: Issues the whole batch concurrently, reducing total wall-clock time compared to issuing credentials one at a time.
Connection Pooling
For database-backed services, use connection pooling:
import com.zaxxer.hikari.HikariDataSource

// Use HikariCP or similar for connection pooling
val dataSource = HikariDataSource().apply {
    jdbcUrl = "jdbc:postgresql://localhost/TrustWeave"
    maximumPoolSize = 10
}
Outcome: Reuses database connections for better performance.
Caching
Cache frequently accessed data:
import java.util.concurrent.ConcurrentHashMap

// Cache DID documents
val didCache = ConcurrentHashMap<String, DidDocument>()

suspend fun resolveWithCache(did: String): DidDocument {
    return didCache.getOrPut(did) {
        TrustWeave.dids.resolve(did).document!!
    }
}
Outcome: Reduces repeated DID resolution calls.
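If resolved documents can change over time, bound how long cached entries live. The sketch below layers a TTL on top of the same map-based approach; the 10-minute window and the CachedDocument wrapper are illustrative choices, not part of the TrustWeave API.
import java.util.concurrent.ConcurrentHashMap

// Illustrative TTL cache; the wrapper type and 10-minute window are not TrustWeave APIs
data class CachedDocument(val document: DidDocument, val cachedAt: Long)

const val TTL_MILLIS = 10 * 60 * 1000L
val timedCache = ConcurrentHashMap<String, CachedDocument>()

suspend fun resolveWithTtl(did: String): DidDocument {
    val now = System.currentTimeMillis()
    timedCache[did]?.let { cached ->
        if (now - cached.cachedAt < TTL_MILLIS) return cached.document
    }
    val document = TrustWeave.dids.resolve(did).document!!
    timedCache[did] = CachedDocument(document, now)
    return document
}
Outcome: Keeps resolution fast while limiting how stale a cached document can be.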
Memory Management
Resource Cleanup
Use use {} for automatic cleanup:
TrustWeaveTestFixture.builder().build().use { fixture ->
    // Use fixture
    // Automatic cleanup on exit
}
Outcome: Prevents resource leaks.
Large Payloads
For large payloads, consider streaming:
import kotlinx.serialization.json.JsonObject

// Stream large credential batches
fun processLargeBatch(subjects: Sequence<JsonObject>) {
    subjects.chunked(100).forEach { chunk ->
        // Process each chunk of 100 subjects
        processChunk(chunk)
    }
}
Outcome: Reduces memory usage for large datasets.
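When subjects arrive asynchronously, the same idea can be expressed as a Kotlin Flow. This is a minimal sketch that reuses issueCredential, issuerDid, and issuerKeyId from the batch example above; collect the flow with whatever back-pressure suits your pipeline.
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map
import kotlinx.serialization.json.JsonObject

// Issue credentials as subjects arrive, without holding the whole batch in memory.
// issuerDid and issuerKeyId are assumed to be in scope, as in the batch example above.
fun issueAsFlow(subjects: Flow<JsonObject>) =
    subjects.map { subject ->
        TrustWeave.issueCredential(
            issuerDid = issuerDid.id,
            issuerKeyId = issuerKeyId,
            credentialSubject = subject
        )
    }
Outcome: Memory stays bounded even when the number of subjects is unbounded.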
Network Optimization
HTTP Client Configuration
Configure HTTP clients for optimal performance:
import okhttp3.ConnectionPool
import okhttp3.OkHttpClient
import java.util.concurrent.TimeUnit

// Use OkHttp with connection pooling
val client = OkHttpClient.Builder()
    .connectionPool(ConnectionPool(10, 5, TimeUnit.MINUTES))
    .connectTimeout(30, TimeUnit.SECONDS)
    .readTimeout(30, TimeUnit.SECONDS)
    .build()
Outcome: Optimizes network connection reuse.
Retry Logic
Implement retry logic for network operations:
import kotlinx.coroutines.delay

suspend fun <T> retry(
    times: Int = 3,
    delayMillis: Long = 1000,
    block: suspend () -> T
): T {
    repeat(times - 1) {
        try {
            return block()
        } catch (e: Exception) {
            delay(delayMillis)
        }
    }
    return block()
}
Outcome: Handles transient network failures.
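Usage is a matter of wrapping the call; for example (a simple sketch, assuming `did` holds the identifier to resolve, and noting that production code may also want exponential backoff and to rethrow non-transient errors):
// Retry transient failures when resolving a DID
val resolution = retry(times = 3, delayMillis = 500) {
    TrustWeave.dids.resolve(did)
}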
Blockchain Optimization
Transaction Batching
Batch blockchain transactions when possible:
// Anchor multiple credentials concurrently
val results = coroutineScope {
    credentials.map { credential ->
        async { TrustWeave.anchor(credential, chainId) }
    }.awaitAll()
}
Outcome: Submits anchors concurrently instead of serially; where the anchoring backend supports grouping several proofs into one transaction, batching also reduces transaction fees.
Confirmation Strategy
Use appropriate confirmation strategies:
// For testnets, accept faster confirmations
val client = BlockchainAnchorClient(chainId, options) {
    confirmationStrategy = ConfirmationStrategy.Fastest
}

// For mainnet, wait for more confirmations
val mainnetClient = BlockchainAnchorClient(chainId, options) {
    confirmationStrategy = ConfirmationStrategy.Secure
}
Outcome: Balances speed and security.
Cryptographic Performance
Algorithm Selection
Choose algorithms based on performance needs:
// Ed25519 is faster than RSA
val fastKey = kms.generateKey(Algorithm.Ed25519)
// RSA is more compatible but slower
val compatibleKey = kms.generateKey(Algorithm.Rsa2048)
Outcome: Optimizes signing and verification speed.
Key Caching
Cache keys for repeated operations:
import java.util.concurrent.ConcurrentHashMap

// Cache keys per issuer
val issuerKeys = ConcurrentHashMap<String, Key>()

suspend fun getOrCreateKey(issuerDid: String): Key {
    return issuerKeys.getOrPut(issuerDid) {
        TrustWeave.createKey(issuerDid).getOrThrow()
    }
}
Outcome: Reduces key generation overhead.
Monitoring and Profiling
Performance Metrics
Collect performance metrics:
// Measure operation time
val start = System.currentTimeMillis()
val did = TrustWeave.dids.create()
val duration = System.currentTimeMillis() - start
println("DID creation took ${duration}ms")
Profiling
Use profiling tools to identify bottlenecks:
# Use JProfiler or similar
./gradlew test --profile
Scalability Considerations
Horizontal Scaling
Design for horizontal scaling:
// Stateless operations scale horizontally
val trustWeave = TrustWeave.create() // No shared state
Outcome: Enables horizontal scaling.
Load Balancing
Use load balancing for services:
// Multiple instances can share load
val instances = listOf(
    TrustWeave.create(),
    TrustWeave.create(),
    TrustWeave.create()
)
Outcome: Distributes load across instances.
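How requests are spread across those instances is up to your deployment (an external load balancer, a service mesh, or something in-process). As a simple in-process illustration, the round-robin selector below is not part of TrustWeave:
import java.util.concurrent.atomic.AtomicInteger

// Naive round-robin selection over the instances list above
val counter = AtomicInteger(0)

fun nextInstance() =
    instances[Math.floorMod(counter.getAndIncrement(), instances.size)]
Outcome: Each request is handled by a different instance, spreading the work evenly.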
Benchmarking
Performance Benchmarks
Create benchmarks for critical paths:
import kotlinx.coroutines.runBlocking
import kotlin.test.Test
import kotlin.test.assertTrue

// kotlin.test shown here; use your project's test framework annotations
@Test
fun benchmarkDidCreation() = runBlocking {
    val iterations = 1000
    val start = System.currentTimeMillis()
    repeat(iterations) {
        TrustWeave.dids.create()
    }
    val duration = System.currentTimeMillis() - start
    val avgTime = duration.toDouble() / iterations
    println("Average DID creation: ${avgTime}ms")
    assertTrue(avgTime < 100, "DID creation should average under 100ms")
}
Next Steps
- Review Error Handling for performance-aware error handling
- See Testing Strategies for performance testing
- Check module-specific documentation for performance tips
- Explore Architecture Overview for design considerations