
Plots - Cryptographic Storage Container


Overview

A plot is a cryptographic storage container implementing a streaming-optimized architecture with embedded proof-of-work. Each plot stores data capsules and is uniquely identified by a SHA-256 hash tied to its owner through Chia Public Synthetic Key signatures.

This encoding creates unforgeable storage commitments that prove data is physically stored on node-controlled infrastructure rather than proxied from external sources, preventing storage fraud and binding ownership to specific DIG Nodes.

Streaming-First Architecture

The implementation is built entirely around streaming operations with 64KB chunks, ensuring constant memory usage regardless of plot size. Plots can scale to 256TB while maintaining sub-second access times and never loading full plots into memory.
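The constant-memory claim follows directly from chunked reads: only one 64KB buffer is ever resident, regardless of file size. A minimal sketch of the idea (the function name and use of SHA-256 here are illustrative, not the actual plot API):

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # the 64KB chunk size described above

def hash_plot_stream(path: str) -> bytes:
    """Hash a plot file chunk-by-chunk; peak memory stays at one
    64KB buffer no matter how large the plot file grows."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest.update(chunk)
    return digest.digest()
```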

Current Proof of Work Foundation

The integrated Proof of Work component ensures storage verification requires significant computational time, deliberately exceeding typical HTTP request latency. This temporal requirement guarantees that data was pre-stored locally, eliminating real-time proxy attacks and establishing authentic data custody.

Dual Binding Security: Computational work is cryptographically bound to both the plot identifier and capsule content hash, preventing work reuse across different plots or data sets.

Proposed Evolution: Dual Farming Architecture

In potential future collaboration with Chia community experts, the system could evolve to leverage Chia's Proof of Space and Time consensus mechanism. Such an integration would let DIG Nodes "dual farm" their existing Chia farming plots, simultaneously:

  • Farming Chia for blockchain consensus rewards
  • Proving DIG storage for data persistence verification
  • Farming Peer Assignments for Witness Miners

This proposed dual-purpose approach would maximize infrastructure efficiency by repurposing existing Chia farming hardware to provide cryptographic proof of unique data storage, creating a potential symbiotic relationship between Chia's consensus layer and DIG's storage verification protocol.

The envisioned result would be an economically efficient system where storage providers could monetize the same physical infrastructure across both networks while providing mathematically verifiable proof of data ownership and availability, pending technical feasibility and community collaboration.

Technical Architecture

Flexible Table Structure

Plots use a configurable table structure with data and proof tables:

  • Data Tables (odd indices: 1, 3, 5): Store actual capsule content with metadata
  • Proof Tables (even indices: 0, 2, 4, 6): Store cryptographic proofs and security padding

interface PlotTable {
  index: number;           // Table number
  type: 'PROOF' | 'DATA';  // Table type
  hash: Buffer;            // Table hash after proof-of-work
  data: Buffer;            // Table content
  prevTableHash: Buffer;   // Previous table hash (chaining)
  nonce: number;           // Proof-of-work nonce
  workDifficulty: number;  // Achieved difficulty
  createdAt: number;       // Creation timestamp
  dataSize: number;        // Size of table data
}

Dual-Binding Proof-of-Work

Each table requires proof-of-work with dual binding to both plot and capsule content:

# Dual binding prevents work reuse across plots/capsules
hash_input = (
    table_data_commitment +  # Commitment to table data (32 bytes)
    previous_hash +          # Previous table hash (32 bytes)
    str(nonce).encode() +    # Current nonce attempt
    public_key +             # Plot public key (48 bytes)
    combined_capsule_hash    # Combined capsule hash for dual binding
)

candidate = hashlib.sha256(hash_input).digest()

# Verify difficulty: count leading zero bits
is_valid = crypto_utils.verify_work_difficulty(candidate, difficulty)
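The difficulty check above is described as counting leading zero bits of the candidate digest. A minimal sketch of such a verifier (the internals of crypto_utils are not shown in the source, so this is an assumed implementation):

```python
def leading_zero_bits(digest: bytes) -> int:
    """Count how many bits of the digest are zero before the first 1 bit."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()  # zeros before the first set bit
            break
    return bits

def verify_work_difficulty(candidate: bytes, difficulty: int) -> bool:
    """A candidate meets the difficulty if it has at least
    `difficulty` leading zero bits."""
    return leading_zero_bits(candidate) >= difficulty
```

Each extra bit of difficulty doubles the expected number of nonce attempts, which is what keeps the work deliberately slower than an HTTP round trip.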

Plot Identifiers

Plot Seed Commitment (pre-work identifier):

PlotSeedCommitment = SHA-256(
    publicKey       ||  // 32 bytes
    merkleRoot      ||  // 32 bytes
    chiaBlockHeight ||  // 8 bytes
    chiaBlockHash       // 32 bytes
)

PlotId (final identifier):

PlotId = SHA-256(
    publicKey       ||  // 32 bytes
    merkleRoot      ||  // 32 bytes
    workNonce       ||  // 4-32 bytes (proof-of-work solution)
    chiaBlockHeight ||  // 8 bytes
    chiaBlockHash       // 32 bytes
)

Streaming Capsule Storage

Capsule Format

Each capsule is serialized with complete metadata:

# Capsule serialization format
capsule_data = (
    id_length +        # 2 bytes: ID length
    id_buffer +        # Variable: Capsule ID
    data_size +        # 4 bytes: Data size
    metadata_length +  # 2 bytes: Metadata length
    metadata_buffer +  # Variable: JSON metadata
    capsule_hash +     # 32 bytes: SHA-256 of content
    timestamp +        # 8 bytes: Creation timestamp
    capsule.data       # Variable: Actual capsule data
)

Capsule Location Tracking

class CapsuleLocation:
    def __init__(self):
        self.capsule_id: str
        self.capsule_hash: bytes        # SHA-256 of capsule content
        self.table_index: int           # Which table contains this capsule
        self.data_offset: int           # Offset within table
        self.data_size: int             # Capsule size
        self.metadata: CapsuleMetadata  # Complete capsule metadata
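Because the location record stores a table index, offset, and size, a capsule can be fetched with a single seek-and-read instead of scanning the plot. A sketch (passing table_offset directly is a simplification; in practice it would come from the plot's index section):

```python
def read_capsule_bytes(plot_path: str, table_offset: int,
                       data_offset: int, data_size: int) -> bytes:
    """Seek to the table start plus the in-table offset and read
    exactly data_size bytes; memory cost is one capsule, not one plot."""
    with open(plot_path, "rb") as f:
        f.seek(table_offset + data_offset)
        return f.read(data_size)
```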

Cryptographic Proof of Living Storage

Plots enable four core proof types:

1. Plot Ownership Proof

class PlotOwnershipProof:
    def __init__(self):
        self.public_key: bytes     # Owner's public key
        self.merkle_root: bytes    # Plot's merkle root
        self.difficulty: int       # Claimed difficulty
        self.block_height: int     # Chia anchor block
        self.block_hash: bytes     # Chia anchor hash
        self.signature: bytes      # DataLayer signature
        self.signed_message: bytes # Message that was signed

2. Data Inclusion Proof

class DataInclusionProof:
    def __init__(self):
        self.merkle_path: List[bytes]  # Sibling hashes for verification
        self.path_directions: bytes    # Packed left/right directions
        self.leaf_index: int           # Position in Merkle tree
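A data inclusion proof is checked by folding the sibling hashes back up to the Merkle root. The sketch below unpacks direction bits LSB-first, which is an assumption; the source only says directions are packed:

```python
import hashlib

def verify_merkle_path(leaf_hash: bytes, merkle_path: list,
                       path_directions: bytes, root: bytes) -> bool:
    """Recompute the root from a leaf and its sibling hashes."""
    node = leaf_hash
    for i, sibling in enumerate(merkle_path):
        bit = (path_directions[i // 8] >> (i % 8)) & 1
        if bit:  # direction bit set: sibling sits on the left
            node = hashlib.sha256(sibling + node).digest()
        else:    # otherwise the sibling sits on the right
            node = hashlib.sha256(node + sibling).digest()
    return node == root
```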

3. Computational Work Proof

class ComputationalWorkProof:
    def __init__(self):
        self.nonce: bytes           # Found nonce
        self.table_data_hash: bytes # Table data hash
        self.previous_hash: bytes   # Previous table hash
        self.work_difficulty: int   # Achieved difficulty

4. Physical Access Proof

class PhysicalAccessProof:
    def __init__(self):
        self.block_height: int                    # Current Chia block
        self.block_hash: bytes                    # Current Chia hash
        self.chunk_indices: List[int]             # Selected chunks
        self.chunk_data: List[bytes]              # Actual chunk data
        self.chunk_proofs: List[MerklePathProof]  # Merkle proofs
        self.capsule_root_hash: bytes             # Capsule's merkle root
        self.total_chunks: int                    # Total chunks in capsule
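A physical access proof samples chunks the prover cannot predict by deriving indices from a recent Chia block hash. The derivation below is illustrative only, not the protocol's actual sampling rule:

```python
import hashlib

def select_challenge_chunks(block_hash: bytes, total_chunks: int,
                            sample_count: int) -> list:
    """Derive deterministic but unpredictable chunk indices from a
    block hash; verifier and prover compute the same set."""
    indices = []
    counter = 0
    while len(indices) < min(sample_count, total_chunks):
        seed = hashlib.sha256(block_hash + counter.to_bytes(4, "big")).digest()
        idx = int.from_bytes(seed[:8], "big") % total_chunks
        if idx not in indices:  # skip duplicate draws
            indices.append(idx)
        counter += 1
    return indices
```

Because the indices depend on a block hash published only moments before the challenge, the chunk data must already be on local disk to answer in time.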

Security Properties

Attack Prevention

  • Storage Credit Theft: Dual-binding proof-of-work prevents claiming unearned storage credits
  • Plot Forgery: Cryptographic binding to public key prevents impersonation
  • Data Spoofing: Merkle verification ensures data integrity
  • Work Reuse: Dual binding prevents computational work reuse across plots/capsules
  • Memory Exhaustion: Streaming architecture prevents memory-based attacks
  • Replay Attacks: Temporal anchoring prevents proof reuse

Cryptographic Guarantees

  • Collision Resistance: SHA-256 prevents PlotId collisions
  • Work Binding: Computational work tied to specific plot and capsule content
  • Temporal Binding: Chia blockchain anchoring provides ordering
  • Data Integrity: Merkle root captures all plot modifications
  • Non-repudiation: Chia-compatible BLS signatures bind proofs to the plot owner

Streaming File Format

6-Section Structure

| Section          | Offset   | Size     | Align  |
|------------------|----------|----------|--------|
| File Header      | 0x0000   | 64 B     | -      |
| Metadata Section | Variable | Variable | 4096 B |
| Index Section    | Variable | Variable | 4096 B |
| Table Section    | Variable | Variable | 4096 B |
| Data Section     | Variable | Variable | 4096 B |
| Verification     | Variable | Variable | 4096 B |

Streaming Performance Characteristics

  • Memory Usage: Constant 64KB regardless of plot size
  • Chunk Processing: 64KB default (configurable)
  • Throughput: 500+ MB/s sequential, 100+ MB/s random
  • Compression: LZ4 (~300 MB/s), ZSTD (~150 MB/s)
  • Max Plot Size: 256TB theoretical
  • Max Capsules: 4.2 billion per plot

Implementation Process

Streaming Creation Process

# Create plot from capsule stream
plot = await Plot.create(options)
await plot.create_from_stream(capsule_stream)

# Plot is sealed when ready
def on_sealed():
    print('Plot ready for validation')

plot.on('sealed', on_sealed)

Streaming Verification

# Verify proof package without loading plot
result = await verify_proof_package(compressed_proof_package, {
    'earliest_acceptable_block_height': current_height - 100,
    'min_difficulty': 1,
    'current_block_height': current_height
})

# Result contains detailed validation info
if result.is_valid:
    print('All proofs valid')
else:
    print('Validation errors:', result.errors)

Memory-Efficient Access

# Stream capsule without loading into memory
capsule_stream = await plot.stream_capsule(capsule_id)
async for chunk in capsule_stream:
    # Process 64KB chunks
    await process_chunk(chunk)

Performance Scaling

Streaming Architecture Benefits

  • Memory Efficiency: O(1) memory usage regardless of plot size
  • I/O Optimization: 4096-byte alignment for optimal disk performance
  • Compression Support: Multiple algorithms (GZIP, ZSTD, LZ4, Brotli)
  • Concurrent Access: Non-blocking streaming with backpressure
  • Terabyte Scale: Tested with plots up to 100TB

Implementation Limits

| Property              | Limit          | Implementation       |
|-----------------------|----------------|----------------------|
| Max plot size         | 256 TB         | bigint fileSize      |
| Max capsules per plot | 4.2 billion    | 32-bit addressing    |
| Max capsule size      | 4 GB           | Single capsule limit |
| Memory usage          | 64 KB constant | Streaming chunks     |
| Header size           | 64 bytes       | Minimal overhead     |

Network Integration

Plots interface with other DIG Network components: