
DIG Node


Overview

DIG Nodes are the storage backbone of the network, responsible for maintaining plots, serving content, and earning rewards through successful validations.

Core Architecture

class DIGNode:
    # Storage Management
    plot_manager: PlotManager      # Plot file operations
    capsule_store: CapsuleStore    # Capsule-level storage
    cache_layer: CacheManager      # Hot data caching

    # Network Services
    p2p_network: P2PProtocol       # Peer discovery/communication
    gun_p2p: GunP2PNetwork         # gun.js P2P metadata sharing
    http_server: HTTPServer        # Client API endpoints
    rpc_client: ChiaRPCClient      # Blockchain interaction

    # Proof Generation
    proof_engine: ProofGenerator   # Proof creation
    work_computer: WorkComputer    # PoW calculations

    # Economic Module
    reward_tracker: RewardTracker  # Earnings monitoring
    bribe_manager: BribeManager    # Network bribe handling

Storage Architecture

Plot File Format

// Plot file header (4KB aligned)
struct PlotHeader {
    uint32_t magic;              // 0x44494731 ("DIG1")
    uint32_t version;            // Format version
    uint64_t creation_time;      // Unix timestamp
    uint256_t plot_id;           // Unique identifier
    uint32_t capsule_count;      // Number of capsules
    uint32_t table_pointers[7];  // C1-C7 table offsets
    uint8_t padding[4016];       // Pads the 80 bytes of fields to 4KB
};

// Capsule storage record
struct CapsuleRecord {
    uint256_t datastore_id;      // Parent DataStore
    uint32_t capsule_index;      // Position in DataStore
    uint32_t capsule_size;       // Actual size (≤4GB)
    uint256_t capsule_hash;      // SHA256 of content
    uint64_t plot_offset;        // Byte offset in plot
};
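
For concreteness, here is a minimal parsing sketch derived from the header layout above; it assumes little-endian encoding and raw 32-byte uint256_t fields, neither of which is specified by the struct itself.

import struct

PLOT_MAGIC = 0x44494731  # "DIG1"

def parse_plot_header(raw: bytes) -> dict:
    # Fields in declaration order: magic, version, creation_time,
    # plot_id, capsule_count, table_pointers[7]
    magic, version = struct.unpack_from("<II", raw, 0)
    if magic != PLOT_MAGIC:
        raise ValueError("not a DIG plot file")
    (creation_time,) = struct.unpack_from("<Q", raw, 8)
    plot_id = raw[16:48]  # uint256_t kept as raw bytes
    (capsule_count,) = struct.unpack_from("<I", raw, 48)
    table_pointers = struct.unpack_from("<7I", raw, 52)  # C1-C7 offsets
    return {
        "version": version,
        "creation_time": creation_time,
        "plot_id": plot_id.hex(),
        "capsule_count": capsule_count,
        "table_pointers": list(table_pointers),
    }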

Storage Hierarchy

L1 Cache (RAM):    Active capsules (1-10GB)
├── Recent requests
├── Validation challenges
└── Prefetch buffer

L2 Cache (NVMe):   Frequent access (100GB-1TB)
├── Popular content
├── Active validations
└── Network bribes

L3 Storage (SSD):  Medium access (1-10TB)
├── Tier 1-3 content
├── Recent plots
└── Index structures

L4 Archive (HDD):  Cold storage (10TB+)
├── Tier 4-6 content
├── Historical data
└── Backup plots
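
A hypothetical placement helper makes this tiering policy concrete; the thresholds below are illustrative, not values taken from the node.

def pick_storage_layer(tier: int, accesses_per_day: float) -> str:
    """Map a capsule to one of the four layers above (illustrative policy)."""
    if accesses_per_day > 1000:
        return "l2_nvme"   # hot enough to keep on NVMe
    if tier <= 3:
        return "l3_ssd"    # Tier 1-3 content lives on SSD
    return "l4_hdd"        # Tier 4-6 and historical data go to HDD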

API Specifications

Content Retrieval Endpoints

GET /capsule/{datastore_id}/{capsule_index}
Response: Binary capsule data
Headers:
  X-Merkle-Proof: Base64-encoded proof
  X-Plot-ID: Plot identifier
  X-Capsule-Hash: SHA256 of content
  Content-Length: Capsule size in bytes
Status Codes:
  200: Success
  404: Capsule not found
  503: Temporarily unavailable

GET /datastore/{datastore_id}/manifest
Response: {
  "datastore_id": "0x...",
  "capsule_count": 1000,
  "merkle_root": "0x...",
  "total_size": 16777216000,
  "creation_time": 1234567890,
  "capsules": [
    {"index": 0, "hash": "0x...", "size": 16777216},
    ...
  ]
}

GET /handle/{handle}/resolve
Response: {
  "handle": "mydata",
  "datastore_id": "0x...",
  "tier": 4,
  "registration_time": 1234567890,
  "metadata": {...}
}
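
A short client sketch ties these endpoints together: it fetches one capsule and checks the X-Capsule-Hash header against the payload. The base URL is an assumption (8555 is one of the ports exposed by the Dockerfile later on this page, assumed here to be the HTTP API).

import hashlib
import requests

def fetch_capsule(node: str, datastore_id: str, index: int) -> bytes:
    resp = requests.get(f"{node}/capsule/{datastore_id}/{index}", timeout=30)
    resp.raise_for_status()
    data = resp.content
    # Recompute SHA256 and compare with the node's claimed hash
    claimed = resp.headers["X-Capsule-Hash"].removeprefix("0x")
    if hashlib.sha256(data).hexdigest() != claimed:
        raise ValueError("capsule hash mismatch")
    return data

capsule = fetch_capsule("http://localhost:8555", "0x123...", 42)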

Network Operations

POST /datastore/push
Body: {
  "datastore_id": "0x...",
  "capsules": [...],
  "merkle_tree": {...},
  "bribe_distributor": "0x..."  // optional
}
Response: {
  "accepted": true,
  "storage_estimate": "2024-01-01T00:00:00Z",
  "propagation_peers": ["node1", "node2", ...]
}

GET /network/peers
Response: {
  "peers": [
    {
      "node_id": "dig1...",
      "endpoint": "1.2.3.4:8444",
      "capabilities": ["storage", "retrieval"],
      "reputation": 0.95,
      "last_seen": "2024-01-01T00:00:00Z"
    },
    ...
  ]
}

POST /validation/challenge
Body: {
  "challenge_id": "chal_123",
  "capsule_hash": "0x...",
  "challenge_type": "physical_access",
  "validator_signature": "0x..."
}
Response: {
  "proof": "0x...",
  "generation_time_ms": 1250,
  "plot_id": "0x..."
}
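
On the node side, serving the challenge endpoint might look like the sketch below; verify_validator and proof_engine are placeholders for components this spec does not name.

import time

async def handle_challenge(body: dict) -> dict:
    # Reject challenges that aren't signed by a known validator
    if not verify_validator(body["validator_signature"], body["challenge_id"]):
        raise PermissionError("invalid validator signature")
    start = time.monotonic()
    proof, plot_id = await proof_engine.prove(
        body["capsule_hash"], body["challenge_type"])
    return {
        "proof": proof,
        "generation_time_ms": int((time.monotonic() - start) * 1000),
        "plot_id": plot_id,
    }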

Administrative Endpoints

GET /node/status
Response: {
  "node_id": "dig1...",
  "version": "1.0.0",
  "uptime": 864000,
  "plots": {
    "count": 42,
    "total_size": 4398046511104,
    "capsule_count": 262144
  },
  "network": {
    "peers": 127,
    "bandwidth_used": 1099511627776,
    "validations_24h": 1440
  },
  "earnings": {
    "total_dig": 1000.5,
    "total_bribes": 50.25,
    "pending_rewards": 10.0
  }
}

POST /plot/create
Body: {
  "datastore_id": "0x...",
  "capsule_indices": [0, 1, 2, ...],
  "compression": "zstd"
}
Response: {
  "plot_id": "0x...",
  "creation_time": "2024-01-01T00:00:00Z",
  "size_bytes": 107374182400,
  "capsule_count": 6400
}
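
As a usage sketch, an operator could poll /node/status for basic alerting. The port and both thresholds here are illustrative assumptions.

import time
import requests

def watch_node(base="http://localhost:8555", interval=60):
    while True:
        status = requests.get(f"{base}/node/status", timeout=10).json()
        peers = status["network"]["peers"]
        pending = status["earnings"]["pending_rewards"]
        if peers < 8:        # illustrative threshold
            print(f"warning: only {peers} peers connected")
        if pending > 100:    # illustrative threshold
            print(f"info: {pending} DIG pending, consider claiming")
        time.sleep(interval)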

Operational Modes

Archive Mode

Store everything, maximize rewards

config = {
    "mode": "archive",
    "accept_all": True,
    "tier_filter": None,
    "min_reward": 0,
    "compression": "zstd",
    "cache_size": "10%"
}

Cache Mode

Popular content only, optimize bandwidth

config = {
    "mode": "cache",
    "accept_all": False,
    "tier_filter": [1, 2, 3],
    "min_providers": 5,
    "ttl_hours": 168,
    "cache_size": "50%"
}

Selective Mode

Manual/algorithmic selection

config = {
    "mode": "selective",
    "whitelist": ["handle1", "handle2"],
    "blacklist": ["spam*"],
    "min_reward": 0.1,
    "max_size_gb": 100,
    "ai_filter": "content_value_model_v2"
}
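
To show how these settings might combine, here is a hedged gating sketch; the offer fields are assumptions for illustration, and glob-style blacklist matching is inferred from the "spam*" pattern above.

from fnmatch import fnmatch

def accept_datastore(offer: dict, config: dict) -> bool:
    handle = offer["handle"]
    if any(fnmatch(handle, pat) for pat in config["blacklist"]):
        return False                        # e.g. rejects "spam-2024"
    if handle in config["whitelist"]:
        return True                         # explicit allow wins
    if offer["reward"] < config["min_reward"]:
        return False
    return offer["size_gb"] <= config["max_size_gb"]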

Performance Optimization

Proof Generation Pipeline

import asyncio
from concurrent.futures import ThreadPoolExecutor
from os import cpu_count

class ProofPipeline:
    def __init__(self):
        self.cpu_pool = ThreadPoolExecutor(cpu_count())
        self.gpu_context = GPUContext() if cuda_available() else None

    async def generate_proofs(self, challenge):
        loop = asyncio.get_running_loop()
        # Run the three proof types in parallel on the CPU pool;
        # run_in_executor yields awaitable futures for asyncio.gather
        tasks = [
            loop.run_in_executor(self.cpu_pool, self.ownership_proof, challenge),
            loop.run_in_executor(self.cpu_pool, self.inclusion_proof, challenge),
            loop.run_in_executor(self.cpu_pool, self.access_proof, challenge),
        ]

        # GPU-accelerated PoW if available, otherwise fall back to CPU
        if self.gpu_context:
            pow_task = self.gpu_context.compute_pow(challenge)
        else:
            pow_task = loop.run_in_executor(self.cpu_pool, self.work_proof, challenge)

        tasks.append(pow_task)

        # Collect all four proofs concurrently
        proofs = await asyncio.gather(*tasks)
        return ProofBundle(proofs)

Caching Strategy

class AdaptiveCache:
    def __init__(self, total_memory_gb):
        self.l1_size = int(total_memory_gb * 0.1 * 1e9)  # 10% of RAM for L1
        self.l2_size = int(total_memory_gb * 0.3 * 1e9)  # 30% of RAM for L2

        self.l1_cache = LRUCache(self.l1_size)  # recency-based for hot data
        self.l2_cache = LFUCache(self.l2_size)  # frequency-based for warm data

    def get(self, capsule_key):
        # Check L1 first (hot data)
        if data := self.l1_cache.get(capsule_key):
            return data, "l1_hit"

        # Check L2 (warm data)
        if data := self.l2_cache.get(capsule_key):
            self.l1_cache.put(capsule_key, data)  # promote to L1
            return data, "l2_hit"

        # Fall through to disk and warm the L2 cache
        data = self.load_from_disk(capsule_key)
        self.l2_cache.put(capsule_key, data)
        return data, "disk_read"

Network Protocol

P2P Communication

message DIGNodeMessage {
  oneof payload {
    NodeAnnouncement announcement = 1;
    DataStoreGossip gossip = 2;
    CapsuleRequest request = 3;
    CapsuleResponse response = 4;
    ValidationChallenge challenge = 5;
    ProofResponse proof = 6;
  }
  bytes signature = 7;
  uint64 timestamp = 8;
}

message CapsuleRequest {
  bytes datastore_id = 1;
  uint32 capsule_index = 2;
  bytes requester_id = 3;
  uint32 max_size = 4;  // Bandwidth limit
}

message CapsuleResponse {
  bytes capsule_data = 1;
  bytes merkle_proof = 2;
  bytes plot_id = 3;
  uint32 compression = 4;  // 0=none, 1=lz4, 2=zstd
}
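
A hedged sketch of filling and signing the envelope, using hypothetical protoc-generated bindings (the module name dig_pb2 is an assumption). Receivers would re-serialize with the signature field cleared before verifying.

import time
import dig_pb2  # hypothetical: protoc output for the schema above

def build_capsule_request(datastore_id: bytes, index: int,
                          node_id: bytes, signing_key) -> bytes:
    msg = dig_pb2.DIGNodeMessage()
    msg.request.datastore_id = datastore_id   # selects the oneof branch
    msg.request.capsule_index = index
    msg.request.requester_id = node_id
    msg.timestamp = int(time.time())
    # Sign the envelope while the signature field is still empty
    msg.signature = signing_key.sign(msg.SerializeToString()).signature
    return msg.SerializeToString()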

Discovery Protocol

class PeerDiscovery:
    def __init__(self, node_id, bootstrap_nodes):
        self.node_id = node_id
        self.peers = set(bootstrap_nodes)
        self.reputation = {}  # node_id -> score

    async def discover_peers(self):
        # Kademlia-style discovery: ask peers near our own ID for their neighbors
        target = self.node_id
        closest_peers = self.k_closest_peers(target, k=20)

        for peer in closest_peers:
            new_peers = await self.query_peer(peer, "FIND_NODE", target)
            self.peers.update(new_peers)

        # Reputation-based filtering: unknown peers default to 0.5
        self.peers = {p for p in self.peers
                      if self.reputation.get(p, 0.5) > 0.3}

Security Measures

Node Authentication

import time
import nacl.signing

# Ed25519 key generation (SigningKey.generate() is PyNaCl's signing API;
# the original nacl.secret.SecretBox constant belongs to symmetric encryption)
signing_key = nacl.signing.SigningKey.generate()
verify_key = signing_key.verify_key

# Node identity derived from the public key
node_id = f"dig1{verify_key.encode().hex()[:16]}"

# Message signing: a big-endian timestamp is appended to resist replay
def sign_message(message: bytes):
    timestamp = int(time.time())
    payload = message + timestamp.to_bytes(8, 'big')
    return signing_key.sign(payload)
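
The verification counterpart is implied but not shown; a sketch follows, where the five-minute freshness window is an assumption.

def verify_message(signed, sender_key: nacl.signing.VerifyKey) -> bytes:
    # Raises nacl.exceptions.BadSignatureError if the signature is invalid
    payload = sender_key.verify(signed)
    message, ts = payload[:-8], int.from_bytes(payload[-8:], 'big')
    if abs(time.time() - ts) > 300:  # assumed replay window
        raise ValueError("timestamp outside acceptance window")
    return message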

Rate Limiting

RATE_LIMITS = {
    "capsule_retrieval": RateLimit(100, "minute", per="ip"),
    "validation_challenge": RateLimit(10, "minute", per="node"),
    "p2p_messages": RateLimit(1000, "minute", per="peer"),
    "api_requests": RateLimit(60, "minute", per="key"),
}

@rate_limit("capsule_retrieval")
async def handle_capsule_request(request):
    # Process request
    pass

Monitoring and Metrics

Performance Metrics

METRICS = {
    # Storage metrics
    "storage_utilization": Gauge("bytes_used / bytes_total"),
    "capsule_count": Counter("total capsules stored"),
    "plot_generation_rate": Histogram("plots per hour"),

    # Network metrics
    "peer_count": Gauge("active peer connections"),
    "bandwidth_in": Counter("bytes received"),
    "bandwidth_out": Counter("bytes sent"),
    "request_latency": Histogram("ms per request"),

    # Economic metrics
    "validation_success_rate": Gauge("successful / total"),
    "earnings_per_tb": Gauge("DIG per TB per day"),
    "pending_rewards": Gauge("unclaimed DIG"),

    # Health metrics
    "cpu_usage": Gauge("percentage"),
    "memory_usage": Gauge("percentage"),
    "disk_io_rate": Gauge("MB/s"),
    "proof_generation_time": Histogram("ms per proof"),
}

Logging Format

{
  "timestamp": "2024-01-01T00:00:00.000Z",
  "level": "INFO",
  "node_id": "dig1abc...xyz",
  "component": "storage",
  "event": "capsule_stored",
  "details": {
    "datastore_id": "0x123...",
    "capsule_index": 42,
    "size_bytes": 16777216,
    "duration_ms": 250,
    "cache_tier": "l3"
  }
}

Deployment

Docker Configuration

FROM ubuntu:22.04

# Install dependencies (curl is required by the health check below)
RUN apt-get update && apt-get install -y \
    python3.10 python3-pip curl \
    libgmp-dev libssl-dev \
    libnuma-dev libhwloc-dev

# Install DIG node
COPY requirements.txt .
RUN pip3 install -r requirements.txt

COPY dig-node /usr/local/bin/
RUN chmod +x /usr/local/bin/dig-node

# Storage volumes
VOLUME ["/plots", "/cache", "/data"]

# Network ports
EXPOSE 8444 8555 9090

# Health check
HEALTHCHECK --interval=30s --timeout=3s \
    CMD curl -f http://localhost:9090/health || exit 1

# Run node
CMD ["dig-node", \
     "--config", "/etc/dig/node.yaml", \
     "--plots", "/plots", \
     "--cache", "/cache"]

Configuration Example

# /etc/dig/node.yaml
node:
  id: "dig1abc...xyz"
  mode: "selective"

storage:
  plots_dir: "/plots"
  cache_dir: "/cache"
  max_plots: 100
  compression: "zstd"

network:
  listen_address: "0.0.0.0:8444"
  bootstrap_nodes:
    - "dig1.bootstrap.dig.net:8444"
    - "dig2.bootstrap.dig.net:8444"
  max_peers: 200
  gun_p2p_peers:
    - "https://gun.dig.net/gun"
    - "wss://gun-ws.dig.net/gun"

validation:
  stake_amount: 1000
  min_uptime: 0.95

economic:
  tier_preference: [1, 2, 3]
  min_reward: 0.01
  accept_bribes: true

monitoring:
  metrics_port: 9090
  log_level: "info"
  log_file: "/var/log/dig-node.log"
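
A small startup check (an assumed helper, not part of the node itself) can catch malformed configs before launch:

import yaml  # PyYAML

def load_node_config(path="/etc/dig/node.yaml") -> dict:
    with open(path) as f:
        cfg = yaml.safe_load(f)
    if cfg["node"]["mode"] not in ("archive", "cache", "selective"):
        raise ValueError("unknown node mode")
    if not 0 < cfg["validation"]["min_uptime"] <= 1:
        raise ValueError("min_uptime must be in (0, 1]")
    return cfg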

P2P Metadata Network

DIG Nodes participate in a gun.js P2P network, sharing real-time metadata with other nodes and DIG Browsers to optimize content discovery and delivery.

Shared Data Types

Provider Announcements

  • Node ID and endpoint information
  • Available capsule IDs and storage capabilities
  • Node reliability scores and uptime statistics
  • Storage tier preferences and capacity

Content Metrics

  • DataStore popularity and access frequency
  • Response times and bandwidth usage
  • Cache hit rates and performance statistics
  • Geographic distribution of requests

Network Status

  • Active peer directory and connection health
  • Network topology and routing information
  • Validation success rates and economic metrics
  • Resource availability and load balancing data