DIG Browser Specification

Overview

The DIG Browser is a JavaScript micro frontend component that enables secure, decentralized content loading from the DIG Network. It operates entirely client-side: all data decryption and reconstruction happen on the user's machine, which preserves the network's common carrier status.

Core Principles

  1. Client-Side Processing: All data decryption and reconstruction occurs locally
  2. Common Carrier Compliance: Network infrastructure remains content-agnostic
  3. Secure Content Loading: End-to-end encryption with local decryption using data store ID
  4. Data Integrity: Merkle proof verification against on-chain DataStore root
  5. Micro Frontend Architecture: Lightweight, embeddable component

URL Format Support

The browser component supports three types of identifiers:

dig://<storeId>               # Direct DataStore access
dig://<handle>                # DIG Handle resolution (*.dig)
dig://<storeId>/<contentUrn>  # Specific content within DataStore
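A parser for these three forms might look like the sketch below. The `.dig` suffix for handles follows the spec; the `ParsedDigUrl` shape and error text are assumptions for illustration:

```typescript
type ParsedDigUrl =
  | { type: 'store'; storeId: string; contentUrn?: string }
  | { type: 'handle'; handle: string; contentUrn?: string };

function parseDigUrl(identifier: string): ParsedDigUrl {
  if (!identifier.startsWith('dig://')) {
    throw new Error('INVALID_URL: expected dig:// scheme');
  }
  const rest = identifier.slice('dig://'.length);
  const [head, ...pathParts] = rest.split('/');
  const contentUrn = pathParts.length > 0 ? pathParts.join('/') : undefined;

  // DIG Handles end in .dig; anything else is treated as a raw store ID
  if (head.endsWith('.dig')) {
    return { type: 'handle', handle: head, contentUrn };
  }
  return { type: 'store', storeId: head, contentUrn };
}
```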

Component Architecture

interface DigBrowserConfig {
  network: {
    rpcEndpoint: string;      // Chia RPC for DataStore queries
    digNodeEndpoint: string;  // DIG Node for capsule retrieval
    gunPeers: string[];       // gun.js P2P network peers
  };
  security: {
    verifyProofs: boolean;    // Enable merkle verification
    maxCapsuleSize: number;   // Max allowed capsule size (1GB)
  };
  ui: {
    containerId: string;
    theme: 'light' | 'dark';
  };
}

class DigBrowser {
  constructor(config: DigBrowserConfig);

  // Core methods
  async loadContent(identifier: string): Promise<void>;
  async verifyContent(storeId: string, content: Buffer): Promise<boolean>;

  // Event handlers
  onContentLoaded: (metadata: ContentMetadata) => void;
  onError: (error: DigBrowserError) => void;
  onProgress: (progress: LoadProgress) => void;
}

Content Loading Process

  1. Initial Load

    // Initialize browser component
    const browser = new DigBrowser({
      network: {
        rpcEndpoint: 'https://rpc.dig.net',
        digNodeEndpoint: 'https://node.dig.net',
        gunPeers: [
          'https://gun.dig.net/gun',
          'wss://gun-ws.dig.net/gun'
        ]
      },
      security: {
        verifyProofs: true,
        maxCapsuleSize: 1048576000 // 1GB
      },
      ui: {
        containerId: 'dig-content',
        theme: 'light'
      }
    });
  2. Content Resolution Flow

    URL Input → Handle/Store Resolution → Metadata Load → Capsule Discovery → 
    Parallel Fetch → Decryption → Reconstruction → Verification → Display
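The flow above can be expressed as an explicit async pipeline. The `FlowStages` interface below is illustrative only: the stage names mirror the diagram, not a confirmed internal API, and implementations are injected so the ordering is visible without any network access.

```typescript
// Each stage of the resolution flow, injected as a function
interface FlowStages {
  resolve(url: string): Promise<string>;            // URL -> storeId
  loadMetadata(storeId: string): Promise<string[]>; // storeId -> capsule IDs
  fetchAndDecrypt(capsuleIds: string[]): Promise<Uint8Array[]>;
  reconstruct(parts: Uint8Array[]): Promise<Uint8Array>;
  verify(content: Uint8Array): Promise<boolean>;
  display(content: Uint8Array): void;
}

// Run the stages strictly in the order shown in the flow diagram
async function runContentFlow(url: string, stages: FlowStages): Promise<void> {
  const storeId = await stages.resolve(url);
  const capsuleIds = await stages.loadMetadata(storeId);
  const parts = await stages.fetchAndDecrypt(capsuleIds);
  const content = await stages.reconstruct(parts);
  if (!(await stages.verify(content))) {
    throw new Error('MERKLE_VERIFICATION_FAILED');
  }
  stages.display(content);
}
```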

Data Flow

1. URL Resolution

async function resolveIdentifier(identifier: string): Promise<ResolvedContent> {
  const parsed = parseDigUrl(identifier);

  if (parsed.type === 'handle') {
    // Resolve DIG Handle to DataStore ID
    const storeId = await resolveDIGHandle(parsed.handle);
    return { storeId, contentUrn: parsed.contentUrn };
  }

  return { storeId: parsed.storeId, contentUrn: parsed.contentUrn };
}

2. DataStore Loading

interface DataStore {
  storeId: string;
  merkleRoot: string; // On-chain root for verification
  metadata: {
    contentManifest: ContentManifest[];
    capsuleMap: Map<string, CapsuleInfo>;
  };
}

async function loadDataStore(storeId: string): Promise<DataStore> {
  // Query Chia blockchain for DataStore NFT
  const nft = await queryChiaRPC(storeId);

  // Extract merkle root from NFT metadata
  const merkleRoot = nft.metadata.root_hash;

  // Load content manifest
  const manifest = await loadContentManifest(storeId);

  return { storeId, merkleRoot, metadata: manifest };
}

3. Capsule Discovery and Loading

interface CapsuleInfo {
  capsuleId: string;
  size: CapsuleSize; // 256KB, 1MB, 10MB, 100MB, 1GB
  providers: PlotCoinProvider[];
  merkleProof: MerkleProof;
}

async function discoverCapsules(storeId: string, contentUrn: string): Promise<CapsuleInfo[]> {
  // Query PlotCoin registry for capsule providers
  const capsuleIds = await getCapsuleIdsForContent(storeId, contentUrn);

  const capsuleInfos = await Promise.all(
    capsuleIds.map(async (capsuleId) => {
      // Find storage providers via PlotCoin registry
      const providers = await queryPlotCoinRegistry(capsuleId);
      return {
        capsuleId,
        size: determineCapsuleSize(capsuleId),
        providers,
        merkleProof: await getMerkleProof(storeId, capsuleId)
      };
    })
  );

  return capsuleInfos;
}

4. Capsule Decryption and Reconstruction

async function loadAndDecryptCapsule(
  capsuleInfo: CapsuleInfo,
  storeId: string
): Promise<Buffer> {
  // Select best provider based on latency/availability
  const provider = selectOptimalProvider(capsuleInfo.providers);

  // Fetch encrypted capsule data
  const encryptedCapsule = await fetchCapsule(provider, capsuleInfo.capsuleId);

  // Decrypt using data store ID as key
  const decryptedCapsule = await decryptCapsule(encryptedCapsule, storeId);

  // Remove padding (minimum 5% at end)
  const content = removePadding(decryptedCapsule);

  return content;
}

async function reconstructContent(
  capsules: Buffer[],
  reconstructionPlan: ReconstructionPlan
): Promise<Buffer> {
  // Reassemble capsules in correct order
  const orderedCapsules = reconstructionPlan.order.map(i => capsules[i]);

  // Concatenate; integrity is verified separately against the merkle root
  const reconstructed = Buffer.concat(orderedCapsules);

  return reconstructed;
}
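`selectOptimalProvider` is referenced above but not defined by the spec. One plausible heuristic, with an assumed `ProviderStats` shape, is to prefer providers with the fewest recent failures and then the lowest measured latency:

```typescript
// Hypothetical per-provider stats; the real PlotCoinProvider shape may differ
interface ProviderStats {
  endpoint: string;
  latencyMs?: number; // undefined until first measurement
  failures: number;   // recent failed fetch attempts
}

function selectOptimalProvider(providers: ProviderStats[]): ProviderStats {
  if (providers.length === 0) throw new Error('NO_PROVIDERS');
  return [...providers].sort((a, b) => {
    if (a.failures !== b.failures) return a.failures - b.failures;  // fewest failures first
    return (a.latencyMs ?? Infinity) - (b.latencyMs ?? Infinity);   // then lowest latency
  })[0];
}
```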

5. Merkle Verification

async function verifyContentIntegrity(
  content: Buffer,
  merkleProof: MerkleProof,
  dataStore: DataStore
): Promise<boolean> {
  // Calculate content hash
  const contentHash = sha256(content);

  // Verify against DataStore merkle root
  const isValid = verifyMerkleProof(
    contentHash,
    merkleProof.path,
    merkleProof.index,
    dataStore.merkleRoot
  );

  if (!isValid) {
    throw new Error('Content failed merkle verification');
  }

  return true;
}

Security Features

Encryption

  • All capsule data is encrypted using the DataStore ID as the encryption key
  • Decryption happens entirely in the browser using WebCrypto API
  • Network nodes never see decrypted content

Proof Verification

interface MerkleProof {
  root: string;   // DataStore merkle root (on-chain)
  path: string[]; // Sibling hashes up the tree
  index: number;  // Position in tree
}

// Verify content against on-chain DataStore root
async function verifyProof(
  content: Buffer,
  proof: MerkleProof,
  expectedRoot: string
): Promise<boolean> {
  let hash = sha256(content);

  for (let i = 0; i < proof.path.length; i++) {
    const sibling = proof.path[i];
    // Bit i of the leaf index gives the node's side at level i: a set bit
    // means the node is the right child, so the sibling hashes in on the left
    const siblingIsLeft = (proof.index >> i) & 1;

    hash = siblingIsLeft
      ? sha256(sibling + hash)
      : sha256(hash + sibling);
  }

  return hash === expectedRoot;
}

Content Integrity Checks

  1. Capsule Level

    • Verify capsule hash matches registry
    • Check size matches expected capsule size
    • Validate minimum 5% padding exists
  2. Content Level

    • Verify reconstructed content against DataStore merkle root
    • Check content URN matches request
    • Validate complete reconstruction

Error Handling

enum ErrorCode {
  INVALID_URL = 'INVALID_URL',
  DATASTORE_NOT_FOUND = 'DATASTORE_NOT_FOUND',
  HANDLE_NOT_RESOLVED = 'HANDLE_NOT_RESOLVED',
  NO_PROVIDERS = 'NO_PROVIDERS',
  DECRYPTION_FAILED = 'DECRYPTION_FAILED',
  MERKLE_VERIFICATION_FAILED = 'MERKLE_VERIFICATION_FAILED',
  RECONSTRUCTION_FAILED = 'RECONSTRUCTION_FAILED',
  NETWORK_ERROR = 'NETWORK_ERROR'
}

interface DigBrowserError {
  code: ErrorCode;
  message: string;
  details?: {
    storeId?: string;
    capsuleId?: string;
    provider?: string;
  };
}

Performance Optimizations

Parallel Capsule Loading

async function loadCapsulesParallel(
  capsuleInfos: CapsuleInfo[],
  storeId: string,
  concurrency: number = 5
): Promise<Buffer[]> {
  const queue = [...capsuleInfos];
  const results: Buffer[] = new Array(capsuleInfos.length);
  const workers: Promise<void>[] = [];

  for (let i = 0; i < concurrency; i++) {
    workers.push(processCapsuleQueue(queue, results, storeId));
  }

  await Promise.all(workers);
  return results;
}
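`processCapsuleQueue` is not defined by the spec. A minimal sketch of the worker loop follows; note two deliberate deviations from the signature above, both assumptions: the loader is injected instead of `storeId`, and queue entries carry their original index so results land in the right slot.

```typescript
// Hypothetical worker loop: each worker repeatedly takes the next capsule off
// the shared queue and writes the result into its original result slot.
// JavaScript is single-threaded, so shift() between awaits is not a data race.
async function processCapsuleQueue<T, R>(
  queue: { item: T; index: number }[],
  results: R[],
  load: (item: T) => Promise<R>
): Promise<void> {
  for (;;) {
    const next = queue.shift();
    if (next === undefined) return; // queue drained
    results[next.index] = await load(next.item);
  }
}
```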

Caching Strategy

interface CacheConfig {
  capsuleCache: {
    maxSize: number; // Max cache size in bytes
    ttl: number;     // Time to live in seconds
  };
  datastoreCache: {
    maxEntries: number;
    ttl: number;
  };
  handleCache: {
    maxEntries: number;
    ttl: number;
  };
}

// IndexedDB for persistent capsule caching
class CapsuleCache {
  async get(capsuleId: string): Promise<Buffer | null>;
  async set(capsuleId: string, data: Buffer): Promise<void>;
  async evictOldest(): Promise<void>;
}
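The IndexedDB wiring is environment-specific, but the eviction policy can be shown in isolation. The in-memory sketch below (class name and byte-budget semantics are assumptions) evicts oldest-inserted entries whenever the `maxSize` byte budget is exceeded, matching `evictOldest` above:

```typescript
// In-memory sketch of the cache's eviction policy; the real class would
// back entries with IndexedDB. maxSize is a byte budget, as in CacheConfig.
class CapsuleCacheSketch {
  private entries = new Map<string, Uint8Array>(); // Map preserves insertion order
  private bytes = 0;

  constructor(private maxSize: number) {}

  get(capsuleId: string): Uint8Array | null {
    return this.entries.get(capsuleId) ?? null;
  }

  set(capsuleId: string, data: Uint8Array): void {
    const prev = this.entries.get(capsuleId);
    if (prev) {
      this.bytes -= prev.byteLength;
      this.entries.delete(capsuleId);
    }
    this.entries.set(capsuleId, data);
    this.bytes += data.byteLength;
    while (this.bytes > this.maxSize) this.evictOldest();
  }

  evictOldest(): void {
    const oldest = this.entries.keys().next();
    if (oldest.done) return;
    this.bytes -= this.entries.get(oldest.value)!.byteLength;
    this.entries.delete(oldest.value);
  }
}
```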

Integration Example

<!DOCTYPE html>
<html>
<head>
  <title>DIG Content Viewer</title>
</head>
<body>
  <div id="dig-content"></div>

  <script type="module">
    import { DigBrowser } from './dig-browser.js';

    // Initialize browser with configuration
    const browser = new DigBrowser({
      network: {
        rpcEndpoint: 'https://rpc.dig.net',
        digNodeEndpoint: 'https://node.dig.net',
        gunPeers: [
          'https://gun.dig.net/gun',
          'wss://gun-ws.dig.net/gun'
        ]
      },
      security: {
        verifyProofs: true,
        maxCapsuleSize: 1048576000
      },
      ui: {
        containerId: 'dig-content',
        theme: 'light'
      }
    });

    // Handle progress updates
    browser.onProgress = (progress) => {
      console.log(`Loading: ${progress.loaded}/${progress.total} capsules`);
    };

    // Handle errors
    browser.onError = (error) => {
      console.error('DIG Browser Error:', error);
    };

    // Load content from URL parameter
    const urlParams = new URLSearchParams(window.location.search);
    const digUrl = urlParams.get('url') || 'dig://example.dig';

    browser.loadContent(digUrl)
      .then(() => console.log('Content loaded successfully'))
      .catch(err => console.error('Failed to load content:', err));
  </script>
</body>
</html>

Browser Requirements

Required APIs

  • WebCrypto API: For capsule decryption
  • IndexedDB: For capsule caching
  • Fetch API: For network requests
  • Web Workers: For parallel processing (optional)

Supported Browsers

  • Chrome/Edge 90+
  • Firefox 88+
  • Safari 14+
  • Opera 76+

Security Considerations

Content Isolation

  • Content rendered in sandboxed iframes
  • CSP headers prevent XSS
  • No eval() or dynamic code execution

Network Security

  • All requests over HTTPS/TLS
  • Verify provider signatures
  • Rate limiting on requests
  • Request timeout protection

Local Security

  • Encryption keys never persisted
  • Cache encrypted at rest
  • Memory cleared after use
  • No sensitive data in logs

Implementation Notes

Capsule Size Handling

The DIG Network uses fixed capsule sizes:

  • 256 KB (262,144 bytes)
  • 1 MB (1,048,576 bytes)
  • 10 MB (10,485,760 bytes)
  • 100 MB (104,857,600 bytes)
  • 1 GB (1,048,576,000 bytes)

Each capsule includes:

  • Encrypted original data
  • Minimum 5% padding (using block height entropy)
  • Padding marker (0xFFFFFFFF)
  • Size footer (4 bytes)
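The `removePadding` call in the decryption flow can be sketched from this structure. The exact byte layout is not pinned down by the spec; assumed here is `[data][padding][0xFFFFFFFF marker][original size: u32 big-endian]`, so the footer is read first and the marker checked before trimming:

```typescript
// Assumed capsule layout: [data][padding][0xFFFFFFFF][size: u32 BE].
// Only illustrates the footer mechanics; the real on-wire layout may differ.
function removePadding(capsule: Uint8Array): Uint8Array {
  if (capsule.length < 8) {
    throw new Error('RECONSTRUCTION_FAILED: capsule too small');
  }
  const view = new DataView(capsule.buffer, capsule.byteOffset, capsule.byteLength);

  const marker = view.getUint32(capsule.length - 8); // 4 bytes before the footer
  if (marker !== 0xffffffff) {
    throw new Error('RECONSTRUCTION_FAILED: padding marker missing');
  }

  const size = view.getUint32(capsule.length - 4);   // 4-byte size footer
  if (size > capsule.length - 8) {
    throw new Error('RECONSTRUCTION_FAILED: size footer out of range');
  }
  return capsule.subarray(0, size);
}
```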

DataStore Integration

DataStores are Chia NFTs that:

  • Store merkle roots on-chain
  • Link to content manifests
  • Provide ownership proof
  • Enable version control

PlotCoin Discovery

Storage providers register via PlotCoins:

  • Map capsuleId to provider location
  • Include ZK proofs of storage
  • Enable decentralized discovery
  • Support economic incentives

P2P Network Integration

The DIG Browser participates in the same gun.js P2P network as DIG Nodes to share network metadata and optimize content discovery.

Shared Data Types

Provider Discovery Data

  • Capsule ID → Available provider endpoints
  • Provider reliability scores and last-seen timestamps
  • Node capabilities and storage tiers

Content Performance Metrics

  • DataStore access patterns and popularity
  • Content load times and success rates
  • Bandwidth usage and response times

Peer Directory Information

  • Active node endpoints and availability
  • Node capabilities and supported features
  • Network topology and connection health