| Building Block | Edge Advantage |
|---|---|
| Infrastructure / Power | Tidal, solar, waste heat re-use — lowest PUE possible |
| Network / Interconnect | Existing fiber (North Sea proven), no new builds needed |
| Storage | Local NVMe serves 7B–70B weights without round-trips |
| Memory Bandwidth | GPU VRAM scales linearly — rack servers viable for inference |
| Compute (GPU fabric) | Consumer / prosumer GPUs sufficient for quantised inference |
| Inference Runtime | llama.cpp / Ollama run on CPU — no GPU required for small models |
| Model (Foundation) | Open-weight models (Llama, Mistral, Phi) run edge-deployed |
| Data Sovereignty | Computation stays within jurisdiction — economic value stays local |
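
The storage and compute rows rest on the same arithmetic: quantised weights are small enough for consumer hardware. A minimal back-of-envelope sketch, assuming 4-bit quantisation and a hypothetical ~20% overhead factor for KV cache and runtime buffers (the overhead value is an assumption, not a measured figure):

```python
def quantised_model_size_gib(n_params: float, bits_per_weight: int,
                             overhead: float = 1.2) -> float:
    """Rough on-disk / in-memory size of quantised weights in GiB.

    overhead is an assumed ~20% allowance for KV cache and buffers.
    """
    bytes_total = n_params * bits_per_weight / 8 * overhead
    return bytes_total / 2**30

# 7B model at 4-bit: ~3.9 GiB -- fits in consumer GPU VRAM or system RAM
print(round(quantised_model_size_gib(7e9, 4), 1))

# 70B model at 4-bit: ~39.1 GiB -- local NVMe plus CPU offload territory
print(round(quantised_model_size_gib(70e9, 4), 1))
```

This is why the table can claim that local NVMe and prosumer GPUs cover the 7B–70B range: at 4 bits per weight, even the top of that range stays within a single edge node's storage and memory budget.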