ECHO: Disaster Buddy

Hand-held intelligence in crisis scenarios

Introduction

In the wake of natural disasters, solar geomagnetic storms, or cyber-attacks, how can we ensure reliable situational awareness and decision support when conventional telecommunications infrastructure is compromised?

As part of the ECHO initiative, Disaster Buddy is a standalone, low-power AI assistant designed to function independently of modern communication networks, providing critical guidance in crisis scenarios when other services are unavailable.

Technical Overview

Disaster Buddy leverages a highly optimised Large Language Model (LLM) deployed on a low-power handheld device. Unlike traditional LLM deployments that rely on cloud computing and energy-intensive GPUs, Disaster Buddy operates on a locally quantized model, enabling real-time interaction without requiring internet connectivity.
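To make the offline pipeline concrete, here is a minimal sketch of local inference, assuming a GGUF-quantized Mistral checkpoint and the open-source llama-cpp-python bindings; the file path and parameters are illustrative, not the project's actual runtime:

    from llama_cpp import Llama  # pip install llama-cpp-python

    # Load a quantized model entirely from local storage; no network access is needed.
    llm = Llama(
        model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
        n_ctx=2048,    # context window
        n_threads=4,   # CPU-only inference
    )

    response = llm(
        "A hurricane is approaching. List the first three things I should do.",
        max_tokens=128,
    )
    print(response["choices"][0]["text"])

Because the model weights, tokenizer, and runtime all live on the device, the same query works identically with every network interface disabled.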

Key Challenges in AI Deployment for Disaster Scenarios

  • Energy Efficiency: Standard cloud LLMs consume significant power, with a single query using approximately 3 watt-hours, enough to run an LED light bulb for roughly 20 minutes. Disaster Buddy minimizes power consumption to ensure prolonged operational capability (a back-of-envelope budget follows this list).

  • Computational Constraints: LLMs typically run on GPUs with extensive RAM (16 GB+). Disaster Buddy circumvents this requirement through model quantization and distillation, enabling inference on a modest CPU-only system using 8-bit weights.

  • Infrastructure Independence: Traditional AI services rely on cloud-based inference. In disaster scenarios where mains electricity, cellular networks, or even satellite links may be disrupted, Disaster Buddy ensures uninterrupted access to AI-powered assistance.
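To illustrate why per-query energy matters, here is a back-of-envelope budget. All figures are assumptions for illustration: the 20 Wh battery pack and the 0.5 Wh on-device query cost are notional targets, not measured values.

    # Back-of-envelope energy budget; all figures are assumptions for illustration.
    BATTERY_WH = 20.0         # assumed handheld battery capacity
    CLOUD_QUERY_WH = 3.0      # approximate energy of one cloud LLM query
    DEVICE_QUERY_WH = 0.5     # assumed per-query target on the handheld

    print(f"Queries per charge at cloud-level draw: {BATTERY_WH / CLOUD_QUERY_WH:.0f}")   # ~7
    print(f"Queries per charge at on-device target: {BATTERY_WH / DEVICE_QUERY_WH:.0f}")  # ~40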

AI Optimisation for low-power scenarios

To achieve functional AI performance on constrained hardware, the Disaster Buddy project focused on:

  • Quantization: Compressing an LLM (based on Mistral) to run on a RISC-V chipset with enhanced RAM (a toy example follows this list).

  • Distillation: Training a smaller, efficient model from a larger parent model to maintain performance while reducing computational requirements.

  • Performance Trade-offs: Optimising the balance between reasoning ability, knowledge retention, and power consumption, with throughput measured in tokens per second.
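The quantization step can be illustrated with a toy symmetric 8-bit scheme. Real deployments use more sophisticated per-block formats (e.g., GGUF quantization types), so this is a sketch of the principle only:

    import numpy as np

    def quantize_int8(weights):
        # Symmetric 8-bit quantization: the largest-magnitude weight maps to +/-127,
        # so each tensor is stored as int8 values plus a single float scale.
        scale = np.abs(weights).max() / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    # Round trip on random "weights": storage drops from 32 to 8 bits per value.
    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int8(w)
    print("max reconstruction error:", np.abs(w - dequantize(q, s)).max())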

Our prototype demonstrated a highly quantized Mistral model achieving roughly 1 token per second on the Disaster Buddy device. By the common rule of thumb of about 0.75 English words per token, that is around 45 words per minute: slow, but readable in real time as the response streams in. By comparison, cloud-based LLMs operate at around 60 tokens per second.

Form factor and hardware 

In addition to AI model optimisation, the Disaster Buddy device is designed to be robust in crisis operations, with the following features:

  • Modular Design: Interchangeable batteries, waterproofing, and ruggedised casing for extreme environments.

  • Multimodal Inputs: Support for keyboard, speech recognition, camera, torch, and short-wave radio functionality.

  • Power Solutions: Alternative battery technologies for off-grid use.

Testing 

Prototype tests assessed Disaster Buddy’s performance across key dimensions:

  • Token speed vs. power draw: Measuring generation throughput against the energy consumed per query (a measurement sketch follows this list).

  • Durability: Evaluating performance under harsh conditions.

  • AI safety and alignment: Investigating the model’s reliability in high-stress decision-making environments.
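The token-speed half of the first test can be measured by timing streamed generation; here is a sketch assuming the same hypothetical llama-cpp-python runtime as above. Power draw would be logged separately, for example with an inline USB power meter.

    import time
    from llama_cpp import Llama

    llm = Llama(model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical file
                n_ctx=512, n_threads=4)

    start = time.monotonic()
    n_tokens = 0
    for _chunk in llm("How do I purify flood water?", max_tokens=64, stream=True):
        n_tokens += 1  # streaming yields roughly one chunk per generated token
    elapsed = time.monotonic() - start

    print(f"{n_tokens} tokens in {elapsed:.1f} s -> {n_tokens / elapsed:.2f} tokens/s")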

While our prototype successfully executed LLM queries on low-power hardware, challenges remain in optimising battery life, ensuring robust input mechanisms, and refining AI alignment for real-world disaster response scenarios.

Next steps 

Moving forward, our team is developing a new generation of the Disaster Buddy device with:

  • Dedicated crisis-optimised mini-computers: Custom hardware integrating short-wave radio for offline data reception.

  • Preloaded crisis information: Capability to store critical updates in anticipation of disasters, e.g., early wildfire or hurricane warnings (a storage sketch follows this list).

  • Improved power management: Exploring alternative charging solutions such as kinetic energy harvesting.

  • Tabletop exercises and AI alignment research: Simulating real-world crisis conditions to refine Disaster Buddy’s decision-support capabilities.
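As a sketch of how preloaded bulletins might be cached on-device, here is a minimal local store using SQLite; the schema and field names are hypothetical, not the shipped design:

    import sqlite3
    import time

    # Hypothetical local cache for bulletins received before or during an outage.
    db = sqlite3.connect("bulletins.db")
    db.execute("""CREATE TABLE IF NOT EXISTS bulletins (
                      received_at REAL,  -- Unix timestamp
                      source TEXT,       -- e.g. 'shortwave' or 'preload'
                      body TEXT)""")

    def store_bulletin(source, body):
        db.execute("INSERT INTO bulletins VALUES (?, ?, ?)", (time.time(), source, body))
        db.commit()

    store_bulletin("preload", "Wildfire risk elevated in the northern district through Friday.")
    for row in db.execute("SELECT source, body FROM bulletins ORDER BY received_at DESC"):
        print(row)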

Future development will focus on refining hardware, improving AI alignment, and ensuring usability in life-and-death situations.

For collaboration or further research inquiries, contact us at team (at) trillium.tech