ECHO: Disaster Buddy

In the wake of natural disasters, solar geomagnetic storms or cyber-attacks, how can we ensure reliable situational awareness and decision support when conventional telecommunications infrastructure is compromised?

As part of the ECHO initiative, Disaster Buddy is a standalone, low-power AI assistant designed to function independently of modern communication networks, providing critical guidance in crisis scenarios when other services are unavailable.

Hand-held intelligence in crisis scenarios

Tabletop exercises

We are running a series of tabletop exercises exploring edge cases in AI-supported crisis response, identifying where AI decision-making is prone to failure and refining its ability to act reliably.

If you are interested in supporting the development of ECHO by joining a tabletop exercise, please let us know here.

Technical Overview

Disaster Buddy leverages a highly optimized Large Language Model (LLM) deployed on a low-power handheld device. Unlike traditional LLM deployments that rely on cloud computing and energy-intensive GPUs, Disaster Buddy operates on a locally quantized model, enabling real-time interaction without requiring internet connectivity.
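
For illustration, the sketch below runs a quantized model entirely on-device using the llama-cpp-python bindings and a GGUF model file. The runtime, model file, and settings are assumptions made for the sketch, not details of the Disaster Buddy build.

```python
# Minimal local-inference sketch with a quantized model (llama-cpp-python).
# The model path, quantization level, and parameters are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical quantized Mistral file
    n_ctx=2048,    # modest context window to limit memory use
    n_threads=4,   # CPU-only inference, no GPU required
)

response = llm(
    "How should I purify river water for drinking after a flood?",
    max_tokens=128,
    temperature=0.2,  # low temperature for more deterministic guidance
)
print(response["choices"][0]["text"])
```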

Key Challenges in AI Deployment for Disaster Scenarios

  • Energy Efficiency: Standard LLMs consume significant power, with a single cloud query using approximately 3 watt-hours, roughly enough to run a 10 W LED bulb for 20 minutes. Disaster Buddy minimizes power consumption to ensure prolonged operational capability (a rough energy budget is sketched after this list).

  • Computational Constraints: LLMs typically run on GPUs with large amounts of memory (16 GB+). Disaster Buddy circumvents this requirement through model quantization and distillation, enabling 8-bit inference on a CPU-only system.

  • Infrastructure Independence: Traditional AI services rely on cloud-based inference. However, in disaster scenarios where mains electricity, cellular networks, or even satellite links may be disrupted, Disaster Buddy ensures uninterrupted access to AI-powered assistance.
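
To make the energy argument concrete, here is a back-of-envelope budget. The battery capacity and device power draw are illustrative assumptions, not measurements from the prototype.

```python
# Back-of-envelope energy budget. All numbers are illustrative assumptions,
# not measured figures from the Disaster Buddy prototype.
battery_wh = 20.0        # assumed handheld battery capacity (watt-hours)
cloud_query_wh = 3.0     # approximate energy per cloud LLM query (figure quoted above)
device_power_w = 0.5     # assumed average draw during local inference (watts)

queries_at_cloud_cost = battery_wh / cloud_query_wh      # roughly 6-7 queries
hours_of_local_inference = battery_wh / device_power_w   # roughly 40 hours

print(f"Cloud-equivalent queries per charge: {queries_at_cloud_cost:.0f}")
print(f"Hours of continuous local inference: {hours_of_local_inference:.0f}")
```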

AI Optimisation for low-power scenarios

To achieve functional AI performance on constrained hardware, the Disaster Buddy project focused on:

  • Quantization: Compressing an LLM (based on Mistral) to run on a RISC-V chipset with additional RAM.

  • Distillation: Training a smaller, efficient model from a larger parent model to maintain performance while reducing computational requirements (a generic sketch follows this list).

  • Performance Trade-offs: Balancing reasoning ability, knowledge retention, and power consumption against throughput, measured in tokens per second.
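
The distillation step can be sketched generically as follows. This is a standard PyTorch-style knowledge-distillation loss with placeholder tensors, not the project's actual training code.

```python
# Generic knowledge-distillation loss: the student is trained to match the
# teacher's softened output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student token distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Random logits stand in for the outputs of a large parent model and a small student.
teacher_logits = torch.randn(4, 32000)                      # batch of 4, ~32k-token vocab
student_logits = torch.randn(4, 32000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")
```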

Our prototype demonstrated a highly quantized Mistral model achieving 1 token per second on the Disaster Buddy device—sufficient for real-time readability. By comparison, cloud-based LLMs operate at around 60 tokens per second.

Form factor and hardware 

In addition to AI model optimisation, the Disaster Buddy device is designed to be robust enough for crisis operations, with the following features:

  • Modular Design: Interchangeable batteries, waterproofing, and ruggedised casing for extreme environments.

  • Multimodal Inputs: Support for keyboard, speech recognition, camera, torch, and short-wave radio functionality.

  • Power Solutions: Alternative battery technologies for off-grid use.

Testing 

Prototype tests assessed Disaster Buddy’s performance across key dimensions:

  • Token speed vs. power draw: Ensuring query responses were computationally efficient (a simple measurement sketch follows this list).

  • Durability: Evaluating performance under harsh conditions.

  • AI safety and alignment: Investigating the model’s reliability in high-stress decision-making environments.
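
One simple way to relate token speed to power draw is to time generation and multiply by an assumed average draw. In the sketch below, `generate_stream` is a hypothetical stand-in for the on-device model and the power figure is illustrative.

```python
# Illustrative throughput / energy check; not the actual test harness.
import time

def generate_stream(prompt, n_tokens=30):
    """Hypothetical stand-in for the on-device model: yields tokens at a fixed rate."""
    for i in range(n_tokens):
        time.sleep(0.05)          # placeholder for per-token inference time
        yield f"token_{i}"

avg_power_w = 2.0                 # assumed average device draw during inference (watts)

start = time.time()
tokens = list(generate_stream("Nearest shelter with medical supplies?"))
elapsed = time.time() - start

tokens_per_s = len(tokens) / elapsed
energy_mwh = avg_power_w * elapsed / 3600 * 1000
print(f"{tokens_per_s:.1f} tokens/s, {energy_mwh:.2f} mWh per response")
```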

While our prototype successfully executed LLM queries on low-power hardware, challenges remain in optimising battery life, ensuring robust input mechanisms, and refining AI alignment for real-world disaster response scenarios.

Next steps 

Moving forward, our team is developing a new generation of the Disaster Buddy device with:

  • Dedicated crisis-optimized mini-computers: Custom hardware integrating short-wave radio for offline data reception.

  • Preloaded crisis information: Capability to store critical updates in anticipation of disasters, e.g., early wildfire or hurricane warnings (a simple lookup sketch follows this list).

  • Improved power management: Exploring alternative charging solutions such as kinetic energy harvesting.

  • Tabletop exercises and AI alignment research: Simulating real-world crisis conditions to refine Disaster Buddy’s decision-support capabilities.
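
As a rough sketch of the preloaded-information idea, the snippet below performs an offline keyword lookup over locally stored bulletins. The data layout and fields are hypothetical; the real storage and broadcast format is not described in this post.

```python
# Hypothetical layout for locally stored crisis bulletins.
PRELOADED_BULLETINS = [
    {"topic": "wildfire", "text": "Evacuation routes north of the river are closed."},
    {"topic": "hurricane", "text": "Landfall expected Tuesday; shelters open from noon."},
]

def lookup(query: str):
    """Return any bulletin whose topic or text mentions a word from the query."""
    words = query.lower().split()
    return [
        b for b in PRELOADED_BULLETINS
        if any(w in b["topic"] or w in b["text"].lower() for w in words)
    ]

for bulletin in lookup("wildfire evacuation"):
    print(bulletin["text"])
```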

Future development will focus on refining hardware, improving AI alignment, and ensuring usability in life-and-death situations.

For collaboration or further research inquiries, contact us at team (at) trillium.tech

Governable Agentic AI for crisis response 

Crisis operations centres, often referred to as Emergency Operations Centres (EOCs) or Crisis Command Centres (CCCs), can operate at a town, city, state, or national level. These centres are critical for co-ordinating responses during an emergency, and there is currently active investment in them to address the increasing civil resilience challenges of the 21st century. However, effective operations are still hindered by open problems, such as:

  • Information overload

  • Fragmented common operational picture

  • Limited real-time communication

  • Misallocation or inefficient allocation of response

  • Poor decisions and bias in high-stress situations

The potential for agentic crisis operations 

Agentic AI (smart tools that inform, act, and report back to an LLM) is a promising direction of research for crisis operations, where human decision-making can be positively enhanced by AI.

The ability to transition from passive analysis to real-time decision-making in high-stakes environments is therefore an important new capability. However, the shift from theoretical models to reliable, real-world action remains an open challenge, and agentic systems are still brittle when faced with real-world situations.

Agentic AI for disaster response must be tested against unpredictable conditions before it can be trusted in the field.

ECHO is a governable agentic system with a deep emphasis on human-in-the-loop oversight. We are developing ECHO by stress-testing AI-driven crisis response in simulated disaster scenarios, much as self-driving cars required extensive training on rare but high-risk situations before deployment.
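
To illustrate the human-in-the-loop pattern in general terms (not ECHO's actual architecture, which is not detailed here), the sketch below gates every proposed agent action behind explicit operator approval before it is executed.

```python
# Illustrative human-in-the-loop gate for agent tool calls.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str
    arguments: dict
    rationale: str

def require_approval(action: ProposedAction) -> bool:
    """Block until a human operator explicitly approves or rejects the proposed action."""
    print(f"Agent proposes: {action.tool}({action.arguments})")
    print(f"Rationale: {action.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    # Stand-in for dispatching to a real tool (mapping, comms, logistics, ...).
    print(f"Executing {action.tool} with {action.arguments}")

action = ProposedAction(
    tool="broadcast_alert",
    arguments={"area": "district 4", "message": "Move to higher ground"},
    rationale="River gauge reading exceeds flood threshold.",
)
if require_approval(action):
    execute(action)
else:
    print("Action rejected by operator; logged for review.")
```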