Teton Explained

How Teton Sees the Room

Most care technology responds to what has already happened. Teton is built to understand what is happening now, continuously, in every resident room so care teams can act before a situation becomes a crisis.

May 6, 2026

We have built an ambient monitoring system that prevents falls, reduces reaction time, improves staff efficiency, and continuously monitors residents' health. Here is how it works.

The hardware: two parts, one system

The system consists of two physical parts: a compute unit attached to the wall and a ceiling-mounted sensor. The two are wired together. The sensor acts as the eyes of our AI and the compute unit acts as its brain.

The eyes of the system are a wide-angle, ceiling-mounted sensor. Placed centrally, a single sensor gives the AI a view of the entire room: there is no “wrong” side of the bed to fall on and no blind corner behind a chair. One sensor is enough to keep the resident safe.

Why nothing leaves the device

The entire system is built to be privacy-preserving. This means that our AI runs locally on the compute unit in the room and that no footage is stored or leaves the device. The only things that are sent from the device are the bits of information needed to provide proactive care: alarms when certain events happen (sitting on the bed edge, walking with a walking aid), respiration rate when the resident is stationary, and pseudonymized fall clips when a fall occurs.
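To make the contrast concrete, here is a minimal sketch of what such an outbound message might look like. The field names and types are illustrative, not Teton's actual schema; the point is that the payload carries derived signals only, never imagery.

```python
from dataclasses import dataclass, field
from typing import Optional
import time

@dataclass
class CareEvent:
    """Illustrative outbound message: derived signals only, no footage."""
    room_id: str
    event_type: str                                 # e.g. "bed_edge_sit", "walking_with_aid", "fall"
    timestamp: float = field(default_factory=time.time)
    respiration_rate_bpm: Optional[float] = None    # sent only while the resident is stationary
    fall_clip_id: Optional[str] = None              # reference to a pseudonymized fall animation

# Example: a bed-edge alarm carries the event itself and nothing else.
event = CareEvent(room_id="r204", event_type="bed_edge_sit")
```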

Teton hardware: sensor, edge computer, and connected devices
Anonymised computer vision

Seeing

The sensor captures images of the room and has infrared illumination, which allows it to work 24 hours a day.

Artificial intelligence

Understanding

The images are sent to the compute unit for analysis. The important behaviours and activities are extracted, and the images are deleted. This whole process happens multiple times a second.

User-friendly systems

Communicating

We boil down the most important information and send it on to the staff on whichever device they are using.
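The capture, analyse, discard, communicate cycle above can be sketched as a simple per-frame loop. The classes and fields here are stand-ins I've invented for illustration, not Teton's internals; the detail that matters is that the raw image is dropped before anything is forwarded.

```python
class Sensor:
    """Stand-in for the ceiling-mounted, infrared-capable sensor."""
    def capture(self):
        return {"pixels": "..."}          # placeholder for a raw frame

class Model:
    """Stand-in for the on-device AI; a real model runs many times a second."""
    def extract(self, frame):
        return [{"type": "bed_edge_sit", "important": True}]

def process_frame(sensor, model, send):
    frame = sensor.capture()              # Seeing
    events = model.extract(frame)         # Understanding
    del frame                             # raw image is discarded immediately
    alerts = [e for e in events if e["important"]]
    for e in alerts:
        send(e)                           # Communicating: to staff devices
    return alerts

alerts = process_frame(Sensor(), Model(), send=lambda e: None)
```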

Local processing isn't just about privacy. It's also about latency. In a critical care situation, the system needs to reflect what is going on now, not five minutes ago. Uploading video and processing it in the cloud would create too long a delay to prevent most falls (anatomy of falls).

The 3D understanding that makes it work

A key aspect of our AI is its Spatial Intelligence: it reasons in 3D, much as humans do. Ten times per second, it reconstructs the room and the activity within it. It understands the room's geometry: which parts are walls, which are chairs, where people are, and what they are doing.


Complete 3D scene with point cloud, 3D reconstruction of each person, and oriented cuboid furniture detections. All generated from a single ceiling-mounted sensor.
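A scene like the one in the viewer can be thought of as three pieces of data: a point cloud for the room, 3D keypoints for each person, and oriented cuboids for furniture. The sketch below is a hypothetical representation I've written to make that concrete; it is not Teton's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Cuboid:
    """Oriented furniture detection: an axis-sized box plus a yaw angle."""
    label: str                   # e.g. "bed", "chair"
    center: tuple                # (x, y, z) in metres
    size: tuple                  # (width, depth, height) in metres
    yaw: float                   # rotation around the vertical axis, radians

@dataclass
class Scene:
    """One reconstruction, refreshed ten times per second."""
    points: list                 # room geometry as (x, y, z) samples
    people: list                 # per-person 3D keypoints
    furniture: list              # list of Cuboid detections

scene = Scene(
    points=[(0.0, 0.0, 0.0), (4.0, 3.0, 2.4)],
    people=[],
    furniture=[Cuboid("bed", center=(2.0, 1.5, 0.4), size=(2.0, 1.0, 0.5), yaw=0.0)],
)
```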

This 3D understanding unlocks three things

First, it makes measurement trivial. How far away did the resident place their walking aid before going to bed? What is the average stride length of a resident's gait? These are simple consequences of reasoning in 3D, but they have a huge impact on setting proactive alarms (optimized alarms), monitoring resident gait, and the care they receive.
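Once everything lives in one 3D coordinate frame, a question like "how far is the walking aid from the bed?" really is a one-liner. The positions below are made-up illustrations, not real data.

```python
import math

def distance_m(a, b):
    """Euclidean distance between two 3D points, in metres."""
    return math.dist(a, b)

# Hypothetical floor positions in a shared 3D frame:
walker = (3.2, 1.0, 0.0)
bed_edge = (2.1, 1.0, 0.0)
print(round(distance_m(walker, bed_edge), 2))  # 1.1 — about a metre out of reach
```

The same primitive gives stride length (distance between consecutive foot placements) for gait monitoring.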


Two sensors monitoring adjacent spaces. The bed card updates in real time as the resident moves between rooms. Floor polygons show exits into common space (green) and bathroom (blue).

Second, it enables a new kind of privacy. Using the 3D geometry of the scene, our AI can generate pseudonymized fall animations that strip away appearance, such as facial features, pictures on the wall, and textures of clothes, while preserving the geometric information that carers need to assess injuries and plan proactive interventions.
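One way to picture the pseudonymization step: from everything the model sees, keep only the geometric fields and drop anything that carries appearance. The field names below are hypothetical, chosen to illustrate the split, not taken from Teton's pipeline.

```python
def pseudonymize(detection: dict) -> dict:
    """Keep geometric pose data; strip appearance (faces, textures, surroundings)."""
    GEOMETRIC_KEYS = {"keypoints_3d", "timestamp", "room_id"}
    return {k: v for k, v in detection.items() if k in GEOMETRIC_KEYS}

detection = {
    "keypoints_3d": [(0.1, 0.2, 1.5)],   # body pose survives, so carers can assess the fall
    "timestamp": 1714988000.0,
    "room_id": "r204",
    "face_crop": b"...",                 # appearance data is stripped
    "clothing_texture": b"...",
}
clip_frame = pseudonymize(detection)
```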

Walking without the aid

The most common pattern. A resident wakes needing the bathroom. Their walking aid is a meter away, not far, but far enough that getting to it means standing up first.

Reaching for the aid

The resident tries to do the right thing. The aid is a meter from the bed. They lean out, overbalance, and fall before getting their hands on it.

The walker was moved

A staff member parks the walker by the door. A cleaner moves it. A visitor shifts it. The resident goes to sleep with the walker nearby and wakes up with it out of reach.

Third, it lets multiple AIs share a world. We can align the 3D worlds of separate sensors into a single shared one, so the system can seamlessly cover a connected living room and bedroom. Imagine both sensors seeing a person standing in the doorway between the two rooms. In disconnected worlds, each AI would report a person; with shared worlds, our AI understands that it is the same person, not two. This ability to reason across rooms has a tremendous effect on accurately monitoring resident health and accurately counting staff visits.


A person walks between two sensor views. The connection line shows which sensor has visibility of them. Identity is maintained across the handover through lightweight spatial messages between compute units.
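The handover can be sketched in two steps: map each sensor's local detection into a shared world frame via a calibrated rigid transform, then treat detections that land close together as the same person. The transform parameters and the half-metre threshold below are illustrative assumptions, not Teton's actual calibration.

```python
import math

def to_shared_frame(point, offset, yaw):
    """Map a sensor-local (x, y, z) point into the shared world frame
    via a rigid transform: rotation about the vertical axis plus a translation."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y + offset[0], s * x + c * y + offset[1], z + offset[2])

def same_person(p_a, p_b, threshold_m=0.5):
    """Detections closer than the threshold are merged into one identity."""
    return math.dist(p_a, p_b) < threshold_m

# Assume sensor B's frame sits 4 m along x from sensor A's (made-up calibration).
seen_by_a = (3.9, 1.0, 0.0)                       # person in the doorway, frame A
seen_by_b = to_shared_frame((-0.2, 1.1, 0.0), offset=(4.0, 0.0, 0.0), yaw=0.0)
print(same_person(seen_by_a, seen_by_b))  # True: one resident, not two
```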

Privacy that doesn't ask anything of residents

At Teton, we have built a system that is privacy-preserving by design, by processing sensor input locally and using Spatial Intelligence. We have taught our AI a rich 3D geometric understanding, and we use it to generate pseudonymized fall animations. Our local processing lets us send alarms in real time, before the consequences of a fall compound. This is the balance we have struck: privacy and dignity for residents, reflected in a 99% adoption rate, alongside proactive care.

See ambient, privacy-first monitoring in action

Schedule a demo to see how Teton's spatial intelligence prevents falls and supports proactive care.

Book a demo