TinyML indoor positioning is a technology that lets everyday devices figure out where you are inside a building—without GPS. This guide will walk you through the basics, show you why it matters, and give you a hands‑on recipe to build your own tiny model that runs on a micro‑controller or a smartphone. Whether you’re a maker, a hobbyist, or a product designer, you’ll find clear steps to turn your Wi‑Fi, Bluetooth, or ultrasound signals into accurate location data.


Why TinyML Indoor Positioning Is a Game‑Changer

Indoor positioning solves a classic problem: GPS signals die inside walls. TinyML models that run on low‑power chips can process the subtle differences in signal timing or strength to pinpoint your location. This opens doors to smart lighting that turns on when you arrive, health‑care rooms that track patient movement, or even museum guides that give audio tours based on where you stand.

Key benefits:

  • Low cost – Uses existing hardware (BLE beacons, Wi‑Fi, or inexpensive ultrasonic modules) so you don’t need a custom sensor network.
  • Fast response – Tiny models run in milliseconds, keeping the experience snappy.
  • Privacy‑first – Data stays on the device; no raw location logs leave your home.

The Core Pieces of a TinyML Indoor Positioning System

Sensors  →  TinyML Model  →  Location Output

1. Sensors

| Type | What it measures | Typical use |
| --- | --- | --- |
| BLE beacons | Received Signal Strength Indicator (RSSI) | Anchor points for distance estimation |
| Wi‑Fi routers | Signal strength, packet timing | Existing infrastructure for passive tracking |
| Ultrasonic modules | Echo time of sound | Precise ranging at short distances |
| IMU (accelerometer, gyroscope) | Motion data | Helps with dead‑reckoning when signals are weak |

2. TinyML Model

TinyML models for positioning are usually lightweight neural nets (a few layers of dense or convolutional units) that take raw signal data and output an estimated position. They’re quantized to 8‑bit integers to fit on microcontrollers with limited memory.

3. Output & Action

Once the model predicts your coordinates, your application can act on them in a few ways (a short sketch follows this list):

  • Trigger lights or fans
  • Provide navigation cues on a mobile app
  • Log movement for analytics (inside the device)
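
For example, the glue between a predicted position and one of these actions can be a few lines of Python; the zone boundaries and the light‑switching stub below are placeholders for your own setup:

# Map a predicted (x, y) position in meters to a named zone and act on it.
ZONES = {
    'desk':  (0.0, 2.5, 0.0, 2.5),   # (x_min, x_max, y_min, y_max)
    'couch': (2.5, 5.0, 2.5, 5.0),
}

def zone_of(x, y):
    for name, (x_min, x_max, y_min, y_max) in ZONES.items():
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return name
    return None

def turn_on_light(lamp):
    print(f"turning on {lamp}")      # replace with a GPIO, MQTT, or HTTP call

if zone_of(1.2, 1.8) == 'desk':
    turn_on_light('desk_lamp')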

Step‑by‑Step: Building a BLE‑Based TinyML Positioning System

Let’s build a simple indoor positioning system using BLE beacons and a Raspberry Pi 4 as our training hub, then port the model to an ESP32.

Step 1 – Set Up the Environment

  1. Hardware

    • 4 BLE beacons (e.g., HM-10 or Estimote) placed at known corners of a room.
    • Raspberry Pi 4 with Raspbian OS.
    • ESP32 dev board for deployment.
  2. Software

    • Python 3.10 on Pi.
    • TensorFlow/Keras for training, plus TensorFlow Lite (tflite) for conversion and on‑device inference.
    • BLE scanner library: bluepy or bleak.

Step 2 – Gather Data

2.1 Create a Ground‑Truth Map

Mark the exact (x, y) coordinates of each beacon on a floor plan. For simplicity, use a 5 m × 5 m square room.
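
In code, this map can be a small lookup of beacon coordinates. A minimal sketch, assuming one beacon in each corner of the 5 m × 5 m room (the MAC addresses are placeholders):

# Ground-truth beacon positions in meters, one beacon per corner of the room.
BEACON_POSITIONS = {
    'aa:bb:cc:dd:ee:01': (0.0, 0.0),
    'aa:bb:cc:dd:ee:02': (5.0, 0.0),
    'aa:bb:cc:dd:ee:03': (0.0, 5.0),
    'aa:bb:cc:dd:ee:04': (5.0, 5.0),
}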

2.2 Scan RSSI Values

Write a quick script that records RSSI readings from each beacon while you stand at each point of a 0.5 m grid across the room. Store each batch of samples with the ground‑truth position of that grid point.

import time
from bluepy.btle import Scanner

# MAC addresses of the four beacons (placeholders; replace with your own)
BEACONS = ['aa:bb:cc:dd:ee:01', 'aa:bb:cc:dd:ee:02',
           'aa:bb:cc:dd:ee:03', 'aa:bb:cc:dd:ee:04']
current_x, current_y = 0.0, 0.0   # ground-truth grid point you are standing on

scanner = Scanner()
samples = []

for _ in range(200):                            # 200 samples at this grid point
    devices = scanner.scan(2.0)                 # scan for 2 seconds
    rssi = {d.addr: d.rssi for d in devices if d.addr in BEACONS}
    samples.append({**rssi, 'x': current_x, 'y': current_y})
    time.sleep(0.5)

# Save `samples` to CSV (one column per beacon plus x and y) after each grid point

Repeat the run at each grid point; you’ll end up with a CSV file where each row contains the four RSSI values (one per beacon) and the true x, y coordinates.

2.3 Pre‑process the Data

Normalize RSSI values (e.g., map −120 dBm to 0, −30 dBm to 1). Scale the target coordinates to [0, 1] as well (divide by the 5 m room size) so they match the rescaling done on the device later. Then split the dataset into training (70 %) and testing (30 %) sets.
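
A minimal preprocessing sketch, assuming the samples were saved to rssi_samples.csv with one column per beacon plus x and y, and using pandas and scikit-learn for convenience:

import pandas as pd
from sklearn.model_selection import train_test_split

ROOM_SIZE = 5.0                         # meters (5 m x 5 m room)

df = pd.read_csv('rssi_samples.csv')
rssi_cols = [c for c in df.columns if c not in ('x', 'y')]

# Map -120 dBm -> 0 and -30 dBm -> 1, clipping anything outside that range
X = ((df[rssi_cols] + 120) / 90.0).clip(0, 1).to_numpy()

# Scale target coordinates to [0, 1] so the on-device code can multiply by 5 m later
y = (df[['x', 'y']] / ROOM_SIZE).to_numpy()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)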

Step 3 – Build the TinyML Model

import tensorflow as tf
from tensorflow.keras import layers, models

def build_position_model():
    inputs = layers.Input(shape=(4,))  # 4 RSSI values
    x = layers.Dense(16, activation='relu')(inputs)
    x = layers.Dense(32, activation='relu')(x)
    x = layers.Dense(64, activation='relu')(x)
    # Two outputs: x and y
    outputs = layers.Dense(2, activation='linear')(x)
    return models.Model(inputs, outputs)

model = build_position_model()
model.compile(optimizer='adam',
              loss='mse',
              metrics=['mae'])

Train the model on the training set, then evaluate it on the held‑out test set. Because the coordinates are scaled to [0, 1], a mean absolute error (MAE) below 0.1 corresponds to an average position error of less than 0.5 m in the 5 m room.
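
A short training‑and‑evaluation sketch, reusing X_train, y_train, X_test, and y_test from the preprocessing step above:

# Train for a fixed number of epochs; tune epochs/batch_size to your dataset size
model.fit(X_train, y_train,
          validation_split=0.1,
          epochs=100,
          batch_size=32,
          verbose=0)

loss, mae = model.evaluate(X_test, y_test, verbose=0)
print(f'Test MAE: {mae:.3f} (normalized) ≈ {mae * 5.0:.2f} m in a 5 m room')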

Step 4 – Quantize for the ESP32

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Dynamic-range quantization: weights are stored as 8-bit integers, inputs/outputs stay float
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('ble_position.tflite', 'wb') as f:
    f.write(tflite_model)

The resulting file is only a few kilobytes, comfortably within the flash and RAM budget of an ESP32.
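
To get the model into the firmware, the .tflite file is usually converted to a C array (for example with xxd -i ble_position.tflite > ble_position.h). A minimal Python equivalent that writes the ble_position_tflite symbol used in the Arduino code below:

# Convert ble_position.tflite into a C header defining ble_position_tflite[]
with open('ble_position.tflite', 'rb') as f:
    data = f.read()

with open('ble_position.h', 'w') as f:
    f.write('const unsigned char ble_position_tflite[] = {\n')
    for i in range(0, len(data), 12):
        line = ', '.join(f'0x{b:02x}' for b in data[i:i + 12])
        f.write(f'  {line},\n')
    f.write('};\n')
    f.write(f'const unsigned int ble_position_tflite_len = {len(data)};\n')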

Step 5 – Deploy to ESP32

  1. Copy the model into the ESP32’s flash memory by compiling the generated C array into your sketch.
  2. Install the TensorFlow Lite Micro library in the Arduino IDE.
  3. Use a BLE library (e.g., ESP32 BLE Arduino’s BLEScan) to read RSSI from nearby beacons.
  4. Run inference with the TFLite model and use the predicted coordinates to drive an LED or a small display.

#include "TensorFlowLite.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/all_ops_resolver.h"

// ble_position_tflite[] is the model converted to a C array (see Step 4)
extern const unsigned char ble_position_tflite[];
const tflite::Model* model = ::tflite::GetModel(ble_position_tflite);
tflite::AllOpsResolver resolver;               // registers every built-in op
const int kArenaSize = 8 * 1024;               // 8 KB of scratch memory for tensors
uint8_t tensor_arena[kArenaSize];
tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kArenaSize);

void setup() {
  Serial.begin(115200);
  interpreter.AllocateTensors();
}

void loop() {
  // readRSSI() and beacon1..beacon4 are placeholders for your BLE scanning code;
  // each call should return the latest RSSI (in dBm) for one known beacon.
  float rssi[4] = { readRSSI(beacon1), readRSSI(beacon2),
                    readRSSI(beacon3), readRSSI(beacon4) };
  // Normalize RSSI
  for (int i = 0; i < 4; i++) rssi[i] = (rssi[i] + 120) / 90.0;

  // Prepare input tensor
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < 4; i++) input->data.f[i] = rssi[i];

  interpreter.Invoke();

  TfLiteTensor* output = interpreter.output(0);
  float x = output->data.f[0] * 5.0; // scale back to meters
  float y = output->data.f[1] * 5.0;

  Serial.printf("Position: (%.2f, %.2f) m\n", x, y);
  delay(500);
}

Now your ESP32 can estimate your position inside the room in real time, using only BLE signals and a tiny 8‑bit neural net.


Adding Accuracy: Fusion with IMU Dead‑Reckoning

When BLE signals are weak (e.g., near walls), you can blend the model’s output with motion data from an IMU attached to the ESP32 (the ESP32 has no built‑in IMU, so a small module such as an MPU‑6050 is typically wired up over I2C). A simple complementary filter can keep the estimate stable:

# Pseudo‑code: complementary filter blending BLE fixes with IMU dead‑reckoning
alpha = 0.8                        # weight given to the BLE/TinyML estimate
prev_x, prev_y = 0.0, 0.0
prev_vx, prev_vy = 0.0, 0.0        # start at rest
prev_time = millis()

while True:
    delta_t = (millis() - prev_time) / 1000.0
    ax, ay, az = readAccel()           # acceleration in the room frame
    vx = prev_vx + ax * delta_t        # integrate acceleration into velocity
    vy = prev_vy + ay * delta_t
    pred_x = prev_x + vx * delta_t     # integrate velocity into position
    pred_y = prev_y + vy * delta_t

    ble_x, ble_y = run_tflite()        # position estimate from the TinyML model
    final_x = alpha * ble_x + (1 - alpha) * pred_x
    final_y = alpha * ble_y + (1 - alpha) * pred_y

    prev_x, prev_y = final_x, final_y
    prev_vx, prev_vy = vx, vy
    prev_time = millis()

The filter keeps the position reasonable even when BLE signals drop out.


Real‑World Use Cases of TinyML Indoor Positioning

| Use Case | What TinyML Does | Why It Matters |
| --- | --- | --- |
| Smart Lighting | Lights turn on only where you are. | Saves energy and adds convenience. |
| Asset Tracking | Find a tool or medicine inside a warehouse. | Reduces loss and speeds up inventory. |
| Museum Guides | Audio tour plays based on proximity to exhibits. | Enhances visitor experience without cables. |
| Home Automation | Doors unlock when a key‑fob is near. | Improves security and ease of access. |

These applications show how tiny models can bring intelligent behavior to everyday devices without breaking the bank or draining batteries.


Best Practices for TinyML Indoor Positioning

  1. Keep the model small – Less than 100 kB is ideal for ESP32.
  2. Regularly recalibrate – Wi‑Fi and BLE signal strength can drift over time.
  3. Use multiple sensor types – BLE, Wi‑Fi, and IMU together provide robustness.
  4. Test in realistic environments – Walls, furniture, and people affect signals.
  5. Respect privacy – Store location data locally; never send raw logs to the cloud.

Future Directions

  • Self‑learning models – Devices that adjust weights on‑the‑fly when new beacons are added.
  • Ultra‑low‑power chips – New MCUs that can run TinyML at 0.1 mW, extending battery life.
  • Standardized beacon protocols – Easier integration across brands.
  • Indoor‑outdoor hybrid positioning – Seamlessly switch between GPS outdoors and TinyML positioning indoors.

Wrap‑Up

TinyML indoor positioning turns ordinary sensors into a powerful navigation system that stays on the edge. By collecting a handful of BLE RSSI samples, training a lightweight neural net, and deploying it on an ESP32, you can build a fast, private, and energy‑friendly positioning solution. Whether you’re creating a smart home, a warehouse tracker, or an interactive museum, these techniques open up a world of possibilities that were once the domain of expensive LIDAR or Wi‑Fi ray‑tracing systems.

If you’re curious to dive deeper, check out our full tutorials and code examples on the Neura AI blog. The community is growing fast, and your next project could be just a few lines of code away.