Smart PIR Motion Detection
Example Summary
The smart_pir_detection example is the first in a series of examples for the CC27xx family of devices that leverage TI’s Edge AI Studio to enable the development of AI applications capable of running efficiently and locally on the device, at the edge.
A Passive Infrared Sensor (PIR) is an electronic sensor that measures infrared (IR) light radiating from objects in its direct field of view. These sensors are commonly used as motion detectors in security alarms and automatic lighting applications.
This example enhances traditional PIR-based motion detection by enabling differentiation between targets based on their motion patterns. Using an AI Deep Learning model, the system can classify motion types based on the PIR sensor’s analog signal.
TI’s Edge AI Studio guides users through the entire AI pipeline process with these key steps:
- Live Sensor Data Acquisition: Enables users to build application-specific datasets by acquiring data directly from sensors through the device. Users can label each dataset instance with the appropriate class and define the number of samples per instance and the sampling frequency.
- Model Selection: Allows users to select a pre-trained neural network architecture and a suitable device for inference based on accuracy, speed, and cost considerations. The tool provides estimates of the model’s footprint in terms of RAM usage, flash usage, and inference time. For this example, the AI model (PIR-net) is pre-trained with a neural architecture proven to provide accurate results for this application.
- Model Training: Enables selection of time- and frequency-based features to be extracted from raw sensor data for training. Users can define training hyperparameters such as the number of epochs and the learning rate. Feature extraction is supported on the MCU side through the feature extraction library (see the sketch after this list).
- Model Compilation: Compiles the trained AI model into efficient C code, leveraging intrinsic instructions supported by the hardware target. The device includes a Neural Processing Unit with Custom Datapath Extension (CDE) - hardware instructions that accelerate computation of quantized deep learning neural network layers, crucial for power-efficient edge execution. Use of the CDE is enabled by default in the example, and the supported set of operations is considered at compilation time to optimize performance.
- Live Inference Results Preview: Allows users to flash the complete application (including the compiled AI model) onto the device and run inference on real input sensor data with real-time result visualization. In this example, the AI model classifies motion as human, dog, or background.
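The time- and frequency-based features mentioned above (and listed in more detail under Project Structure) are computed on the MCU by the feature extraction library. As a rough, self-contained illustration of what some of these features measure, the sketch below computes a zero crossing rate, a slope-sign-change count, and a spectral entropy over a frame of PIR samples. The function names and signatures here are hypothetical and do not represent the actual feature extraction library API.

```c
#include <stddef.h>
#include <stdint.h>
#include <math.h>

/* Zero crossing rate: fraction of consecutive sample pairs whose signs differ. */
static float zeroCrossingRate(const float *frame, size_t len)
{
    size_t crossings = 0;
    for (size_t i = 1; i < len; i++) {
        if ((frame[i - 1] >= 0.0f) != (frame[i] >= 0.0f)) {
            crossings++;
        }
    }
    return (float)crossings / (float)(len - 1);
}

/* Changes in slopes: number of sign changes in the first difference of the signal. */
static uint32_t slopeSignChanges(const float *frame, size_t len)
{
    uint32_t changes = 0;
    for (size_t i = 2; i < len; i++) {
        float d0 = frame[i - 1] - frame[i - 2];
        float d1 = frame[i] - frame[i - 1];
        if ((d0 >= 0.0f) != (d1 >= 0.0f)) {
            changes++;
        }
    }
    return changes;
}

/* Spectral entropy: treat the normalized power spectrum (e.g. from the FFT step)
 * as a probability distribution and compute its Shannon entropy. */
static float spectralEntropy(const float *powerSpectrum, size_t numBins)
{
    float total = 0.0f;
    for (size_t i = 0; i < numBins; i++) {
        total += powerSpectrum[i];
    }
    if (total <= 0.0f) {
        return 0.0f;
    }

    float entropy = 0.0f;
    for (size_t i = 0; i < numBins; i++) {
        float p = powerSpectrum[i] / total;
        if (p > 0.0f) {
            entropy -= p * logf(p);
        }
    }
    return entropy;
}
```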
Edge AI Studio
TI’s Edge AI Studio is a collection of tools that provide a fully integrated graphical user interface solution for collecting and annotating data, and for training and compiling models for deployment on live development platforms. Users can select from a variety of pre-trained models in the TI Model Zoo and optionally re-train them with custom data to improve accuracy and performance.
TI’s Edge AI Studio is available as a cloud-based application for processor devices and as both cloud and desktop applications for microcontrollers. When using a microcontroller, the output from TI’s Edge AI Studio is consumed by the Code Composer Studio™ development environment.
Prerequisites
Please go through the Release Notes to make sure all the required hardware and software dependencies are met.
Project Structure
- ai_artifacts
  - Contains the AI neural network model header file `tvmgen_default.h` and the pre-compiled source code `mod.a` that includes the model structure of the PIR-net. These files are regenerated each time the user trains a new model through TI’s Edge AI Studio.
  - These files can also be regenerated manually inside Code Composer Studio by compiling the model directly with the TI Neural Network Compiler for MCUs.
- feature_extraction
  - `FeatureExtract_config`: Function to initialize `FeatureExtract_Params` based on a model type; configured with default settings for the PIR-net neural network.
  - `FeatureExtract_run`: Function that executes a sequence of feature extraction functions for a specific model type. The following signal processing functions are enabled in the default sequence for PIR detection:
    - Windowing Function: Splits the input signal frame into multiple windows with an overlapping stride.
    - Fast Fourier Transform (FFT): Applies a real FFT with FP32 precision, where the number of bins depends on the sample length of each window.
    - Spectral Entropy: Calculates the randomness within a signal and determines how the energy is spread across the frequency distribution.
    - Zero Crossing Rate: Measures how often a signal crosses the zero axis, indicating the frequency or noisiness of the signal.
    - Changes in Slopes: Measures the sign changes of the signal's first differences.
    - Dominant Frequency: Determines the frequency components with the highest energy based on the energy distribution in the spectrum.
    - Temporal Kurtosis: Measures the impulsive nature of a signal over time, indicating the presence of sharp motion events or outliers.
  - `FeatureExtract_findClass`: Function to determine the appropriate labeled class/motion source based on the inference result of the PIR-net model.
- sensor
  - `Sensor_init`: Function to initialize the Sensor Task, which handles the configuration of the ADC and Timer drivers to start and stop data gathering based on the sampling frequency requested by TI’s Edge AI Studio.
  - `Sensor_getData`: Function to start data sampling and transmit the buffered sensor data back to the main application task.
- smart_detection_pir
  - Main application task that processes input from TI’s Edge AI Studio through the Device Agent Protocol (DAP) to operate in different modes (see the sketch after this list). Currently supported modes include:
    - Live preview of raw sensor data acquisition.
    - Live preview of sensor inferencing.
  - The Device Agent Protocol (DAP) is a UART-based serial communication protocol, specifically designed to enable the interaction between TI’s Edge AI Studio (the host) and supported devices.
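The modules above come together in the main application task roughly as sketched below. This is only a conceptual outline: the function names (`Sensor_init`, `Sensor_getData`, `FeatureExtract_config`, `FeatureExtract_run`, `FeatureExtract_findClass`) are the ones described in this section, but their signatures, the buffer sizes, and the `model_run` entry point used here are placeholder assumptions; the real interfaces are defined in the project sources and in the generated `tvmgen_default.h`.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sizes; the real values come from the SysConfig-generated
 * ti_edgeai_fe_config.h and the compiled model in ai_artifacts. */
#define SENSOR_SAMPLES   512
#define FEATURE_COUNT    64
#define NUM_CLASSES      3   /* human, dog, background */

/* Assumed prototypes for the project functions described above; the real
 * signatures live in the sensor and feature_extraction modules. */
extern void Sensor_init(void);
extern void Sensor_getData(int16_t *buffer, size_t numSamples);
extern void FeatureExtract_config(void *params);
extern void FeatureExtract_run(const int16_t *samples, size_t numSamples,
                               float *features, size_t numFeatures);
extern int  FeatureExtract_findClass(const float *modelOutput, size_t numClasses);

/* Placeholder model entry point; the actual interface of the pre-compiled
 * model is declared in tvmgen_default.h. */
extern void model_run(const float *features, float *output);

void smartPirDetectionLoop(void)
{
    static int16_t samples[SENSOR_SAMPLES];
    static float features[FEATURE_COUNT];
    static float output[NUM_CLASSES];

    Sensor_init();               /* configure ADC and Timer for PIR sampling */
    FeatureExtract_config(NULL); /* NULL: use default PIR-net settings (placeholder signature) */

    for (;;) {
        /* 1. Acquire a frame of raw PIR samples from the ADC. */
        Sensor_getData(samples, SENSOR_SAMPLES);

        /* 2. Extract time- and frequency-domain features from the frame. */
        FeatureExtract_run(samples, SENSOR_SAMPLES, features, FEATURE_COUNT);

        /* 3. Run the compiled PIR-net model on the feature vector. */
        model_run(features, output);

        /* 4. Map the model output to a labeled class: human, dog, or background. */
        int detectedClass = FeatureExtract_findClass(output, NUM_CLASSES);
        (void)detectedClass;     /* e.g. report the result via DAP to Edge AI Studio */
    }
}
```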
Peripherals & Pin Assignments
When this project is built, the SysConfig tool will generate the TI-Drivers configuration into the `ti_drivers_config.c` and `ti_drivers_config.h` files and the TI-EdgeAI configuration into the `ti_edgeai_config.c`, `ti_edgeai_config.h`, and `ti_edgeai_fe_config.h` files. Information on the peripheral configuration, resources, and parameters used is present in the generated files. Additionally, the System Configuration file (*.syscfg) present in the project may be opened with SysConfig’s graphical user interface to determine the resources used.
- `CONFIG_ADCBUF_0` - ADCBuf instance.
- `CONFIG_ADCBUF_0_CHANNEL_0` - ADC channel 0 of the `CONFIG_ADCBUF_0` instance.
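As a minimal sketch of how the `CONFIG_ADCBUF_0` instance above can be used, the following assumes the standard TI Drivers ADCBuf API with a hypothetical frame size and sampling frequency; in this example the equivalent configuration is generated by SysConfig and handled by the sensor task.

```c
#include <stdint.h>
#include <ti/drivers/ADCBuf.h>
#include "ti_drivers_config.h"

#define NUM_SAMPLES        256   /* hypothetical frame size */
#define SAMPLING_FREQ_HZ   400   /* hypothetical PIR sampling rate */

static uint16_t sampleBuffer[NUM_SAMPLES];

void pirSampleOnce(void)
{
    ADCBuf_Handle     adcBuf;
    ADCBuf_Params     params;
    ADCBuf_Conversion conversion = {0};

    ADCBuf_init();   /* normally called once during system initialization */

    ADCBuf_Params_init(&params);
    params.returnMode        = ADCBuf_RETURN_MODE_BLOCKING;
    params.recurrenceMode    = ADCBuf_RECURRENCE_MODE_ONE_SHOT;
    params.samplingFrequency = SAMPLING_FREQ_HZ;

    adcBuf = ADCBuf_open(CONFIG_ADCBUF_0, &params);
    if (adcBuf == NULL) {
        /* ADCBuf_open() failed; handle the error */
        while (1) {}
    }

    conversion.adcChannel            = CONFIG_ADCBUF_0_CHANNEL_0;
    conversion.sampleBuffer          = sampleBuffer;
    conversion.sampleBufferTwo       = NULL;
    conversion.samplesRequestedCount = NUM_SAMPLES;

    /* Blocking conversion: returns once NUM_SAMPLES PIR samples are captured. */
    ADCBuf_convert(adcBuf, &conversion, 1);

    ADCBuf_close(adcBuf);
}
```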
BoosterPacks, Board Resources & Jumper Settings
For board-specific jumper settings, resources, and BoosterPack modifications, refer to the Board.html file.
- The BoosterPack used in this example is the Edge AI Sensor BoosterPack (TIDA-010997).
Example Usage
- Connect the BoosterPack on top of the LaunchPad.
  - This connects the ADC channel defined in SysConfig to the PIR sensor sampling source.
  - Note: Undefined values may be returned if the channel is not connected to a sampling source.
  - Important: Use caution when connecting the pins to analog inputs greater than 3 VDC.
- Connect the LaunchPad to the XDS110 debug probe.
- Connect the XDS110 debug probe to the PC over USB.
- The serial connection will have the following settings by default (a matching UART configuration is sketched at the end of this section):
  - Baud rate: 115200
  - Data bits: 8
  - Stop bits: 1
  - Parity: None
  - Flow Control: None
- Go to TI’s Edge AI Studio.
- Select the serial port and use the default baud rate (115200 bps).
- You are now ready to follow the steps defined by TI’s Edge AI Studio.
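For reference, a UART configuration matching the default serial settings above could be opened as sketched below. This assumes the TI Drivers UART2 API and a hypothetical `CONFIG_UART2_0` instance name; in the example itself the serial port used by the DAP is configured through SysConfig.

```c
#include <ti/drivers/UART2.h>
#include "ti_drivers_config.h"

/* Open the DAP serial link with the default settings listed above:
 * 115200 baud, 8 data bits, 1 stop bit, no parity, no flow control
 * (flow control is left disabled in the SysConfig UART configuration). */
UART2_Handle dapUartOpen(void)
{
    UART2_Params params;

    UART2_Params_init(&params);
    params.baudRate   = 115200;
    params.dataLength = UART2_DataLen_8;
    params.stopBits   = UART2_StopBits_1;
    params.parityType = UART2_Parity_NONE;
    params.readMode   = UART2_Mode_BLOCKING;
    params.writeMode  = UART2_Mode_BLOCKING;

    /* CONFIG_UART2_0 is a placeholder; the actual instance name is defined
     * by the project's SysConfig file. Returns NULL if the port cannot be opened. */
    return UART2_open(CONFIG_UART2_0, &params);
}
```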