INTRODUCTION

Figure 1. Smart Wardrobe Prototype.

BACKGROUND

In traditional IoT scenarios, Specific-Purpose Sensors (SPS) or Distributed Multi-Sensors (DMS) are usually deployed directly on different locations or objects, one per sensing purpose. However, the number of sensing purposes is always in tension with the complexity of networking and deployment. The Integrated Multi-Sensor Tag (IMST) alleviates this problem: devices such as the Texas Instruments SimpleLink SensorTag and Laput's Synthetic Sensors integrate multiple sensors on a small board to indirectly monitor a larger context without directly instrumenting objects. However, due to the limited computing power of an IMST, a large amount of raw data still has to be sent to a remote server over wireless links such as Bluetooth or Wi-Fi, which can lead to processing delays, data leakage, or intrusion.

MOTIVATION

With the rapid increase in the computing power of end devices, running machine learning inference directly on them has become increasingly common. Combined with real-time sensor data, the activities happening in an environment can be analyzed and recognized in real time, which has broad demand and promise in fields such as autonomous driving, sports, and healthcare.

METHOD

In this work, we explore the concept of an Edge Computing Sensor Kit (ECSK) that builds on the prior work above. We designed a new type of hardware that integrates several different types of sensors, including a camera and a microphone, on a small board, and adds a powerful processor and a Tensor Processing Unit so that algorithmic models and applications can run directly on the device instead of on a remote server. Each sensor has its own task channel, and multiple sensors can perform data sampling and feature extraction independently and concurrently. Feature data from the different sensor channels are then combined by a machine-learning-based multi-sensor information fusion algorithm to recognize general activities in the environment (e.g., robotic arm movement) in real time.
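As a minimal sketch of the per-channel concurrency idea described above (not the actual ECSK firmware), the following Python snippet shows independent sensor channels that sample and extract features on their own threads, with a feature-level fusion step that concatenates the latest per-sensor features before a downstream classifier would run. The names read_sample, extract_features, and the sensor list are hypothetical placeholders.

import threading
import queue
import random
import time

SENSORS = ["camera", "microphone", "accelerometer"]  # assumed channel names

def read_sample(sensor):
    # Placeholder for hardware sampling; returns fake raw data.
    return [random.random() for _ in range(8)]

def extract_features(raw):
    # Placeholder feature extraction (e.g., mean and peak of the window).
    return [sum(raw) / len(raw), max(raw)]

def channel_worker(sensor, out_queue, stop_event):
    # Each sensor channel runs its own sampling + feature-extraction loop.
    while not stop_event.is_set():
        out_queue.put((sensor, extract_features(read_sample(sensor))))
        time.sleep(0.1)  # sampling period

def fuse(latest):
    # Feature-level fusion: concatenate per-sensor features into one vector
    # that a classifier (e.g., a model running on the TPU) would consume.
    return [v for s in SENSORS for v in latest.get(s, [0.0, 0.0])]

if __name__ == "__main__":
    out_queue, stop = queue.Queue(), threading.Event()
    workers = [threading.Thread(target=channel_worker, args=(s, out_queue, stop))
               for s in SENSORS]
    for w in workers:
        w.start()
    latest = {}
    for _ in range(30):  # consume a short burst of feature updates
        sensor, features = out_queue.get()
        latest[sensor] = features
        print("fused vector:", fuse(latest))
    stop.set()
    for w in workers:
        w.join()

The design point this illustrates is that each channel can fail or lag without blocking the others; fusion always works from the most recent feature vector available per sensor.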

RESULT

After completing model training, we deployed the system in six real scenarios, choosing a deployment location for the ECSK in each one. In each scenario, the ECSK continuously identified activities in real time, and we kept 50 positive and 50 negative examples per activity to ensure a balanced sample. We computed the confusion matrix, accuracy, precision, recall, and F1 score for the 27 activities across the scenarios, and used the F1 score as the final reference metric. All reported values were rounded.
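To make the relationship between the reported metrics concrete, here is a small Python sketch that derives accuracy, precision, recall, and F1 for a single activity from a binary confusion matrix. The counts are illustrative assumptions, not the study's data.

def metrics(tp, fp, fn, tn):
    # Standard binary-classification metrics from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical counts for one activity with 50 positive and 50 negative samples.
acc, prec, rec, f1 = metrics(tp=46, fp=3, fn=4, tn=47)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")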

KEYWORDS

Edge Computing Sensor Kit, Information Fusion, Machine Learning, Real-time Activity Recognition, Digital Twin, Data Visualization.

MATERIAL

Photos and videos were taken in 2018. Figure 2 is from a trip to Beijing with my teammates for a national science and technology competition. Unfortunately, many interesting moments went unrecorded. Everything in the picture was designed by us.

EXPERIMENT

Figure 2. We participated in the 2018 National Student Intelligent Connected Innovation Competition.

Video 1. Final demo: retrieving clothes from the wardrobe by voice command.

Video 2. One of my teammates testing the mechanical system.