ISSN: 2685-9572 Buletin Ilmiah Sarjana Teknik Elektro
Vol. 8, No. 1, February 2026, pp. 272-293
Application of AI-IoT Technologies to Develop the Smart LED Display Management and Monitoring System for the Laboratory
Trinh Luong Mien, Vu Van Duy, Trinh Thi Huong, Nguyen Trung Dung
Faculty of Electrical-Electronic Engineering, University of Transport and Communications, Hanoi, Vietnam
ARTICLE INFORMATION

Article History:
Received 09 November 2025
Revised 18 January 2026
Accepted 27 February 2026

ABSTRACT

Smart LED display systems are widely used to provide useful information to users, ranging from simple LED screens to complex screen management and monitoring systems involving a large number of diverse devices and capable of integrating modern technologies. This research focuses on developing a smart LED display management and monitoring system for a laboratory using AI-IoT technologies, combining deep learning, computer vision, edge computing, embedded systems, IoT communication (MQTT), and web-based management. The goal is to provide convenience, efficiency, and flexibility for users and managers, enabling easy remote information updates and real-time display on LED screens, while automatically monitoring and accurately counting the number of people entering and leaving the laboratory. The development of the system includes designing an ESP32-based central LED control board and selecting the P2.5 LED modules, the Jetson nano, and the Logitech C505e camera, suitable for low-cost educational research. Subsequently, the article introduces the image processing algorithm for counting people based on YOLOv7 TensorRT inference and develops the web management interface based on the Next.js platform, combined with data communication via the MQTT protocol. The system was then experimentally implemented at the Mitsubishi FA Laboratory at the University of Transport and Communications (UTC). The experimental results show that the Web interface features a grid layout divided into three functional groups, allowing for display content configuration, graphical visualization, and clear status display. It provides networked link-tags for updating date/time, temperature/humidity, and In/Out people counts in real time on both the Web and the LED screen via MQTT/WebSocket protocols. The experimental results also indicate that the proposed algorithm for counting people entering and leaving the laboratory achieves high accuracy, over 90%, under normal, stable lighting conditions. This confirms that the proposed smart LED display system operates efficiently, stably, and reliably, and is suitable for promoting the digital management of laboratories at a low investment cost.

Keywords: LED Display Management; Jetson Nano-ESP32 Kit; Internet of Things (IoT); Artificial Intelligence (AI); Deep Learning (DL); Computer Vision (CV); MQTT Protocol; YOLO-Based Object Detection

Corresponding Author: Nguyen Trung Dung, Faculty of Electrical-Electronic Engineering, University of Transport and Communications, Hanoi, Vietnam. Email: dungnt1@utc.edu.vn
This work is open access under a Creative Commons Attribution-Share Alike 4.0 license.
Document Citation: T. L. Mien, V. V. Duy, T. T. Huong, and N. T. Dung, “Application of AI-IoT Technologies to Develop the Smart LED Display Management and Monitoring System for the Laboratory,” Buletin Ilmiah Sarjana Teknik Elektro, vol. 8, no. 1, pp. 272-293, 2026, DOI: 10.12928/biste.v8i1.15263. | ||
The LED display device is an electronic screen that utilizes Light Emitting Diode (LED) technology to present information in the form of text, images, or dynamic effects. LED display systems provide visual information and allow for content modification according to practical requirements. In the context of Industry 4.0, the application of LED display systems has become increasingly widespread in many fields, including advertising, commerce, warning systems, traffic monitoring, factories, schools, hospitals, and administrative agencies. These systems serve not only as information display devices but are also integrated with numerous modern technologies to enhance user efficiency, such as flexible content display and convenient remote information updates, thereby optimizing costs and improving overall system operational performance.
In recent years, smart LED display systems have become increasingly popular due to their superior advantages, such as energy efficiency and high durability, which facilitate reduced maintenance costs. Additionally, they offer flexible and easy content update capabilities via remote control protocols like Wi-Fi, Bluetooth, or wired networks, enabling the diverse display of information ranging from text, icons, and warnings to real-time sensor data.
Some recent studies have focused on the development of smart LED display systems, ranging from design, fabrication, and testing to the application of modern technologies within these systems. Other studies address the issue of energy efficiency in LED screens by integrating energy-saving controllers, adjusting brightness based on ambient light, and utilizing renewable energy sources such as solar power. The research [1] designed and fabricated a GSM-based mobile digital message display to serve modern public relations purposes, facilitating easy and remote updates of new messages. The research [2] indicated that installing solar panels can reduce the energy consumption of outdoor LEDs by up to 30%. Simultaneously, the trend of integrating artificial intelligence into LED displays is also being emphasized to automate content changes based on data from motion and sound sensors. The research [3] presented a real-time, long-distance display-camera communication system utilizing an LED-DCC clustering scheme to improve VLC signal quality and enhance the reliability of information extraction from cameras. However, this research needs to further improve the channel capacity of the LED-DCC system and implement it in practical applications. The study [4] introduced an automatic smart signage control system based on wireless communication and LED signage to warn drivers at accident-prone locations, using data from cameras detecting traffic conditions and sensors monitoring environmental parameters that may affect traffic. The study [5] presented the design and implementation of an IoT-based door signage system to notify the status of room occupants, helping to reduce the time required to change room status, increase the accuracy of the signage, and eliminate the need for physical contact with the sign. The studies [6] and [7] introduced an LED screen controller design based on FPGA and STM32, enabling easy reception of display data from USB, Ethernet, and SD card interfaces with high data throughput and efficient operation. The study [8] presented an auto-configuring LED display system based on computer vision to determine the position of each LED and communicate via the MQTT protocol, allowing LEDs to function as pixels to display a collective image. The studies [9][10] presented solutions for updating LED bulletin boards using wireless IoT technology with experimental models on Arduino/ESP32 kits, facilitating easy remote interaction and enabling users to quickly share and receive the latest information on mobile devices. The study [11] introduced key concepts regarding hardware and software design to realize the deployment of large-screen LED display systems in stadiums, demonstrating the diverse applicability of LED display systems. The studies [1]-[11] initially yielded positive results, meeting a wide range of applications in LED screen usage. However, these studies mainly focus on the LED display function, and have not yet provided AI-powered people counting functionality, nor have they fully integrated display and monitoring functions into a complete system.
In the digital society of connected things, in addition to the above studies on IoT applications for LED screens, IoT technology is also being studied by scientists for a variety of other practical applications, such as monitoring power loads [12], monitoring electrical equipment in classrooms [13][14], monitoring gas leaks [15][16], monitoring the engine oil level [17], automation and monitoring in agriculture [18]-[20], monitoring temperature/humidity [21]-[24], and monitoring environmental air quality [25]. These studies demonstrate the great potential and practical effectiveness of applying IoT technology in LED screen display monitoring and management systems.
The research [26] presents the application of machine vision techniques and the YOLOv5 algorithm to detect LED faults on display panels by visual inspection using images collected from cameras. Research [27] introduces an overview of integrated deep learning solutions utilizing convolutional neural network (CNN) algorithms, or YOLO, to detect faults in solar panels or to detect road surface cracks [28]. Meanwhile, research [29] develops algorithms based on the YOLOv3 and YOLOv4 models to detect license plates, and research [30]-[32] applies AI based on multi-task cascaded convolutional neural networks or YOLO for vehicle identification and counting, obtaining encouraging initial results that contribute to solving the problems of traffic monitoring, reducing congestion, and ensuring traffic safety.
Research on human object detection and monitoring using AI has been presented in a number of works [33]-[40]. The research [33] applies YOLOv5, and the research [34] applies YOLOv8 and YOLOv11, to detect human objects in search and rescue operations using drones, thereby allowing faster victim localization, or to detect the position of car drivers to help improve driving safety [35]. The research [36] develops a YOLO algorithm to detect people in residential and crowded public areas, helping to improve social order and security. The research [37] presents evaluation results and recommendations for using YOLOv8 and YOLOv5 in various specific applications. The research [38] presents a YOLOv8 application for people counting during festivals, anniversaries, and pilgrimages, helping to ensure social order and security, analyze people flows to find bottlenecks, and report daily on people entering and leaving specific management areas. The research [39] focuses on the process of deploying deep learning models for real-time object detection using YOLOv4; however, the system gives false detections when there is clutter in the background or when objects are small. The research [40] applies AI-IoT technology to allow remote automatic monitoring of student activities in the classroom and sends information to users via the network.
Nowadays, the trend of applying AI technology to monitor and detect objects, while integrating IoT technology for remote communication, is increasingly being researched and widely applied. Therefore, integrating AI-IoT technologies into LED display monitoring and management systems for public areas such as hospitals and schools will bring practical benefits to users, especially in monitoring human actions and counting the number of people in the monitored area, thereby serving many other target tasks. One of those tasks is the digital management of university laboratories, including monitoring equipment, tracking student entry and exit from the laboratory, and displaying this information visually, objectively, and automatically in real time on LED display screens.
This article focuses on the development of a smart LED display management and monitoring system capable of remote control, enabling flexible content display and integrating modern AI-IoT technologies. The system is built upon a hardware platform consisting of an IoT board combined with a Jetson nano embedded computer acting as the central controller, utilizing a camera for on-site image acquisition. The smart LED display management and monitoring system is designed to receive data from various sources, including camera data, sensor signals, and data directly input by users either on-site or remotely via the Web interface.
The novel contributions achieved in this article are as follows: (i) the design of an ESP32-based central LED control board integrated with a Jetson nano edge device, an HD-A4L card, P2.5 LED modules, and a Logitech C505e camera, suitable for low-cost laboratory deployment; (ii) an image processing algorithm for counting people entering and leaving the laboratory based on YOLOv7 TensorRT inference; (iii) a Next.js-based Web management interface communicating with the edge device via MQTT/WebSocket for real-time display control and monitoring; and (iv) the experimental deployment and evaluation of the complete system at the Mitsubishi FA laboratory at UTC.
In addition to the problem statement presented above, the remaining content of the article is organized into the following parts: Part 2 presents the design of the smart LED display management and monitoring system for laboratories; Part 3 presents the image processing algorithm for counting people entering and leaving the laboratory based on YOLOv7 TensorRT inference; Part 4 details the development of the Web management interface for the smart LED display system; Part 5 presents the experimental results and quality assessment of the system at the Mitsubishi FA laboratory. The final part provides the conclusion and future development directions.
The smart LED display management and monitoring system for the laboratory comprises hardware components and software programs designed to process sensor signals and camera data, as well as to control information displayed on an LED screen with dimensions of 1280x320 mm. The smart LED display management and monitoring system for the laboratory possesses the following primary functions: displaying device operating status (indicating whether equipment is active, in standby mode, or experiencing a malfunction); issuing laboratory safety warnings based on data from room temperature sensors and other sensors; continuously updating real-time data from sensors located within the laboratory; and enabling remote control via the network so that users can adjust LED display content using computers or mobile phones. The general block diagram of the smart LED display management and monitoring system for the laboratory is illustrated in Figure 1.
Figure 1. Block diagram of the smart LED display management and monitoring system components
The smart LED display management and monitoring system for the laboratory consists of the following main component blocks.
The system is constructed based on a multi-layer architecture, combining processing at the Web server and processing at the Jetson nano edge device. All components are interconnected via LAN network and the MQTT communication protocol, enabling real-time data exchange. On the operator side, the system provides a Web management block consisting of a Front-end interface (Next.js) and Back-end (API and processing services). This block is responsible for receiving requests for display content input, displaying the number of people in the laboratory, and interacting with the storage area to record activity history.
On the edge computing side, the Jetson nano handles image processing to count people entering and exiting based on collected camera data and controls the P2.5 LED display via the HD-A4L card. The Jetson nano simultaneously deploys a WebService/MQTT client and a deep learning YOLOv7 model optimized by TensorRT. The main component blocks exchange data via MQTT combined with HTTP/API, separated into two distinct data streams: the LED display control stream and the people counting monitoring stream. This architecture allows for a clear separation between the user interface layer, the AI people counting processing layer, and the physical display layer, while leveraging parallel processing capabilities at the edge device.
The separation of the LED display control flow and the laboratory In/Out people counting flow is achieved by splitting them into two subroutines within two application files, which are then installed on a custom-configured Jetson nano. This allows heavy AI inference tasks to run independently of real-time display tasks, thereby avoiding resource contention, reducing latency, and preventing display errors that could affect counting accuracy. This architecture helps the system maintain stable performance and high reliability even under high display load or traffic, while also increasing system scalability and maintainability, as illustrated in Figure 2.
Figure 2. Configuration of data flow separation between the system's main blocks
The central control board is a critical component of the system, ensuring accurate, flexible information display and remote-control capabilities. In this design, the central control board is based on the ESP32, integrated with the Jetson nano and the HD-A4L card to display content on the LED display. The functions of the central control board include: connecting to Wi-Fi or 4G networks, allowing data reception and control of LED display content via remote interfaces such as computers or mobile applications; acquiring and processing input sensor signals; communicating with the Jetson nano; outputting control signals to the LED screen; and enabling automatic real-time information display and updates on the LED. The controller design must ensure stability, scalability, and performance optimization for the system to operate effectively under practical usage conditions. The combination of hardware components – the ESP32-based central control board, Jetson nano, HD-A4L card, and LED display panels – allows for simultaneous fulfillment of requirements for space, display capabilities, flexible control, and processing power, thereby demonstrating the suitability and feasibility of the system for practical deployment in a laboratory with low investment costs. The central control board for the smart LED display management and monitoring system in this study is developed based on the ESP32, with the schematic diagram presented in Figure 3.
Figure 3. Schematic diagram of the central controller for LED panels
Based on this schematic diagram, the Printed Circuit Board (PCB) for the central control board of the smart LED display management and monitoring system is obtained, as shown in Figure 4 and Figure 5. The ESP32-based central control board acts as an embedded IoT station node, performing low-level control, sensor data collection, and I/O port management. This ESP32-based central control board communicates with the Jetson nano via UART and/or MQTT; it integrates closely with the Jetson nano – acting as an edge AI device – which performs AI inference (laboratory In/Out people counting), processes high-level logic, and decides on the appropriate LED display content. This creates a comprehensive, scalable, and reliable system architecture with clearly separated functions and low investment costs.
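For illustration, a minimal Python sketch of the Jetson-side UART listener for the ESP32 board is given below. The serial port, baud rate, and JSON line format are assumptions for this example and do not reflect the exact firmware protocol used by the board.

```python
# Sketch of a Jetson-side UART listener for the ESP32 control board.
# Port name, baud rate and the JSON line format are illustrative assumptions.
import json
import serial  # pyserial

PORT = "/dev/ttyTHS1"   # hypothetical UART port wired to the ESP32
BAUD = 115200

def read_sensor_frames():
    """Yield sensor frames (e.g. temperature/humidity) sent by the ESP32 as JSON lines."""
    with serial.Serial(PORT, BAUD, timeout=1.0) as link:
        while True:
            raw = link.readline().decode("utf-8", errors="ignore").strip()
            if not raw:
                continue                    # read timeout, no data this cycle
            try:
                frame = json.loads(raw)     # e.g. {"temp": 27.5, "hum": 61}
            except json.JSONDecodeError:
                continue                    # skip corrupted frames
            yield frame

if __name__ == "__main__":
    for frame in read_sensor_frames():
        print("sensor frame from ESP32:", frame)
```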
Figure 4. The 2D PCB layout of the central controller for LED panels
Figure 5. The 3D PCB view of the central controller for LED panels
The design requires an LED display screen with dimensions of 1280x320 mm, as shown in Figure 6. The system utilizes P2.5 LED modules with individual dimensions of 320x160 mm. The P2.5 LED display was chosen because of its small pixel pitch and high display density, ensuring clear content and text at the short viewing distances characteristic of the laboratory. Its flexible module size allows for easy placement in limited spaces while maintaining aesthetics. Consequently, the configuration requires 4 modules along the horizontal axis and 2 modules along the vertical axis, for a total of 8 P2.5 LED modules.
Figure 6. P2.5 LED panel module
Key specifications of the P2.5 LED module
In this study, the HD-A4L card is selected as the LED display controller, as shown in Figure 7. The HD-A4L is a versatile LED multimedia player designed for digital signage, providing both offline and online (HDMI) control capabilities, along with diverse connectivity options (LAN, Wi-Fi, USB, HDMI). It operates on a 5V/12V DC power supply. This device manages video, image, and text content via the Android operating system and features cloud and mobile management capabilities. The HD-A4L control card was selected because it meets the system requirements due to its stable LED control, flexible configuration, and convenient network communication. This allows for remote content updates and easy integration with a central management platform, meeting the need for real-time information display in the laboratory environment.
Figure 7. LED display multimedia player HD-A4L
Key specifications of the card HD-A4L
The image processing device selected for this study is the Jetson nano, shown in Figure 8. The Jetson nano enables image processing, recognition, and the counting of people entering and exiting the laboratory using the image processing program developed in this research. The Jetson nano was chosen because it provides sufficiently powerful AI processing capabilities for computer vision tasks such as people detection and counting, while its compact size, low power consumption, and reasonable cost make it ideal for deploying Edge AI in the laboratory without requiring a large server infrastructure.
Figure 8. Jetson nano developer kit
The main specifications of the Jetson nano are as follows:
In this system, the Logitech C505e HD camera is selected to facilitate monitoring and to count individuals entering and exiting the laboratory. The basic specifications of the Logitech C505e HD camera are as follows.
The Logitech C505e camera was chosen because it provides stable image quality and HD resolution sufficient to meet the YOLOv7 human detection requirements in the laboratory, maintaining accurate bounding boxes and reducing errors in tracking and counting people. It also offers good compatibility with the Jetson nano via USB plug-and-play, requiring no complex driver installation, enabling quick deployment, reduced integration costs, and increased operational reliability.
To execute image processing functions and count the number of people entering and exiting the laboratory, the smart LED display management and monitoring system is equipped with Jetson nano quad-core ARM Cortex-A57 hardware and a Logitech C505e HD Webcam, with technical specifications as detailed in Part 2. The camera is tasked with monitoring and acquiring video footage of individuals entering and exiting the laboratory, subsequently transmitting this data to the Jetson nano. Upon receipt, the Jetson nano processes the data by executing the AI model developed in this study to detect and count the number of people entering and exiting the laboratory. The image processing program for counting people is organized into specific sub-modules, comprising: the image acquisition and pre-processing module, the inference module using the YOLOv7 TensorRT model, the post-processing module, and the people counting module.
The image pre-processing module utilizes OpenCV to continuously capture frames from the connected camera at an appropriate frame rate (approximately 10–20 fps). Each frame undergoes the following pre-processing steps: conversion from the BGR to the RGB color space, resizing to the model input size of 640x640 pixels, and normalization of pixel values, as sketched below.
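A minimal Python sketch of this capture and pre-processing step is shown below, assuming a [0, 1] normalization and an NCHW input layout; letterboxing and other implementation details may differ from the authors' exact code.

```python
# Minimal pre-processing sketch consistent with the steps described above
# (camera capture, BGR->RGB, resize to 640x640, normalisation). The NCHW
# layout and [0, 1] scaling are assumptions for illustration.
import cv2
import numpy as np

def grab_and_preprocess(cap, size=640):
    ok, frame_bgr = cap.read()
    if not ok:
        return None, None
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (size, size))
    blob = resized.astype(np.float32) / 255.0          # normalise pixel values
    blob = np.transpose(blob, (2, 0, 1))[None, ...]    # HWC -> NCHW for the engine
    return frame_bgr, np.ascontiguousarray(blob)

cap = cv2.VideoCapture(0)        # Logitech C505e as the default USB camera
cap.set(cv2.CAP_PROP_FPS, 15)    # target the 10-20 fps range mentioned above
```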
The YOLOv7 model is pre-trained/fine-tuned, subsequently converted into the TensorRT engine format. The inference module loads this TensorRT engine into the GPU of the Jetson nano to perform calculations for each pre-processed frame. The inference results include a list of bounding boxes, confidence scores, and class labels for each detected object.
The post-processing module filters bounding boxes labeled "person" with a confidence score exceeding a predefined threshold (set at 0.8). Non-maximum suppression (NMS) algorithms are applied to eliminate duplicate bounding boxes. The image processing module is organized as a continuous loop; however, the interval between inference cycles can be configured to balance processing speed and system resources.
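The filtering and NMS step can be sketched in Python as follows; the per-detection tuple layout and the IoU threshold are assumptions for illustration, while the 0.8 confidence threshold follows the value stated above.

```python
# Post-processing sketch: keep "person" detections above the 0.8 confidence
# threshold and suppress duplicates with OpenCV's NMS. The detection tuple
# layout (x, y, w, h, score, class_id) and the IoU threshold are assumptions.
import cv2
import numpy as np

PERSON_CLASS_ID = 0       # index of the "person" class in the COCO label set
CONF_THRESHOLD = 0.8      # confidence threshold used in this work
NMS_IOU_THRESHOLD = 0.45  # assumed IoU threshold for non-maximum suppression

def filter_person_boxes(detections):
    boxes, scores = [], []
    for (x, y, w, h, score, class_id) in detections:
        if class_id == PERSON_CLASS_ID and score >= CONF_THRESHOLD:
            boxes.append([int(x), int(y), int(w), int(h)])
            scores.append(float(score))
    if not boxes:
        return []
    keep = cv2.dnn.NMSBoxes(boxes, scores, CONF_THRESHOLD, NMS_IOU_THRESHOLD)
    keep = [int(i) for i in np.asarray(keep).reshape(-1)]
    return [(boxes[i], scores[i]) for i in keep]
```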
The objective is to count the number of people entering and exiting the laboratory in real-time. Input data consists of a video frame sequence from a camera positioned facing downward towards the laboratory entrance. Each frame undergoes a processing chain: OpenCV pre-processing, human detection inference using the TensorRT-optimized YOLOv7 model, followed by multi-target tracking to maintain trajectory identity. In-Out event recording is based on a line-crossing rule within a fixed Region of Interest (ROI), thereby minimizing noise outside the doorway area.
In the image coordinate system, a counting line $L$ is established parallel to the laboratory door threshold, passing through a reference point $q$ and having a unit normal vector $\mathbf{n}$ pointing toward the inside of the ROI. For each tracked human target with a specific ID, the representative point is defined as the "foot" of the bounding box (the coordinates of the midpoint of the bottom edge) at time step $k$, denoted as $p_k = (x_k, y_k)$. The "line crossing" event is registered when the sign of the relative position of $p_k$ with respect to $L$ changes between two consecutive samples, and the displacement exceeds a minimum threshold within a corridor of width $d$ surrounding the line; this mechanism is implemented to eliminate jitter or fluctuations near the line edge. The direction of movement is determined by the motion vector $\Delta p_k = p_k - p_{k-1}$ relative to the line's normal vector: movement of a target from outside the Region of Interest (ROI) to the inside is labeled as "In", while movement in the opposite direction is labeled as "Out". Each unique ID is recorded for a maximum of one event per crossing instance, utilizing an "already counted" flag mechanism and a debouncing cycle based on a fixed number of frames, as can be seen in Figure 9.

Figure 9. Illustration of the laboratory In/Out counting zone

The signed distance from the point $p_k$ to the line $L$ is calculated along the normal vector:

$s_k = \mathbf{n} \cdot (p_k - q)$ (1)

Sign of the relative position:

$\sigma_k = \operatorname{sign}(s_k)$ (2)

$\sigma_k = +1$: The point lies on one side of the counting line (inside the ROI).
$\sigma_k = -1$: The point lies on the opposite side of the counting line (outside the ROI).
$\sigma_k = 0$: The point lies exactly on the counting line (rare due to fragmented images).

Condition for "crossing the line" within the corridor $C$. The counting corridor of thickness (width) $d$ surrounding the line is described by:

$C = \{\, p : |\mathbf{n} \cdot (p - q)| \le d \,\}$ (3)

Between two consecutive frames $(k-1)$ and $k$, the "crossing the line" event of a tracked ID is only accepted if it simultaneously satisfies the following conditions:

$\sigma_{k-1} \cdot \sigma_k < 0$ (4)

$p_{k-1} \in C, \quad p_k \in C$ (5)

$|\Delta s_k| \ge \varepsilon_{\min}$ (6)

Displacement vector between two frames:

$\Delta p_k = p_k - p_{k-1}$ (7)

Displacement component along the normal direction:

$\Delta s_k = \mathbf{n} \cdot \Delta p_k = s_k - s_{k-1}$ (8)

If $\Delta s_k > 0$: Movement is from outside to inside the ROI (labeled In). If $\Delta s_k < 0$: Movement is from inside to outside the ROI (labeled Out).

Furthermore, by knowing the number of people entering and exiting the laboratory, we can also estimate the number of people present in the laboratory in real time. Let $In(t)$ and $Out(t)$ be the number of entries and exits recorded in the time interval $(t - \Delta t, t]$. The quantity of people present (Occupancy) at time $t$ is updated according to the difference equation:

$Occ(t) = Occ(t - \Delta t) + In(t) - Out(t)$ (9)

The value $Occ(0)$ is initialized according to the initial state (default is 0 or as confirmed by the operator). To limit accumulated drift due to counting errors, the system allows for scheduled re-calibration (e.g., at midnight) or upon manual confirmation, as can be seen in Figure 10.
Figure 10. Determination of the direction of people entering and exiting the laboratory
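For illustration, the crossing test of Eqs. (1)-(8) and the occupancy update of Eq. (9) can be sketched in Python as follows. The line parameters $q$ and $\mathbf{n}$ are example values, and the corridor half-width and minimum displacement reuse the 12 px and 10 px settings reported later as assumptions.

```python
# Sketch of the line-crossing test of Eqs. (1)-(8) and occupancy update of Eq. (9).
# q is a point on the counting line, n its unit normal pointing into the ROI
# (laboratory side); both, and the thresholds, are illustrative configuration values.
import numpy as np

q = np.array([320.0, 240.0])   # point on the counting line (example coordinates)
n = np.array([0.0, -1.0])      # unit normal toward the inside of the ROI
D_CORRIDOR = 12.0              # corridor half-width d in pixels (Eq. (3))
EPS_MIN = 10.0                 # minimum displacement threshold in pixels (Eq. (6))

def signed_distance(p):
    return float(np.dot(n, np.asarray(p, dtype=float) - q))       # Eq. (1)

def crossing_event(p_prev, p_curr):
    s_prev, s_curr = signed_distance(p_prev), signed_distance(p_curr)
    in_corridor = abs(s_prev) <= D_CORRIDOR and abs(s_curr) <= D_CORRIDOR  # Eqs. (3), (5)
    sign_change = np.sign(s_prev) * np.sign(s_curr) < 0                    # Eqs. (2), (4)
    delta_s = s_curr - s_prev                                              # Eqs. (7), (8)
    if in_corridor and sign_change and abs(delta_s) >= EPS_MIN:            # Eq. (6)
        return "In" if delta_s > 0 else "Out"
    return None                    # no valid crossing between these two frames

def update_occupancy(occ, n_in, n_out):
    return occ + n_in - n_out      # Eq. (9)
```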
At each sampling instance, the process initiates by acquiring a frame from the camera. Subsequently, the image undergoes BGR-to-RGB color space conversion, resizing to the model's input dimensions of 640x640 pixels, and pixel value normalization. The pre-processed frame is fed into the YOLOv7 TensorRT engine; the output consists of a set of bounding boxes accompanied by class labels and confidence scores. Detections labeled "person" are retained, and non-maximum suppression (NMS) is applied to eliminate duplicate detections. The detection results serve as input observations for the tracking module (utilizing DeepSORT) to assign and maintain stable identities (IDs) for each individual over time. For each active trajectory, the system updates the new foot point coordinates and evaluates the conditions for crossing the corridor surrounding the counting line, verifying the change in side and ensuring the minimum displacement condition is met. If these conditions are satisfied, an "In" or "Out" event is recorded, and the total entry count In(t), exit count Out(t), and current occupancy Occ(t) are updated. Periodic updates occur every Δt (1–2 seconds) or immediately upon a state change; the variables In, Out, and Occ, along with timestamps, are transmitted to the LED display. Simultaneously, event records (containing track ID, In/Out status, people count, and timestamp) are archived in a log file.
Regarding error control, in cases of temporary tracking loss due to brief occlusion, the identity is maintained via a Time-To-Live (TTL) parameter spanning a specific number of frames. If the TTL expires without a matching observation, the trajectory is terminated without generating a false event. Furthermore, in the event of an interruption in the Jetson's connection to the camera or the broker, the process automatically attempts to reconnect and logs the intervals of data loss. Here is the Pseudo-Code for implementing the proposed algorithm for counting the number of people In/Out the laboratory.
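The pseudo-code is summarised below as a Python-style sketch of the loop described above. The TensorRT engine wrapper (Yolov7TRT), the DeepSORT tracker object, the broker address, and the topic name are placeholders rather than the authors' exact implementation; the helper functions come from the earlier sketches, and the paho-mqtt 1.x callback API is assumed.

```python
# Python-style sketch of the In/Out counting loop. The detector and tracker objects,
# broker address and topic are placeholders; grab_and_preprocess, filter_person_boxes,
# crossing_event and update_occupancy are the helper sketches given earlier.
import json
import time
import cv2
import paho.mqtt.client as mqtt   # paho-mqtt 1.x callback API assumed

def counting_loop(detector, tracker, broker="broker.local", topic="lab/people"):
    cap = cv2.VideoCapture(0)
    mqttc = mqtt.Client(client_id="jetson-counter")
    mqttc.connect(broker, 1883)
    mqttc.loop_start()

    counts = {"in": 0, "out": 0, "occ": 0}
    counted_ids = set()                        # "already counted" flags per track ID

    while True:
        frame_bgr, blob = grab_and_preprocess(cap)
        if blob is None:
            continue                            # camera glitch: skip this cycle
        detections = detector.infer(blob)       # bounding boxes, scores, class ids
        persons = filter_person_boxes(detections)
        tracks = tracker.update(persons, frame_bgr)   # (track_id, foot_prev, foot_curr)
        for track_id, p_prev, p_curr in tracks:
            event = crossing_event(p_prev, p_curr)
            if event and track_id not in counted_ids:
                counted_ids.add(track_id)       # debounce: one event per crossing
                counts["in" if event == "In" else "out"] += 1
                counts["occ"] = update_occupancy(
                    counts["occ"], int(event == "In"), int(event == "Out"))
        mqttc.publish(topic, json.dumps({**counts, "ts": time.time()}), qos=1)
        time.sleep(0.05)                        # yield; effective rate is set by inference
```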
The Web management block plays a central role in user interaction and data coordination between the display system and the image processing system, as shown in Figure 11. It is developed based on the Next.js platform, integrating both the Front-end and Back-End within a unified architecture. The Web management block operates according to a sequential process initiating with application startup. Upon user access, the system first reads the necessary configuration information, including the laboratory identifier (LabID), the MQTT broker address, the communication topics, and a list of default effects and colors.
After configuration, the system proceeds to initialize the user interface with main menus such as Dashboard, Display control, History, and Configuration. Simultaneously, initial state variables are established with the current people count set to 0. Subsequently, the system establishes a connection to the MQTT broker via WebSocket. Upon successful connection, the system subscribes to the relevant topics, specifically the people counting topic, and transitions into an event-loop operational mode. Within this loop, the system continuously processes two parallel data streams. The first stream handles user operations on the interface, such as viewing the dashboard, controlling the display, viewing history, or changing the configuration; when the user sends a control command, the system checks the validity of the data before packaging it into JSON format and sending it to the Back-End or publishing it directly to MQTT.
The Web management block also processes MQTT messages received from the people counting system. Whenever there is a new message about the number of people, the system will analyze the data, update the current status and add it to the history, then automatically update the Dashboard interface so that users can see the latest information. Throughout operation, the system continuously monitors the MQTT connection status. If a disconnection is detected, the system updates the interface to notify the user and automatically attempts to reconnect. Finally, when the user closes the application, the system performs cleanup operations, such as disconnecting MQTT and releasing resources, before terminating completely.
Figure 11. Web dashboard management for configuring LED panels
The Front-end is built on React/Next.js, operating primarily on the client side (browser), and is responsible for the user-facing functions of the system: rendering the management interface and updating the real-time data received via MQTT/WebSocket.
By leveraging Next.js server-side rendering (SSR) capabilities for static pages, the interface ensures fast loading speeds and user-friendliness, while dynamic (real-time) components are updated via MQTT/ WebSocket on the client side.
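Because the dashboard subscribes to the counting topic over MQTT/WebSocket, the same subscription can be exercised from a small Python test client during debugging, for example with paho-mqtt's WebSocket transport. The broker host, port, WebSocket path, and topic below are assumptions, and the paho-mqtt 1.x callback style is assumed; the actual dashboard client remains the Next.js application.

```python
# Test client mimicking the dashboard's MQTT-over-WebSocket subscription.
# Broker address, port, WebSocket path and topic are illustrative assumptions.
import json
import paho.mqtt.client as mqtt   # paho-mqtt 1.x callback API assumed

TOPIC_PEOPLE = "lab/mitsubishi-fa/people"     # hypothetical people-count topic

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC_PEOPLE, qos=1)

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    print(f"In={payload.get('in')}  Out={payload.get('out')}  Occ={payload.get('occ')}")

client = mqtt.Client(transport="websockets")  # same transport the browser client uses
client.ws_set_options(path="/mqtt")           # hypothetical WebSocket path on the broker
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.local", 9001)          # hypothetical WebSocket listener port
client.loop_forever()
```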
The Back-End of the Web management block is implemented utilizing Next.js API Routes, which receive requests from the Front-end and coordinate data exchange with the edge device and the activity-history storage.
By leveraging Next.js's “full-stack” model, Front-End and Back-End are placed in the same project, simplifying deployment and synchronizing data types (types, schemas) between the two sides.
The design of the Web management interface for the smart LED display system focuses on simplicity and intuitiveness to serve effective monitoring. It prioritizes minimizing the steps required to display a notification. Error messages and warnings are presented clearly to enable rapid recognition by operators.
The Web management block establishes data communication with edge computing devices via the MQTT protocol. This ensures timely data updates and satisfies the system's real-time requirements. Figure 12 illustrates the data linkage between the Jetson nano edge device and the Web application.
Figure 12. MQTT integration linking the Edge device and the Web application
The integration of MQTT is designed around clearly defined topics and standardized JSON payloads, keeping the people-counting telemetry stream separate from the LED display control stream.
With this design, the Web management block serves not only as a user interface but also as an intermediate node coordinating information between the operator, the AI system on the Jetson nano, and the LED display board, ensuring data consistency and real-time responsiveness.
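An illustrative publish/subscribe convention for the two data streams on the Jetson side is sketched below. Topic names, QoS levels, and payload fields are assumptions for illustration, not the exact schema used by the system.

```python
# Illustrative publish/subscribe convention for the two data streams
# (people-count telemetry and LED display commands). Topic names, QoS and
# payload fields are assumptions, not the exact schema used by the system.
import json
import paho.mqtt.client as mqtt   # paho-mqtt 1.x callback API assumed

BROKER = "broker.local"
TOPIC_PEOPLE = "lab/mitsubishi-fa/people"     # Jetson -> Web (telemetry)
TOPIC_DISPLAY = "lab/mitsubishi-fa/display"   # Web -> Jetson/LED (commands)

client = mqtt.Client(client_id="jetson-edge")
client.connect(BROKER, 1883)
client.loop_start()

def publish_counts(n_in, n_out, occ):
    """Publish the latest In/Out/Occ values; retained so the dashboard sees the last state."""
    payload = {"in": n_in, "out": n_out, "occ": occ}
    client.publish(TOPIC_PEOPLE, json.dumps(payload), qos=1, retain=True)

def on_display_command(client_, userdata, msg):
    cmd = json.loads(msg.payload)             # e.g. {"text": "...", "effect": "scroll"}
    print("apply to LED via HD-A4L:", cmd)    # hand over to the display subroutine

client.subscribe(TOPIC_DISPLAY, qos=1)
client.message_callback_add(TOPIC_DISPLAY, on_display_command)
```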
The installation of the smart LED display management and monitoring system was conducted at the Mitsubishi FA laboratory at UTC, comprising the components illustrated in Figure 13. The specific components and devices are as follows: (1) LED matrix panels; (2) network equipment cabinet and HD-A4L card for controlling data display (video, images, text, etc.) on the LED panels; (3) Jetson nano control cabinet and central control board; (4) Web management program running on the computer; (5) Logitech C505e HD camera installed under the laboratory door frame to count the number of people entering and exiting the laboratory. The system is designed within a unified architecture, integrating the LED control Web management program and AI edge-device processing for real-time In/Out people counting. It incorporates standardized MQTT/JSON communication, feedback loops, and professional operational metrics (median/p95, ID-switch), allowing for rapid deployment, competitive pricing, and easy scalability for managing multiple laboratories.
Figure 13. Installation of the smart LED display management and monitoring system at the Mitsubishi FA laboratory
The testing of the proposed algorithm for counting the number of people entering and exiting the laboratory was conducted on Jetson nano hardware, utilizing real-world data collected from the Logitech C505e HD camera installed at the laboratory entrance. The testing process for the people counting algorithm was conducted under three different scenarios: Scenario 1: normal In/Out flow with normal lighting; Scenario 2: crowded In/Out flow with potential trajectory intersections and normal lighting; Scenario 3: normal In/Out flow with variable lighting conditions. Initialization of the ROI (blue) and counting line (yellow) is aligned with the door position; the displayed FPS ranges from ~10.5 to 10.9, with an inference time (Inf) of ~59 ms per inference (using the TensorRT engine), as can be seen in Figure 14.
The overlay results align with the logic of the "detection – tracking – line crossing" algorithm. Error rates increase in test scenarios involving intersecting paths or lighting variations due to ID switching and confidence fluctuations. Parameters such as a corridor of ±12 px, a minimum displacement of 10 px, and a Time-To-Live (TTL) of 15 frames significantly reduced over-counting and under-counting, as can be seen in Figure 15. In/Out counting accuracy was evaluated based on the number of correct counts compared to ground truth labels in standard recording sessions. The accuracy metric is defined as follows:
$\text{Accuracy} = \dfrac{N_{\text{correctly counted In/Out events}}}{N_{\text{ground-truth In/Out events}}} \times 100\%$ (10)
Figure 14. ROI & Counting line frame prior to target detection
Figure 15. Detection overlay, trajectory & line crossing events
For occupancy (number of people present), errors are reported using the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) between the estimated and the actual count during the testing period. Additionally, end-to-end latency (from frame capture to Web display) and inference speed (FPS) on the Jetson nano are two critical performance metrics. Table 1 provides measurement results for 30-minute sessions per scenario. In cases of crowding or prolonged occlusion, the system also records the ID-switch rate as an indicator of tracker stability.
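For reference, these metrics can be computed from the logged events and ground-truth labels with a few lines of Python; the numbers in the usage example are hypothetical and are not the values reported in Table 1.

```python
# Sketch of the evaluation metrics: counting accuracy per Eq. (10) and
# MAE/RMSE between estimated and ground-truth occupancy over a session.
import math

def counting_accuracy(correct_events, ground_truth_events):
    return 100.0 * correct_events / ground_truth_events          # Eq. (10)

def occupancy_errors(estimated, actual):
    diffs = [e - a for e, a in zip(estimated, actual)]
    mae = sum(abs(d) for d in diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mae, rmse

# Usage with hypothetical numbers (not the session data reported in Table 1):
print(counting_accuracy(29, 31))                     # ~93.5 %
print(occupancy_errors([3, 4, 4, 5], [3, 4, 5, 5]))  # (MAE, RMSE)
```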
Table 1 shows that the laboratory In/Out people counting accuracy consistently remained above 90% in all three scenarios (95.8%, 90.6%, 92.1%). In Scenario 2 (crowded, intersecting trajectories), performance decreased, with FPS dropping from 16.2 to 13.1, latency increasing from 420 ms to 560 ms, and the ID-switch rate increasing to 4.7%, but the system still achieved 90.6% accuracy. The errors arising from prolonged occlusion or people standing still near the counting line are directly related to the chosen system configuration: the counting corridor width (±12 px) helps eliminate spurious oscillations when objects move close to the edge but can slow event recording when objects stop for a long time in this area, while TTL = 15 frames allows identification to be maintained during short occlusions but still leads to track termination and an increased ID-switch rate if the occlusion time exceeds the allowed threshold.
Table 1. Accuracy of counting people In/Out & number of people present Occ
Scenario | Average FPS | End-to-end delay | Accuracy | MAE (Occ) | RMSE (Occ) | ID-switch rate |
Scenario 1 | 16.2 | 420ms | 95.8% | 0.6 | 0.9 | 1.2% |
Scenario 2 | 13.1 | 560ms | 90.6% | 1.3 | 1.8 | 4.7% |
Scenario 3 | 12.4 | 640ms | 92.1% | 1.1 | 1.6 | 3.1% |
Experimental results with the proposed algorithm demonstrate fast and accurate human detection, even when multiple individuals appear simultaneously. It accurately tracks individual movement across frames, assigns unique IDs to each object, and avoids duplicate counting when individuals move slowly or stand still. This ensures the system maintains high accuracy, achieving over 90%. Errors primarily occur when individuals occlude each other or remain stationary within the counting corridor for extended periods. The use of a finite-thickness corridor and minimum displacement conditions helps eliminate false fluctuations caused by image jitter. Tracking techniques with re-identification (re-ID) reduce ID switching during short occlusions; for long occlusions, selecting a downward-angled camera position and a sufficiently wide ROI are decisive factors. The mechanism of scheduled recalibration and manual confirmation helps eliminate accumulated drift during long-term operation without affecting the real-time experience. This indicates that the proposed algorithm/ program meets the practical requirements for the laboratory management and monitoring.
The system operation testing revealed the performance results of the Jetson nano, HD-A4L, and P2.5 LED devices, as shown in Figure 16. The Jtop parameter results for the Jetson nano show GPU usage at a high level of ~80–98%, a clock speed of ~921 MHz, CPU usage fluctuating with the tracking load, a GPU temperature remaining safe at ~38–50°C, a CPU temperature remaining safe at ~41–51°C, no observed throttling, and system-wide power consumption of ~6.7–7.8 W; the MQTT daemon and Python inference processes remained stable. This demonstrates that the proposed hardware system achieves good energy efficiency and confirms that the hardware is suitable, operates stably, and is durable for the AI-IoT pipeline under real testing conditions.
Figure 16. Jtop – Jetson nano system load during inference
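The same load and temperature figures can also be read programmatically with the jetson-stats (jtop) Python API used by the Jtop tool; the sketch below is an assumption-level example, and the exact field names in the stats dictionary vary between jetson-stats versions.

```python
# Sketch of reading Jetson nano load/temperature figures with the jetson-stats
# (jtop) Python API. Field names in jetson.stats vary between versions.
from jtop import jtop

with jtop() as jetson:
    while jetson.ok():                 # blocks until the next stats update
        stats = jetson.stats           # dict with CPU/GPU load, temperatures, power
        sample = {k: v for k, v in stats.items() if "Temp" in k or k == "GPU"}
        print(sample)                  # e.g. GPU load and GPU/CPU temperatures
```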
Some real-world LED display results at the Mitsubishi FA laboratory are described in Figure 17, consisting of the following main information fields: laboratory information & status, current time (hour, date), room temperature/humidity, and the number of people entering/exiting the laboratory. The display pipeline achieves a latency of under 0.65 s even with images included; the Jetson nano maintains 9–15 FPS with safe temperatures and low power consumption; and the HD-A4L successfully receives and applies the program, as can be seen in Table 2.
Figure 17. Result of laboratory information & In-Out people count display on the LED panel
Table 2. System hardware performance evaluation
Index (HW) | Scenario 1 | Scenario 2 | Scenario 3 |
FPS Inference (YOLOv7-TRT) | 16.2 | 13.1 | 12.4 |
Temperature GPU (°C) | 60 | 63 | 64 |
Estimated Power (W) | 7.8 | 8.5 | 8.7 |
Successful MQTT Publish (Jetson→Broker) | 100% | 100% | 100% |
Latency Web→LED (text) median/p95 | 310/520 ms | 320/540 ms | 330/560 ms |
Latency Web→LED (128×128 image) median/p95 | 390/640 ms | 410/660 ms | 420/680 ms |
Display updated/OK (HD-A4L) | 100% | 100% | 100% |
The web interface layout is organized in an intuitive grid format, clearly separating the status monitoring area, the LED content preview area, and the control configuration block. This allows operators to quickly grasp important information and perform operations with minimal steps. This layout effectively supports real-time management and reduces errors during display configuration. The web interface windows display the status of each board: active (green), maintenance (yellow), and inactive (red). Each tag provides location, system time, temperature/humidity, and real-time In/Out people counters. All data tags are updated via MQTT/WebSocket on the web interface, assisting managers in effectively operating and monitoring the system, as can be seen in Figure 18. The left-side functional group displays Jetson nano status (CPU temperature, RAM, uptime, FPS); the middle functional group is the P2.5 LED board preview (512×128 px) simulating actual content; the right-side functional group shows the graph of people entering/exiting per minute; and the bottom functional group is the configuration form for content (text/image/effects) and font size settings.
Figure 18. Web interface displaying information of the smart LED display management and monitoring system
The system supports a background image library and provides a live preview prior to finalizing the configuration. The preview command demonstrates that the background content is accurately applied in the preview and is synchronized with the display program on the HD-A4L card after the display command is transmitted to the LED panels, as can be seen in Figure 19. The Dashboard, LED display, and history/preview functions operate stably, with automatic reconnection capabilities during brief network interruptions. The data display latency is minimal, making it suitable for near real-time monitoring. Table 3 shows that the Web system achieved very low data loss rates and low latency, fully meeting the requirements for real-time monitoring and control, thereby demonstrating that the developed Web platform is reliable enough for continuous operation in a laboratory environment.
Figure 19. Preview of background and display content on LED panels
Table 3. Web Operation Metrics (3 sessions × 30 minutes)
Index (Web) | Median | p95 |
Dashboard update rate | 1.00 Hz | 1.00 Hz |
MQTT reconnection time (Web) | 1.7 s | 3.9 s |
MQTT→UI delay (big-number) | 120 ms | 220 ms |
UI message drop rate | 0 % | 0 % |
Rendering errors/component freezes | 0 % | 0 % |
This article presents the results of developing the Smart LED display management and monitoring system for the laboratory, successfully applied at the Mitsubishi FA laboratory at UTC. The system is equipped with hardware comprising an ESP32-based control circuit board integrated with a Jetson nano embedded computer, a Logitech camera, P2.5 LED modules, and an HD-A4L card. Subsequently, the authors developed an algorithm and image processing program for counting people entering and exiting the laboratory based on YOLOv7 TensorRT inference, alongside a Web management program to support easy and effective remote system management and monitoring. The Web management program enables the display of device operating status, laboratory safety warnings, and real-time data updates. It facilitates remote monitoring and content updates via computers or mobile phones. The people counting program running on the Jetson nano allows for high-accuracy recognition and counting of individuals entering and exiting the laboratory, satisfying real-time requirements.
The experimental results have confirmed the operational effectiveness and reliability of the smart laboratory LED display system. Specifically, the people counting algorithm based on YOLOv7 achieved a high accuracy of over 90%. Furthermore, the LED web management program operates stably, offering a convenient and user-friendly interface that allows for easy expansion to multiple LED panels. Although the system achieved high accuracy in most test scenarios for counting people entering and leaving the laboratory, some limitations were noted when lighting changed sharply, or when many people entered or exited simultaneously with intersecting trajectories, occluded each other, or moved close together in a confined space. In these cases, ID switching or temporary track loss could occur, leading to a slight increase in counting error. Additionally, when high people density persisted, the increased processing load resulted in an expected decrease in FPS, although the system remained stable.
Future research will focus on developing edge AI models to further enhance people counting accuracy under changing lighting conditions, when people entering and exiting stand still near the counting corridor, or when they are occluded in a long passageway, as well as on improving warning functions and generating reports and statistics for the Web management program. Simultaneously, the system will integrate the processing and analysis of data from IoT sensors and security cameras. The smart LED display management solution not only assists in monitoring individuals in enclosed spaces but is also scalable for integration with control, warning, or data statistical systems, serving management and industrial automation goals, such as smart classroom management or monitoring worker entry and exit in factories.
Acknowledgement
This research is funded by University of Transport and Communications (UTC) under grant number T2025-DT-005.
REFERENCES
AUTHOR BIOGRAPHY
Trinh Luong Mien obtained his PhD degree at the Russian University of Transport (RUT MIIT) in Russia in 2012. He has been with the University of Transport and Communications in Vietnam since 2004, where he is an Assoc. Prof. Dr., Head of Laboratory, Head of the Department of Cybernetics, and Vice Dean of the Faculty of Electrical-Electronic Engineering. His main research is the development of intelligent control algorithms for technological and manufacturing processes in industry and transportation. Email: mientl@utc.edu.vn. ORCID: https://orcid.org/0000-0003-4305-7130. Scopus: 59305481700. Google Scholar: https://scholar.google.com.vn/citations?user=9HtgrIUAAAAJ&hl.
Vu Van Duy is an engineer in Control Engineering and Automation, graduated from UTC in 2023. Currently, he is studying for a master's degree at UTC. His research interests include embedded systems-IoT, AI algorithm & model development, and Web programming (front-end, back-end).
Trinh Thi Huong is a lecturer at UTC. She received her PhD in 2019 from HUST. Her main research interests are Doppler frequency, communication in transportation, and communication in railways and high-speed railways.
Nguyen Trung Dung has a master's degree in Control Engineering and Automation. He is a lecturer at UTC. His research interests include SCADA/IoT systems, electric drives, and automatic electric drives.