Energy Efficient Neural Architectures for TinyML Applications
DOI: https://doi.org/10.38124/ijsrmt.v4i5.531

Keywords: TinyML, Energy-Efficient Neural Networks, Edge Computing, Model Compression Techniques, Neural Architecture Optimization

Abstract
Tiny Machine Learning (TinyML) is shifting machine learning onto microcontrollers and edge sensors. This article investigates energy-efficient neural network designs for TinyML that balance accuracy, memory footprint, and power consumption. We review recent advances in model quantization, pruning, and neural architecture search (NAS) that make it feasible to run deep learning models on highly energy-constrained devices. Practical deployments of MobileNet, SqueezeNet, and EfficientNet on edge hardware are examined, along with how well these architectures preserve accuracy under compression. We also evaluate hardware-software co-design and specialized accelerators as means of minimizing DRAM energy. Because real-time decisions are critical in environmental monitoring, wearable technology, and industrial IoT, model deployment must be both efficient and dependable. The article surveys the most recent findings to show how energy-efficient architectures are driving the rapid progress of TinyML across these domains. By focusing on hands-on methods and real use cases, it offers actionable guidance for designing smart, energy-efficient edge systems.
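To make the compression techniques named above concrete, the following is a minimal NumPy sketch of two of them: symmetric post-training int8 quantization and magnitude-based weight pruning. The function names and thresholds are illustrative assumptions, not taken from the paper; production TinyML toolchains (e.g., TensorFlow Lite Micro) implement far more elaborate variants.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric uniform int8 quantization: map float weights to
    [-127, 127] codes plus one float scale for dequantization."""
    max_abs = np.max(np.abs(w))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale

def magnitude_prune(w, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the
    smallest absolute value (unstructured magnitude pruning)."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

# Illustrative run on random weights (a stand-in for a trained layer).
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8)).astype(np.float32)

q, s = quantize_int8(w)
max_err = np.max(np.abs(w - dequantize(q, s)))  # bounded by s / 2

w_pruned = magnitude_prune(w, sparsity=0.5)
kept = np.count_nonzero(w_pruned) / w.size      # roughly 0.5
```

Quantization here cuts weight storage 4x (float32 to int8) at the cost of a rounding error bounded by half the scale, while pruning trades a tunable fraction of the smallest weights for sparsity that downstream kernels or accelerators can exploit.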
License
Copyright (c) 2025 International Journal of Scientific Research and Modern Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.