Integrating Real-Time Pose Estimation in Block-Based Programming Environments Through Novel Architectural Patterns
Abstract
This technical demonstration study develops an automated motion analysis system in MIT App Inventor, a high-level block-based visual programming environment, combined with a pose estimation library for the computer vision tasks implemented within the application. The system addresses the growing need for accessible motion tracking by eliminating dependency on additional hardware and providing real-time movement classification. Both the user interface and the block diagram of the application are designed and developed in MIT App Inventor. The working principle is straightforward: users perform movements that the application automatically tracks and classifies. MIT App Inventor lets developers build applications on a computer or laptop; once created, an application can be viewed in an Android/iOS emulator as well as on the user's own device. For motion tracking, PoseNet was chosen as the only pose estimation library that MIT App Inventor supports; the PoseNet model is well suited to detecting and tracking key points of the human body in real time. The system features four arm exercises: left-arm bicep curls, right-arm bicep curls, lateral raises, and military presses. For each exercise, the application detects the angles of the relevant body joints as the user performs it. Testing with 10 participants, each performing 25 repetitions of every exercise for a total of 1,000 pose classifications, demonstrated the system's effectiveness. PoseNet achieved high accuracy in movement recognition, with precision and recall values of 0.94 and 0.94 for left-arm curls, 0.932 and 0.932 for right-arm curls, and 0.96 and 0.96 for both lateral raises and military presses, demonstrating its effectiveness in precise motion classification.
The system achieved an overall accuracy of 94.8% while providing immediate feedback for movement form correction, offering a viable approach to automated motion analysis applicable to human-robot interaction, motion capture systems, and industrial safety monitoring.
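The abstract describes detecting joint angles as a user exercises; the article's App Inventor blocks are not reproduced here, but the underlying geometry is a standard dot-product calculation over three pose keypoints (for an elbow angle: shoulder, elbow, wrist). A minimal Python sketch of that idea, with hypothetical (x, y) keypoint tuples standing in for PoseNet output:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by keypoints a-b-c (each an (x, y) tuple)."""
    # Vectors from the joint to its two neighbouring keypoints
    bax, bay = a[0] - b[0], a[1] - b[1]
    bcx, bcy = c[0] - b[0], c[1] - b[1]
    # Cosine of the enclosed angle via the dot product, clamped for safety
    dot = bax * bcx + bay * bcy
    norm = math.hypot(bax, bay) * math.hypot(bcx, bcy)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical keypoints: shoulder above elbow, forearm horizontal
shoulder, elbow, wrist = (0.0, 0.0), (0.0, 1.0), (1.0, 1.0)
print(joint_angle(shoulder, elbow, wrist))  # 90.0
```

A repetition counter for, say, a bicep curl can then be driven by thresholds on this angle (near 180° at full extension, small at full flexion), which is consistent with the exercise-detection behaviour the abstract describes.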
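The reported figures are mutually consistent: with 250 attempts per exercise (10 participants × 25 repetitions), the stated per-exercise values correspond to 235, 233, 240, and 240 correct classifications, and their total over 1,000 attempts reproduces the 94.8% overall accuracy. A short check in Python (the per-exercise counts are inferred from the reported values, not taken from the paper):

```python
# Correct-classification counts inferred from the reported per-exercise
# precision/recall; 250 attempts per exercise (10 participants x 25 reps).
correct = {
    "left-arm curl": 235,   # 235 / 250 = 0.940
    "right-arm curl": 233,  # 233 / 250 = 0.932
    "lateral raise": 240,   # 240 / 250 = 0.960
    "military press": 240,  # 240 / 250 = 0.960
}
ATTEMPTS_PER_EXERCISE = 250

per_exercise = {name: n / ATTEMPTS_PER_EXERCISE for name, n in correct.items()}
overall_accuracy = sum(correct.values()) / (ATTEMPTS_PER_EXERCISE * len(correct))

print(per_exercise)
print(overall_accuracy)  # 0.948
```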
Manuscript received: 9 Jun 2025 | Revised: 17 Jul 2025 | Accepted: 24 Sep 2025 | Published: 31 Mar 2026
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
References
W. Hu, K. Liu, L. Liu, and H. Shang, "A Spatial-Temporal Transformer based Framework For Human Pose Assessment And Correction in Education Scenarios," 2023.
DOI: https://doi.org/10.48550/arXiv.2311.00401
P. Nilsen, K. Roback, A. Brostrom, and P.-E. Ellstrom, "Creatures of habit: accounting for the role of habit in implementation research on clinical behaviour change," Implementation Science, vol. 7, no. 1, p. 53, 2012.
DOI: https://doi.org/10.1186/1748-5908-7-53
H. Zhou and H. Hu, "Human motion tracking for rehabilitation—A survey," Biomedical Signal Processing and Control, vol. 3, no. 1, pp. 1–18, 2008.
DOI: https://doi.org/10.1016/j.bspc.2007.09.001
L. Wei and S. J. Wang, "Motion Tracking of Daily Living and Physical Activities in Health Care: Systematic Review From Designers' Perspective," JMIR mHealth and uHealth, vol. 12, p. e46282, 2024.
DOI: https://doi.org/10.2196/46282
A. Kos, Y. Wei, S. Tomazic, and A. Umek, "The role of science and technology in sport," Procedia Computer Science, vol. 129, pp. 489–495, 2018.
DOI: https://doi.org/10.1016/j.procs.2018.03.029
Z. Gao and J.E. Lee, "Emerging Technology in Promoting Physical Activity and Health: Challenges and Opportunities," Journal of Clinical Medicine, vol. 8, no. 11, p. 1830, 2019.
DOI: https://doi.org/10.3390/jcm8111830
M. R. Reshma, B. Kannan, V. P. Jagathy Raj, and S. Shailesh, "Cultural heritage preservation through dance digitisation: A review," Digital Applications in Archaeology and Cultural Heritage, vol. 28, p. e00257, 2023.
DOI: https://doi.org/10.1016/j.daach.2023.e00257
K. B. Gan, C. H. Chen and N. A. A. Aziz, "Upper Limbs Extension and Flexion Angles Calculation and Visualisation Using Two Wearable Inertial Measurement Units," International Journal of Robotics and Automation Science, vol. 4, pp. 1–7, 2022.
DOI: https://doi.org/10.33093/ijoras.2022.4.1
S. Guan, "Skeleton-based Human Action Recognition: From 3D Pose Estimation to Action Recognition," Ph.D. dissertation, University of Technology Sydney, 2023.
S. Dubey and M. Dixit, "A comprehensive survey on human pose estimation approaches," Multimedia Systems, vol. 29, no. 1, pp. 167–195, 2023.
DOI: https://doi.org/10.1007/s00530-022-00980-0
T. F. Cootes, G. Edwards and C. J. Taylor, "Comparing Active Shape Models with Active Appearance Models," Proceedings of the British Machine Vision Conference, pp. 18.1–18.10, 1999.
DOI: https://doi.org/10.5244/C.13.18
T.-D. Tran, X.-T. Vo, D.-L. Nguyen, and K.-H. Jo, "Combination of Deep Learner Network and Transformer for 3D Human Pose Estimation," 2022 22nd International Conference on Control, Automation and Systems (ICCAS), pp. 174–178, 2022.
DOI: https://doi.org/10.23919/ICCAS55662.2022.10003954
Y. Sun, Z. Sun and W. Chen, "The evolution of object detection methods," Engineering Applications of Artificial Intelligence, vol. 133, p. 108458, 2024.
DOI: https://doi.org/10.1016/j.engappai.2024.108458
T. Abekoon et al., "A comprehensive review to evaluate the synergy of intelligent food packaging with modern food technology and artificial intelligence field," Discover Sustainability, vol. 5, no. 1, p. 160, 2024.
DOI: https://doi.org/10.1007/s43621-024-00371-7
T. L. Munea, Y. Z. Jembre, H. T. Weldegebriel, L. Chen, C. Huang and C. Yang, "The Progress of Human Pose Estimation: A Survey and Taxonomy of Models Applied in 2D Human Pose Estimation," IEEE Access, vol. 8, pp. 133330–133348, 2020.
DOI: https://doi.org/10.1109/ACCESS.2020.3010248
A. Gupta, A. Anpalagan, L. Guan and A.S. Khwaja, "Deep learning for object detection and scene perception in self-driving cars: Survey, challenges, and open issues," Array, vol. 10, p. 100057, 2021.
DOI: https://doi.org/10.1016/j.array.2021.100057
A. Tharatipyakul, T. Srikaewsiew and S. Pongnumkul, "Deep learning-based human body pose estimation in providing feedback for physical movement: A review," Heliyon, vol. 10, no. 17, 2024.
DOI: https://doi.org/10.1016/j.heliyon.2024.e36589
J. Wang et al., "Deep 3D human pose estimation: A review," Computer Vision and Image Understanding, vol. 210, p. 103225, 2021.
DOI: https://doi.org/10.1016/j.cviu.2021.103225
E. Nishani and B. Cico, "Computer vision approaches based on deep learning and neural networks: Deep neural networks for video analysis of human pose estimation," 2017 6th Mediterranean Conference on Embedded Computing (MECO), pp. 1–4, 2017.
DOI: https://doi.org/10.1109/MECO.2017.7977207
Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh, "OpenPose: Real-time Multi-Person 2D Pose Estimation using Part Affinity Fields," 2019.
DOI: https://doi.org/10.48550/arXiv.1812.08008
B. Jo and S. Kim, "Comparative Analysis of OpenPose, PoseNet, and MoveNet Models for Pose Estimation in Mobile Devices," Traitement du Signal, vol. 39, no. 1, pp. 119–124, 2022.
DOI: https://doi.org/10.18280/ts.390111
K. L. Lew, K. S. Sim, S. C. Tan, and F. S. Abas, "Virtual Reality Post Stroke Upper Limb Assessment using Unreal Engine 4," Engineering Letters, vol. 29, no. 4, 2021.
URL: https://www.engineeringletters.com/issues_v29/issue_4/EL_29_4_24.pdf
C. C. Lim, K. S. Sim, and C. K. Toa, "Development of Visual-based Rehabilitation Using Sensors for Stroke Patient," International Journal of Robotics and Automation Science, vol. 2, pp. 25–30, 2020.
DOI: https://doi.org/10.33093/ijoras.2020.2.4
M. Too, S. H. Lau, and C. K. Tan, "Validity and Reliability of a Conceptual Framework on Enhancing Learning for Students via Kinect: A Pilot Test," International Journal of Robotics and Automation Science, vol. 6, no. 1, pp. 59–63, 2024.
DOI: https://doi.org/10.33093/ijoras.2024.6.1.8
R. G. Candraningtyas, A. P. Yunus, and Y. H. Choo, "Human Fall Motion Prediction – A Review," International Journal of Robotics and Automation Science, vol. 6, no. 2, pp. 52–58, 2024.
DOI: https://doi.org/10.33093/ijoras.2024.6.2.8
K. L. Lew, K. S. Sim, S. C. Tan, and F. S. Abas, "3D Kinematics of Upper Limb Functional Assessment Using HTC Vive in Unreal Engine 4," Advances in Computational Collective Intelligence, pp. 264–275, 2020.
DOI: https://doi.org/10.1007/978-3-030-63119-2_22
K. L. Lew, C. K. Toa, P. Zhou, C. S. Lee, T. Kurniawan, S. A. Babale and C. Zheng, "AI-Assisted Analysis for Breast Cancer Imaging and Diagnostics," International Journal on Robotics, Automation and Sciences, vol. 7, no. 1, pp. 111–119, 2025.