https://mmupress.com/index.php/jiwe/issue/feedJournal of Informatics and Web Engineering2026-02-14T08:05:04+08:00Prof. Dr. Su-Cheng Haw sucheng@mmu.edu.myOpen Journal Systems<p>The <strong>Journal of Informatics and Web Engineering (JIWE)</strong> is a peer-reviewed journal that advances the engineering of user-centric, web-native information systems. We publish original research, reviews, and case studies that unite informatics with rigorous web-engineering methods across the full lifecycle, from requirements and design to deployment and evolution.</p> <p> </p> <p>eISSN:<strong> 2821-370X | </strong>Publisher: <a href="https://journals.mmupress.com/"><strong>MMU Press</strong></a> | Access: <strong>Open</strong> | Frequency: <strong>Triannual (Feb, June & October)</strong> effective from 2024 | Website: <strong><a href="https://journals.mmupress.com/jiwe">https://journals.mmupress.com/jiwe</a></strong></p> <p>Indexed in:<br /><a style="margin-right: 10px;" href="https://myjurnal.mohe.gov.my/public/browse-journal-view.php?id=1038" target="_blank" rel="noopener"><img style="width: 112px; display: inline;" src="https://journals.mmupress.com/resources/myjurnal-logo.png" alt="" width="200" height="26" /></a> <a style="margin-right: 10px;" href="https://journals.mmupress.com/index.php/jiwe/management/settings/context/#" target="_blank" rel="noopener"><img style="width: 95px; display: inline;" src="https://journals.mmupress.com/resources/mycite-logo.jpg" alt="" width="200" height="34" /></a><a style="margin-right: 10px;" href="https://search.crossref.org/search/works?q=2821-370X&from_ui=yes"><img style="display: inline;" src="https://assets.crossref.org/logo/crossref-logo-landscape-100.png" /></a><a style="margin-right: 10px;" href="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=2821-370X&btnG="><img style="display: inline; width: 137px;" src="https://journals.mmupress.com/resources/google-scholar-logo.png" /></a><a style="margin-right: 10px;" 
href="https://www.ebsco.com/"><img style="display: inline; width: 100px;" src="https://journals.mmupress.com/resources/ebscohost-logo.png" /></a> <a style="margin-right: 10px;" href="https://www.doaj.org/toc/2821-370X"><img style="width: 89px; display: inline;" src="https://journals.mmupress.com/resources/doaj-logo.jpg" alt="" width="200" height="22" /></a><a style="margin-right: 10px;" href="https://openalex.org/works?page=1&filter=primary_location.source.id:s4387278993"><img style="display: inline; width: 100px;" src="https://journals.mmupress.com/resources/openalex-logo.png" /></a><a style="margin-right: 10px;" href="https://ascidatabase.com/masterjournallist.php?v=Journal+of+Informatics+and+Web+Engineering"><img style="display: inline; width: 100px;" src="https://journals.mmupress.com/resources/asci-logo.png" /></a><img style="width: 110px; display: inline; margin-right: 10px;" src="https://journals.mmupress.com/resources/dimensions-logo.png" alt="" width="200" height="34" /></p>https://mmupress.com/index.php/jiwe/article/view/2917Editorial Preview for February 2026 Issue2026-01-29T12:50:18+08:00Fong-Yee Leelee.fong.yee@mmu.edu.mySu-Cheng Hawsucheng@mmu.edu.my<p>The February 2026 issue presents 17 research papers within its regular section, covering a wide range of areas including Artificial Intelligence (AI), Machine Learning (ML), Web and Cloud Technologies, Recommender Systems and Software Engineering. The collection reflects JIWE’s ongoing effort to advance research in the informatics and web engineering domain. A special thematic section, guest-edited by Prof. Ts. Dr. Hairulnizam Bin Mahdin, centers on “Intelligent Systems and the Next Wave of Digital Innovation”. This section examines the pervasive role of AI and machine learning in sectors ranging from industrial automation to disaster preparedness. 
These studies also emphasize trust, transparency, and data-driven architectures, demonstrating how converging technologies create more inclusive and efficient digital ecosystems. Several papers in this issue also align with the United Nations Sustainable Development Goals (SDGs), including SDG 3 (Good Health and Well-being) through advancements in medical imaging and disease detection, SDG 9 (Industry, Innovation, and Infrastructure) via research on cloud-based systems and industrial optimization, SDG 10 (Reduced Inequalities) through the development of inclusive sign language interfaces, SDG 11 (Sustainable Cities and Communities) via innovations in flood management and air quality monitoring, and SDG 13 (Climate Action) through neuro-intelligent techniques for drought prediction. These alignments reflect our journal’s commitment to moving beyond theory and creating real-world impact.</p>2026-02-14T00:00:00+08:00Copyright (c) 2026 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2014Robust Medical Image Prediction via Adaptive Reconstruction: Bridging the Gap in Low-Quality Data2025-06-29T11:50:52+08:00Prateek Singhalprateeksinghal2031@gmail.comMadan Singhmadan.phdce@gmail.com<p>Medical image prediction plays a significant role in clinical decision-making and in the early detection and diagnosis of disease. However, the quality of medical images strongly affects the accuracy of predictive models. Poor-quality data, typically caused by noise, artifacts, and low resolution, poses a major challenge for reliable medical image prediction. Our framework advances medical image analysis through three novel contributions: first, a hybrid architecture combining wavelet-based denoising with deep learning (DL) enhancement (unlike existing single-approach methods). 
Second, cross-modality robustness validated on low-quality CT/MRI/X-rays from real clinics (versus modality-specific solutions); and third, a closed-loop system where diagnostic predictions guide iterative image refinement (absent in current workflows). Benchmarks show 98.5% accuracy at 0.6ms latency, with 19% fewer false positives than cascaded approaches, narrowing the gap posed by low-quality data. Our method combines state-of-the-art image processing methods with machine learning algorithms to enhance the quality of medical images before feeding them into predictive models. The adaptive reconstruction model combines classic image denoising techniques with DL-based approaches, selectively enhancing critical features and removing noise. It aims to produce reconstructions of sufficient quality for prediction tasks by recovering lost or degraded information. Additionally, the work focuses on utilising robust machine learning algorithms to improve prediction accuracy on the reconstructed images. The framework was tested on various datasets and showed significant improvements in predictive performance compared with traditional approaches that use low-quality images directly. The findings indicated that adaptive reconstruction improves both the visual quality of medical images and the overall predictive model performance for clinical use cases. 
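The classic wavelet denoising step mentioned above — thresholding detail coefficients before reconstruction — can be sketched in a few lines. This is an illustrative one-level Haar transform on a 1-D signal, not the authors' actual pipeline:

```python
import math

def haar_denoise(signal, threshold):
    """One-level Haar wavelet soft-threshold denoising (illustrative)."""
    s = list(signal)
    if len(s) % 2:                      # pad to even length
        s.append(s[-1])
    # Forward Haar transform: averages carry structure, details carry noise
    approx = [(s[i] + s[i + 1]) / math.sqrt(2) for i in range(0, len(s), 2)]
    detail = [(s[i] - s[i + 1]) / math.sqrt(2) for i in range(0, len(s), 2)]

    def soft(x, t):                     # classic soft-thresholding step
        return math.copysign(max(abs(x) - t, 0.0), x)

    detail = [soft(d, threshold) for d in detail]
    # Inverse transform reconstructs the (denoised) signal
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out[:len(signal)]
```

A real system would apply a multi-level 2-D transform (e.g. via PyWavelets) and learn the enhancement stage with a neural network, as the abstract describes.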
The proposed adaptive reconstruction model thus represents a promising strategy for overcoming the constraints posed by low-quality data and for improving the accuracy and reliability of clinically relevant outcomes in medical imaging.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2072Efficiency of Neo4j in Designing and Analysing Graph Database for Air Quality Analysis of Indian Metro Cities2025-07-01T21:29:20+08:00Jiten Chavdachavdajiten00@gmail.comKishan Bharvadkishanbharvad4221@gmail.comRishap Parmarrishapparmar360@gmail.comNidhi Aroranidhi.fst.1070@gmail.com<p>The analysis of the Air Quality Index (AQI) is a prominent subject in sustainability research, as it is crucial for investigating and analysing the effects of air pollutants on human health in urban environments. Over the last decade, airborne pollution has become a critical issue in India and will remain an important concern in the coming years. In recent years, a variety of models and algorithms utilizing big data techniques have been developed for the analysis of air quality data. In this paper, we propose monitoring and feature analysis of air quality data using a graph database. The research aims to analyse the annual and seasonal variations of AQI over the 10-year period 2015–2024 from daily averaged concentration data of key air pollutants for five metro cities of India. The trends shown by all the cities have been compared to understand the seasonal variations in the average AQI. The variations of average AQI across severity classes in these cities also provide an in-depth view of the trends. The findings from this analysis yield highly valuable information to assist in air pollution control, consequently leading to substantial societal and technical impacts. 
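The severity classes referred to above correspond to India's standard CPCB AQI bands. A small sketch of classifying daily AQI values and aggregating them by Indian season — the band edges and season groupings here are common CPCB/IMD conventions, not definitions taken from the paper:

```python
# Severity bands per India's CPCB AQI scale (illustrative; the paper's
# exact class definitions may differ).
CPCB_BANDS = [
    (50, "Good"), (100, "Satisfactory"), (200, "Moderate"),
    (300, "Poor"), (400, "Very Poor"), (500, "Severe"),
]

def severity(aqi):
    """Map a numeric AQI value to its CPCB severity class."""
    for upper, label in CPCB_BANDS:
        if aqi <= upper:
            return label
    return "Severe"

def seasonal_mean(daily):
    """daily: iterable of (month, aqi); groups months into Indian seasons."""
    seasons = {"Winter": (12, 1, 2), "Summer": (3, 4, 5),
               "Monsoon": (6, 7, 8, 9), "Post-monsoon": (10, 11)}
    sums, counts = {}, {}
    for month, aqi in daily:
        season = next(s for s, months in seasons.items() if month in months)
        sums[season] = sums.get(season, 0) + aqi
        counts[season] = counts.get(season, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}
```

In the study itself this aggregation is expressed over a Neo4j graph rather than in application code.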
Finally, we offer a perspective on the future of air quality analysis, presenting some promising and challenging concepts. The results of this study can promote a more effective environmental monitoring system that detects drastic or unusual atmospheric changes through the use of modern technologies.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2034Radiology Report Generation Using Deep Learning and Web-Based Deployment for Chest X-Ray Analysis2025-07-07T10:07:01+08:00David Agboladedagbolade72@gmail.comPeyman Heydarianpeyman.heydarian@solent.ac.ukShakeel Ahmadshakeel.ahmad@solent.ac.uk<p>The rapid rise in the volume of medical images has created a major bottleneck in radiology departments. Radiologists face heavier workloads than ever, which affects the quality of their diagnoses and patient care. Writing a manual radiological report takes 15 to 30 minutes per case, and interpretations vary between readers. Modern departments process over 230 cases a week, which causes long delays in diagnosis. Existing automated report generation systems suffer from poor clinical interpretability, insufficient Digital Imaging and Communications in Medicine (DICOM) integration, and unsuitable deployment architectures, which hinders the wide adoption of medical artificial intelligence in clinical settings. This work presents a new automated web-based system for generating radiology reports from chest X-ray images using state-of-the-art deep learning methods. We propose a CheXNet-based convolutional neural network (CNN) with attention mechanisms and Gated Recurrent Units (GRU) to produce clinically useful diagnostic summaries. The system is fully compatible with DICOM and uses Streamlit, Docker, and AWS cloud services to integrate smoothly with clinical workflows. 
The Indiana University Chest X-ray dataset, which contains 7,491 images and 3,955 reports, was used for training and testing. The system outperformed state-of-the-art methods, with BLEU-1, BLEU-2, BLEU-3, and BLEU-4 scores of 0.685, 0.595, 0.533, and 0.482, respectively, as well as a METEOR score of 0.392 and a ROUGE-L score of 0.718. The deployed web application provides real-time report generation with attention heatmap visualisations, enabling clinicians to understand model decision-making processes. This interpretability feature addresses critical trust barriers in clinical AI adoption whilst supporting radiologists with diagnostic assistance for routine chest imaging cases.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2266Hybrid Sentiment Analysis Model for Customer Feedback Interpretation Using Lexicon, Machine Learning and Deep Learning Techniques2025-07-18T12:53:31+08:00Jou Jia Yi1211306755@student.mmu.edu.myLew Sook Lingsllew@mmu.edu.myTan Li Taoltt28@cam.ac.uk<p>Customer feedback is pivotal in enhancing service quality and user satisfaction across digital platforms. However, traditional sentiment analysis methods often struggle with informal language, contextual nuances, and aspect-specific opinions. In this paper, a hybrid sentiment analysis framework is proposed, utilizing lexicon-based (VADER), machine learning (Support Vector Machine and Random Forest), and deep learning (BERT) techniques to achieve improved sentiment classification accuracy and interpretability compared to previous studies. The framework incorporates advanced preprocessing techniques, such as emoji normalization, handling of negation, and detection of intensifiers, to better capture emotional information in user-generated content. 
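The negation- and intensifier-aware preprocessing described above can be illustrated with a toy tokenizer. This hypothetical helper sketches the idea, not the paper's implementation:

```python
import re

NEGATORS = {"not", "no", "never"}
INTENSIFIERS = {"very": 1.5, "extremely": 2.0, "really": 1.3}

def preprocess(text):
    """Lowercase, expand n't, tag negated words, record intensity boosts.

    Returns (token, boost) pairs; negated words are prefixed with NOT_
    so a downstream classifier can treat them as distinct features.
    """
    tokens = re.findall(r"[a-z]+", text.lower().replace("n't", " not"))
    out, negate, boost = [], False, 1.0
    for tok in tokens:
        if tok in NEGATORS:
            negate = True                # flip polarity of the next word
        elif tok in INTENSIFIERS:
            boost = INTENSIFIERS[tok]    # amplify the next word's weight
        else:
            out.append(("NOT_" + tok if negate else tok, boost))
            negate, boost = False, 1.0
    return out
```

Production pipelines (e.g. VADER's own heuristics) handle far more cases — emoji, punctuation emphasis, multi-word negation scopes — but the mechanism is the same.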
The objectives of this study are to develop a robust sentiment analysis system that can accurately classify user sentiment and extract aspect-specific insights from customer feedback. Aspect-based sentiment analysis (ABSA) was also employed to provide detailed evaluations of specific service components, including driver behaviour, app performance, and pricing. In this study, experimental results using the Uber Customer Reviews Dataset (2024) demonstrate that the proposed hybrid model achieves 99% accuracy, significantly outperforms the individual models, and obtains a macro F1-score of 0.98. These findings confirm that integrating lexicon-based, machine learning, and deep learning approaches enhances sentiment classification effectiveness and supports data-driven decision making based on user experience.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/1955Deep Learning-Based Automatic Detection and Diagnosis of Tuberculosis from Chest X-ray Images: A Comprehensive Analysis2025-07-30T15:04:33+08:00Paschal C. Ahanotu2ahanp17@solent.ac.ukDeborah A. Adedigba2adedd38@solent.ac.ukRaza Hasanraza.hasan@solent.ac.ukSellappan Palaniappansellappan.palaniappan@gmail.com<p>Tuberculosis (TB) continues to be one of the foremost public health issues in the world, and remains the second leading communicable cause of death after COVID-19. In 2022, TB accounted for 10.6 million new infections and 1.3 million deaths globally. Conventional diagnostic approaches involving sputum smear microscopy, culture assays, and GeneXpert MTB/RIF are characterized by excessive turnaround times, elevated costs, and dependency on specialised infrastructure and trained personnel. These constraints are exacerbated in resource-poor countries, resulting in delayed diagnosis, delayed therapy initiation, and increased disease transmission. 
This work investigates the application of deep learning algorithms to automatically diagnose TB from chest X-ray images as a promising alternative method of diagnosis. The evolution of machine learning and deep learning technologies offers novel opportunities to address these diagnostic dilemmas because TB manifests visible characteristics, such as pleural thickening, fibrosis, infiltration, masses, and nodules, that are resolvable in chest X-ray images. We trained and tested four state-of-the-art convolutional neural networks (CNNs), namely VGG16, ResNet50, InceptionV3, and DenseNet121, on a dataset of 4,200 chest X-rays with 700 positive TB cases and 3,500 normal cases. The approach comprises extensive data preprocessing, applying transfer learning techniques, balancing classes through class weighting, and rigorous evaluation using measures such as accuracy, precision, recall, and F1-score. DenseNet121 yielded the best-performing model with a total accuracy of 98.0% and balanced sensitivity and specificity between the two classes. The deep learning method proposed in this study holds great promise for enhancing TB diagnosis accuracy, speed, and accessibility, particularly in resource-poor settings. 
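The class balancing mentioned above is typically done with inverse-frequency weights. The paper does not state its exact formula, so this sketch uses the common "balanced" heuristic applied to the study's 700/3,500 split:

```python
def class_weights(counts):
    """Inverse-frequency class weights: n_samples / (n_classes * count).

    This is the widely used 'balanced' heuristic: the minority class
    contributes proportionally more to the loss during training.
    """
    total, k = sum(counts.values()), len(counts)
    return {label: total / (k * n) for label, n in counts.items()}

# Dataset in the study: 700 TB-positive vs. 3,500 normal chest X-rays
weights = class_weights({"tb": 700, "normal": 3500})
```

With these counts the TB class is weighted 3.0 and the normal class 0.6, so each class contributes equally to the loss overall.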
This work helps bridge the gap between diagnosing and treating TB and offers a scalable and cost-effective method for early diagnosis and prompt intervention in global TB control measures.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2022Ontology-Based E-Commerce Recommender System: A Hybrid Semantic Filtering Approach2025-07-04T14:58:09+08:00Jocelyn Pua1201202935@student.mmu.edu.mySu-Cheng Hawsucheng@mmu.edu.myLucia Dwi Krisnawatikrisna@staff.ukdw.ac.idShaymaa Al-Juboorishaymaa.al-juboori@plymouth.ac.ukGee-Kok Tonggktong@mmu.edu.my<p>The rapid growth of e-commerce has led to product overload, making personalized product discovery a crucial problem for consumers. Classic recommender systems (RS), which rely heavily on content-based filtering or collaborative filtering, tend to face well-known problems such as cold start, data sparsity, and a lack of semantic awareness. Such constraints frequently result in irrelevant or redundant suggestions, thereby lowering user satisfaction and conversions. This study presents a hybrid ontology-based e-commerce recommender system that combines symbolic reasoning with deep semantic matching. The system is based on a Neo4j graph database to capture structured product relationships and is combined with sentence embedding models (MiniLM) to compute the semantic similarity between user queries and product data. For semantic matching, cosine similarity is used, and for ontology-based filtering, graph relationships, such as SAME_CATEGORY, SIMILAR_PRICE, and SAME_MANUFACTURER, are employed. The system was tested on a cleaned and pre-processed e-commerce dataset. Performance was measured using precision, recall, F1-score, and accuracy. 
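The semantic-matching step described above — cosine similarity between query and product embeddings, restricted to ontology-filtered candidates — reduces to a few lines. The vectors and product IDs here are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_products(query_vec, product_vecs, allowed_ids):
    """Rank ontology-filtered candidates by similarity to the query.

    allowed_ids stands in for the graph-side filter (SAME_CATEGORY,
    SIMILAR_PRICE, ...); product_vecs maps product id -> embedding.
    """
    scored = [(pid, cosine(query_vec, vec))
              for pid, vec in product_vecs.items() if pid in allowed_ids]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

In the actual system the embeddings come from MiniLM and the candidate set from Neo4j relationship traversals; this sketch only shows how the two signals combine.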
The proposed system achieved a precision of 0.95, recall of 0.93, F1-score of 0.94, and accuracy of 0.94, demonstrating that the hybrid approach yields superior recommendation quality compared with using a single method.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/1944AI-Driven Malware Analysis and Detection: A Comprehensive Survey of Techniques, Trends and Challenges2025-06-14T22:36:52+08:00Salman Khan24730@students.riphah.edu.pkHasnat Razahasnatzaidi@gmail.comMansoor Alamm.mansoor@riphah.edu.pk<p>Malware represents one of the most critical threats in cybersecurity, designed to compromise the security of any individual or organization. These covert programs perform malicious acts such as data theft, data alteration, and the disruption of normal service operation. The persistent evolution of malware has called for more sophisticated detection and prevention techniques, resulting in a direct need for Artificial Intelligence in cybersecurity. Artificial intelligence, using machine learning techniques and emerging concepts such as neural networks, has greatly improved the traditional static and dynamic ways of detecting malware. Advances in AI-driven solutions have made them much more capable than their predecessors of detecting malware and addressing threats in real time. By training machine learning models on vast quantities of data, malicious patterns can be detected and identified with ease. In the face of these emerging challenges, AI-powered automated real-time analysis and an adaptive security posture can effectively mitigate the threat. Large Language Models (LLMs) have revolutionized natural language processing and are increasingly being deployed across a wide range of applications, including text generation, summarization, translation, and detection systems. 
This survey reviews recent research on the methodologies employed in developing LLM-based detection systems, outlines the existing limitations and research gaps, and proposes potential areas for future investigation. The use of AI in malware analysis faces its own challenges, including the potential for adversarial attacks and the scale of AI models, which can cloud transparency and trust. Overcoming these challenges will require mature, ethical AI systems and an open dialogue among cybersecurity professionals, with sustainable AI development and regulatory compliance working in concert.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/1884Optimised Data Integration using Transformer Model and Resource Description Framework2025-06-30T16:44:06+08:00Jerome Aondongu Achirachirjerome@gmail.comMuhammad Abdulkarimmmmhammmad@gmail.comMohammed Abdullahimoham08@gmail.com<p>Organizations have become highly reliant on a range of data sources that span structured, semi-structured, and unstructured data types. These repositories allow large-scale storage for faster ingestion and analytics but pose tremendous integration challenges owing to schema and contextual differences. Traditional data integration methods, such as the ontology-based Resource Description Framework (RDF), are often inadequate when dealing with these challenges. They specifically struggle with the dynamic evolution of data-source schemas, context-aware interpretation, and achieving interoperability across heterogeneous data sources. This paper presents an integrated system that augments RDF knowledge with token embeddings using the attention mechanism of the transformer model with relative positional encoding to overcome these weaknesses. Data from unstructured sources are used to create an embedding, whereas structured data are mapped into the RDF. 
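The embedding-plus-RDF integration described above can be sketched with an in-memory triple map: each entity carries a vector through a dedicated property, and entities whose vectors are close are flagged for merging. The vectors and entity names here are toy values, not the paper's transformer embeddings, and this is plain Python rather than a real triple store such as rdflib:

```python
import math

# Illustrative in-memory triples; the hasEmbedding-style predicate
# attaches a vector to each entity.
triples = {
    ("ex:FloodSensorA", "ex:hasEmbedding"): [0.9, 0.1],
    ("ex:RiverGaugeA", "ex:hasEmbedding"): [0.88, 0.15],
    ("ex:OfficeChair", "ex:hasEmbedding"): [0.05, 0.99],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def merge_candidates(threshold=0.98):
    """Pairs of entities whose embeddings are similar enough to merge."""
    ents = [(s, vec) for (s, p), vec in triples.items()
            if p == "ex:hasEmbedding"]
    return [(a, b) for i, (a, u) in enumerate(ents)
            for b, v in ents[i + 1:] if cosine(u, v) >= threshold]
```

The two flood-domain entities exceed the similarity threshold and are flagged, while the unrelated entity is not — the same mechanism the paper uses to unify heterogeneous sources.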
The embeddings were then integrated into the RDF using <em>hasEmbedding</em>. Virtual transformations are employed to handle schema alignment, and cosine similarity merges similar entities to provide a unified data view. Thus, the model explicitly integrates contextual knowledge within RDF triples, thereby improving the semantic representation. The proposed system uses SPARQL (the SPARQL Protocol and RDF Query Language) for efficient querying of the RDF knowledge, thus enhancing interoperability across domains. The proposed model attains a schema mapping accuracy of 97.82%, thus enabling more accurate and meaningful linking of heterogeneous datasets. Empirical trials involving use cases across human activity analysis and flood risk management demonstrate the system’s robustness, scalability, and effectiveness for knowledge discovery while allowing cross-domain integration of heterogeneous types of data within intricate scenarios. The results show that incorporating embeddings into RDF reduces dependence on strict, pre-defined ontologies, simplifies on-demand schema alignment, and allows unified querying without the need to curate the integrated data into a traditional data warehouse.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2119Comparative Analysis of the Performance of CSS Animation Methods (Transition and Animation) under High DOM Load2025-08-06T17:15:55+08:00Dmitrii Zakharovforsocials.mail@gmail.comEkaterina Mironenkoformycars.mail@gmail.com<p>This article presents a comparative performance analysis of two primary Cascading Style Sheets (CSS) animation methods — transition and animation — under conditions involving large-scale DOM rendering. 
Despite widespread beliefs among developers that transitions generally offer better performance, there is a lack of quantitative research validating this assumption in practical scenarios. To address this gap, we developed a web application capable of rendering and animating from 50 up to 10,000 elements simultaneously. Performance metrics were collected in real-time using built-in browser APIs, including average and minimum FPS, frame delta (time between frames), JavaScript heap memory usage, and total animation duration. Tests were conducted on the latest versions of Chrome and Firefox browsers. Results demonstrate that performance is heavily influenced by the choice of animation method, the specific CSS property being animated (e.g., opacity, transform, blur, background-color), and the number of animated elements. While transitions showed slightly better efficiency in terms of JavaScript resource consumption, other metrics such as frame rate and rendering stability were comparable between the two methods. Notably, animations consistently require more memory, which may affect scalability in high-load interfaces. This research provides valuable quantitative data to guide front-end developers in selecting appropriate animation techniques for complex interfaces. It also emphasizes the importance of selecting GPU-friendly CSS properties to enhance rendering performance. 
Overall, the findings suggest that while the difference in performance between transitions and animations is modest, understanding their behavior under load is crucial for building smooth and responsive web experiences.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2151XAI -Driven Explainability for Cardiovascular Diseases Prediction2025-08-02T07:12:18+08:00Jacqueline Dike0dikej91@solent.ac.ukJarutas Andritschjarutas.andrtisch@solent.ac.uk<p>The adoption of artificial intelligence (AI) in cardiovascular disease (CVD) prediction has significantly improved risk stratification, offering new avenues for early diagnosis and preventive care. With the growing availability of electronic health records and structured clinical datasets, machine learning (ML) and deep learning (DL) models have demonstrated strong predictive capabilities. However, despite their performance, their adoption in healthcare is often constrained by the lack of transparency and interpretability in many ML and DL models. This lack of explainability undermines clinical trust and raises ethical concerns. In high-stakes domains such as CVD prediction, clinicians require not only accurate outputs but also clear explanations of how those predictions are derived. This paper presents a comparative evaluation of explainable artificial intelligence (XAI) techniques applied to both conventional ML models (Logistic Regression, Support Vector Machine, Decision Tree, and Random Forest) and DL architectures (AutoInt, FT-Transformer, and Category Embedding). Using the Framingham Heart Study dataset, this study integrates SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) to assess model interpretability and feature relevance. 
Results show that conventional models offer superior explainability with comparable predictive accuracy, while DL models, although slightly less interpretable, demonstrate potential with advanced XAI techniques. The findings advocate hybrid approaches that balance accuracy and interpretability, supporting ethical and practical AI deployment in healthcare.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2035Sentiment-Based Music Recommendation System using Natural Language Processing for Emotion-aware Song Suggestions2025-07-10T19:46:33+08:00Yu Bui Xuanyu.bui.xuan@student.mmu.edu.myS. Prabha Kumaresanprabha.kumaresan@mmu.edu.myNaveen Palanichamyp.naveen@mmu.edu.myMohamed Uvaze Ahamed Ayoobkhanmayoobkhan@aut-edu.uz<p>Music plays a vital role in influencing emotions, mood, and mental health. However, conventional music recommendation systems mostly rely on listening history, genre preference, or popularity, ignoring the listener’s mood. This motivated the present study, in which a new sentiment-based music recommendation system is designed by incorporating Natural Language Processing (NLP) and Machine Learning (ML) techniques to provide emotion-aware song recommendations. The system collects various audio features such as valence, energy, tempo, and danceability from music distribution platforms such as Spotify, which are well-known indicators for classifying the emotional tone of a song. Thereafter, NLP techniques are used to analyse these audio features and assign each music track a sentiment score: positive, negative, or neutral. These sentiment scores were further combined with other song features, such as genre and tempo, to build in-depth emotional profiles for each song. Three ML methods were implemented in the system for classification and recommendation: K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Decision Tree (DT). 
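Of the three classifiers listed above, KNN is the simplest to illustrate on such audio features. A toy sketch over hypothetical (valence, energy) profiles — not the study's dataset, feature set, or tuning:

```python
import math
from collections import Counter

# Toy (valence, energy) profiles with mood labels (hypothetical values).
SONGS = [((0.9, 0.8), "happy"), ((0.8, 0.6), "happy"),
         ((0.2, 0.3), "sad"), ((0.1, 0.2), "sad"), ((0.3, 0.9), "angry")]

def knn_mood(features, k=3):
    """Classify a track's mood by majority vote of its k nearest songs."""
    nearest = sorted(SONGS, key=lambda s: math.dist(features, s[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

A recommender then simply returns songs whose predicted mood matches the mood the user enters.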
Across trials, SVM achieved the highest sentiment classification accuracy (87.5%), with precision and recall values of up to 0.88. Recommendations are delivered through a simple web interface where users can enter their feelings and instantly obtain song suggestions matched to their mood. According to the survey, 78% of users said that mood-based recommendations fit their emotional state better than traditional recommendations. Despite these promising results, limitations remain, particularly the limited range of features and the small dataset. Future enhancements will focus on real-time affect tracking, additional affect features, and larger and more diverse datasets. Traditional NLP applies to text data, but this system applies sentiment detection to numerical audio features; this version does not use lyric-based NLP.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2102A Unified Cloud-Based System for the Tyre Retail Industry: Design and Implementation of Tracktive2025-07-09T10:18:19+08:00William Theo Wei Loon1211103037@student.mmu.edu.mySiew-Chin Chongchong.siew.chin@mmu.edu.myKian-Ming LimKian-Ming.Lim@nottingham.edu.cnLee-Ying Chonglychong@mmu.edu.myKuok-Kwee Weewee.kuok.kwee@mmu.edu.my<p>In the current Malaysian tyre retailing industry, most business activities are carried out in traditional, labour-intensive ways. This manual process is prone to inefficiency, raises operating costs, and increases the likelihood of error. This paper outlines Tracktive, a unified cloud-based solution conceptualized and built to automate the industry by centralizing tyre ordering, inventory, and order management on a single cloud-based platform. Tracktive is built on microservice and event-driven architectures and deployed in a simulated cloud environment. 
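The event-driven, microservice style described above boils down to decoupling producers from consumers through topics; in the actual system a broker such as Apache Kafka plays this role. A minimal in-memory sketch of the pattern, with hypothetical module names and SKUs:

```python
from collections import defaultdict, deque

class EventBus:
    """Minimal in-memory publish/subscribe bus illustrating the pattern
    (a broker like Apache Kafka provides this in a real deployment)."""
    def __init__(self):
        self.queues = defaultdict(deque)      # topic -> pending events
        self.handlers = defaultdict(list)     # topic -> subscriber callbacks

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        self.queues[topic].append(event)      # decoupled from consumption

    def drain(self):
        """Deliver pending events in batches, Kafka-consumer style."""
        for topic, queue in self.queues.items():
            while queue:
                event = queue.popleft()
                for handler in self.handlers[topic]:
                    handler(event)

# Example: the order module emits an event the inventory module consumes.
bus = EventBus()
stock = {"tyre-205-55": 10}
bus.subscribe("order.placed", lambda e: stock.__setitem__(
    e["sku"], stock[e["sku"]] - e["qty"]))
bus.publish("order.placed", {"sku": "tyre-205-55", "qty": 4})
bus.drain()
```

The order module never calls the inventory module directly, which is what lets each service scale and fail independently.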
Each core functional module is an independent, scalable service with its own management. Inter-module communication is event-driven via Apache Kafka, which allows asynchronous messaging between microservices. A dedicated Application Programming Interface (API) Gateway prevents unauthorized access using JSON Web Token (JWT) authentication and Bcrypt hashing. The system is tested with LocalStack, which emulates Amazon Web Services (AWS), validating the cloud-native design cost-efficiently and effectively. Tracktive has the potential to improve operational accuracy, streamline processes, and grant the business greater agility by replacing outdated manual processes with a centralized digital system, and it could also be a driving force of digital transformation in the tyre retailing industry.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/1788An Empirical Evaluation of Machine Learning Methods and Text Classifiers for Sentiment Analysis of Online Consumer Reviews2025-05-02T14:02:57+08:00Pei Qin Lo1211101157@student.mmu.edu.mySew Lai Ngslng@mmu.edu.myLi-xian Jiao624292300@qq.com<p>This study aims to identify the best predictive model for analysing online product reviews (OPRs) in the electronics industry, with a secondary focus on leveraging unstructured customer feedback to support product improvement. Using a dataset of 9,675 Oppo mobile phone reviews, this study employs three classification models—Random Forest, Support Vector Machine (SVM) and Logistic Regression—paired with Term Frequency-Inverse Document Frequency (TF-IDF) or bidirectional encoder representation transformer (BERT) as the embedding models to analyse customer sentiment and derive actionable insights. 
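The TF-IDF weighting mentioned above assigns each term a weight that rises with its in-document frequency and falls with its corpus-wide frequency. A minimal sketch using the plain idf = ln(N/df) form; real libraries such as scikit-learn add smoothing and normalisation:

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF vectors for tokenized documents.

    tf  = term count / document length
    idf = ln(number of docs / docs containing the term)
    """
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (count / len(doc)) * math.log(n / df[t])
                    for t, count in tf.items()})
    return out
```

A term appearing in every review (here "battery") gets weight zero, while distinctive terms keep positive weight — exactly the signal the classifiers in the study exploit.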
The methodology features a comprehensive analysis pipeline that includes text preprocessing with the Natural Language Toolkit (NLTK), feature extraction using TF-IDF vectorization and BERT embeddings, and sentiment prediction through the various classifiers. The results indicated that Random Forest paired with TF-IDF was the most effective combination, achieving the highest accuracy, precision, recall, and F1-score. This superior performance stems from Random Forest's ability to handle high-dimensional, sparse data and to exploit the weighted word importance provided by TF-IDF, which makes it particularly well suited to sentiment classification tasks involving structured text representations. This study contributes to the field by providing an effective framework for analysing online reviews, helping businesses understand customer needs for refining product offerings and laying the groundwork for future applications across different product categories.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2127Enhancing Fraud Detection in Financial Transactions using LightGBM and Random Forest2025-07-10T18:06:56+08:00Wan-Ping Khorwanping1023@outlook.comKah-Ong Michael Gohmichael.goh@mmu.edu.myCheck-Yee Lawcylaw@mmu.edu.myConnie Teetee.connie@mmu.edu.myYong-Wee Sekywsek@utem.edu.myRiasat Khanriasat.khan@northsouth.edu<p>In recent years, the frequency and complexity of financial fraud have been rising, becoming an urgent challenge for the global financial system. Traditional rule-based detection methods struggle to cope with new types of fraud, especially in terms of real-time detection, generalization ability, and accuracy. To overcome these limitations, machine learning techniques have gradually emerged as a promising solution for identifying fraudulent transactions with better flexibility and scalability.
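A core practical hurdle for such learned detectors is extreme class imbalance. Random undersampling of the majority class, the resampling strategy adopted in the study below, can be sketched as follows (toy data, not the authors' code):

```python
import random

def random_undersample(X, y, seed=42):
    """Balance a binary dataset by sampling the majority class down to minority size."""
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    rng = random.Random(seed)
    kept = minority + rng.sample(majority, len(minority))
    rng.shuffle(kept)
    return [X[i] for i in kept], [y[i] for i in kept]

# 997 legitimate vs 3 fraudulent transactions (features are dummies).
X = [[i] for i in range(1000)]
y = [1 if i < 3 else 0 for i in range(1000)]
Xb, yb = random_undersample(X, y)
print(len(yb), sum(yb))  # 6 3
```

The trade-off, visible in the reported metrics, is that discarding majority examples boosts recall on the rare class at the expense of precision.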
Based on the publicly available European credit card fraud transaction dataset, this study proposes a hybrid model that combines the advantages of LightGBM and Random Forest, aiming to improve the accuracy, robustness, and interpretability of fraud detection. To handle the severe data imbalance problem (fraud cases accounting for only 0.17%), this study applies a RandomUnderSampling strategy and further enhances model performance through systematic parameter tuning using RandomizedSearchCV and decision threshold optimization. Stratified K-Fold cross-validation is also used to validate model stability. In addition, the model is compared with alternative resampling methods including SMOTE and ADASYN, and the results reaffirm the effectiveness and practicality of the undersampling approach. The final model achieves an overall accuracy of 99%, a recall of 86% on the fraud class, ROC-AUC of 0.9746, and PR-AUC of 0.6639. While the precision is relatively low (13%), it reflects a deliberate strategy of prioritizing fraud detection over false positives. This hybrid approach demonstrates a good balance between detection performance and practicality, offering better interpretability and lower computational cost compared to many deep learning models.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2038Real-Time Posture Monitoring for Effective Exercise Using MediaPipe Python2025-07-22T10:53:25+08:00Wan Izzul Wafiq Wan Noor Asmawi1201100927@student.mmu.edu.myS. Prabha Kumaresanprabha.kumaresan@mmu.edu.my<p>Maintaining proper posture during exercise is crucial for preventing injuries and maximizing workout efficiency. This project aims to develop a real-time posture monitoring system using MediaPipe and OpenCV to provide instant feedback on the exercise form. 
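One common way vision-based form checkers of this kind evaluate posture is by computing joint angles from detected landmarks. A minimal geometric sketch (the landmark coordinates and the squat example are invented for illustration):

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (degrees) formed by 2D points a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Hypothetical hip, knee, ankle positions near the bottom of a squat;
# a monitor would flag the rep if this angle leaves a preset range.
hip, knee, ankle = (0.5, 0.4), (0.55, 0.6), (0.5, 0.8)
angle = joint_angle(hip, knee, ankle)
print(round(angle, 1))
```

The same three-point computation applies to any joint for which the pose estimator returns landmark coordinates.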
The system captures video input through a webcam, processes it using OpenCV, and applies MediaPipe’s pose-estimation model to detect key body landmarks. By analysing joint angles and comparing them with predefined optimal postures, the system evaluates the user’s form and provides corrective feedback in real time. This approach eliminates the need for expensive wearable sensors, making posture monitoring more accessible and user-friendly. The literature review highlights the effectiveness of computer vision-based solutions in fitness applications and identifies key challenges, such as occlusions, varying lighting conditions, and real-time processing constraints. The proposed system addresses these issues by optimizing the pose-estimation accuracy and feedback mechanisms. Testing and user surveys confirmed the system’s effectiveness, achieving 90% accuracy for squat posture detection and 86% accuracy for lunges under typical home-workout conditions. The expected outcome of this project is a functional real-time exercise posture monitoring system that enhances the user training experience by ensuring proper form. Future improvements may involve integrating machine learning techniques to personalize feedback and extending the system to multi-user environments. This project contributes to the advancement of computer vision applications in the fitness and rehabilitation domains.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/1990Design and Implementation of a Web-Based Lecture Timetable Scheduling System2025-08-08T17:48:10+08:00Suraj Abubakarelsurraj@gmail.comMohammed Kabir Daudamdauda@atbu.edu.ngMaryam Abdullahi Musaammaryam@atbu.edu.ngBala Muhammad Muhammadmuhammadbala466@gmail.com<p>Timetable management is a challenging yet core administrative activity at tertiary learning institutions, typically beset by timetabling clashes, resource wastage, and communication breakdowns.
This study describes the development and validation of a Web-Based Lecture Timetable Scheduling System (WLTSS) for the Department of Computer Science at Abubakar Tafawa Balewa University (ATBU). The system was implemented to build, modify, and publish lecture timetables automatically and efficiently. Built on Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), JavaScript, and React as frontend technologies, the system provides an adaptable interface with real-time data updates through React’s component model. The backend runs on Node.js with MySQL, handling data manipulation, timetable clash detection, and administrative user-access management. To assess system performance, comprehensive stress tests were performed; the system performed well at 500 concurrent users, with an overall response time of approximately 1.1 seconds. Test statistics also indicated an 87% success rate for load-simulated requests, with all failures attributable to simulated network timeouts and database contention. These results demonstrate the system’s reliability and scalability. User feedback collected through questionnaires confirmed greater satisfaction and usability with the new automated scheduling process than with conventional manual scheduling. By minimizing clashes among classrooms, instructors, and lectures, WLTSS significantly enhances administrative effectiveness and improves communication between students and lecturers. In addition to reducing administrative burden, the system provides transparency and adaptability in scheduling lectures.
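The clash detection at the heart of such a scheduler reduces to checking whether two bookings share a resource and overlap in time. A minimal sketch (the field names are illustrative, not WLTSS's actual schema):

```python
def clashes(a, b):
    """True if two bookings share a room or lecturer and overlap in time."""
    shared = a["room"] == b["room"] or a["lecturer"] == b["lecturer"]
    overlap = (a["day"] == b["day"]
               and a["start"] < b["end"] and b["start"] < a["end"])
    return shared and overlap

booking1 = {"room": "LT1", "lecturer": "Dr. A", "day": "Mon", "start": 9, "end": 11}
booking2 = {"room": "LT1", "lecturer": "Dr. B", "day": "Mon", "start": 10, "end": 12}
booking3 = {"room": "LT2", "lecturer": "Dr. B", "day": "Mon", "start": 12, "end": 13}
print(clashes(booking1, booking2), clashes(booking2, booking3))  # True False
```

In practice the scheduler would run this pairwise check (or an indexed equivalent) whenever a timetable entry is created or modified, rejecting the change if any clash is found.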
Future development will include AI-based scheduling algorithms, user-preference management, and a dedicated smartphone app to further increase functionality and end-user accessibility.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2140Vehicle Re-identification System using Residual Network with Instance-Batch Normalization 2025-07-28T11:05:40+08:00Wei Jie Low1211101082@student.mmu.edu.myKah-Ong Michael Goh michael.goh@mmu.edu.myCheck-Yee Lawcylaw@mmu.edu.myConnie Teetee.connie@mmu.edu.myYong-Wee Sekywsek@utem.edu.myMd Ismail Hossenm.hossen@griffith.edu.au<p>Vehicle Re-identification (Re-ID) has become extremely important owing to the increasing number of vehicles on the road and its potential to address traffic-related challenges. As a result, there is a growing need for efficient methods to track and identify vehicles across multiple traffic cameras. One of the biggest challenges of this task is the variation in vehicle appearance across camera angles, because vehicles can look significantly different when captured from different viewpoints. Furthermore, current vehicle Re-ID solutions typically require extensive coding knowledge, making them inaccessible to many potential users. Therefore, we focus on developing a user-friendly software application that simplifies the entire Re-ID workflow, including dataset preparation and data preprocessing using YOLO, model training with ResNet-IBN, performance evaluation, and visualization of results. The application provides a comprehensive pipeline that enables users to perform vehicle Re-ID tasks without advanced programming skills. Experimental results show that the ResNet-IBN model achieved the best performance on the custom MMUVD_1500 dataset, with an mAP of 87.63% and a Rank@1 of 84.68%.
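Rank@1, one of the metrics reported above, asks whether the nearest gallery embedding to each query shares its identity. A toy computation (the distance matrix and identities are fabricated):

```python
def rank1(dist, query_ids, gallery_ids):
    """Fraction of queries whose nearest gallery entry has the same identity."""
    hits = 0
    for qi, row in enumerate(dist):
        nearest = min(range(len(row)), key=row.__getitem__)
        hits += query_ids[qi] == gallery_ids[nearest]
    return hits / len(dist)

# Rows: queries; columns: gallery images; smaller = more similar (made-up values).
dist = [[0.2, 0.9, 0.7],
        [0.8, 0.1, 0.6],
        [0.5, 0.4, 0.3]]
score = rank1(dist,
              query_ids=["car1", "car2", "car3"],
              gallery_ids=["car1", "car2", "car9"])
print(score)  # 2 of 3 queries matched at rank 1
```

mAP extends the same idea by scoring the full ranked list rather than only the single nearest match.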
For instance, users can input query vehicle images and receive matched gallery images from different camera viewpoints through the application interface. This makes it easier to track vehicles across multiple locations, enhancing the usability and broadening the accessibility of vehicle Re-ID tasks. The final outcome is a complete software solution with a user-friendly interface that allows users to perform vehicle Re-ID tasks effortlessly.</p>2026-02-14T00:00:00+08:00Copyright (c) 2026 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2553Editorial: Intelligent Systems and the Next Wave of Digital Innovation2025-12-05T15:20:48+08:00Hairulnizam Mahdinhairuln@uthm.edu.my<p>Artificial intelligence (AI), machine learning, and intelligent automation have become pervasive in our modern digital world, extending from business and public services to environmental management and into people's daily experiences with technology. This special issue, "Intelligent Systems and the Next Wave of Digital Innovation," published in the Journal of Informatics and Web Engineering, brings together several studies that explore the increasing role of intelligent systems in contemporary society and their importance. Significant areas of discussion include formalizing expectations for explainable AI, evaluating face recognition models in the real world, and assessing the trust and transparency of AI models. The issue also highlights a promising and emerging frontier of intelligent automation, from swarm intelligence and optimization in manufacturing to the growing ubiquity of multimodal interfaces such as sign language chatbots. Smart environmental analytics are also studied, including neuro-intelligent techniques for drought prediction and IoT-driven flood intelligence systems that help communities plan for disaster events.
All of these contributions reinforce the notion that intelligent systems can be developed more responsively and contextually through data-driven architectures. They also reflect a wider digital innovation trend: an era in which decision-support tools, algorithmic intelligence, real-time data, and related technologies converge toward reliability, efficiency, inclusivity, and resilience in increasingly complex social and technical ecosystems.</p>2026-02-14T00:00:00+08:00Copyright (c) 2026 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2175Test Case Prioritization Using Ant Colony Optimization to Improve Fault Detection and Time2025-08-17T22:19:34+08:00Nurezayana Zainalnurezayana@uthm.edu.myMuhammad Sh Sallehmuhammadshsalleh@gmail.comNur Atiqah Wahidah Sulaimanhi230016@student.uthm.edu.myAmmar Alazabammar.alazab@torrens.edu.auNur Liyana Sulaimannrliyana@uthm.edu.my<p>Regression testing plays a critical role in ensuring the reliability and quality of software under continuous integration and development. However, executing all test cases during regression testing can be time-consuming and resource-intensive. Test Case Prioritization (TCP) addresses this challenge by determining an execution order of test cases that maximizes early fault detection while minimizing execution time. Optimization algorithms contribute significantly to enhancing the effectiveness of TCP under limited resources. This study proposes an Ant Colony Optimization (ACO) algorithm for the TCP problem, leveraging its strength in navigating complex search spaces, inspired by the foraging behavior of real ant colonies. The study involves four phases: dataset selection, dataset conversion, algorithm implementation, and performance evaluation. ACO was implemented and evaluated on two datasets (Case Study One and Case Study Two) of differing size and complexity.
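The performance-evaluation phase in such TCP studies commonly relies on the APFD metric, APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where TFi is the position of the first test revealing fault i, n is the number of tests, and m the number of faults. A direct sketch (the test suite is invented):

```python
def apfd(order, faults_detected, num_faults):
    """Average Percentage of Faults Detected for a given test-case order.

    order: test-case names in execution order
    faults_detected: mapping test -> set of faults it reveals
    """
    n = len(order)
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for fault in faults_detected.get(test, set()):
            first_pos.setdefault(fault, pos)  # keep earliest revealing position
    total = sum(first_pos.values())
    return 1 - total / (n * num_faults) + 1 / (2 * n)

detects = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f4"}}
score = apfd(["t2", "t1", "t4", "t3"], detects, num_faults=4)
print(score)  # 0.6875
```

An optimizer such as ACO searches over permutations of the suite to maximize exactly this quantity.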
Evaluated using the Average Percentage of Faults Detected (APFD) metric and execution time, the results demonstrate ACO's potential to improve testing efficiency and effectiveness with limited resources. Case Study One, which involved a larger dataset, achieved a higher APFD (0.6911) but required more iterations and a longer execution time (0.3733 s). In contrast, Case Study Two, with fewer test cases and faults, converged faster with a shorter execution time (0.2596 s) and a slightly lower APFD (0.6700). These findings reveal a trade-off between early fault detection and execution efficiency, indicating that dataset characteristics such as size and fault density influence the performance of the algorithm.</p>2026-02-14T00:00:00+08:00Copyright (c) 2025 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2258Flood Disaster Preparedness and Response Using a Web-Based Integrated Flood Management System (IFMS)2025-08-15T09:44:09+08:00Mohamed Abdiraman Abdirahimabdurahmanabdurahim21@gmail.comAbd Samad Bin Hasan Basariabdsamad@uthm.edu.myMalik Bader Alazzamm.alazzam@jadara.edu.joMuhammad Shukri Bin Che Lah mdshukri@uthm.edu.myRabiah Binti Ahmadrabiah@uthm.edu.myAida Binti Mustaphaaidam@uthm.edu.my<p>Floods are among the most frequent and damaging natural hazards in Malaysia, especially in low-lying areas such as Batu Pahat, Johor. National flood monitoring sources, such as InfoBanjir and the Malaysian Meteorological Department (MET Malaysia), suffer from disjointed data pools, delayed updates, and inadequate public access. In this paper, we present the Integrated Flood Management System (IFMS), a contemporary web platform developed to unify national flood management through data collection, automated processing, dynamic visualization tools, and more. The system architecture consists of three main layers: IoT-enabled flood sensors, a centralized web server, and responsive user interfaces.
Backend processing is performed with Laravel, and the frontend uses Bootstrap and Chart.js for live data visualization. The IFMS algorithm classifies severity against predefined standards for water levels and rainfall, and is modelled in pseudocode for reproducibility and scalability. Real-time data are drawn from various APIs, such as data.gov.my and Google Maps, ensuring continuous updates and interactive map-based monitoring. Experimental assessment shows that the IFMS achieves a data refresh time of under one minute, outstripping the 15–30 minute delay observed with InfoBanjir. User acceptance testing (UAT) with 194 respondents yielded a 94.9% user satisfaction rate and 89.7% technical stability, indicating that the solution is acceptable and operational. A comparative evaluation against systems implemented globally, such as the Iowa Flood Information System (IFIS), the Tokyo Metropolitan Flood Control System, and the European Flood Awareness System (EFAS), highlights the IFMS's innovations in real-time API integration, hydrograph and hyetograph visualization, and mobile responsiveness.
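The severity-classification step, which the paper models in pseudocode, amounts to mapping sensor readings onto predefined thresholds. An illustrative sketch (the threshold values and class names here are invented, not the IFMS standard):

```python
def flood_severity(water_level_m, rainfall_mm_per_hr):
    """Classify flood severity from water level and rainfall (illustrative thresholds)."""
    if water_level_m >= 3.0 or rainfall_mm_per_hr >= 60:
        return "danger"
    if water_level_m >= 2.0 or rainfall_mm_per_hr >= 30:
        return "warning"
    if water_level_m >= 1.0 or rainfall_mm_per_hr >= 10:
        return "alert"
    return "normal"

print(flood_severity(2.4, 5))   # warning
print(flood_severity(0.6, 65))  # danger
```

Keeping the thresholds in one table-like function is what makes the scheme reproducible and easy to recalibrate per river basin.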
Consequently, the IFMS represents an important advancement in the flood management landscape in Malaysia, harmonizing global standards with local deployment to strengthen situational awareness, decision-making, and community resilience.</p>2026-02-14T00:00:00+08:00Copyright (c) 2026 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2476Evaluating Accuracy Latency and Robustness of Face Recognition Models for Real-Time Web Applications2025-10-22T09:29:40+08:00Alani Fatini SharziziAI220095@student.uthm.edu.myNur Afiqah Sahadunnurafiqah@uthm.edu.myAbdulkadir Hassan DisinaDisina.hassan@naub.edu.ngHarinda Fernandoharinda.f@sliit.lk<p>Face-recognition technology is one of the most important advancements in the field of computer vision. It plays a crucial role in many applications, including biometric authentication, surveillance, online security, and interactive web systems. As web-based solutions continue to proliferate, real-world applications require recognition models that are accurate, fast, and frugal with resources. However, because of the challenges of such environments, including lighting, occlusion, pose, and the computing power of client devices, it is difficult to ascertain which model will perform best in a real-life scenario. The purpose of this research is to compare four of the deep learning frameworks for face recognition most widely used by researchers and software developers. FaceNet, SFace, OpenFace, and DeepFace were subjected to rigorous examination to determine which is most suitable for real-time web use. As part of the assessment, a prototype application was created to simulate real-time use: it accepts both a test image and a group image and determines which person in the group matches the test subject.
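Face-recognition pipelines of this kind compare embedding vectors by a similarity distance; cosine distance is one common choice. A minimal sketch (the embedding vectors are fabricated):

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity: 0 for identical directions, up to 2 for opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1 - dot / norm

probe = [0.1, 0.9, 0.4]    # hypothetical embedding of the uploaded test face
same = [0.12, 0.88, 0.41]  # embedding of the same person in the group photo
other = [0.9, 0.1, 0.2]    # embedding of a different person
print(cosine_distance(probe, same) < cosine_distance(probe, other))  # True
```

In a matching application, each face in the group image would be embedded and the face with the smallest distance to the probe (below some threshold) declared the match.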
Subsequently, model performance was tested under variations in pose, lighting, and occlusion. Performance was measured in terms of accuracy, similarity distance, processing latency, and robustness. The results show that no single model is best for all web-based applications; the appropriate choice depends fundamentally on the accuracy and speed the developer requires.</p>2026-02-14T00:00:00+08:00Copyright (c) 2026 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2484An Enhanced Glowworm Swarm Optimization for Minimizing Surface Roughness in Die Sinking Electrical Discharge Machining 2025-10-22T09:31:38+08:00Nurezayana Zainalnurezayana@uthm.edu.myMuhammad Ammar S.M. Shahromshafieammar79@gmail.comMohamad Firdaus Ab. Azizmdfirdaus@uthm.edu.mySalama A. Mostafa Mostafasalama.adrees@alnoor.edu.iqAnis Farhan Kamaruzzamananisfarhankmi@yahoo.comNor Bakiah Abd. Warifnorbakiah@uthm.edu.my<p>Electric Discharge Machining (EDM) is a non-traditional machining process that uses electric sparks between an electrode and a workpiece submerged in a dielectric fluid to ablate material. It is commonly used in die-making, aerospace, automotive manufacturing, and medical manufacturing because it can machine hard and complex materials with high precision. In this work, a Surface Roughness Optimization for EDM (SRO-EDM) model is proposed to investigate the machining performance of the die-sinking EDM process on titanium alloys. A regression-based approach combining Glowworm Swarm Optimization (GSO) with a Two-Factor Interaction (2FI) model is proposed to investigate the impact of four key process variables, namely voltage (V), peak current (Ip), pulse-on time (ton), and pulse-off time (toff), on surface roughness (Ra) at various locations on the work surface. A Central Composite Design (CCD) was applied to systematically investigate parameter combinations.
Statistical analysis using analysis of variance (ANOVA) confirmed the statistical significance of the selected parameters, and a 2FI regression model (R² = 0.60) with moderate predictive accuracy was established. To enhance the quality of optimization, an Enhanced Glowworm Swarm Optimization (EGSO) algorithm is proposed by hybridizing GSO with the Artificial Fish Swarm (AFS) algorithm. The AFS module improves the exploration capability of GSO and alleviates the local-optima problem. For experimental validation, Response Surface Methodology (RSM) was used to generate the regression model, which served as the objective function for optimization. Experimental results show that EGSO outperforms conventional GSO, achieving a lower optimized Ra. The results demonstrate that the EGSO model improves convergence accuracy and speed and is a practical method for EDM surface-quality optimization in the high-precision manufacturing industry.</p>2026-02-14T00:00:00+08:00Copyright (c) 2026 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2487ANFIS and RBFNN Efficacy and Timescale Dependence in SPEI-Based Drought Prediction using Meteorological Inputs2025-10-22T09:32:23+08:00Alisa Afendialyssaaffendi@ymail.comMuhamad Usman Tariqmuhammad.kazi@adu.ac.aeShuhaida Ismailshuhaida@uthm.edu.myAzizul Azhar Ramliazizulr@uthm.edu.my<p>Drought is a slow-onset natural disaster with far-reaching effects on agriculture, water security, and socio-economic systems, especially in climate-vulnerable countries such as Malaysia. Predicting droughts is imperative for prompt mitigation efforts.
This paper examines the influence of temporal scale on drought modelling by comparing two machine learning (ML) models, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Radial Basis Function Neural Network (RBFNN), with the Standardized Precipitation Evapotranspiration Index (SPEI) representing drought. SPEI values at four timescales (SPEI-3, SPEI-6, SPEI-9, and SPEI-12) were calculated at weekly and monthly temporal scales from 15 years of meteorological records (5,844 observations), including precipitation, minimum and maximum temperature, humidity, and mean sea-level pressure. Model performance was assessed using the Mean Absolute Error (MAE), the Pearson correlation coefficient (r), and the Nash-Sutcliffe efficiency (NSE). RBFNN surpassed ANFIS at short-, medium-, and long-term timescales in terms of MAE, irrespective of temporal scale, with weekly data yielding the highest accuracy at longer timescales (especially SPEI-12). RBFNN also handled complex non-linear relationships and temporal granularity better, whereas ANFIS performed poorly owing to rule-base expansion with growing input dimensionality. This research provides evidence that combining RBFNN with weekly temporal-scale data and long-term drought indices offers a more robust approach for predicting severe drought in Malaysia.
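Of the evaluation measures above, Nash-Sutcliffe efficiency compares model error against the variance of the observations, while MAE averages absolute error. A compact sketch with toy numbers (not the study's data):

```python
def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 matches the mean predictor."""
    mean = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((o - mean) ** 2 for o in obs)
    return 1 - num / den

observed = [-1.2, -0.4, 0.3, 1.1]   # hypothetical SPEI values
predicted = [-1.0, -0.5, 0.2, 1.0]
print(round(mae(observed, predicted), 3), round(nse(observed, predicted), 3))
```

An NSE close to 1 means the model explains most of the observed variability, which is why it complements a scale-dependent measure such as MAE.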
These results also highlight the importance of choosing the temporal granularity carefully when developing data-driven forecasting systems for hydrometeorological applications.</p>2026-02-14T00:00:00+08:00Copyright (c) 2026 Journal of Informatics and Web Engineeringhttps://mmupress.com/index.php/jiwe/article/view/2478A Conceptual Framework on Development of Sign Language Chatbot for E-Commerce2025-11-22T20:33:28+08:00Salma Jahan Nishasalmajn359@gmail.comNabhan Salihsalih.43@osu.eduWan-Noorshahida Mohd-Isawan.noorshahida.isa@mmu.edu.my<p>This study proposes and validates a conceptual framework for a sign language (SL) chatbot in the e-commerce domain to improve accessibility. SL remains a little-explored modality on online platforms, and it is difficult for people with communication challenges, particularly SL users, to buy products and services online through a chatbot. The objective of this study is to introduce a novel hybrid architecture that couples SL recognition with a conversational chatbot agent via a custom Application Programming Interface (API) on e-commerce platforms. The proposed hybrid chatbot framework combines a Convolutional Neural Network (CNN) with Natural Language Processing (NLP). Python libraries including Keras, OpenCV, and MediaPipe were used to read signs. As an initial test, a preliminary feasibility experiment was conducted: purposive sampling was used to select 8 participants familiar with American Sign Language (ASL), who were tested under various conditions, including different lighting and clothing. The SL recognition module's initial performance was analyzed using precision, recall, and F1 scores to assess ASL recognition, achieving an accuracy of 98%.
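Precision, recall, and F1, used above to assess the recognition module, follow directly from true-positive, false-positive, and false-negative counts. A minimal per-class sketch (the counts are invented):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one ASL sign class.
p, r, f = prf1(tp=48, fp=2, fn=2)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.96 0.96 0.96
```

Averaging these per-class scores across all signs gives the macro figures typically reported alongside overall accuracy.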
Overall, this work provides a blueprint for developing inclusive, accessible e-commerce platforms.</p>2026-02-14T00:00:00+08:00Copyright (c) 2026 Journal of Informatics and Web Engineering