Sibling Discrimination Using Linear Fusion on Deep Learning Face Recognition Models


Rita Goel
Maida Alamgir
Haroon Wahab
Maria Alamgir
Irfan Mehmood
Hassan Ugail
Amit Sinha

Abstract

Facial recognition technology has revolutionised human identification, providing a non-invasive alternative to traditional biometric methods like signatures and voice recognition. The integration of deep learning has significantly enhanced the accuracy and adaptability of these systems, now widely used in criminal identification, access control, and security. Initial research focused on recognising full-frontal facial features, but recent advancements have tackled the challenge of identifying partially visible faces, a scenario that often reduces recognition accuracy. This study aims to identify siblings based on facial features, particularly in cases where only partial features like eyes, nose, or mouth are visible. Utilising advanced deep learning models such as VGG19, VGG16, VGGFace, and FaceNet, the research introduces a framework to differentiate between sibling images effectively. To boost discrimination accuracy, the framework employs a linear fusion technique that merges insights from all the models used. The methodology involves preprocessing image pairs, extracting embeddings with pre-trained models, and integrating information through linear fusion. Evaluation metrics, including confusion matrix analysis, assess the framework's robustness and precision. Custom datasets of cropped sibling facial areas form the experimental basis, testing the models under various conditions like different facial poses and cropped regions. Model selection emphasises accuracy and extensive training on large datasets to ensure reliable performance in distinguishing subtle facial differences. Experimental results show that combining multiple models' outputs using linear fusion improves the accuracy and realism of sibling discrimination based on facial features. Findings indicate a minimum accuracy of 96% across different facial regions. Although this is slightly lower than the accuracy achieved by a single model like VGG16 with full-frontal poses, the fusion approach provides a more realistic outcome by incorporating insights from all four models. This underscores the potential of advanced deep learning techniques in enhancing facial recognition systems for practical applications.
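As a rough illustration of the fusion step described in the abstract, the sketch below shows how per-model cosine similarities between a pair of face embeddings might be combined with a weighted (linear) sum. The model names, equal weights, embedding dimensions, and decision threshold are illustrative assumptions for this sketch, not values taken from the paper.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_sibling_score(embeddings_a, embeddings_b, weights=None):
    # Linearly fuse per-model similarity scores for one image pair.
    # embeddings_a / embeddings_b: dicts mapping a model name (e.g. 'vgg16',
    # 'facenet') to the embedding that model produced for each image.
    # weights: per-model fusion weights; equal weights are an illustrative
    # assumption, not the values reported in the paper.
    models = list(embeddings_a.keys())
    if weights is None:
        weights = {m: 1.0 / len(models) for m in models}
    scores = {m: cosine_similarity(embeddings_a[m], embeddings_b[m]) for m in models}
    fused = sum(weights[m] * scores[m] for m in models)
    return fused, scores

# Hypothetical usage: in practice the embeddings would come from the
# pre-trained networks (e.g. a 512-D FaceNet embedding, 4096-D VGG-style
# descriptors); random vectors stand in for them here.
rng = np.random.default_rng(0)
pair_a = {"vgg16": rng.normal(size=4096), "facenet": rng.normal(size=512)}
pair_b = {"vgg16": rng.normal(size=4096), "facenet": rng.normal(size=512)}
fused, per_model = fused_sibling_score(pair_a, pair_b)
is_sibling = fused >= 0.5  # decision threshold is illustrative only

Thresholding the fused score, rather than any single model's score, is what allows the framework to temper one model's misjudgement on a cropped or partially visible region with the agreement of the others.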

Article Details

How to Cite
Goel, R., Alamgir, M., Wahab, H., Alamgir, M., Mehmood, I., Ugail, H., & Sinha, A. (2024). Sibling Discrimination Using Linear Fusion on Deep Learning Face Recognition Models. Journal of Informatics and Web Engineering, 3(3), 214–232. https://doi.org/10.33093/jiwe.2024.3.3.14
Section
Thematic (Pervasive Computing)
