Interpreting the Smart Healthcare Model Using Shapley Values
Keywords:
Artificial Intelligence, SHAP tree explainer, Smart Cities, Healthcare, Vital signs
Abstract
Data analysis and treatment specification are important in healthcare, but they face many challenges, such as defining metrics to measure and monitor patient health. Machine learning can improve the prediction of patient outcomes at the lowest cost to healthcare systems. In this work, machine learning models were trained on real data collected by Internet of Things (IoT) devices and analyzed to detect potential future risks. Among the techniques evaluated for this task, the random forest model was the strongest; unfortunately, its results are complex and difficult to interpret. This study therefore used Shapley values, a powerful analysis method that helps resolve the ambiguity of ML results, which are often referred to as a black box. The result is a random forest model whose outcomes can be interpreted, with importance assigned fairly to features according to their contribution to the results. The study proceeds in two steps. First, the random forest model classifies the data based on relationships among feature attributes to predict the outcome. Second, the prediction results from the previous step are interpreted using SHAP (SHapley Additive exPlanations) tree-explainer values, which distribute importance across attributes according to their contributions to the prediction. The proposed method showed that age, ID, and AP-LO are the most important attributes in predicting diastolic blood pressure, with effects of -157 (age), +17.53 (ID), and -213.03 (AP-LO). Gender, height, and GLU were the most important predictors of systolic blood pressure, with effects of -4.41 (gender), +2.78 (height), and +2.48 (GLU).
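The two-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature columns and coefficients are synthetic stand-ins, and `shap.TreeExplainer` (from the open-source `shap` package) is assumed to be the tree explainer the abstract refers to.

```python
# Hypothetical sketch: (1) fit a random forest on vital-sign-like features,
# (2) interpret its predictions with SHAP tree-explainer values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins for attributes named in the abstract (illustrative only).
feature_names = ["age", "height", "gender", "glu"]
X = rng.normal(size=(200, 4))
# Synthetic target: a linear signal plus noise, standing in for blood pressure.
y = X @ np.array([2.0, 0.5, -1.0, 0.3]) + rng.normal(scale=0.1, size=200)

# Step 1: the random forest learns relationships among the feature attributes.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Step 2: distribute importance across attributes by their contribution.
try:
    import shap  # optional dependency; assumed API of the shap package
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # per-sample, per-feature contributions
    importance = np.abs(shap_values).mean(axis=0)
except ImportError:
    # Fallback: the forest's own impurity-based feature importances.
    importance = model.feature_importances_

for name, imp in zip(feature_names, importance):
    print(f"{name}: {imp:.3f}")
```

Averaging the absolute SHAP values over samples, as above, is the conventional way to turn per-prediction contributions into a global feature ranking; signed per-sample values (like the -157 and +17.53 effects reported) show the direction of each attribute's influence on an individual prediction.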
License
Copyright (c) 2025 Journal of Soft Computing and Data Mining

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.