Learning Intelligent for Effective Sonography (LIFES) Model for Rapid Diagnosis of Heart Failure in Echocardiography

Lies Dina Liastuti, Bambang Budi Siswanto, Renan Sukmawan, Wisnu Jatmiko, Idrus Alwi, Budi Wiweko, Aria Kekalih, Yosilia Nursakina, Rindayu Yusticia Indira Putri, Grafika Jati, Mgs M Luthfi Ramadhan, Ericko Govardi, Aqsha Azhary Nur

Abstract


Background: The accuracy of LIFES (Learning Intelligent for Effective Sonography), an artificial intelligence model based on echocardiography video data, in the diagnosis of heart failure (HF) was investigated. Methods: A cross-sectional diagnostic test was conducted using consecutive sampling of echocardiography data from HF and normal patients. The gold-standard comparison was the HF diagnosis established by expert cardiologists based on clinical data and echocardiography. After pre-processing, the AI model was built on a Long Short-Term Memory (LSTM) architecture using independent variable estimation and video classification techniques, classifying each echocardiography video into a normal or heart failure category. Statistical analysis was carried out to calculate accuracy, area under the curve (AUC), sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and likelihood ratios (LR). Results: A total of 138 patients with HF admitted to Harapan Kita National Heart Center from January 2020 to October 2021 were selected as research subjects. The first scenario yielded good diagnostic performance in distinguishing between heart failure and normal patients: the overall diagnostic accuracies of the A2C, A4C, and PLAX views were 92.96%, 90.62%, and 88.28%, respectively. The automated ML-derived approach performed best with the A2C view, with a misclassification rate of only 7.04%. Conclusion: The LIFES model was feasible, accurate, and quick in distinguishing between heart failure and normal patients from a series of echocardiography images.
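The methods describe frame-wise processing of echocardiography clips with an LSTM-based video classifier and evaluation with standard diagnostic-test statistics. The sketch below is a minimal, hypothetical illustration of such a pipeline, not the authors' LIFES implementation: a small CNN embeds each frame, an LSTM classifies the frame sequence as normal versus heart failure, and a helper derives the reported metrics from a 2x2 confusion table. All class names, layer sizes, and input dimensions are assumptions made for illustration only.

    # Hypothetical sketch of an LSTM-based echocardiography video classifier
    # (normal vs. heart failure). Not the authors' LIFES code; architecture,
    # names, and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn

    class FrameEncoder(nn.Module):
        """Small CNN that embeds one grayscale echo frame into a feature vector."""
        def __init__(self, feat_dim: int = 128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, feat_dim)

        def forward(self, x):            # x: (batch, 1, H, W)
            h = self.conv(x).flatten(1)  # (batch, 32)
            return self.fc(h)            # (batch, feat_dim)

    class EchoVideoLSTM(nn.Module):
        """Frame-wise CNN features fed to an LSTM; last hidden state gives 2-class logits."""
        def __init__(self, feat_dim: int = 128, hidden: int = 64, n_classes: int = 2):
            super().__init__()
            self.encoder = FrameEncoder(feat_dim)
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, clip):                      # clip: (batch, T, 1, H, W)
            b, t = clip.shape[:2]
            feats = self.encoder(clip.flatten(0, 1))  # (batch*T, feat_dim)
            feats = feats.view(b, t, -1)              # (batch, T, feat_dim)
            _, (h_n, _) = self.lstm(feats)            # h_n: (1, batch, hidden)
            return self.head(h_n[-1])                 # (batch, n_classes) logits

    def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
        """Accuracy, sensitivity, specificity, PPV, NPV, and likelihood ratios from a 2x2 table."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        return {
            "accuracy": (tp + tn) / (tp + fp + fn + tn),
            "sensitivity": sens,
            "specificity": spec,
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "lr_positive": sens / (1 - spec),
            "lr_negative": (1 - sens) / spec,
        }

    if __name__ == "__main__":
        model = EchoVideoLSTM()
        dummy_clip = torch.randn(2, 30, 1, 112, 112)  # 2 clips, 30 frames each
        print(model(dummy_clip).shape)                # torch.Size([2, 2])
        print(diagnostic_metrics(tp=60, fp=5, fn=5, tn=68))

In this kind of design the CNN handles spatial information within each frame while the LSTM captures temporal dynamics across the cardiac cycle, which is why recurrent architectures are a natural fit for echocardiography video classification.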

Keywords


artificial intelligence; machine learning; echocardiography; heart failure



This work is licensed under a Creative Commons Attribution 4.0 International License.