
Machine-learned models using hematological inflammation markers in the prediction of short-term acute coronary syndrome outcomes

Abstract

Background

Increased systemic and local inflammation plays a vital role in the pathophysiology of acute coronary syndrome (ACS). This study aimed to assess the usefulness of selected machine learning methods and hematological markers of inflammation in predicting short-term ACS outcomes.

Methods

We analyzed the predictive importance of laboratory and clinical features in 6769 hospitalizations of patients with ACS. Two binary classification problems were considered: the presence or absence of a significant coronary lesion (SCL), and in-hospital death or survival. SCL was observed in 73% of patients. In-hospital mortality was observed in 1.4% of patients and was higher among patients with SCL. Ensembles of decision trees and decision rule models were trained to predict these outcomes.

Results

The best-performing model for in-hospital mortality was based on the dominance-based rough set approach and the full set of laboratory and clinical features. This model achieved 81 ± 2.4% sensitivity and 81.1 ± 0.5% specificity in the detection of in-hospital mortality. The models trained for SCL performed considerably worse: the best-performing model achieved 56.9 ± 0.2% sensitivity and 66.9 ± 0.2% specificity. The dominance-based rough set approach classifier operating on the full set of clinical and laboratory features identified the presence or absence of diabetes, systolic and diastolic blood pressure, and prothrombin time as the features with the highest confirmation measures (best predictive value) in the detection of in-hospital mortality. When the limited set of variables was used, neutrophil count, age, systolic and diastolic blood pressure, and heart rate (taken at admission) achieved high feature importance scores (provided by the gradient boosted trees classifier) as well as positive confirmation measures (provided by the dominance-based rough set approach classifier).

Conclusions

Machine-learned models can rely on the association between elevated inflammatory markers and short-term ACS outcomes to provide accurate predictions. Moreover, such models can help assess the usefulness of laboratory and clinical features in predicting the in-hospital mortality of ACS patients.

Background

Many studies have shown that increased systemic and local inflammation plays a key role in the pathophysiology of ACS. Hematological and inflammatory markers may have meaningful predictive value for ACS outcomes [1]. Hence, readily available and inexpensive markers such as neutrophil count, neutrophil to lymphocyte ratio (NLR), red cell distribution width (RDW), platelet to lymphocyte ratio (PLR), mean platelet volume (MPV), and platelet distribution width (PDW) have recently attracted more attention and encouraged further research. Indeed, these indices may provide information on ACS pathophysiology and may be useful in risk stratification and optimal management of ACS [2, 3]. Many studies have also pointed to their prognostic value for all-cause mortality, major cardiovascular events, stent thrombosis, arrhythmias, and myocardial perfusion disorders in acute myocardial infarction and unstable angina [4]. The most recent studies have indicated that combining these markers with the Global Registry of Acute Coronary Events (GRACE), SYNTAX, and Thrombolysis in Myocardial Infarction (TIMI) scores improves risk stratification and the diagnostics of ACS patients [5,6,7,8,9].

With the growing availability of medical data, machine learning methods offer a promising extension of classical statistical analysis [10]. In this study, we used machine learning methods to investigate the usefulness of the hematological indices presented above in predicting SCL and in-hospital mortality. We also demonstrate that machine learning methods can be a valuable supplement to traditional methods of inferential statistics.

Methods

We analyzed the medical records of patients with ACS admitted to the local cardiology unit between January 2012 and June 2017. The analyzed group comprised patients whose diagnosis had been reevaluated and confirmed by a cardiologist according to the ESC guidelines [11]. The data concerning 6769 hospitalizations (5678 individual patients) were obtained retrospectively from electronic medical records.

Two sets of features were considered in this study: a full set and a simplified set. Table 1 presents the variables used in both sets. The full set included 53 nominal and numeric features. All the variables were obtained directly from electronic medical records. Some information, including descriptions of electrocardiograms or elements of the physical examination, was stored in our records as unstructured text. Although some studies on ACS outcomes have also investigated the possibility of using features extracted from unstructured reports [12], we decided to include only the features that were saved in our records directly, to avoid additional bias.

Table 1 Features used by XGBoost and DRSA-BRE classifiers

The simplified set consisted of 23 numerical features. This set was chosen on the basis of its potential applicability and the potential predictive value of the features for ACS outcomes. We favored features that did not require human interpretation or analysis. In this way, we tried to investigate the possibility of creating a classifier that could be built into medical records software and automatically identify patients at high risk of an unfavorable outcome.

The inclusion criteria for the study were as follows:

  1. The patient was admitted to the cardiology department on an emergency basis.

  2. The patient had a discharge diagnosis of ACS, including STEMI, NSTE-ACS or unstable angina.

  3. The patient had coronary angiography within 96 h of admission.

  4. If the same patient was admitted multiple times in the analyzed period, each admission was recorded independently, but the information about prior PCI, CABG or MI was retained.

Patients who were assessed as qualifying for revascularization based on the coronary angiogram and, therefore, underwent PCI or were referred for CABG were considered to have a significant coronary lesion (SCL) (n = 4943, 73% of cases), while patients who did not undergo revascularization were considered to have no SCL (n = 1826, 27% of cases). Patients who did not consent to invasive management were excluded from the study.

In-hospital death was observed in 1.4% of cases (n = 97). Descriptive statistics were computed using the STATISTICA software. First, the normality of the distributions was tested using the Shapiro–Wilk test. The univariate two-tailed Mann–Whitney U test and frequency tables were used to explore the differences between the outcome groups.
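
For illustration, the sketch below reproduces this univariate workflow with scipy in place of STATISTICA. The data frame, column names and group sizes are synthetic placeholders rather than the study data.

```python
# A minimal sketch of the univariate analysis described above; the data are synthetic.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "neutrophil_count": rng.lognormal(mean=1.8, sigma=0.4, size=300),
    "in_hospital_death": rng.random(300) < 0.05,
})

died = df.loc[df["in_hospital_death"], "neutrophil_count"]
survived = df.loc[~df["in_hospital_death"], "neutrophil_count"]

# Shapiro-Wilk test of normality, applied per outcome group
print(stats.shapiro(died))
print(stats.shapiro(survived))

# Two-tailed Mann-Whitney U test comparing the two outcome groups
print(stats.mannwhitneyu(died, survived, alternative="two-sided"))
```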

As part of our study, we used machine learning methods and investigated their performance in predicting the presence of SCL and in-hospital mortality. However, we were not interested only in their predictive performance. The secondary aim of our study was to identify the extent to which the selected features affected the prediction accuracy. In particular, we wanted to investigate the predictive value of hematological indices and explore the possibility of creating a model based on them. That is why the interpretability of the constructed classification model and its ability to identify significant features were of crucial relevance.

We considered three different classification algorithms: logistic regression, gradient boosted trees (XGBoost) and the Dominance-based Rough Set Approach Balanced Rule Ensemble (DRSA-BRE). The logistic regression model was included in this study as a baseline classifier. Gradient boosted trees, by contrast, were used as a well-known and well-performing off-the-shelf classifier [13]. DRSA-BRE was explicitly included in the study due to the class imbalance (i.e. the disproportion between the numbers of cases in the classes) observed in both ACS problems. More precisely, in DRSA-BRE the undersampling neighborhood balanced bagging method [14] was applied to address the class imbalance problem. This type of classifier has recently been successfully applied to diabetic retinopathy assessment [15]. Additionally, to improve the predictive performance of XGBoost on the class-imbalanced problems, we undersampled the majority class in the training sets.
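
The sketch below illustrates the kind of majority-class undersampling applied before XGBoost training. It assumes a binary 0/1 target and synthetic data; it is not the authors' exact pipeline (in particular, DRSA-BRE relies on neighborhood balanced bagging rather than this simple random undersampling).

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 23))            # placeholder for the 23 numeric features
y_train = (rng.random(1000) < 0.05).astype(int)  # imbalanced placeholder outcome (~5% positive)

def undersample_majority(X, y, seed=0):
    """Keep all minority-class cases plus an equally sized random draw from the majority class."""
    r = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    keep = r.choice(majority, size=len(minority), replace=False)
    idx = np.concatenate([minority, keep])
    r.shuffle(idx)
    return X[idx], y[idx]

X_bal, y_bal = undersample_majority(X_train, y_train)
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_bal, y_bal)   # trained on the balanced subsample
```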

When using the logistic regression and XGBoost classifiers, missing values were filled in with the mean values from all the observations in the test set. Moreover, both logistic regression and XGBoost were trained only on the simplified set of features: neither of these classifiers was able to handle nominal values directly, and we decided not to transform them. The DRSA-BRE classifier was trained on both the full and the simplified sets of features. In DRSA-BRE, missing values were handled directly by the VC-DomLEM algorithm [16, 17], which was used as a component classifier in the constructed bagging ensemble.
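
As a minimal illustration of mean-value imputation of the kind used for logistic regression and XGBoost, the snippet below relies on scikit-learn's SimpleImputer; the toy matrix stands in for a few numeric columns of the simplified set.

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Toy matrix standing in for three numeric columns of the simplified set
X = np.array([[7.1, np.nan, 120.0],
              [5.4, 36.0, np.nan],
              [np.nan, 41.0, 135.0]])

imputer = SimpleImputer(strategy="mean")
X_filled = imputer.fit_transform(X)   # each missing entry replaced by its column mean
print(X_filled)
```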

As explained above, one of the aims of our study was to assess the predictive importance of the analyzed sets of features for the short-term ACS outcomes. The XGBoost classifier provides feature importance scores which reflect how valuable each feature was during model construction. For the DRSA-BRE classifier, the attribute relevance was evaluated by a confirmation measure (the degree to which the presence of an attribute in a rule confirms an accurate prediction). The higher the value of the confirmation measure, the more important the attribute [18, 19].
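
The following sketch shows how importance scores can be read from a fitted XGBoost model. The feature names and the synthetic target are hypothetical examples; the DRSA-BRE confirmation measures are computed within the jRS/jMAF tools and are not reproduced here.

```python
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
names = ["neutrophil_count", "systolic_bp", "creatinine", "age", "hematocrit"]
X = pd.DataFrame(rng.normal(size=(500, 5)), columns=names)
# Synthetic target loosely driven by one column, just to give the model some signal
y = (X["neutrophil_count"] + rng.normal(scale=0.5, size=500) > 1).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X, y)

# Relative importance scores, analogous in spirit to Fig. 2
importances = pd.Series(model.feature_importances_, index=names)
print(importances.sort_values(ascending=False))
```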

Model selection, optimization and fitting of the logistic regression and XGBoost models were performed using the scikit-learn [20] and XGBoost [13] software packages. The DRSA-BRE analysis was performed using the jRS library and the jMAF software package [21], which are available for download at http://www.cs.put.poznan.pl/jblaszczynski/Site/jRS.html. The plots and visualizations were generated using the matplotlib [22] software package.

We focused our analysis on four performance metrics: sensitivity, specificity, G-mean and AUC. Sensitivity is defined as the ratio of correctly predicted positive cases to all positive cases. Specificity is defined as the ratio of correctly predicted negative cases to all negative cases. Receiver operating characteristic (ROC) curve analysis is a popular tool for analyzing classifier performance; more precisely, performance is reflected by the area under the ROC curve (the so-called AUC measure) [23].

However, some researchers have shown that AUC analysis has limitations. For example, in the case of a highly skewed class distribution (i.e. class-imbalanced problems), it may lead to an overoptimistic estimate of classifier performance [24]. That is why we also used a simpler measure which is suitable for classifiers providing a purely deterministic prediction (see the discussion on the applicability of ROC analysis in [25]). This measure, called G-mean, is defined as the geometric mean of sensitivity and specificity [26].
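
As a minimal illustration, the sketch below computes all four reported metrics from toy binary predictions and scores using scikit-learn; the labels, predictions and scores are placeholders.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])    # toy labels
y_pred = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])    # toy deterministic predictions
y_score = np.array([0.1, 0.2, 0.7, 0.3, 0.9, 0.8, 0.4, 0.2, 0.6, 0.1])  # toy scores

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)               # true positive rate
specificity = tn / (tn + fp)               # true negative rate
g_mean = np.sqrt(sensitivity * specificity)
auc = roc_auc_score(y_true, y_score)       # AUC needs scores, not hard labels
print(sensitivity, specificity, g_mean, auc)
```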

Results

The basic descriptive statistics for the continuous numeric variables, together with the results of the Mann–Whitney U test, are presented in Table 2. Given that the distributions of the variables were not normal, medians and inter-quartile ranges (IQR) were used as measures of central tendency and dispersion. The categorical variables are summarized in Table 3. The inflammatory markers, including CRP, neutrophil count, monocyte count and RDW, were associated with both SCL and in-hospital mortality in univariate statistics, whereas NLR was associated with in-hospital mortality only. These results supported our initial idea of using the above variables in the construction of machine-learned models.

Table 2 Basic characteristics of continuous numerical variables grouped by outcomes
Table 3 Basic characteristics of nominal features divided by target groups

The predictive performance of the logistic regression, XGBoost and DRSA-BRE classifiers was assessed in a computational experiment. The parameters of all classifiers were tuned on the training data only. The classification performance was verified in a stratified fivefold cross-validation which was repeated ten times to improve the repeatability of the obtained results. Table 4 summarizes the predictive performance.
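
A sketch of this evaluation protocol, i.e. stratified fivefold cross-validation repeated ten times, is shown below using scikit-learn; the classifier and the synthetic data are placeholders for the models and records described above.

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 23))              # placeholder for the simplified feature set
y = (rng.random(600) < 0.3).astype(int)     # placeholder binary outcome

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
sensitivities = []
for train_idx, test_idx in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    y_hat = clf.predict(X[test_idx])
    sensitivities.append(recall_score(y[test_idx], y_hat))  # recall of the positive class

print(f"sensitivity: {np.mean(sensitivities):.3f} +/- {np.std(sensitivities):.3f}")
```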

Table 4 Best predictive performance results in fivefold cross-validation of classifiers trained on the simplified set and the full set of features

The results presented in Table 4 indicate a remarkably better performance of the classifiers in detecting in-hospital mortality than in detecting SCL. DRSA-BRE and XGBoost trained with majority-class undersampling performed equally well for both in-hospital mortality and SCL. Logistic regression performed worst of all. Considering the characteristics of the compared classifiers, we focused our attention on the sensitivity and specificity measures. G-mean was measured during the experiments with DRSA-BRE and was calculated afterwards for logistic regression and XGBoost. AUC, by contrast, was measured only for logistic regression and XGBoost and was approximated for DRSA-BRE based on the measured sensitivity and specificity. Unlike the other two classifiers, DRSA-BRE was able to handle nominal attributes directly [19]; hence, the experiments with the full set of features were carried out only with DRSA-BRE.

These experiments indicated that the full set of features did not bring a large increase in predictive performance with respect to the simplified set of features. The best result for in-hospital mortality was achieved by DRSA-BRE: 81.03 ± 2.4% sensitivity and 81.06 ± 0.5% specificity. The best result for SCL was also achieved by DRSA-BRE: 56.91 ± 0.2% sensitivity and 66.94 ± 0.2% specificity. These results were obtained with the full set of features. When the simplified set of features was used, DRSA-BRE and XGBoost achieved comparable predictive performance. A comparison of predictive performance measured by G-mean and AUC leads to similar conclusions. Given these results, we focused our further analysis on the detection of in-hospital mortality, since the prediction performance of the considered classifiers for SCL was not satisfactory.

Figure 1a, b presents the ROC curves for the evaluated classifiers. The XGBoost algorithm was superior in terms of sensitivity, while logistic regression achieved higher specificity scores, which can also be observed in the ROC curves. These differences, however, might not be significant, and we concluded that the performance of these classifiers was similar in both classification tasks.

Fig. 1 Receiver operating characteristic (ROC) curves presenting the performance of XGBoost and logistic regression in the detection of significant coronary lesion and in-hospital death. The green dot represents the approximate performance of the DRSA-BRE classifier
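
Plots such as Fig. 1 can be produced with matplotlib and scikit-learn, as sketched below. The scores are synthetic placeholders, and the green DRSA-BRE point is placed at the sensitivity/specificity values reported above for in-hospital mortality.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=200)
# Placeholder scores standing in for each classifier's predicted probabilities
xgb_scores = np.clip(0.6 * y_true + 0.6 * rng.random(200), 0, 1)
lr_scores = np.clip(0.4 * y_true + 0.7 * rng.random(200), 0, 1)

for name, scores in [("XGBoost", xgb_scores), ("Logistic regression", lr_scores)]:
    fpr, tpr, _ = roc_curve(y_true, scores)
    plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.2f})")

plt.plot([0, 1], [0, 1], "k--", label="Chance")
# Green dot at the reported DRSA-BRE sensitivity/specificity for in-hospital mortality
plt.scatter([1 - 0.811], [0.810], color="green", label="DRSA-BRE (approx.)")
plt.xlabel("1 - specificity (false positive rate)")
plt.ylabel("Sensitivity (true positive rate)")
plt.legend()
plt.show()
```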

Figure 2 presents the relative importance scores for the detection of in-hospital mortality. The top 5 most informative features were: neutrophil count, systolic blood pressure, creatinine level, age and hematocrit. Figures 3 and 4 present the confirmation measures provided by the DRSA-BRE classifier (full and simplified set of features, respectively). The features with positive confirmation measures in the simplified set included heart rate, age, diastolic and systolic blood pressure, neutrophil count and troponin elevation. This set partially overlaps with the features of the highest importance provided by the XGBoost classifier. The features with positive confirmation measures in the full data set included many clinical features, such as diabetes, smoking addiction, previous coronary interventions, MI and peripheral artery disease, which are known to be associated with the outcomes of coronary artery disease. Interestingly, the classifier that used that many features performed only slightly better than the classifier trained on the simplified set (G-mean 81.0 ± 1 vs 79.9 ± 1). As mentioned above, the simplified models used hematological inflammation markers, anthropometric data and simple measurements (heart rate and blood pressure).

Fig. 2 Averaged feature importance scores for the prediction of in-hospital death provided by the XGBoost classifier

Fig. 3 Confirmation measures for the detection of in-hospital death provided by the DRSA-BRE classifier (full set of features)

Fig. 4 Confirmation measures for the detection of in-hospital death provided by the DRSA-BRE classifier (simplified set of features)

The analysis of the strong decision rules induced by DRSA-BRE allows us to investigate the relationships between the features and the feature values that effectively lead to the detection of in-hospital mortality. Selected rules extracted from the DRSA-BRE classifier are presented below; a plain-code rendering of these rules is sketched after the list.

  • Rule 1: If systolic blood pressure ≤ 80 and neutrophil count ≥ 7.14, then in-hospital death occurs;

  • Rule 2: If systolic blood pressure ≤ 90 and troponin elevation ratio ≥ 5.29, then in-hospital death occurs;

  • Rule 3: If systolic blood pressure ≤ 80 and RDW ≥ 12.7, then in-hospital death occurs;

  • Rule 4: If systolic blood pressure ≤ 80 and NLR ≥ 3.06, then in-hospital death occurs.
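
As referenced above, the listed rules translate directly into simple threshold checks. The sketch below is a hypothetical, plain-Python rendering for illustration only: the dictionary keys and units are assumptions, and a real DRSA-BRE ensemble aggregates many weighted rules rather than this bare disjunction.

```python
def matches_any_mortality_rule(patient):
    """Return True if any of the four listed rules fires for the given record."""
    return any([
        patient["systolic_bp"] <= 80 and patient["neutrophil_count"] >= 7.14,    # Rule 1
        patient["systolic_bp"] <= 90 and patient["troponin_elevation"] >= 5.29,  # Rule 2
        patient["systolic_bp"] <= 80 and patient["rdw"] >= 12.7,                 # Rule 3
        patient["systolic_bp"] <= 80 and patient["nlr"] >= 3.06,                 # Rule 4
    ])

# Hypothetical record: Rules 1, 3 and 4 fire
example = {"systolic_bp": 78, "neutrophil_count": 9.2,
           "troponin_elevation": 1.1, "rdw": 13.0, "nlr": 4.5}
print(matches_any_mortality_rule(example))   # True
```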

Discussion and limitations

Decision rules based on the DRSA-BRE algorithm reflect some well-known mortality risk factors in ACS. Notably, most of the variables appearing in the rules selected by the DRSA-BRE classifier are also present in the Global Registry of Acute Coronary Events (GRACE) risk score. The GRACE risk score has been extensively validated in multiple studies and its use is currently recommended in the guidelines of the European Society of Cardiology [11].

Low systolic blood pressure is often related to cardiogenic shock; accordingly, a low value of systolic blood pressure was included in the majority of the strong decision rules. Moreover, troponin elevation corresponds to the size and severity of the infarction. The neutrophil to lymphocyte ratio and the red cell distribution width are also known to correlate with ACS outcomes [1, 2, 27]. Interestingly, it has been reported that combining RDW and the mean platelet volume (MPV) with the GRACE risk score improves its predictive value. However, we found no publications on attempts to create a model that relies mostly on laboratory test results.

Numerous studies exploring the application of machine learning techniques to the diagnostics of ACS have focused primarily on risk stratification of patients with chest pain admitted to the emergency room (ER). VanHouten et al. [28] applied random forests and elastic net algorithms to a data set of over 20,000 patients admitted to the ER with chest pain. Their results achieved high accuracy with AUC = 0.85, outperforming both the TIMI and GRACE scores. In their much broader patient selection, 41.9% of patients were considered positive for an ACS event. In our study, due to selection bias (patients had already been classified by doctors as having a high chance of SCL), it proved impossible to predict SCL based on the laboratory test results only, regardless of which classifier was used.

We identified possible causes of the unsatisfactory performance in detecting SCL. The retrospective data analysis made it possible to use a significant amount of data collected in electronic records, but it also implied many limitations. Patients were selected for the study based on the discharge diagnosis, which can introduce a selection bias. In our dataset, there were relatively many records with comorbidities such as a history of heart failure (15.6%) or diabetes (29%), as well as a history of PCI (34%) or CABG (10%). This may be partly because, for patients who were admitted multiple times during the analyzed period, every hospitalization was included in the study dataset.

Troponin levels are known to have high sensitivity and specificity in detecting myocardial ischemia. However, in our study we analyzed the laboratory results retrospectively, and different types of troponin assays were used during the analyzed period. Moreover, the specificity of troponin elevation in the detection of SCL among patients with chronic heart failure is lower. This might also have affected the performance in detecting SCL.

Wallert et al. [29] used a large multi-center register combined with data from the Swedish national death registry to predict 2-year survival versus non-survival. They achieved AUC = 0.77 on their data set of over 50,000 patients. The classification was based on 39 predictors. The best-performing model was based on linear regression, and age was identified as the most predictive factor.

Fonarow et al. developed a useful and straightforward algorithm based on decision trees to predict in-hospital mortality in acutely decompensated heart failure [30]. It identified low admission systolic blood pressure and high admission creatinine and urea nitrogen levels as the best predictors of mortality. Low systolic blood pressure and elevated creatinine are known predictors of short- and long-term mortality in ACS and are used in the GRACE risk score. In our study, the analysis of confirmation measures (provided by the DRSA-BRE algorithm) and feature importance scores (provided by the XGBoost algorithm) confirmed the high predictive value of these features for short-term mortality.

When analyzing data retrospectively, it is common to have certain values missing. Some laboratory tests are performed under specific conditions only, which in itself may constitute a confounding factor. Moreover, many variables analyzed in this study can be influenced by numerous health conditions. For example, a patient with a high neutrophil count could have suffered from a severe infection which, as a result, may have affected his or her chance of survival. These features might not be specific enough to improve the detection of SCL, but they performed well in predicting in-hospital mortality.

Conclusion

The existing risk scores for ACS outcomes partially rely on information from the clinical examination. Our results suggest that it may be possible to achieve good outcome predictions on the basis of simple routine measurements that can be obtained without the additional involvement of a physician. This might be of key importance in busy departments, where similar systems integrated with electronic medical records could automatically flag high-risk patients.

Both the DRSA-BRE and gradient boosted trees models for the detection of in-hospital mortality achieved high sensitivity and specificity, which makes these models potentially applicable. However, to make a justified statement about the performance of our machine learning models in a clinical setting, they need to be tested prospectively on a different group of patients. Our attempts to detect SCL did not bring the desired results, which leads to the conclusion that the presence of SCL in patients with ACS cannot be predicted using the features discussed in this paper.

Inflammatory processes play a key role in the development of atherosclerosis and the destabilization of plaques. Our study confirms the findings regarding the important role of neutrophil count in the prognosis of short-term ACS outcomes. However, we could not confirm the prognostic value of the platelet to lymphocyte ratio, and the neutrophil to lymphocyte ratio was associated only with in-hospital mortality in univariate tests.

Abbreviations

ACS: acute coronary syndrome
BMI: body mass index
CABG: coronary artery bypass grafting
IQR: inter-quartile range
MCV: mean cell volume
MPV: mean platelet volume
NLR: neutrophil to lymphocyte ratio
PLR: platelet to lymphocyte ratio
LDL: low-density lipoprotein
HDL: high-density lipoprotein
CRP: C-reactive protein
GFR: glomerular filtration rate
RDW: red cell distribution width
SCL: significant coronary lesion
DRSA-BRE: Dominance-based Rough Set Approach Balanced Rule Ensemble

References

  1. Budzianowski J, Pieszko K, Burchardt P, Rzeźniczak J, Hiczkiewicz J. The role of hematological indices in patients with acute coronary syndrome. Dis Markers. 2017. https://doi.org/10.1155/2017/3041565.

  2. Tamhane UU, Aneja S, Montgomery D, Rogers E-K, Eagle KA, Gurm HS. Association between admission neutrophil to lymphocyte ratio and outcomes in patients with acute coronary syndrome. Am J Cardiol. 2008;102:653–7.

  3. He J, Li J, Wang Y, Hao P, Hua Q. Neutrophil-to-lymphocyte ratio (NLR) predicts mortality and adverse-outcomes after ST-segment elevation myocardial infarction in Chinese people. Int J Clin Exp Pathol. 2014;7:4045–56.

  4. Chatterjee S, Chandra P, Guha G, Kalra V, Chakraborty A, Frankel R, et al. Pre-procedural elevated white blood cell count and neutrophil-lymphocyte (N/L) ratio are predictors of ventricular arrhythmias during percutaneous coronary intervention. Cardiovasc Hematol Disord Drug Targets. 2011;11:58–60.

  5. Timóteo AT, Papoila AL, Lousinha A, Alves M, Miranda F, Ferreira ML, et al. Predictive impact on medium-term mortality of hematological parameters in acute coronary syndromes: added value on top of GRACE risk score. Eur Heart J Acute Cardiovasc Care. 2015;4:172–9.

  6. Acet H, Ertaş F, Akıl MA, Özyurtlu F, Polat N, Bilik MZ, et al. Relationship between hematologic indices and global registry of acute coronary events risk score in patients with ST-segment elevation myocardial infarction. Clin Appl Thromb. 2016;22:60–8.

  7. Kurtul A, Murat SN, Yarlioglues M, Duran M, Ergun G, Acikgoz SK, et al. Association of platelet-to-lymphocyte ratio with severity and complexity of coronary artery disease in patients with acute coronary syndromes. Am J Cardiol. 2014;114:972–8.

  8. Wan Z-F, Zhou D, Xue J-H, Wu Y, Wang H, Zhao Y, et al. Combination of mean platelet volume and the GRACE risk score better predicts future cardiovascular events in patients with acute coronary syndrome. Platelets. 2014;25:447–51.

  9. Niu X, Yang C, Zhang Y, Zhang H, Yao Y. Mean platelet volume on admission improves risk prediction in patients with acute coronary syndromes. Angiology. 2015;66:456–63.

  10. Beam AL, Kohane IS. Big data and machine learning in health care. JAMA. 2018;319(13):1317–8.

  11. Roffi M, Patrono C, Collet J-P, Mueller C, Valgimigli M, Andreotti F, et al. 2015 ESC Guidelines for the management of acute coronary syndromes in patients presenting without persistent ST-segment elevation. Eur Heart J. 2016;37:267–315.

  12. Hu D, Huang Z, Chan T-M, Dong W, Lu X, Duan H. Utilizing Chinese admission records for MACE prediction of acute coronary syndrome. Int J Environ Res Public Health. 2016. https://doi.org/10.3390/ijerph13090912.

  13. Chen T, Guestrin C. XGBoost: a scalable tree boosting system. 2016. https://doi.org/10.1145/2939672.2939785.

  14. Błaszczyński J, Stefanowski J. Neighbourhood sampling in bagging for imbalanced data. Neurocomputing. 2015;150:529–42.

  15. Saleh E, Błaszczyński J, Moreno A, Valls A, Romero-Aroca P, de la Riva-Fernández S, et al. Learning ensemble classifiers for diabetic retinopathy assessment. Artif Intell Med. 2018;85:50–63.

  16. Błaszczyński J, Słowiński R, Szeląg M. Sequential covering rule induction algorithm for variable consistency rough set approaches. Inf Sci (Ny). 2011;181:987–1002.

  17. Błaszczyński J, Słowiński R, Szeląg M. Induction of ordinal classification rules from incomplete data. In: Yao J, Yang Y, Słowiński R, Greco S, Li H, Mitra S, editors. International conference on rough sets and current trends in computing. Berlin, Heidelberg: Springer; 2012. p. 56–65.

  18. Błaszczyński J, Słowiński R, Susmaga R. Rule-based estimation of attribute relevance. In: Yao J, Ramanna S, Wang G, Suraj Z, editors. International conference on rough sets and knowledge technology. Berlin: Springer; 2011. p. 36–44.

  19. Błaszczyński J, Greco S, Słowiński R. Inductive discovery of laws using monotonic rules. Eng Appl Artif Intell. 2012;25:284–94.

  20. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, et al. Scikit-learn: machine learning in Python. J Mach Learn Res. 2011;12:2825–30.

  21. Błaszczyński J, Greco S, Matarazzo B, Słowiński R, Szeląg M. jMAF—dominance-based rough set data analysis framework. In: Skowron A, Suraj Z, editors. Rough sets and intelligent systems—Professor Zdzisław Pawlak in memoriam, vol. 1. Berlin: Springer; 2013. p. 185–209.

  22. Hunter JD. Matplotlib: a 2D graphics environment. Comput Sci Eng. 2007;9:90–5.

  23. Chawla NV. Data mining for imbalanced datasets: an overview. In: Maimon O, Rokach L, editors. Data mining and knowledge discovery handbook. Boston: Springer; 2005. p. 853–67.

  24. Stefanowski J. Overlapping, rare examples and class decomposition in learning classifiers from imbalanced data. In: Ramanna S, Jain LC, Howlett RJ, editors. Emerging paradigms in machine learning. Berlin: Springer; 2013. p. 277–306.

  25. Wang BX, Japkowicz N. Boosting support vector machines for imbalanced data sets. Knowl Inf Syst. 2010;25:1–20.

  26. Kubat M, Matwin S. Addressing the curse of imbalanced training sets: one-sided selection. ICML. 1997;97:179–86.

  27. Uyarel H, Ergelen M, Cicek G, Kaya MG, Ayhan E, Turkkan C, et al. Red cell distribution width as a novel prognostic marker in patients undergoing primary angioplasty for acute myocardial infarction. Coron Artery Dis. 2011;22:138–44.

  28. VanHouten JP, Starmer JM, Lorenzi NM, Maron DJ, Lasko TA. Machine learning for risk prediction of acute coronary syndrome. AMIA Annu Symp Proc. 2014;2014:1940–9.

  29. Wallert J, Tomasoni M, Madison G, Held C. Predicting two-year survival versus non-survival after first myocardial infarction using machine learning and Swedish national register data. BMC Med Inform Decis Mak. 2017;17:99.

  30. Fonarow GC, Adams KF, Abraham WT, Yancy CW, Boscardin WJ, ADHERE Scientific Advisory Committee, Study Group, and Investigators. Risk stratification for in-hospital mortality in acutely decompensated heart failure—classification and regression tree analysis. JAMA. 2005;293:572.


Authors’ contributions

All authors have had access to the data and all drafts of the manuscript. Specific contributions are as follows: study design: KP, JH, PB, JB, JR; data collection: KP, JH, JB; data management and analysis: KP, PB, JH, JB; development of machine-learning models: KP, PB, JB, RS; manuscript drafting: KP, PB, JB; manuscript review: all. All authors read and approved the final manuscript.

Acknowledgements

This research, as acknowledged in the original submission, was presented in a poster session during the European Society of Cardiology Congress in Munich, 25–29 August 2018, as well as at the Annual Congress of the Polish Cardiac Society in Krakow, 13–15 September 2018, where it was awarded the first prize for the best poster.

Competing interests

The authors declare that they have no competing interests.

Availability of data and materials

The data sets which were used as input variables for the machine learning algorithms contain several indirect identifiers of patients (sex, age, weight, height and the place of treatment). For this reason, the data cannot be made publicly available in this form. However, the authors are willing to share their data on reasonable request after a case-by-case assessment by the local ethics committee.

Consent for publication

There are no details of individual patients reported in this manuscript. Therefore, the consent for publication was not required.

Ethics approval and consent to participate

The study utilized only pre-existing medical data. Therefore, patient consent was not required by the ethics committee.

Funding

The author(s) received no specific funding for this work.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Corresponding author

Corresponding author

Correspondence to Konrad Pieszko.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.



Cite this article

Pieszko, K., Hiczkiewicz, J., Budzianowski, P. et al. Machine-learned models using hematological inflammation markers in the prediction of short-term acute coronary syndrome outcomes. J Transl Med 16, 334 (2018). https://doi.org/10.1186/s12967-018-1702-5
