Michael W. Kattan, PhD

Department Chair

The Dr. Keyhan and Dr. Jafar Mobasseri Endowed Chair for Innovations in Cancer Research (Joint appointment with Department of Urology in the GUKI)

Lerner Research Institute, 9500 Euclid Avenue, Cleveland, Ohio 44195
Location: JJN3-230
Email: kattanm@ccf.org
Phone: (216) 444-0584
Fax: (216) 445-7659

 

My general research interest lies in medical decision making. More specifically, my research focuses on the development, validation, and use of prediction models. Most of these models are available online at http://riskcalc.org/ and are designed for physician use. I am also interested in quality-of-life assessment to support medical decision making (such as utility assessment), decision analysis, cost-effectiveness analysis, and comparative effectiveness. Here are some pages you might want to check out:

  1. My official Cleveland Clinic page
  2. My list of publications
  3. My Google Scholar page
  4. My ResearchGate page
  5. Researchers with the highest h-indices (#1032 through 9/2020)
  6. Most-cited scientists (#2716 through 2018)
  7. The most cited authors in urologic surgery (#1)

In other words ...

Ever since my dissertation, “A Comparison of Machine Learning with Traditional Statistical Techniques,” I’ve had a long-standing interest in machine learning (ML) and artificial intelligence (AI). At first, I compared AI with human experts [1] to better understand when one should outperform the other [2]. I then compared ML with traditional statistical techniques, similarly trying to understand when one would prove superior. I first developed a theoretical framework to describe the factors that should drive performance in favor of or against ML in any given situation [3]. With the framework in place, I simulated data to illustrate its validity [4], and I later published a condensed illustration of the framework [5]. In wanting to apply ML to more varied problems, I noticed that these techniques were not well suited to time-until-event data, so I built extensions to handle it [6,7]. With those in place, I was able to compare a variety of ML techniques with the standard statistical approach for time-until-event data, Cox proportional hazards regression [8]. What matters most is how well these ML and AI techniques fare on real-world data; to this end, I’ve studied their performance when predicting prostate cancer recurrence [9], clinical deterioration on the wards [10], and pelvic floor disorders after delivery [11]. In a recent comparative effectiveness study of bariatric surgery, we found that random forests were best for 2 of the models, but regression was superior for the remaining 6 [12].

1.         Kattan, M.W., Inductive expert systems vs. human experts. AI Expert, 1994: p. 32-38.

2.         Kattan, M.W., D.A. Adams, and M.S. Parks, A Comparison of Machine Learning with Human Judgment. J Management Inf Sys, 1993. 9(4): p. 37-57.

3.         Kattan, M.W. and R.B. Cooper, The predictive accuracy of computer-based classification decision techniques.  A review and research directions. Omega Int J Mgmt Sci, 1998. 26(4): p. 467-482.

4.         Kattan, M.W. and R.B. Cooper, A simulation of factors affecting machine learning techniques: an examination of partitioning and class proportions. Omega Int J Mgmt Sci, 2000. 28: p. 501-512.

5.         Kattan, M.W., Statistical prediction models, artificial neural networks, and the sophism "I am a patient, not a statistic". J Clin Oncol, 2002. 20(4): p. 885-887.

6.         Zupan, B., (Kattan, M.W.) et al., Machine learning for survival analysis: a case study on recurrence of prostate cancer. Artif Intell Med, 2000. 20(1): p. 59-75.

7.         Kattan, M.W., K.R. Hess, and J.R. Beck, Experiments to determine whether recursive partitioning (CART) or an artificial neural network overcomes theoretical limitations of Cox proportional hazards regression. Comput Biomed Res, 1998. 31: p. 363-373.

8.         Kattan, M.W., Comparison of Cox regression with other methods for determining prediction models and nomograms. J Urol, 2003. 170(Supplement): p. S6-S10.

9.         Cordon-Cardo, C., (Kattan, M.W.) et al., Improved prediction of prostate cancer recurrence through systems pathology. J Clin Invest, 2007. 117(7): p. 1876-1883.

10.       Churpek, M.M., (Kattan, M.W.) et al., Multicenter Comparison of Machine Learning Methods and Conventional Regression for Predicting Clinical Deterioration on the Wards. Crit Care Med, 2016. 44(2): p. 368-74.

11.       Jelovsek, J., (Kattan, M.W.) et al., Predicting risk of pelvic floor disorders 12 and 20 years after delivery. Am J Obstet Gynecol, 2018. 218(2): p. 222.e1-222.e19.

12.       Aminian, A., (Kattan, M.W.) et al., Predicting 10-Year Risk of End-Organ Complications of Type 2 Diabetes With and Without Metabolic Surgery: A Machine Learning Approach. Diabetes Care, 2020. doi: 10.2337/dc19-2057.


BREAKING NEWS! Our book is out. Get it here

Before getting into specific research interests, here are some useful statistical reporting guidelines, and here are ideas for nice figures and tables.   

RESEARCH INTERESTS

Inspired by personal frustrations with medical uncertainty, I am particularly interested in statistical prediction models and medical decision making:

A. Prediction Model Development

  1. Here are the requirements for having a statistical prediction model endorsed by the American Joint Committee on Cancer.
  2. Here is how we process data from Epic to make it research ready.
  3. In the TRIPOD group, we came up with a checklist of what should be reported in a paper presenting a prediction model.
  4. Making a prediction model when there is a time-varying covariate.
  5. Propensity scores do not improve the accuracy of statistical prediction models.
  6. Here's the code to make binary, ordinal, and survival outcome nomograms.
  7. Here's how to make a competing risks regression nomogram. Detailed R code is here.
  8. Machine learning approaches usually lose.
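
The mechanics behind a simple nomogram (item 6) can be sketched in a few lines: fit a model, rescale each predictor's contribution so that the predictor with the largest effect over its range spans 0-100 points, and map total points back to a predicted probability. The coefficients and variable ranges below are entirely hypothetical, not taken from any published model.

```python
import math

# Hypothetical logistic model: logit(p) = b0 + b1*age + b2*psa.
# Coefficients and ranges are illustrative only.
coef = {"age": 0.04, "psa": 0.10}
intercept = -4.0
ranges = {"age": (40, 80), "psa": (0, 50)}

# Nomogram convention: the predictor with the largest effect over its
# observed range spans 0-100 points; others are scaled proportionally.
effect = {k: coef[k] * (hi - lo) for k, (lo, hi) in ranges.items()}
max_effect = max(effect.values())

def points(var, value):
    """Points awarded for one predictor value."""
    lo, _ = ranges[var]
    return coef[var] * (value - lo) / max_effect * 100

def predicted_prob(values):
    """Predicted probability from the underlying logistic model."""
    lp = intercept + sum(coef[k] * v for k, v in values.items())
    return 1 / (1 + math.exp(-lp))

patient = {"age": 65, "psa": 12}
total = sum(points(k, v) for k, v in patient.items())
print(round(total, 1), round(predicted_prob(patient), 3))
```

In a printed nomogram the same arithmetic is done graphically: the reader marks each predictor on its points axis, sums the points, and reads the probability off the bottom scale.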

B. Prediction Model Assessment

  1. Here's a decent way to compare two rival prediction tools that both predict on an ordinal scale.
  2. How to make a calibration plot for a prediction model in the presence of competing risks.
  3. How to estimate a time-dependent concordance index.
  4. This is why you can't compare two prediction models tested on separate datasets. The figure is updated here.
  5. A guide to the many metrics for assessing prediction models.
  6. How to determine the area under the ROC curve for a binary diagnostic test.
  7. The concordance index is not proper.  Use the Index of Predictive Accuracy (IPA) instead.
  8. This is a framework for reviewers when evaluating statistical prediction modeling manuscripts.
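
Two of the metrics above can be illustrated with a minimal, self-contained sketch: the area under the ROC curve (equivalently, the concordance index for a binary outcome, item 6) and the IPA (item 7), taken here as 1 − Brier(model)/Brier(null). The outcome and prediction vectors are invented for illustration.

```python
# Invented data: 1 = event, 0 = no event, with model-predicted risks.
y    = [0, 0, 1, 1, 1, 0]
pred = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]

def auc(y, p):
    # Fraction of (event, non-event) pairs ranked concordantly;
    # ties count as half a win.
    pairs = [(pi, pj) for yi, pi in zip(y, p) if yi == 1
                      for yj, pj in zip(y, p) if yj == 0]
    wins = sum(1.0 if pi > pj else 0.5 if pi == pj else 0.0
               for pi, pj in pairs)
    return wins / len(pairs)

def brier(y, p):
    # Mean squared error of the predicted probabilities.
    return sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y)

def ipa(y, p):
    # IPA = 1 - Brier(model) / Brier(null model that always predicts
    # the overall event rate).
    base = sum(y) / len(y)
    return 1 - brier(y, p) / brier(y, [base] * len(y))

print(round(auc(y, pred), 3), round(ipa(y, pred), 3))
```

Unlike the AUC, which depends only on ranks, the IPA rewards calibration as well as discrimination, which is why it is the preferred summary in item 7.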

C. Prediction Communication and Interpretation

  1. As cancer survivors, we like to think we both needed the treatment we received and were cured by it, but that is hard to prove.
  2. Here's an example of how patients should be counseled: a table of tailored predictions of benefits and harms crossed by treatment options.
  3. It is useless and confusing to put confidence intervals around a predicted probability.
  4. You must apply a statistical prediction model to achieve informed consent.
  5. What is a real nomogram anyway?
  6. My definition of comparative effectiveness.
  7. Too often we diagnose patients based on some arbitrary cutoff.  Let's stop doing that and recognize risk is on a continuum.
  8. Don't just look at the p-value when judging a new marker.
  9. Cancer staging systems need to go away.
  10. "I'm a patient, not a statistic" is bogus.
  11. Here is how we make our online risk calculators.
  12. Patients found our risk calculator decision aid easy to use and useful.

D. Predictions Doctors Make

  1. The wisdom of crowds of doctors: averaging their individual predictions improves accuracy over the individuals themselves.
  2. Probably due to cognitive biases, predicted probabilities coming from doctors are often poorly calibrated.
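
The crowd-averaging result in item 1 has a simple mechanical core: because squared error is convex, the Brier score of the averaged prediction can never exceed the average of the individual doctors' Brier scores. A toy sketch with invented numbers:

```python
# Invented outcomes and three doctors' predicted probabilities.
y = [1, 0, 1, 0]
doctors = [
    [0.9, 0.6, 0.4, 0.1],
    [0.5, 0.2, 0.9, 0.5],
    [0.7, 0.4, 0.6, 0.3],
]

def brier(y, p):
    # Mean squared error of predicted probabilities.
    return sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y)

# Average the three predictions patient by patient.
avg = [sum(ps) / len(ps) for ps in zip(*doctors)]

crowd = brier(y, avg)
individual = sum(brier(y, d) for d in doctors) / len(doctors)
assert crowd <= individual   # guaranteed by Jensen's inequality
print(round(crowd, 4), round(individual, 4))
```

The inequality holds for any data; whether the crowd also beats the *best* individual doctor is an empirical question, which is what the linked study examines.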

E. Decision Analysis and Utility Assessment

  1. The method used to measure utilities affects the decision analytic recommendation.
  2. Unfortunately, you probably have to measure individual patient utilities to run a decision analysis on someone.
  3. Stop multiplying health state utilities to get the utility of the combined health state.
  4. Why utilities are more helpful than traditional health-related quality of life measures with respect to medical decision making.
  5. The layout of the time trade-off is problematic.
  6. How to measure standard gamble on paper.
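
For readers unfamiliar with the two elicitation methods in items 5 and 6, here is how their textbook scoring rules work, sketched with hypothetical indifference points:

```python
def tto_utility(years_full_health, years_in_state):
    """Time trade-off: if a patient is indifferent between x years in
    full health and t years in the health state, then u = x / t."""
    return years_full_health / years_in_state

def sg_utility(p_indifference):
    """Standard gamble: if a patient is indifferent between the health
    state for certain and a gamble giving full health with probability
    p (and death otherwise), then u = p."""
    return p_indifference

# Hypothetical responses: trades 10 years in the state for 7 in full
# health; indifferent at an 85% chance of cure in the gamble.
print(tto_utility(7, 10))   # 0.7
print(sg_utility(0.85))     # 0.85
```

That the same patient can give different utilities under the two methods is precisely the problem raised in item 1 of this list.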

F. Novel Uses of Prediction Models

  1. Here's an example of how to make a synthetic control arm for a single arm study, using a prediction model.  Here's how to calculate the p-value.
  2. Rather than running a decision analysis at the bedside, apply a nomogram instead -- much easier and same answer.
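
A minimal sketch of the synthetic-control idea in item 1, under one simple set of assumptions: the prediction model supplies each single-arm patient's event probability under standard care, the sum of those probabilities is the expected event count, and a normal approximation to the Poisson-binomial distribution yields a p-value. The linked paper may use a different test; all numbers here are invented.

```python
import math

# Model-predicted event probabilities under standard care for each
# patient enrolled in the single-arm study (hypothetical values).
p_model = [0.10, 0.25, 0.40, 0.15, 0.30, 0.20, 0.35, 0.25]
observed_events = 1   # events actually seen on the new treatment

expected = sum(p_model)                          # Poisson-binomial mean
variance = sum(p * (1 - p) for p in p_model)    # Poisson-binomial variance
z = (observed_events - expected) / math.sqrt(variance)
p_two_sided = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
print(round(expected, 2), round(p_two_sided, 3))
```

The model-based expectation plays the role of the missing control arm; the test simply asks whether the observed event count is compatible with it.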


11/09/2020

COVID-19 Risk Model Developed by Cleveland Clinic Now Available to Health Systems Around the World Through Epic

A COVID-19 risk prediction model designed by Cleveland Clinic researchers—including Michael Kattan, PhD, Chair of the Department of Quantitative Health Sciences, and Lara Jehi, MD, Cleveland Clinic’s Chief Research Information Officer—is now available to health systems around the world through Epic.




08/11/2020

New Prediction Model Can Forecast Personalized Risk for COVID-19-Related Hospitalization

Cleveland Clinic researchers have developed and validated a risk prediction model (called a nomogram) that can help physicians predict which patients who have recently tested positive for SARS-CoV-2, the virus that causes COVID-19, are at greatest risk for hospitalization.




08/03/2020

New Analysis Shows Surgery for Drug-Resistant Temporal Lobe Epilepsy is Cost-Effective

U.S. patients with drug-resistant temporal lobe epilepsy (DR-TLE) should be referred for evaluation for epilepsy surgery “without hesitation,” concludes a new model-based analysis of surgery cost effectiveness from Cleveland Clinic researchers.




06/15/2020

Researchers Develop First Model to Predict Likelihood of Testing Positive for COVID-19 and Disease-Related Outcomes

Cleveland Clinic researchers have developed the world’s first risk prediction model for healthcare providers to forecast an individual patient’s likelihood of testing positive for COVID-19 as well as their outcomes from the disease.




03/26/2020

Researchers Develop COVID-19 Case & Mortality Dashboard

Led by Michael Kattan, PhD, chair of the Department of Quantitative Health Sciences (QHS), Lerner Research Institute investigators have created a dashboard to track COVID-19 case and mortality data in the U.S.




11/05/2019

Risk Calculator Predicts Diabetes Complications from Weight Loss Surgery

Patients struggling with type 2 diabetes and obesity are faced with the decision of whether to receive usual medical care or undergo weight-loss surgery. Now, a new risk calculator developed by a team of Cleveland Clinic researchers can show these patients their risks of developing major health complications over the next 10 years depending on which course of treatment they choose.




01/23/2019

New Statistical Guidelines in Urology Research

A panel of urology experts from eleven universities and medical centers across the United States and United Kingdom, including Cleveland Clinic, recently published a new set of guidelines for reporting statistics in urology research. The recommendations are based on the consensus of the statistical consultants to four leading urology journals: Urology, European Urology, The Journal of Urology, and BJUI, and will be published in each of the four journals.




04/18/2018

Kattan Recognized with National Award

Michael Kattan, PhD, MBA, Chair of Lerner Research Institute's Department of Quantitative Health Sciences and a joint appointee in the Cleveland Clinic Glickman Urological and Kidney Institute, has been elected a Fellow of the American Statistical Association (ASA). The formal induction ceremony will be in Vancouver this coming July. Dr. Kattan was nominated by an ASA-member peer for his excellent reputation and outstanding contributions to statistical science.