"# Risk Analysis of the Space Shuttle: Pre-Challenger Prediction of Failure"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this document we reperform some of the analysis provided in \n",
"*Risk Analysis of the Space Shuttle: Pre-Challenger Prediction of Failure* by *Siddhartha R. Dalal, Edward B. Fowlkes, Bruce Hoadley* published in *Journal of the American Statistical Association*, Vol. 84, No. 408 (Dec., 1989), pp. 945-957 and available at http://www.jstor.org/stable/2290069. \n",
"\n",
"On the fourth page of this article, the authors indicate that the maximum likelihood estimates of the logistic regression using only temperature are $\\hat{\\alpha}=5.085$ and $\\hat{\\beta}=-0.1156$, with asymptotic standard errors $s_{\\hat{\\alpha}}=3.052$ and $s_{\\hat{\\beta}}=0.047$. The goodness of fit reported for this model is $G^2=18.086$ with 21 degrees of freedom. Our goal is to reproduce the computation behind these values and Figure 4 of this article, possibly in a nicer-looking way."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Technical information on the computer on which the analysis is run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We use Python 3 with the pandas, statsmodels, numpy, matplotlib and seaborn libraries."
]
},
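The code cell that follows (its source is truncated in this export) prints the environment information shown in its output. A minimal sketch of such a report, using only the standard library:

```python
import sys
import platform

# Print the interpreter version and basic machine information,
# matching the kind of output shown in the cell below
# (Python version string, then the uname tuple).
print(sys.version)
print(platform.uname())
```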
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 18:10:19) \n",
"[GCC 7.2.0]\n",
"uname_result(system='Linux', node='3a716011d2b6', release='4.4.0-116-generic', version='#140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018', machine='x86_64', processor='x86_64')\n",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's assume O-rings independently fail with the same probability which solely depends on temperature. A logistic regression should allow us to estimate the influence of temperature."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<table class=\"simpletable\">\n",
"<caption>Generalized Linear Model Regression Results</caption>\n",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The maximum likelihood estimates of the intercept and of the temperature coefficient are thus $\\hat{\\alpha}=5.0849$ and $\\hat{\\beta}=-0.1156$. This **corresponds** to the values from the article of Dalal *et al.* The standard errors, however, are $s_{\\hat{\\alpha}} = 7.477$ and $s_{\\hat{\\beta}} = 0.115$, which is **different** from the $3.052$ and $0.04702$ reported by Dalal *et al.* The deviance is $3.01444$ with 21 degrees of freedom; I cannot find any value similar to the goodness of fit ($G^2=18.086$) reported by Dalal *et al.* There seems to be something wrong. Oh I know: I haven't indicated that each of my observations is actually the aggregate of 6 O-rings per launch. Let's indicate these weights. Since the weights are the same for every launch, they do not change the point estimates of the fit, but they do influence the variance estimates."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<table class=\"simpletable\">\n",
"<caption>Generalized Linear Model Regression Results</caption>\n",
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**I think I have managed to correctly compute and plot the uncertainty of my prediction.** Although the shaded area seems very similar to [the one obtained with R](https://app-learninglab.inria.fr/moocrr/gitlab/moocrr-session3/moocrr-reproducibility-study/tree/master/challenger.pdf), I can spot a few differences (e.g., the blue point for temperature 63 lies outside the area). Could this be a numerical error, or a difference in the statistical method? It is not clear which one is \"right\".\n",
"\n",
"Note: URL corrected based on the forum post at https://www.fun-mooc.fr/courses/course-v1:inria+41016+self-paced/courseware/7bf2267c336246f9b6518db624692e14/96b7ce47bd11466a9a2e63d8e8a93d99/"
]
}
-#+BEGIN_SRC python :session :export both :results value
+#+BEGIN_SRC python :session :exports both :results value
 data = pd.read_csv("https://app-learninglab.inria.fr/moocrr/gitlab/moocrr-session3/moocrr-reproducibility-study/raw/master/data/shuttle.csv")
 data
 #+END_SRC
@@ -181,7 +181,7 @@ data
 We know from our previous experience on this data set that filtering
 data is a really bad idea. We will therefore process it as such.
-#+BEGIN_SRC python :session :export both :results output
+#+BEGIN_SRC python :session :exports both :results output
 #%matplotlib inline
 pd.set_option('mode.chained_assignment',None) # this removes a useless warning from pandas
 import matplotlib.pyplot as plt
@@ -206,7 +206,7 @@ Let's assume O-rings independently fail with the same probability which
 solely depends on temperature. A logistic regression should allow us to
 estimate the influence of temperature.
-#+BEGIN_SRC python :session :export both :results value
+#+BEGIN_SRC python :session :exports both :results value
 import statsmodels.api as sm
 data["Success"]=data.Count-data.Malfunction
@@ -228,7 +228,7 @@ Model Family: Binomial Df Model: 1