Commit 09ae2950 authored by Jamal KHAN

Exercise 3 Part 1

parent 215b9255
@@ -39,9 +39,12 @@ We use only basic R functionality, so an earlier version might be OK, but we did
(print "Please activate R in org-babel (org-babel-do-languages)!")) (print "Please activate R in org-babel (org-babel-do-languages)!"))
#+END_SRC #+END_SRC
#+RESULTS:
* Data preprocessing
The data on the incidence of influenza-like illness are available from the Web site of the [[http://www.sentiweb.fr/][Réseau Sentinelles]]. We download them as a file in CSV format, in which each line corresponds to a week in the observation period. Only the complete dataset, starting in 1984 and ending with a recent week, is available for download. The URL is:
#+NAME: data-url
http://www.sentiweb.fr/datasets/incidence-PAY-3.csv
@@ -63,12 +66,27 @@ This is the documentation of the data from [[https://ns.sentiweb.fr/incidence/cs
The [[https://en.wikipedia.org/wiki/ISO_8601][ISO-8601]] format is popular in Europe, but less so in North America. This may explain why few software packages handle this format. The Python language does it since version 3.6. We therefore use Python for the pre-processing phase, which has the advantage of not requiring any additional library. (Note: we will explain in module 4 why it is desirable for reproducibility to use as few external libraries as possible.)
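As a minimal illustration (a sketch, not the preprocessing code used below), ~datetime.strptime~ can parse an ISO week date directly with the ~%G~, ~%V~ and ~%u~ directives introduced in Python 3.6:
#+BEGIN_SRC python :results output
import datetime

# Parse an ISO-8601 week date: ISO year 2020, ISO week 37, weekday 1 (Monday).
# The %G (ISO year), %V (ISO week) and %u (ISO weekday) directives are
# supported by strptime since Python 3.6.
monday = datetime.datetime.strptime("2020-W37-1", "%G-W%V-%u").date()
print(monday)
#+END_SRC
For week 37 of 2020, the last week in the dataset, this yields 2020-09-07, in agreement with the converted dates shown further below.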
** Download
To make the data locally available, we first download it and save it for future use. The date of retrieval is output as the result.
#+BEGIN_SRC python :results output :var data_url=data-url
data_file = 'syndrome_grippal.csv'
import datetime
from urllib.request import urlretrieve
import os
if not os.path.exists(data_file):
    urlretrieve(data_url, data_file)
print(f'Data is retrieved at {datetime.datetime.utcnow()} UTC')
#+END_SRC
#+RESULTS:
: Data is retrieved at 2020-09-16 21:52:00.330209 UTC
Now we extract the part of the data we are interested in. We first split the file into lines, of which we discard the first one that contains a comment. We then split the remaining lines into columns.
#+BEGIN_SRC python :results silent
data = open(data_file, 'rb').read()
lines = data.decode('latin-1').strip().split('\n')
data_lines = lines[1:]
table = [line.split(',') for line in data_lines]
@@ -79,6 +97,13 @@ Let's have a look at what we have so far:
table[:5]
#+END_SRC
#+RESULTS:
| week | indicator | inc | inc_low | inc_up | inc100 | inc100_low | inc100_up | geo_insee | geo_name |
| 202037 | 3 | 22799 | 18087 | 27511 | 35 | 28 | 42 | FR | France |
| 202036 | 3 | 10847 | 8019 | 13675 | 16 | 12 | 20 | FR | France |
| 202035 | 3 | 9918 | 6842 | 12994 | 15 | 10 | 20 | FR | France |
| 202034 | 3 | 6084 | 3090 | 9078 | 9 | 4 | 14 | FR | France |
** Checking for missing data
Unfortunately there are many ways to indicate the absence of a data value in a dataset. Here we check for a common one: empty fields. For completeness, we should also look for non-numerical data in numerical columns. We don't do this here, but checks in later processing steps would catch such anomalies.
@@ -93,6 +118,9 @@ for row in table:
        valid_table.append(row)
#+END_SRC
#+RESULTS:
: ['198919', '3', '0', '', '', '0', '', '', 'FR', 'France']
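For reference, such a check can be sketched as follows, assuming ~table~ as built above; rows with an empty field are reported and left out of ~valid_table~:
#+BEGIN_SRC python :results output
# Minimal sketch of the missing-data check (assumes `table` from the extraction
# step above). Rows containing an empty field are printed for inspection and
# excluded; all complete rows are collected in `valid_table`.
valid_table = []
for row in table:
    if any(column == '' for column in row):
        print(row)
    else:
        valid_table.append(row)
#+END_SRC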
** Extraction of the required columns
There are only two columns that we will need for our analysis: the first (~"week"~) and the third (~"inc"~). We check the names in the header to be sure we pick the right data. We make a new table containing just the two columns required, without the header.
#+BEGIN_SRC python :results silent
@@ -110,6 +138,21 @@ Let's look at the first and last lines. We insert ~None~ to indicate to org-mode
[('week', 'inc'), None] + data[:5] + [None] + data[-5:]
#+END_SRC
#+RESULTS:
| week | inc |
|--------+--------|
| 202037 | 22799 |
| 202036 | 10847 |
| 202035 | 9918 |
| 202034 | 6084 |
| 202033 | 6106 |
|--------+--------|
| 198448 | 78620 |
| 198447 | 72029 |
| 198446 | 87330 |
| 198445 | 135223 |
| 198444 | 68422 |
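For reference, the extraction performed above can be sketched as follows, assuming ~valid_table~ from the missing-data check, with the header row still in first position:
#+BEGIN_SRC python :results silent
# Minimal sketch of the column extraction (assumes `valid_table` with its header
# row in first position). The "week" and "inc" columns are located by name in
# the header, then only these two fields are kept for each data row.
header = valid_table[0]
week_col = header.index('week')   # expected to be the first column
inc_col = header.index('inc')     # expected to be the third column
data = [(row[week_col], row[inc_col]) for row in valid_table[1:]]
#+END_SRC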
** Verification
It is always prudent to verify if the data looks credible. A simple fact we can check for is that weeks are given as six-digit integers (four for the year, two for the week), and that the incidence values are positive integers.
#+BEGIN_SRC python :results output
@@ -120,6 +163,8 @@ for week, inc in data:
print("Suspicious value in column 'inc': ", (week, inc)) print("Suspicious value in column 'inc': ", (week, inc))
#+END_SRC #+END_SRC
#+RESULTS:
No problem - fine!
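For reference, this plausibility check can be sketched as follows, assuming ~data~ is the list of ~(week, inc)~ string pairs constructed above:
#+BEGIN_SRC python :results output
# Minimal sketch of the plausibility check (assumes `data` as a list of
# (week, inc) string pairs). Weeks must be six-digit numbers; incidence values
# must be non-negative integers. Anything else is reported as suspicious.
for week, inc in data:
    if len(week) != 6 or not week.isdigit():
        print("Suspicious value in column 'week': ", (week, inc))
    if not inc.isdigit():
        print("Suspicious value in column 'inc': ", (week, inc))
#+END_SRC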
** Date conversion
@@ -139,6 +184,21 @@ str_data = [(str(date), str(inc)) for date, inc in converted_data]
[('date', 'inc'), None] + str_data[:5] + [None] + str_data[-5:]
#+END_SRC
#+RESULTS:
| date | inc |
|------------+--------|
| 1984-10-29 | 68422 |
| 1984-11-05 | 135223 |
| 1984-11-12 | 87330 |
| 1984-11-19 | 72029 |
| 1984-11-26 | 78620 |
|------------+--------|
| 2020-08-10 | 6106 |
| 2020-08-17 | 6084 |
| 2020-08-24 | 9918 |
| 2020-08-31 | 10847 |
| 2020-09-07 | 22799 |
** Date verification
We do one more verification: our dates must be separated by exactly one week, except around the missing data point.
#+BEGIN_SRC python :results output
@@ -148,6 +208,9 @@ for date1, date2 in zip(dates[:-1], dates[1:]):
print(f"The difference between {date1} and {date2} is {date2-date1}") print(f"The difference between {date1} and {date2} is {date2-date1}")
#+END_SRC #+END_SRC
#+RESULTS:
: The difference between 1989-05-01 and 1989-05-15 is 14 days, 0:00:00
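For reference, this check can be sketched as follows, assuming ~converted_data~ is the list of ~(date, inc)~ pairs with ~datetime.date~ values produced in the conversion step:
#+BEGIN_SRC python :results output
import datetime

# Minimal sketch of the date-spacing check (assumes `converted_data` as a list
# of (date, inc) pairs in chronological order). Consecutive dates should differ
# by exactly one week; any other gap is reported.
dates = [date for date, _ in converted_data]
for date1, date2 in zip(dates[:-1], dates[1:]):
    if date2 - date1 != datetime.timedelta(weeks=1):
        print(f"The difference between {date1} and {date2} is {date2-date1}")
#+END_SRC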
** Transfer Python -> R ** Transfer Python -> R
We switch to R for data inspection and analysis, because the code is more concise in R and requires no additional libraries. We switch to R for data inspection and analysis, because the code is more concise in R and requires no additional libraries.
...@@ -163,17 +226,31 @@ data$date <- as.Date(data$date) ...@@ -163,17 +226,31 @@ data$date <- as.Date(data$date)
summary(data) summary(data)
#+END_SRC #+END_SRC
#+RESULTS:
:
: date inc
: Min. :1984-10-29 Min. : 0
: 1st Qu.:1993-10-21 1st Qu.: 5016
: Median :2002-10-07 Median : 15962
: Mean :2002-10-06 Mean : 61490
: 3rd Qu.:2011-09-22 3rd Qu.: 50052
: Max. :2020-09-07 Max. :1001824
** Inspection
Finally we can look at a plot of our data!
#+BEGIN_SRC R :exports results :results graphics :file inc-plot.png
with(data, plot(date, inc, type="l", xlab="Date", ylab="Weekly incidence"))
#+END_SRC
#+RESULTS:
A zoom on the last few years makes the peaks in winter stand out more clearly.
#+BEGIN_SRC R :results output graphics :file inc-plot-zoom.png
plot(tail(data, 200), type="l", xlab="Date", ylab="Weekly incidence")
#+END_SRC
#+RESULTS:
* Study of the annual incidence
** Computation of the annual incidence
@@ -201,19 +278,40 @@ annnual_inc = data.frame(year = years,
head(annnual_inc)
#+END_SRC
#+RESULTS:
| 1986 | 5100540 |
| 1987 | 2861556 |
| 1988 | 2766142 |
| 1989 | 5460155 |
| 1990 | 5233987 |
| 1991 | 1660832 |
** Inspection
A plot of the annual incidence:
#+BEGIN_SRC R :results output graphics :file annual-inc-plot.png
plot(annnual_inc, type="p", xlab="Year", ylab="Annual incidence")
#+END_SRC
#+RESULTS:
** Identification of the strongest epidemics
A list sorted by decreasing annual incidence makes it easy to find the most important ones:
#+BEGIN_SRC R :results output
head(annnual_inc[order(-annnual_inc$incidence),])
#+END_SRC
#+RESULTS:
: year incidence
: 4 1989 5460155
: 5 1990 5233987
: 1 1986 5100540
: 28 2013 4182265
: 25 2010 4085126
: 14 1999 3897443
Finally, a histogram clearly shows the few very strong epidemics, which affect about 10% of the French population, but are rare: there were three of them in the course of 35 years. The typical epidemic affects only half as many people.
#+BEGIN_SRC R :results output graphics :file annual-inc-hist.png
hist(annnual_inc$incidence, breaks=10, xlab="Annual incidence", ylab="Number of observations", main="")
#+END_SRC
#+RESULTS: