The [[https://en.wikipedia.org/wiki/ISO_8601][ISO-8601]] format is popular in Europe.
** Download
To protect ourselves in case the Réseau Sentinelles Web server disappears or is modified, we make a local copy of this dataset and store it together with our analysis. Downloading the data at each execution would be unnecessary and even risky: in case of a malfunction, we might replace our file with a corrupted version. We therefore download the data only if no local copy exists.
#+BEGIN_SRC python :results output :var data_url=data-url
data_file = "syndrome-grippal.csv"
import os
import urllib.request
if not os.path.exists(data_file):
    urllib.request.urlretrieve(data_url, data_file)
#+END_SRC
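To guard further against the corruption scenario mentioned above, the download could also be made atomic: fetch into a temporary file and rename it only once the transfer has completed. This is a sketch of that idea, not part of the pipeline; the ~.part~ suffix is an arbitrary choice:
#+BEGIN_SRC python :eval no
# Sketch: an interrupted download cannot clobber a good local copy,
# because the final name only appears after a complete transfer.
import os
import urllib.request

data_file = "syndrome-grippal.csv"
if not os.path.exists(data_file):
    tmp_file = data_file + ".part"  # hypothetical temporary name
    urllib.request.urlretrieve(data_url, tmp_file)  # data_url from the data-url definition above
    os.rename(tmp_file, data_file)  # atomic when both paths are on the same filesystem
#+END_SRC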
We start preprocessing by extracting the part we are interested in: we split the file into lines, discard the first one, which contains a comment, and split the remaining lines into columns.
#+BEGIN_SRC python :results silent :var data_url=data-url :exports both
data = open(data_file, 'rb').read()
lines = data.decode('latin-1').strip().split('\n')
data_lines = lines[1:]
table = [line.split(',') for line in data_lines]
#+END_SRC
Let's have a look at what we have so far:
#+BEGIN_SRC python :results value :exports both
table[:5]
#+END_SRC
** Checking for missing data
Unfortunately there are many ways to indicate the absence of a data value in a dataset. Here we check for a common one: empty fields. For completeness, we should also look for non-numerical data in numerical columns. We don't do this here, but checks in later processing steps would catch such anomalies.
We make a new dataset without the lines that contain empty fields. We print those lines to preserve a trace of their contents.
#+BEGIN_SRC python :results output :exports both
valid_table = []
for row in table:
    missing = any(column == '' for column in row)
    if missing:
        print(row)
    else:
        valid_table.append(row)
#+END_SRC
** Extraction of the required columns
There are only two columns that we will need for our analysis: the first (~"week"~) and the third (~"inc"~). We check the names in the header to be sure we pick the right data. We make a new table containing just the two columns required, without the header.
#+BEGIN_SRC python :results silent :exports both
week = [row[0] for row in valid_table]
assert week[0] == 'week'
del week[0]
inc = [row[2] for row in valid_table]
assert inc[0] == 'inc'
del inc[0]
data = list(zip(week, inc))
#+END_SRC
Let's look at the first and last lines. We insert ~None~ to indicate to org-mode the separation between the three parts of the table: header, first lines, last lines.
#+BEGIN_SRC python :results value :exports both
[('week', 'inc'), None] + data[:5] + [None] + data[-5:]
#+END_SRC
** Verification
It is always prudent to verify that the data looks credible. A simple fact we can check is that weeks are given as six-digit integers (four digits for the year, two for the week), and that the incidence values are non-negative integers.
#+BEGIN_SRC python :results output :exports both
for week, inc in data:
    if len(week) != 6 or not week.isdigit():
        print("Suspicious value in column 'week': ", (week, inc))
    if not inc.isdigit():
        print("Suspicious value in column 'inc': ", (week, inc))
#+END_SRC
No output, so no suspicious values: all is fine!
** Date conversion
To simplify the subsequent processing, we replace the ISO week numbers by the dates of each week's Monday. This is also a good occasion to sort the lines by increasing date, and to convert the incidences from strings to integers.
#+BEGIN_SRC python :results silent :exports both
import datetime
converted_data = [(datetime.datetime.strptime(year_and_week + ":1", '%G%V:%u').date(),
                   int(inc))
                  for year_and_week, inc in data]
converted_data.sort(key=lambda record: record[0])
#+END_SRC
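The format string deserves a word of explanation: ~%G~ is the ISO year, ~%V~ the ISO week number, and ~%u~ the weekday, with 1 standing for Monday. Parsing these directives requires Python 3.6 or later. As a minimal sanity check, using a date that is easy to verify by hand, ISO week 1 of 2018 starts on Monday, January 1st:
#+BEGIN_SRC python :results output :exports both
# ISO week 2018-W01 starts on Monday 2018-01-01, so the conversion
# should return exactly that date.
import datetime
monday = datetime.datetime.strptime("201801:1", '%G%V:%u').date()
print(monday, monday.isoweekday())  # expected: 2018-01-01 1
#+END_SRC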
We'll look again at the first and last lines:
#+BEGIN_SRC python :results value :exports both
str_data = [(str(date), str(inc)) for date, inc in converted_data]
[('date', 'inc'), None] + str_data[:5] + [None] + str_data[-5:]
#+END_SRC
** Date verification
We do one more verification: our dates must be separated by exactly one week, except around the missing data point.
#+BEGIN_SRC python :results output :exports both
dates = [date for date, _ in converted_data]
for date1, date2 in zip(dates[:-1], dates[1:]):
    if date2-date1 != datetime.timedelta(weeks=1):
        print(f"The difference between {date1} and {date2} is {date2-date1}")
#+END_SRC
** Transfer Python -> R
We switch to R for data inspection and analysis, because the code is more concise in R and requires no additional libraries.
Org-mode's data exchange mechanism requires some Python code for transforming the data to the right format.
#+NAME: data-for-R
#+BEGIN_SRC python :results silent :exports both
[('date', 'inc'), None] + [(str(date), inc) for date, inc in converted_data]
#+END_SRC
In R, the dataset arrives as a data frame, which is fine. But the dates arrive as strings and must be converted.
#+BEGIN_SRC R :results output :var data=data-for-R :exports both
data$date <- as.Date(data$date)
summary(data)
#+END_SRC
** Inspection
Finally we can look at a plot of our data!
#+BEGIN_SRC R :results output graphics :file inc-plot.png :exports both
plot(data, type="l", xlab="Date", ylab="Weekly incidence")
#+END_SRC
A zoom on the last few years makes the peaks in winter stand out more clearly.
#+BEGIN_SRC R :results output graphics :file inc-plot-zoom.png :exports both
plot(tail(data, 200), type="l", xlab="Date", ylab="Weekly incidence")
#+END_SRC
* Study of the annual incidence
** Computation of the annual incidence
Since the peaks of the epidemic happen in winter, near the transition between calendar years, we define the reference period for the annual incidence from August 1st of year /N/ to August 1st of year /N+1/. We label this period as year /N+1/ because the peak is always located in year /N+1/. The very low incidence in summer ensures that the arbitrariness of the choice of reference period has no impact on our conclusions.
This R function computes the annual incidence as defined above:
#+BEGIN_SRC R :results silent :exports both
yearly_peak = function(year) {
    debut = paste0(year-1, "-08-01")  # August 1st of the previous year
    fin = paste0(year, "-08-01")      # August 1st of the given year
    semaines = data$date > debut & data$date <= fin
    sum(data$inc[semaines], na.rm=TRUE)
}
#+END_SRC
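As an optional cross-check, and as a sketch rather than part of the original analysis, the same quantity can be computed on the Python side from ~converted_data~, using the same half-open interval as ~yearly_peak~:
#+BEGIN_SRC python :results output :exports both
import datetime

def yearly_peak_py(year):
    # Sum the weekly incidences from just after August 1st of the
    # previous year up to August 1st of the given year, mirroring
    # the comparison operators of the R function yearly_peak.
    start = datetime.date(year - 1, 8, 1)
    end = datetime.date(year, 8, 1)
    return sum(inc for date, inc in converted_data
               if start < date <= end)

print(yearly_peak_py(2018))
#+END_SRC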
We must also be careful with the first and last years of the dataset. The data start in October 1984, meaning that we don't have all the data for the peak attributed to the year 1985. We therefore exclude it from the analysis. For the same reason, we define 2018 as the final year. We can increase this value to 2019 only when all data up to July 2019 is available.
#+BEGIN_SRC R :results silent :exports both
years <- 1986:2018
#+END_SRC
We make a new data frame for the annual incidence, applying the function ~yearly_peak~ to each year:
#+BEGIN_SRC R :results value :exports both
annual_inc = data.frame(year = years,
                        incidence = sapply(years, yearly_peak))
head(annual_inc)
#+END_SRC
** Inspection
A plot of the annual incidence:
#+BEGIN_SRC R :results output graphics :file annual-inc-plot.png :exports both
plot(annual_inc, type="p", xlab="Year", ylab="Annual incidence")
#+END_SRC
** Identification of the strongest epidemics
A list sorted by decreasing annual incidence makes it easy to find the most important ones:
#+BEGIN_SRC R :results output :exports both
head(annual_inc[order(-annual_inc$incidence),])
#+END_SRC
Finally, a histogram clearly shows the few very strong epidemics, which affect about 10% of the French population, but are rare: there were three of them in the course of 35 years. The typical epidemic affects only half as many people.
#+BEGIN_SRC R :results output graphics :file annual-inc-hist.png :exports both
hist(annual_inc$incidence, breaks=10, xlab="Annual incidence", ylab="Number of observations", main="")
#+END_SRC