#+TITLE: Incidence of influenza-like illness in France
#+LANGUAGE: en
#+OPTIONS: *:nil num:1 toc:t
#+PROPERTY: header-args :session :exports both

* Foreword

For running this analysis, you need the following software:

** Emacs 25 or higher
Older versions may suffice. For Emacs versions older than 26, org-mode must be updated to version 9.x.

** Python 3.6 or higher
We use the ISO 8601 date format, which was added to Python's standard library in version 3.6.

#+BEGIN_SRC python :results output
import sys
if sys.version_info < (3, 6):
    print("Please use Python 3.6 (or higher)!")
#+END_SRC

#+BEGIN_SRC emacs-lisp :results output
(unless (featurep 'ob-python)
  (print "Please activate python in org-babel (org-babel-do-languages)!"))
#+END_SRC

** R 3.4
We use only basic R functionality, so an earlier version might be OK, but we did not test this.

#+BEGIN_SRC emacs-lisp :results output
(unless (featurep 'ob-R)
  (print "Please activate R in org-babel (org-babel-do-languages)!"))
#+END_SRC

* Data preprocessing

The data on the incidence of influenza-like illness are available from the Web site of the [[http://www.sentiweb.fr/][Réseau Sentinelles]]. We download them as a file in CSV format, in which each line corresponds to a week in the observation period. Only the complete dataset, starting in 1984 and ending with a recent week, is available for download. The URL is:
#+NAME: data-url
http://www.sentiweb.fr/datasets/incidence-PAY-3.csv

This is the documentation of the data from [[https://ns.sentiweb.fr/incidence/csv-schema-v1.json][the download site]]:

| Column name  | Description                                                                                                                   |
|--------------+-------------------------------------------------------------------------------------------------------------------------------|
| ~week~       | ISO 8601 yearweek number as numeric (year*100 + week number)                                                                  |
| ~indicator~  | Unique identifier of the indicator, see metadata document https://www.sentiweb.fr/meta.json                                   |
| ~inc~        | Estimated incidence value for the time step, at the given geographic level                                                    |
| ~inc_low~    | Lower bound of the estimated incidence 95% confidence interval                                                                |
| ~inc_up~     | Upper bound of the estimated incidence 95% confidence interval                                                                |
| ~inc100~     | Estimated rate incidence per 100,000 inhabitants                                                                              |
| ~inc100_low~ | Lower bound of the estimated rate incidence 95% confidence interval                                                           |
| ~inc100_up~  | Upper bound of the estimated rate incidence 95% confidence interval                                                           |
| ~geo_insee~  | Identifier of the geographic area, from INSEE https://www.insee.fr                                                            |
| ~geo_name~   | Geographic label of the area, corresponding to the INSEE code. This label is not an id and is provided only for human reading |

The [[https://en.wikipedia.org/wiki/ISO_8601][ISO-8601]] format is popular in Europe, but less so in North America. This may explain why few software packages handle this format. Python has supported it since version 3.6. We therefore use Python for the pre-processing phase, which has the advantage of not requiring any additional library. (Note: we will explain in module 4 why it is desirable for reproducibility to use as few external libraries as possible.)
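
Since ISO week-date handling is central to the rest of the pre-processing, here is a minimal sketch of how it works. The yearweek value ~201740~ is an arbitrary example, not taken from the dataset:

#+BEGIN_SRC python :results output
import datetime

# %G = ISO year, %V = ISO week number, %u = ISO weekday (1 = Monday).
# The yearweek value below is an arbitrary example.
monday = datetime.datetime.strptime("201740" + ":1", '%G%V:%u').date()
print(monday)                # 2017-10-02, the Monday of ISO week 40 of 2017
print(monday.isocalendar())  # round-trips to year 2017, week 40, weekday 1
#+END_SRC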
** Download

After downloading the raw data, we extract the part we are interested in. We first split the file into lines, of which we discard the first one, which contains a comment. We then split the remaining lines into columns.

#+BEGIN_SRC python :results silent :var data_url=data-url
from urllib.request import urlopen

data = urlopen(data_url).read()
lines = data.decode('latin-1').strip().split('\n')
data_lines = lines[1:]
table = [line.split(',') for line in data_lines]
#+END_SRC

Let's have a look at what we have so far:

#+BEGIN_SRC python :results value
table[:5]
#+END_SRC

** Checking for missing data

Unfortunately there are many ways to indicate the absence of a data value in a dataset. Here we check for a common one: empty fields. For completeness, we should also look for non-numerical data in numerical columns. We don't do this here, but checks in later processing steps would catch such anomalies.

We make a new dataset without the lines that contain empty fields. We print those lines to preserve a trace of their contents.

#+BEGIN_SRC python :results output
valid_table = []
for row in table:
    missing = any([column == '' for column in row])
    if missing:
        print(row)
    else:
        valid_table.append(row)
#+END_SRC

** Extraction of the required columns

There are only two columns that we will need for our analysis: the first (~"week"~) and the third (~"inc"~). We check the names in the header to be sure we pick the right data. We make a new table containing just the two columns required, without the header.

#+BEGIN_SRC python :results silent
week = [row[0] for row in valid_table]
assert week[0] == 'week'
del week[0]
inc = [row[2] for row in valid_table]
assert inc[0] == 'inc'
del inc[0]
data = list(zip(week, inc))
#+END_SRC

Let's look at the first and last lines. We insert ~None~ to indicate to org-mode the separation between the three parts of the table: header, first lines, last lines.

#+BEGIN_SRC python :results value
[('week', 'inc'), None] + data[:5] + [None] + data[-5:]
#+END_SRC

** Verification

It is always prudent to verify if the data looks credible. A simple fact we can check for is that weeks are given as six-digit integers (four for the year, two for the week), and that the incidence values are positive integers.

#+BEGIN_SRC python :results output
for week, inc in data:
    if len(week) != 6 or not week.isdigit():
        print("Suspicious value in column 'week': ", (week, inc))
    if not inc.isdigit():
        print("Suspicious value in column 'inc': ", (week, inc))
#+END_SRC

No problem - fine!

** Date conversion

To facilitate the subsequent treatment, we replace the ISO week numbers by the dates of each week's Monday. This is also a good occasion to sort the lines by increasing date, and to convert the incidences from strings to integers.

#+BEGIN_SRC python :results silent
import datetime
converted_data = [(datetime.datetime.strptime(year_and_week + ":1", '%G%V:%u').date(), int(inc))
                  for year_and_week, inc in data]
converted_data.sort(key = lambda record: record[0])
#+END_SRC

We'll look again at the first and last lines:

#+BEGIN_SRC python :results value
str_data = [(str(date), str(inc)) for date, inc in converted_data]
[('date', 'inc'), None] + str_data[:5] + [None] + str_data[-5:]
#+END_SRC

** Date verification

We do one more verification: our dates must be separated by exactly one week, except around the missing data point.

#+BEGIN_SRC python :results output
dates = [date for date, _ in converted_data]
for date1, date2 in zip(dates[:-1], dates[1:]):
    if date2 - date1 != datetime.timedelta(weeks=1):
        print(f"The difference between {date1} and {date2} is {date2-date1}")
#+END_SRC
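
As an optional extra check (our addition, not part of the original pipeline), we can round-trip each yearweek string through the conversion and ~isocalendar()~: converting to a date and back should reproduce the original year and week number. This sketch assumes ~data~ from the extraction step above:

#+BEGIN_SRC python :results output
import datetime

# Round-trip check: yearweek string -> Monday date -> ISO calendar tuple.
# Any assertion failure would point at a week that does not survive
# the round trip.
for year_and_week, _ in data:
    date = datetime.datetime.strptime(year_and_week + ":1", '%G%V:%u').date()
    iso_year, iso_week, iso_weekday = date.isocalendar()
    assert iso_weekday == 1  # we always parse the Monday of the week
    assert year_and_week == f"{iso_year:04d}{iso_week:02d}", year_and_week
#+END_SRC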
** Transfer Python -> R

We switch to R for data inspection and analysis, because the code is more concise in R and requires no additional libraries.

Org-mode's data exchange mechanism requires some Python code for transforming the data to the right format.

#+NAME: data-for-R
#+BEGIN_SRC python :results silent
[('date', 'inc'), None] + [(str(date), inc) for date, inc in converted_data]
#+END_SRC

In R, the dataset arrives as a data frame, which is fine. But the dates arrive as strings and must be converted.

#+BEGIN_SRC R :results output :var data=data-for-R
data$date <- as.Date(data$date)
summary(data)
#+END_SRC

** Inspection

Finally we can look at a plot of our data!

#+BEGIN_SRC R :results output graphics :file inc-plot.png
plot(data, type="l", xlab="Date", ylab="Weekly incidence")
#+END_SRC

A zoom on the last few years makes the peaks in winter stand out more clearly.

#+BEGIN_SRC R :results output graphics :file inc-plot-zoom.png
plot(tail(data, 200), type="l", xlab="Date", ylab="Weekly incidence")
#+END_SRC

* Study of the annual incidence

** Computation of the annual incidence

Since the peaks of the epidemic happen in winter, near the transition between calendar years, we define the reference period for the annual incidence from August 1st of year /N/ to August 1st of year /N+1/. We label this period as year /N+1/ because the peak is always located in year /N+1/. The very low incidence in summer ensures that this arbitrary choice of reference period has no impact on our conclusions.

This R function computes the annual incidence as defined above:

#+BEGIN_SRC R :results silent
yearly_peak = function(year) {
  debut = paste0(year-1, "-08-01")  # start of the period: August 1st of year N-1
  fin = paste0(year, "-08-01")      # end of the period: August 1st of year N
  semaines = data$date > debut & data$date <= fin
  sum(data$inc[semaines], na.rm=TRUE)
}
#+END_SRC

We must also be careful with the first and last years of the dataset. The data start in October 1984, meaning that we don't have all the data for the peak attributed to the year 1985. We therefore exclude it from the analysis. For the same reason, we define 2018 as the final year. We can increase this value to 2019 only when all data up to July 2019 are available.

#+BEGIN_SRC R :results silent
years <- 1986:2018
#+END_SRC

We make a new data frame for the annual incidence, applying the function ~yearly_peak~ to each year:

#+BEGIN_SRC R :results value
annual_inc = data.frame(year = years,
                        incidence = sapply(years, yearly_peak))
head(annual_inc)
#+END_SRC

** Inspection

A plot of the annual incidence:

#+BEGIN_SRC R :results output graphics :file annual-inc-plot.png
plot(annual_inc, type="p", xlab="Year", ylab="Annual incidence")
#+END_SRC

** Identification of the strongest epidemics

A list sorted by decreasing annual incidence makes it easy to find the most important ones:

#+BEGIN_SRC R :results output
head(annual_inc[order(-annual_inc$incidence),])
#+END_SRC

Finally, a histogram clearly shows the few very strong epidemics, which affect about 10% of the French population, but are rare: there were three of them in the course of 35 years. The typical epidemic affects only half as many people.

#+BEGIN_SRC R :results output graphics :file annual-inc-hist.png
hist(annual_inc$incidence, breaks=10,
     xlab="Annual incidence", ylab="Number of observations", main="")
#+END_SRC
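
To close the loop, here is a minimal Python sketch (our addition; ~yearly_incidence~ is a hypothetical helper, not part of the original analysis) that recomputes the annual incidence from ~converted_data~ using the same definition as ~yearly_peak~, as an independent cross-check of the R results. It assumes the Python session from the pre-processing phase is still alive:

#+BEGIN_SRC python :results value
import datetime

def yearly_incidence(year):
    # Same definition as the R function yearly_peak: sum the weekly
    # incidences for weeks starting after August 1st of year-1 and
    # up to August 1st of the given year.
    start = datetime.date(year - 1, 8, 1)
    end = datetime.date(year, 8, 1)
    return sum(inc for date, inc in converted_data if start < date <= end)

# First few years only; the values should match annual_inc in R.
[(year, yearly_incidence(year)) for year in range(1986, 1992)]
#+END_SRC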