Commit c37754d2 authored by Konrad Hinsen

Module 3/exo 1: update and English version

parent 14453e5e
......@@ -21,12 +21,12 @@ knitr::opts_chunk$set(echo = TRUE)
## Data preprocessing
The data on the incidence of influenza-like illness are available from the Web site of the [Réseau Sentinelles](http://www.sentiweb.fr/). We retrieve them in CSV format, in which each line corresponds to a week of the requested period. The start and end dates are encoded in the URL: "wstart=198501" for week 1 of 1985 and "wend=201730" for week 30 of 2017. The complete URL is:
The data on the incidence of influenza-like illness are available from the Web site of the [Réseau Sentinelles](http://www.sentiweb.fr/). We retrieve them as a file in CSV format, in which each line corresponds to a week of the requested period. We always download the complete dataset, which starts in 1984 and ends with a recent week. The URL is:
```{r}
data_url = "http://websenti.u707.jussieu.fr/sentiweb/api/data/rest/getIncidenceFlat?indicator=3&wstart=198501&wend=201730&geo=PAY1&$format=csv"
data_url = "http://www.sentiweb.fr/datasets/incidence-PAY-3.csv"
```
Here is the explanation of the columns given on the original site:
Here is the explanation of the columns given on [the original site](https://ns.sentiweb.fr/incidence/csv-schema-v1.json):
| Column name | Column label |
|----------------+-----------------------------------------------------------------------------------------------------------------------------------|
......@@ -41,6 +41,7 @@ Here is the explanation of the columns given on the original site:
| `geo_insee` | Code of the geographic area concerned (INSEE code) http://www.insee.fr/fr/methodes/nomenclatures/cog/ |
| `geo_name` | Label of the geographic area (this label may change without notice) |
The first line of the CSV file is a comment, which we ignore by specifying `skip=1`.
### Download
```{r}
data = read.csv(data_url, skip=1)
......@@ -63,18 +64,7 @@ The two columns that interest us are `week` and `inc`. Let's check their c
class(data$week)
class(data$inc)
```
The column `inc` has class `factor` because of the missing data point, whose `inc` value is `'-'`. To facilitate further processing, we re-read the data, asking `R` to treat this value as `NA`:
```{r}
data = read.csv(data_url, skip=1, na.strings="-")
head(data)
```
Now both columns `week` and `inc` have class `integer`:
```{r}
class(data$week)
class(data$inc)
```
They are integers, all is well!
### Conversion of the week numbers
......@@ -96,7 +86,7 @@ convert_week = function(w) {
We apply this function to all points, creating a new column `date` in our dataset:
```{r}
data$date = as.Date(sapply(data$week, convert_week))
data$date = as.Date(convert_week(data$week))
```
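This replacement works because `convert_week` is built entirely from vectorized operations (`paste`, `substring`, `paste0`, and `parse_iso_8601` all accept vectors), so the explicit `sapply` loop is unnecessary. A minimal sketch of the equivalence, using two hypothetical week numbers:
```{r}
# Both forms should yield the same dates; TRUE confirms the equivalence.
identical(as.Date(convert_week(c(201701, 201702))),
          as.Date(sapply(c(201701, 201702), convert_week)))
```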
Let's check that it has class `Date`:
......@@ -141,9 +131,9 @@ pic_annuel = function(annee) {
}
```
We must also pay attention to the first and last years of our dataset. The data start in January 1985, which does not allow us to fully quantify the peak attributed to that year. We therefore remove it from our analysis. On the other hand, the data end in the summer of 2017, shortly before August 1st, which allows us to include that year.
We must also pay attention to the first and last years of our dataset. The data start in October 1984, which does not allow us to fully quantify the peak attributed to 1985. We therefore remove it from our analysis. On the other hand, for a run in October 2018, the data end after August 1st, 2018, which allows us to include that year.
```{r}
annees = 1986:2017
annees = 1986:2018
```
We create a new dataset for the annual incidence, applying the function `pic_annuel` to each year:
......
......@@ -16,9 +16,9 @@
To run the code of this analysis, the following software is required:
** Emacs 25 or 26
An older version of Emacs should suffice, but in that case it is prudent to install a recent version (9.x) of org-mode.
** Python 3.6
** Emacs 25 or higher
An older version of Emacs should suffice. For a version older than 26, a recent version (9.x) of org-mode must be installed.
** Python 3.6 or higher
We use date handling in the ISO 8601 format, which was implemented in Python only as of version 3.6.
#+BEGIN_SRC python :results output
......@@ -42,11 +42,11 @@ We use only basic functionality of the R language, an earlier ver
* Data preprocessing
The data on the incidence of influenza-like illness are available from the Web site of the [[http://www.sentiweb.fr/][Réseau Sentinelles]]. We retrieve them in CSV format, in which each line corresponds to a week of the requested period. The start and end dates are encoded in the URL: "wstart=198501" for week 1 of 1985 and "wend=201730" for week 30 of 2017. The complete URL is:
The data on the incidence of influenza-like illness are available from the Web site of the [[http://www.sentiweb.fr/][Réseau Sentinelles]]. We retrieve them as a file in CSV format, in which each line corresponds to a week of the observation period. We always download the complete dataset (nothing else is offered), which starts in 1984 and ends with a recent week. The URL is:
#+NAME: data-url
http://websenti.u707.jussieu.fr/sentiweb/api/data/rest/getIncidenceFlat?indicator=3&wstart=198501&wend=201730&geo=PAY1&$format=csv
http://www.sentiweb.fr/datasets/incidence-PAY-3.csv
Here is the explanation of the columns given on the original site:
Here is the explanation of the columns given on [[https://ns.sentiweb.fr/incidence/csv-schema-v1.json][the original site]]:
| Column name | Column label |
|----------------+-----------------------------------------------------------------------------------------------------------------------------------|
......@@ -61,7 +61,7 @@ Here is the explanation of the columns given on the original site:
| ~geo_insee~ | Code of the geographic area concerned (INSEE code) http://www.insee.fr/fr/methodes/nomenclatures/cog/ |
| ~geo_name~ | Label of the geographic area (this label may change without notice) |
Indicating a calendar week in the [[https://en.wikipedia.org/wiki/ISO_8601][ISO-8601]] format is popular in Europe, but little used in the United States. This may explain why few software packages can handle this format. The Python language has done so since version 3.6. We therefore use this language for the preparation of our data.
Indicating a calendar week in the [[https://en.wikipedia.org/wiki/ISO_8601][ISO-8601]] format is popular in Europe, but little used in the United States. This may explain why few software packages can handle this format. The Python language has done so since version 3.6. We therefore use this language for the preparation of our data, which has the advantage of not requiring any additional library. (Note: we will explain in module 4 why it is advantageous for reproducibility to restrict ourselves to a minimum of libraries.)
** Download
After downloading the data, we start by extracting the data of interest. First we split the file contents into lines, discarding the first one, which contains only a comment. The remaining lines are split into columns.
......@@ -70,7 +70,7 @@ After downloading the data, we start by extracting the d
from urllib.request import urlopen
data = urlopen(data_url).read()
lines = data.decode('ascii').strip().split('\n')
lines = data.decode('latin-1').strip().split('\n')
data_lines = lines[1:]
table = [line.split(',') for line in data_lines]
#+END_SRC
......@@ -80,70 +80,82 @@ Let's look at what we obtained:
table[:5]
#+END_SRC
** Checking for missing data
Unfortunately, there are many ways to indicate the absence of a data point. Here we test only for the presence of empty fields. We should also look for non-numeric values in the a priori numeric columns. We do not do that here, but a later verification step would catch such anomalies.
We build a new dataset without the lines that contain empty fields. We print these lines to keep a record of them.
#+BEGIN_SRC python :results output
valid_table = []
for row in table:
    missing = any([column == '' for column in row])
    if missing:
        print(row)
    else:
        valid_table.append(row)
#+END_SRC
** Extraction of the columns used
Two columns interest us: the first (~"week"~) and the third (~"inc"~). We check their names in the header, which we then delete. Finally, we create a table with the two columns for the subsequent processing.
#+BEGIN_SRC python :results silent
week = [row[0] for row in table]
week = [row[0] for row in valid_table]
assert week[0] == 'week'
del week[0]
inc = [row[2] for row in table]
inc = [row[2] for row in valid_table]
assert inc[0] == 'inc'
del inc[0]
raw_data = list(zip(week, inc))
data = list(zip(week, inc))
#+END_SRC
Let's look at the first and last lines. We insert ~None~ to indicate to org-mode the separation between the three sections of the table: header, beginning of the data, end of the data.
#+BEGIN_SRC python :results value
[('week', 'inc'), None] + raw_data[:5] + [None] + raw_data[-5:]
[('week', 'inc'), None] + data[:5] + [None] + data[-5:]
#+END_SRC
** Verification
It is always prudent to check whether the data seem credible. We know that the weeks are given as six digits (four for the year and two for the week), and that the incidences are positive integers.
#+BEGIN_SRC python :results output
for week, inc in raw_data:
for week, inc in data:
    if len(week) != 6 or not week.isdigit():
        print("Suspicious value in column 'week': ", (week, inc))
    if not inc.isdigit():
        print("Suspicious value in column 'inc': ", (week, inc))
#+END_SRC
The verification revealed a missing point in the dataset. We eliminate it, which has no strong impact on our rather simple analysis.
#+BEGIN_SRC python :results silent
valid_data = [record for record in raw_data if record[1] != '-']
#+END_SRC
No problem!
** Conversions
To facilitate the subsequent processing, we replace the ISO week numbers by the dates of the corresponding Mondays. On this occasion, we also sort the data by date, and we convert the incidences into integers.
#+BEGIN_SRC python :results silent
import datetime
data = [(datetime.datetime.strptime(year_and_week + ":1" , '%G%V:%u').date(),
         int(inc))
        for year_and_week, inc in valid_data]
data.sort(key = lambda record: record[0])
converted_data = [(datetime.datetime.strptime(year_and_week + ":1" , '%G%V:%u').date(),
                   int(inc))
                  for year_and_week, inc in data]
converted_data.sort(key = lambda record: record[0])
#+END_SRC
Let's look again at the first and last lines:
#+BEGIN_SRC python :results value
str_data = [(str(date), str(inc)) for date, inc in data]
str_data = [(str(date), str(inc)) for date, inc in converted_data]
[('date', 'inc'), None] + str_data[:5] + [None] + str_data[-5:]
#+END_SRC
** Date verification
We perform one more check: our dates must be separated by exactly one week, except around the missing point.
#+BEGIN_SRC python :results output
dates = [date for date, _ in data]
dates = [date for date, _ in converted_data]
for date1, date2 in zip(dates[:-1], dates[1:]):
    if date2-date1 != datetime.timedelta(weeks=1):
        print(f"The difference between {date1} and {date2} is {date2-date1}")
#+END_SRC
** Transfer Python -> R
We switch to the R language to inspect our data. We use the data exchange mechanism provided by org-mode, which requires a bit of Python code to transform the data into the right format.
We switch to the R language to inspect our data, because analysis and plotting are more concise in R, without requiring any additional library.
We use the data exchange mechanism provided by org-mode, which requires a bit of Python code to transform the data into the right format.
#+NAME: data-for-R
#+BEGIN_SRC python :results silent
[('date', 'inc'), None] + [(str(date), inc) for date, inc in data]
[('date', 'inc'), None] + [(str(date), inc) for date, inc in converted_data]
#+END_SRC
In R, the data arrive as a data frame, but the dates, which arrive as character strings, still have to be converted.
......@@ -178,9 +190,9 @@ pic_annuel = function(annee) {
}
#+END_SRC
We must also pay attention to the first and last years of our dataset. The data start in January 1985, which does not allow us to fully quantify the peak attributed to that year. We therefore remove it from our analysis. On the other hand, the data end in the summer of 2017, shortly before August 1st, which allows us to include that year in the analysis.
We must also pay attention to the first and last years of our dataset. The data start in October 1984, which does not allow us to fully quantify the peak attributed to the year 1985. We therefore remove it from our analysis. For the same reason, we stop at 2018. We must wait for the July 2019 data before increasing the final year to 2019.
#+BEGIN_SRC R :results silent
annees <- 1986:2017
annees <- 1986:2018
#+END_SRC
#+BEGIN_SRC R :results value
......
---
title: "Incidence of influenza-like illness in France"
author: "Konrad Hinsen"
output:
pdf_document:
toc: true
html_document:
toc: true
theme: journal
documentclass: article
classoption: a4paper
header-includes:
- \usepackage[english]{babel}
- \usepackage[upright]{fourier}
- \hypersetup{colorlinks=true,pagebackref=true}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
## Data preprocessing
The data on the incidence of influenza-like illness are available from the Web site of the [Réseau Sentinelles](http://www.sentiweb.fr/). We download them as a file in CSV format, in which each line corresponds to a week in the observation period. Only the complete dataset, starting in 1984 and ending with a recent week, is available for download. The URL is:
```{r}
data_url = "http://www.sentiweb.fr/datasets/incidence-PAY-3.csv"
```
This is the documentation of the data from [the download site](https://ns.sentiweb.fr/incidence/csv-schema-v1.json):
| Column name | Description |
|--------------+---------------------------------------------------------------------------------------------------------------------------|
| `week` | ISO8601 Yearweek number as numeric (year*100 + week number) |
| `indicator` | Unique identifier of the indicator, see metadata document https://www.sentiweb.fr/meta.json |
| `inc` | Estimated incidence value for the time step, in the geographic level |
| `inc_low` | Lower bound of the estimated incidence 95% Confidence Interval |
| `inc_up` | Upper bound of the estimated incidence 95% Confidence Interval |
| `inc100` | Estimated rate incidence per 100,000 inhabitants |
| `inc100_low` | Lower bound of the estimated incidence 95% Confidence Interval |
| `inc100_up` | Upper bound of the estimated rate incidence 95% Confidence Interval |
| `geo_insee` | Identifier of the geographic area, from INSEE https://www.insee.fr |
| `geo_name` | Geographic label of the area, corresponding to INSEE code. This label is not an id and is only provided for human reading |
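As a quick illustration of the `week` encoding (a hypothetical value, not taken from the dataset), integer division and remainder recover the two parts:
```{r}
w = 201730   # hypothetical encoded value: week 30 of 2017
w %/% 100    # year
w %% 100     # week number within the year
```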
### Download
The first line of the CSV file is a comment, which we ignore with `skip=1`.
```{r}
data = read.csv(data_url, skip=1)
```
Let's have a look at what we got:
```{r}
head(data)
tail(data)
```
Are there missing data points?
```{r}
na_records = apply(data, 1, function (x) any(is.na(x)))
data[na_records,]
```
The two relevant columns for us are `week` and `inc`. Let's verify their classes:
```{r}
class(data$week)
class(data$inc)
```
Integers, fine!
### Conversion of the week numbers
Date handling is always a delicate subject. There are many conventions that are easily confused. Our dataset uses the [ISO-8601](https://en.wikipedia.org/wiki/ISO_8601) week number format, which is popular in Europe but less so in North America. In `R`, it is handled by the library [parsedate](https://cran.r-project.org/package=parsedate):
```{r}
library(parsedate)
```
In order to facilitate the subsequent processing, we replace the ISO week numbers by the dates of each week's Monday. This function does it for one value:
```{r}
convert_week = function(w) {
    ws = paste(w)
    iso = paste0(substring(ws, 1, 4), "-W", substring(ws, 5, 6))
    as.character(parse_iso_8601(iso))
}
```
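As a sanity check, we can apply it to a single value (an illustrative input, not part of the analysis); ISO week 1 of 2017 starts on Monday, January 2nd:
```{r}
convert_week(201701)  # expected: the Monday of ISO week 1 of 2017, "2017-01-02"
```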
We apply it to all points, creating a new column `date` in our data frame:
```{r}
data$date = as.Date(convert_week(data$week))
```
Let's check that it has class `Date`:
```{r}
class(data$date)
```
The points are in reverse chronological order, so it's preferable to sort them:
```{r}
data = data[order(data$date),]
```
That's a good occasion for another check: our dates should be separated by exactly seven days:
```{r}
all(diff(data$date) == 7)
```
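Should this check ever return `FALSE`, a sketch along the following lines (hypothetical, since the check passes for this dataset) would locate the irregular steps:
```{r}
gaps = which(diff(data$date) != 7)
data$date[gaps]  # the last date before each irregular step
```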
### Inspection
Finally we can look at a plot of our data!
```{r}
plot(data$date, data$inc, type="l", xlab="Date", ylab="Weekly incidence")
```
A zoom on the last few years makes the peaks in winter stand out more clearly.
```{r}
with(tail(data, 200), plot(date, inc, type="l", xlab="Date", ylab="Weekly incidence"))
```
## Annual incidence
### Computation
Since the peaks of the epidemic happen in winter, near the transition between calendar years, we define the reference period for the annual incidence from August 1st of year $N$ to August 1st of year $N+1$. We label this period as year $N+1$ because the peak is always located in year $N+1$. The very low incidence in summer ensures that the arbitrariness of the choice of reference period has no impact on our conclusions.
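We can back up this claim with a quick look at the data (an illustrative check, not part of the original analysis): the weekly incidences of the weeks starting in August are small compared to the winter peaks seen in the plots above:
```{r}
# Summary of weekly incidence for weeks whose Monday falls in August
summary(data$inc[format(data$date, "%m") == "08"])
```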
The argument `na.rm=TRUE` in the sum indicates that missing data points are removed. This is a reasonable choice since there is only one missing point, whose impact cannot be very strong.
```{r}
yearly_peak = function(year) {
    debut = paste0(year-1,"-08-01")
    fin = paste0(year,"-08-01")
    semaines = data$date > debut & data$date <= fin
    sum(data$inc[semaines], na.rm=TRUE)
}
```
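For example (an illustrative call; the exact value depends on the data available at download time), the annual incidence attributed to 2010, i.e. the winter season 2009-2010, is:
```{r}
yearly_peak(2010)
```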
We must also be careful with the first and last years of the dataset. The data start in October 1984, meaning that we don't have all the data for the peak attributed to the year 1985. We therefore exclude it from the analysis. For the same reason, we define 2018 as the final year. We can increase this value to 2019 only when all data up to July 2019 are available.
```{r}
years = 1986:2018
```
We make a new data frame for the annual incidence, applying the function `yearly_peak` to each year:
```{r}
annual_inc = data.frame(year = years,
                        incidence = sapply(years, yearly_peak))
head(annual_inc)
```
### Inspection
A plot of the annual incidences:
```{r}
plot(annual_inc, type="p", xlab="Year", ylab="Annual incidence")
```
### Identification of the strongest epidemics
A list sorted by decreasing annual incidence makes it easy to find the most important ones:
```{r}
head(annual_inc[order(-annual_inc$incidence),])
```
Finally, a histogram clearly shows the few very strong epidemics, which affect about 10% of the French population, but are rare: there were three of them in the course of 35 years. The typical epidemic affects only half as many people.
```{r}
hist(annual_inc$incidence, breaks=10, xlab="Annual incidence", ylab="Number of observations", main="")
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Incidence of influenza-like illness in France"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"import pandas as pd\n",
"import isoweek"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The data on the incidence of influenza-like illness are available from the Web site of the [Réseau Sentinelles](http://www.sentiweb.fr/). We download them as a file in CSV format, in which each line corresponds to a week in the observation period. Only the complete dataset, starting in 1984 and ending with a recent week, is available for download."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"data_url = \"http://www.sentiweb.fr/datasets/incidence-PAY-3.csv\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is the documentation of the data from [the download site](https://ns.sentiweb.fr/incidence/csv-schema-v1.json):\n",
"\n",
"| Column name | Description |\n",
"|--------------|---------------------------------------------------------------------------------------------------------------------------|\n",
"| `week` | ISO8601 Yearweek number as numeric (year times 100 + week number) |\n",
"| `indicator` | Unique identifier of the indicator, see metadata document https://www.sentiweb.fr/meta.json |\n",
"| `inc` | Estimated incidence value for the time step, in the geographic level |\n",
"| `inc_low` | Lower bound of the estimated incidence 95% Confidence Interval |\n",
"| `inc_up` | Upper bound of the estimated incidence 95% Confidence Interval |\n",
"| `inc100` | Estimated rate incidence per 100,000 inhabitants |\n",
"| `inc100_low` | Lower bound of the estimated incidence 95% Confidence Interval |\n",
"| `inc100_up` | Upper bound of the estimated rate incidence 95% Confidence Interval |\n",
"| `geo_insee` | Identifier of the geographic area, from INSEE https://www.insee.fr |\n",
"| `geo_name` | Geographic label of the area, corresponding to INSEE code. This label is not an id and is only provided for human reading |\n",
"\n",
"The first line of the CSV file is a comment, which we ignore with `skip=1`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"raw_data = pd.read_csv(data_url, skiprows=1)\n",
"raw_data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Are there missing data points? Yes, week 19 of year 1989 does not have any observed values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"raw_data[raw_data.isnull().any(axis=1)]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We delete this point, which has no big consequences for our rather simple analysis."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = raw_data.dropna().copy()\n",
"data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Our dataset uses an uncommon encoding: the week number is appended\n",
"to the year number, which makes the value look like a six-digit integer,\n",
"and that is how Pandas interprets it.\n",
"\n",
"A second problem is that Pandas does not know about week numbers.\n",
"It needs to be given the dates of the beginning and end of the week.\n",
"We use the library `isoweek` for that.\n",
"\n",
"Since the conversion is a bit lengthy, we write a small Python \n",
"function for doing it. Then we apply it to all points in our dataset. \n",
"The results go into a new column 'period'."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def convert_week(year_and_week_int):\n",
" year_and_week_str = str(year_and_week_int)\n",
" year = int(year_and_week_str[:4])\n",
" week = int(year_and_week_str[4:])\n",
" w = isoweek.Week(year, week)\n",
" return pd.Period(w.day(0), 'W')\n",
"\n",
"data['period'] = [convert_week(yw) for yw in data['week']]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are two more small changes to make.\n",
"\n",
"First, we define the observation periods as the new index of\n",
"our dataset. That turns it into a time series, which will be\n",
"convenient later on.\n",
"\n",
"Second, we sort the points chronologically."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"sorted_data = data.set_index('period').sort_index()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We check the consistency of the data. Between the end of a period and\n",
"the beginning of the next one, the difference should be zero, or very small.\n",
"We tolerate an error of one second.\n",
"\n",
"This is OK except for one pair of consecutive periods between which\n",
"a whole week is missing.\n",
"\n",
"We recognize the dates: it's the week without observations that we\n",
"have deleted earlier!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"periods = sorted_data.index\n",
"for p1, p2 in zip(periods[:-1], periods[1:]):\n",
" delta = p2.to_timestamp() - p1.end_time\n",
" if delta > pd.Timedelta('1s'):\n",
" print(p1, p2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A first look at the data!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sorted_data['inc'].plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A zoom on the last few years shows more clearly that the peaks are situated in winter."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sorted_data['inc'][-200:].plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Study of the annual incidence"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the peaks of the epidemic happen in winter, near the transition\n",
"between calendar years, we define the reference period for the annual\n",
"incidence from August 1st of year $N$ to August 1st of year $N+1$. We\n",
"label this period as year $N+1$ because the peak is always located in\n",
"year $N+1$. The very low incidence in summer ensures that the arbitrariness\n",
"of the choice of reference period has no impact on our conclusions.\n",
"\n",
"Our task is a bit complicated by the fact that a year does not have an\n",
"integer number of weeks. Therefore we modify our reference period a bit:\n",
"instead of August 1st, we use the first day of the week containing August 1st.\n",
"\n",
"A final detail: the dataset starts in October 1984, so the first peak is\n",
"incomplete. We start the analysis with the first full peak."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"first_august_week = [pd.Period(pd.Timestamp(y, 8, 1), 'W')\n",
" for y in range(1985,\n",
" sorted_data.index[-1].year)]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Starting from this list of weeks that contain August 1st, we obtain intervals of approximately one year as the periods between two adjacent weeks in this list. We compute the sums of weekly incidences for all these periods.\n",
"\n",
"We also check that our periods contain between 51 and 52 weeks, as a safeguard against potential mistakes in our code."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"year = []\n",
"yearly_incidence = []\n",
"for week1, week2 in zip(first_august_week[:-1],\n",
" first_august_week[1:]):\n",
" one_year = sorted_data['inc'][week1:week2-1]\n",
" assert abs(len(one_year)-52) < 2\n",
" yearly_incidence.append(one_year.sum())\n",
" year.append(week2.year)\n",
"yearly_incidence = pd.Series(data=yearly_incidence, index=year)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And here are the annual incidences."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"yearly_incidence.plot(style='*')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sorted list makes it easier to find the highest values (at the end)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"yearly_incidence.sort_values()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, a histogram clearly shows the few very strong epidemics, which affect about 10% of the French population,\n",
"but are rare: there were three of them in the course of 35 years. The typical epidemic affects only half as many people."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"yearly_incidence.hist(xrot=20)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.1"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
#+TITLE: Incidence of influenza-like illness in France
#+LANGUAGE: en
#+OPTIONS: *:nil num:1 toc:t
# #+HTML_HEAD: <link rel="stylesheet" title="Standard" href="http://orgmode.org/worg/style/worg.css" type="text/css" />
#+HTML_HEAD: <link rel="stylesheet" type="text/css" href="http://www.pirilampo.org/styles/readtheorg/css/htmlize.css"/>
#+HTML_HEAD: <link rel="stylesheet" type="text/css" href="http://www.pirilampo.org/styles/readtheorg/css/readtheorg.css"/>
#+HTML_HEAD: <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
#+HTML_HEAD: <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js"></script>
#+HTML_HEAD: <script type="text/javascript" src="http://www.pirilampo.org/styles/lib/js/jquery.stickytableheaders.js"></script>
#+HTML_HEAD: <script type="text/javascript" src="http://www.pirilampo.org/styles/readtheorg/js/readtheorg.js"></script>
#+PROPERTY: header-args :session :exports both
* Foreword
For running this analysis, you need the following software:
** Emacs 25 or higher
Older versions may suffice. For Emacs versions older than 26, org-mode must be updated to version 9.x.
** Python 3.6 or higher
We use the ISO 8601 date format, which has been added to Python's standard library with version 3.6.
#+BEGIN_SRC python :results output
import sys
if sys.version_info < (3, 6):
    print("Please use Python 3.6 (or higher)!")
#+END_SRC
#+BEGIN_SRC emacs-lisp :results output
(unless (featurep 'ob-python)
  (print "Please activate python in org-babel (org-babel-do-languages)!"))
#+END_SRC
** R 3.4
We use only basic R functionality, so an earlier version might be OK, but we did not test this.
#+BEGIN_SRC emacs-lisp :results output
(unless (featurep 'ob-R)
  (print "Please activate R in org-babel (org-babel-do-languages)!"))
#+END_SRC
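For symmetry with the Python check above, the R version can be tested as well (a minimal sketch; the 3.4 threshold mirrors the section heading):
#+BEGIN_SRC R :results output
if (getRversion() < "3.4") {
  print("Please use R 3.4 (or higher)!")
}
#+END_SRC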
* Data preprocessing
The data on the incidence of influenza-like illness are available from the Web site of the [[http://www.sentiweb.fr/][Réseau Sentinelles]]. We download them as a file in CSV format, in which each line corresponds to a week in the observation period. Only the complete dataset, starting in 1984 and ending with a recent week, is available for download. The URL is:
#+NAME: data-url
http://www.sentiweb.fr/datasets/incidence-PAY-3.csv
This is the documentation of the data from [[https://ns.sentiweb.fr/incidence/csv-schema-v1.json][the download site]]:
| Column name | Description |
|--------------+---------------------------------------------------------------------------------------------------------------------------|
| ~week~ | ISO8601 Yearweek number as numeric (year*100 + week number) |
| ~indicator~ | Unique identifier of the indicator, see metadata document https://www.sentiweb.fr/meta.json |
| ~inc~ | Estimated incidence value for the time step, in the geographic level |
| ~inc_low~ | Lower bound of the estimated incidence 95% Confidence Interval |
| ~inc_up~ | Upper bound of the estimated incidence 95% Confidence Interval |
| ~inc100~ | Estimated rate incidence per 100,000 inhabitants |
| ~inc100_low~ | Lower bound of the estimated incidence 95% Confidence Interval |
| ~inc100_up~ | Upper bound of the estimated rate incidence 95% Confidence Interval |
| ~geo_insee~ | Identifier of the geographic area, from INSEE https://www.insee.fr |
| ~geo_name~ | Geographic label of the area, corresponding to INSEE code. This label is not an id and is only provided for human reading |
The [[https://en.wikipedia.org/wiki/ISO_8601][ISO-8601]] format is popular in Europe, but less so in North America. This may explain why few software packages handle this format. The Python language does it since version 3.6. We therefore use Python for the pre-processing phase, which has the advantage of not requiring any additional library. (Note: we will explain in module 4 why it is desirable for reproducibility to use as few external libraries as possible.)
** Download
After downloading the raw data, we extract the part we are interested in. We first split the file into lines, of which we discard the first one that contains a comment. We then split the remaining lines into columns.
#+BEGIN_SRC python :results silent :var data_url=data-url
from urllib.request import urlopen
data = urlopen(data_url).read()
lines = data.decode('latin-1').strip().split('\n')
data_lines = lines[1:]
table = [line.split(',') for line in data_lines]
#+END_SRC
Let's have a look at what we have so far:
#+BEGIN_SRC python :results value
table[:5]
#+END_SRC
** Checking for missing data
Unfortunately there are many ways to indicate the absence of a data value in a dataset. Here we check for a common one: empty fields. For completeness, we should also look for non-numerical data in numerical columns. We don't do this here, but checks in later processing steps would catch such anomalies.
We make a new dataset without the lines that contain empty fields. We print those lines to preserve a trace of their contents.
#+BEGIN_SRC python :results output
valid_table = []
for row in table:
    missing = any([column == '' for column in row])
    if missing:
        print(row)
    else:
        valid_table.append(row)
#+END_SRC
** Extraction of the required columns
There are only two columns that we will need for our analysis: the first (~"week"~) and the third (~"inc"~). We check the names in the header to be sure we pick the right data. We make a new table containing just the two columns required, without the header.
#+BEGIN_SRC python :results silent
week = [row[0] for row in valid_table]
assert week[0] == 'week'
del week[0]
inc = [row[2] for row in valid_table]
assert inc[0] == 'inc'
del inc[0]
data = list(zip(week, inc))
#+END_SRC
Let's look at the first and last lines. We insert ~None~ to indicate to org-mode the separation between the three parts of the table: header, first lines, last lines.
#+BEGIN_SRC python :results value
[('week', 'inc'), None] + data[:5] + [None] + data[-5:]
#+END_SRC
** Verification
It is always prudent to verify if the data looks credible. A simple fact we can check for is that weeks are given as six-digit integers (four for the year, two for the week), and that the incidence values are positive integers.
#+BEGIN_SRC python :results output
for week, inc in data:
    if len(week) != 6 or not week.isdigit():
        print("Suspicious value in column 'week': ", (week, inc))
    if not inc.isdigit():
        print("Suspicious value in column 'inc': ", (week, inc))
#+END_SRC
No problem - fine!
** Date conversion
In order to facilitate the subsequent processing, we replace the ISO week numbers by the dates of each week's Monday. This is also a good occasion to sort the lines by increasing date, and to convert the incidences from strings to integers.
#+BEGIN_SRC python :results silent
import datetime
converted_data = [(datetime.datetime.strptime(year_and_week + ":1" , '%G%V:%u').date(),
                   int(inc))
                  for year_and_week, inc in data]
converted_data.sort(key = lambda record: record[0])
#+END_SRC
We'll look again at the first and last lines:
#+BEGIN_SRC python :results value
str_data = [(str(date), str(inc)) for date, inc in converted_data]
[('date', 'inc'), None] + str_data[:5] + [None] + str_data[-5:]
#+END_SRC
** Date verification
We do one more verification: our dates must be separated by exactly one week, except around the missing data point.
#+BEGIN_SRC python :results output
dates = [date for date, _ in converted_data]
for date1, date2 in zip(dates[:-1], dates[1:]):
    if date2-date1 != datetime.timedelta(weeks=1):
        print(f"The difference between {date1} and {date2} is {date2-date1}")
#+END_SRC
** Transfer Python -> R
We switch to R for data inspection and analysis, because the code is more concise in R and requires no additional libraries.
Org-mode's data exchange mechanism requires some Python code for transforming the data to the right format.
#+NAME: data-for-R
#+BEGIN_SRC python :results silent
[('date', 'inc'), None] + [(str(date), inc) for date, inc in converted_data]
#+END_SRC
In R, the dataset arrives as a data frame, which is fine. But the dates arrive as strings and must be converted.
#+BEGIN_SRC R :results output :var data=data-for-R
data$date <- as.Date(data$date)
summary(data)
#+END_SRC
** Inspection
Finally we can look at a plot of our data!
#+BEGIN_SRC R :results output graphics :file inc-plot.png
plot(data, type="l", xlab="Date", ylab="Weekly incidence")
#+END_SRC
A zoom on the last few years makes the peaks in winter stand out more clearly.
#+BEGIN_SRC R :results output graphics :file inc-plot-zoom.png
plot(tail(data, 200), type="l", xlab="Date", ylab="Weekly incidence")
#+END_SRC
* Study of the annual incidence
** Computation of the annual incidence
Since the peaks of the epidemic happen in winter, near the transition between calendar years, we define the reference period for the annual incidence from August 1st of year /N/ to August 1st of year /N+1/. We label this period as year /N+1/ because the peak is always located in year /N+1/. The very low incidence in summer ensures that the arbitrariness of the choice of reference period has no impact on our conclusions.
This R function computes the annual incidence as defined above:
#+BEGIN_SRC R :results silent
yearly_peak = function(year) {
    debut = paste0(year-1,"-08-01")
    fin = paste0(year,"-08-01")
    semaines = data$date > debut & data$date <= fin
    sum(data$inc[semaines], na.rm=TRUE)
}
#+END_SRC
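As an illustration (a hypothetical call; the exact value depends on the data available at download time), the annual incidence attributed to 2010 is obtained with:
#+BEGIN_SRC R :results output
yearly_peak(2010)
#+END_SRC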
We must also be careful with the first and last years of the dataset. The data start in October 1984, meaning that we don't have all the data for the peak attributed to the year 1985. We therefore exclude it from the analysis. For the same reason, we define 2018 as the final year. We can increase this value to 2019 only when all data up to July 2019 are available.
#+BEGIN_SRC R :results silent
years <- 1986:2018
#+END_SRC
We make a new data frame for the annual incidence, applying the function ~yearly_peak~ to each year:
#+BEGIN_SRC R :results value
annual_inc = data.frame(year = years,
                        incidence = sapply(years, yearly_peak))
head(annual_inc)
#+END_SRC
** Inspection
A plot of the annual incidence:
#+BEGIN_SRC R :results output graphics :file annual-inc-plot.png
plot(annual_inc, type="p", xlab="Year", ylab="Annual incidence")
#+END_SRC
** Identification of the strongest epidemics
A list sorted by decreasing annual incidence makes it easy to find the most important ones:
#+BEGIN_SRC R :results output
head(annual_inc[order(-annual_inc$incidence),])
#+END_SRC
Finally, a histogram clearly shows the few very strong epidemics, which affect about 10% of the French population, but are rare: there were three of them in the course of 35 years. The typical epidemic affects only half as many people.
#+BEGIN_SRC R :results output graphics :file annual-inc-hist.png
hist(annual_inc$incidence, breaks=10, xlab="Annual incidence", ylab="Number of observations", main="")
#+END_SRC