Commit c37754d2 authored by Konrad Hinsen

Module 3/exo 1: update and English version

parent 14453e5e
@@ -21,12 +21,12 @@ knitr::opts_chunk$set(echo = TRUE)
## Data preparation
The data on the incidence of influenza-like illness are available from the website of the [Réseau Sentinelles](http://www.sentiweb.fr/). We retrieve them in CSV format, in which each line corresponds to one week of the requested period. The start and end dates are encoded in the URL: "wstart=198501" for week 1 of 1985 and "wend=201730" for week 30 of 2017. The full URL is:
The data on the incidence of influenza-like illness are available from the website of the [Réseau Sentinelles](http://www.sentiweb.fr/). We retrieve them as a file in CSV format, in which each line corresponds to one week of the observation period. We always download the complete dataset, which starts in 1984 and ends with a recent week. The URL is:
```{r}
data_url = "http://websenti.u707.jussieu.fr/sentiweb/api/data/rest/getIncidenceFlat?indicator=3&wstart=198501&wend=201730&geo=PAY1&$format=csv"
data_url = "http://www.sentiweb.fr/datasets/incidence-PAY-3.csv"
```
Here is the explanation of the columns given on the original site:
Here is the explanation of the columns given [on the original site](https://ns.sentiweb.fr/incidence/csv-schema-v1.json):
| Column name | Column label |
|----------------+-----------------------------------------------------------------------------------------------------------------------------------|
@@ -41,6 +41,7 @@ Here is the explanation of the columns given on the original site:
| `geo_insee` | Code of the geographic area concerned (INSEE code) http://www.insee.fr/fr/methodes/nomenclatures/cog/ |
| `geo_name` | Label of the geographic area (this label may be changed without notice) |
The first line of the CSV file is a comment, which we ignore by specifying `skip=1`.
### Download
```{r}
data = read.csv(data_url, skip=1)
@@ -63,18 +64,7 @@ The two columns of interest to us are `week` and `inc`. Let's check their classes:
class(data$week)
class(data$inc)
```
The `inc` column has class `factor` because of the missing data point, whose `inc` value is `'-'`. To facilitate further processing, we re-read the data, asking `R` to treat this value as `NA`:
```{r}
data = read.csv(data_url, skip=1, na.strings="-")
head(data)
```
Now the two columns `week` and `inc` have class `integer`:
```{r}
class(data$week)
class(data$inc)
```
They are integers, all is fine!
### Conversion of the week numbers
@@ -96,7 +86,7 @@ convert_week = function(w) {
We apply this function to all points, creating a new column `date` in our dataset:
```{r}
data$date = as.Date(sapply(data$week, convert_week))
data$date = as.Date(convert_week(data$week))
```
Let's check that it has class `Date`:
@@ -141,9 +131,9 @@ pic_annuel = function(annee) {
}
```
We must also pay attention to the first and last years of our dataset. The data start in January 1985, which does not allow us to fully quantify the peak attributed to that year. We therefore remove it from our analysis. On the other hand, the data end in the summer of 2017, shortly before August 1st, which allows us to include that year.
We must also pay attention to the first and last years of our dataset. The data start in October 1984, which does not allow us to fully quantify the peak attributed to 1985. We therefore remove it from our analysis. On the other hand, for a run in October 2018, the data end after August 1st, 2018, which allows us to include that year.
```{r}
annees = 1986:2017
annees = 1986:2018
```
We create a new data frame for the annual incidence, applying the function `pic_annuel` to each year:
---
title: "Incidence of influenza-like illness in France"
author: "Konrad Hinsen"
output:
pdf_document:
toc: true
html_document:
toc: true
theme: journal
documentclass: article
classoption: a4paper
header-includes:
- \usepackage[french]{babel}
- \usepackage[upright]{fourier}
- \hypersetup{colorlinks=true,pagebackref=true}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
## Data preprocessing
The data on the incidence of influenza-like illness are available from the Web site of the [Réseau Sentinelles](http://www.sentiweb.fr/). We download them as a file in CSV format, in which each line corresponds to a week in the observation period. Only the complete dataset, starting in 1984 and ending with a recent week, is available for download. The URL is:
```{r}
data_url = "http://www.sentiweb.fr/datasets/incidence-PAY-3.csv"
```
This is the documentation of the data from [the download site](https://ns.sentiweb.fr/incidence/csv-schema-v1.json):
| Column name | Description |
|--------------+---------------------------------------------------------------------------------------------------------------------------|
| `week`       | ISO8601 Yearweek number as numeric (year*100 + week number)                                                                 |
| `indicator` | Unique identifier of the indicator, see metadata document https://www.sentiweb.fr/meta.json |
| `inc` | Estimated incidence value for the time step, in the geographic level |
| `inc_low` | Lower bound of the estimated incidence 95% Confidence Interval |
| `inc_up` | Upper bound of the estimated incidence 95% Confidence Interval |
| `inc100`     | Estimated incidence rate per 100,000 inhabitants                                                                            |
| `inc100_low` | Lower bound of the estimated incidence rate 95% Confidence Interval                                                         |
| `inc100_up`  | Upper bound of the estimated incidence rate 95% Confidence Interval                                                         |
| `geo_insee` | Identifier of the geographic area, from INSEE https://www.insee.fr |
| `geo_name` | Geographic label of the area, corresponding to INSEE code. This label is not an id and is only provided for human reading |
### Download
The first line of the CSV file is a comment, which we ignore with `skip=1`.
```{r}
data = read.csv(data_url, skip=1)
```
Let's have a look at what we got:
```{r}
head(data)
tail(data)
```
Are there missing data points?
```{r}
na_records = apply(data, 1, function (x) any(is.na(x)))
data[na_records,]
```
The two relevant columns for us are `week` and `inc`. Let's verify their classes:
```{r}
class(data$week)
class(data$inc)
```
Integers, fine!
### Conversion of the week numbers
Date handling is always a delicate subject. There are many conventions that are easily confused. Our dataset uses the [ISO-8601](https://en.wikipedia.org/wiki/ISO_8601) week number format, which is popular in Europe but less so in North America. In `R`, it is handled by the library [parsedate](https://cran.r-project.org/package=parsedate):
```{r}
library(parsedate)
```
In order to facilitate subsequent processing, we replace the ISO week numbers by the dates of each week's Monday. This function does it for one value:
```{r}
convert_week = function(w) {
ws = paste(w)
iso = paste0(substring(ws, 1, 4), "-W", substring(ws, 5, 6))
as.character(parse_iso_8601(iso))
}
```
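As a quick sanity check, we can apply the function to a single week number before converting the whole column (an illustrative value; any entry of the `week` column would do):

```r
# Week 1 of 1985: ISO week 1985-W01 contains the first Thursday of 1985,
# so its Monday is 1984-12-31.
convert_week(198501)
```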
We apply it to all points, creating a new column `date` in our data frame:
```{r}
data$date = as.Date(convert_week(data$week))
```
Let's check that it has class `Date`:
```{r}
class(data$date)
```
The points are in reverse chronological order, so it's preferable to sort them:
```{r}
data = data[order(data$date),]
```
That's a good occasion for another check: our dates should be separated by exactly seven days:
```{r}
all(diff(data$date) == 7)
```
### Inspection
Finally we can look at a plot of our data!
```{r}
plot(data$date, data$inc, type="l", xlab="Date", ylab="Weekly incidence")
```
A zoom on the last few years makes the peaks in winter stand out more clearly.
```{r}
with(tail(data, 200), plot(date, inc, type="l", xlab="Date", ylab="Weekly incidence"))
```
## Annual incidence
### Computation
Since the peaks of the epidemic happen in winter, near the transition between calendar years, we define the reference period for the annual incidence from August 1st of year $N$ to August 1st of year $N+1$. We label this period as year $N+1$ because the peak is always located in year $N+1$. The very low incidence in summer ensures that the arbitrariness of the choice of reference period has no impact on our conclusions.
The argument `na.rm=TRUE` in the sum indicates that missing data points are removed. This is a reasonable choice since there is only one missing point, whose impact cannot be very strong.
```{r}
yearly_peak = function(year) {
  start = paste0(year-1, "-08-01")
  end = paste0(year, "-08-01")
  weeks = data$date > start & data$date <= end
  sum(data$inc[weeks], na.rm=TRUE)
}
```
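Before applying it to all years, we can try the function on a single year (illustration only; the exact value it returns depends on the downloaded data):

```r
# Incidence summed over the 1986 epidemic year, i.e. the weeks
# after 1985-08-01 up to and including 1986-08-01.
yearly_peak(1986)
```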
We must also be careful with the first and last years of the dataset. The data start in October 1984, meaning that we don't have all the data for the peak attributed to the year 1985. We therefore exclude it from the analysis. For the same reason, we define 2018 as the final year. We can increase this value to 2019 only when all the data up to July 2019 are available.
```{r}
years = 1986:2018
```
We make a new data frame for the annual incidence, applying the function `yearly_peak` to each year:
```{r}
annual_inc = data.frame(year = years,
                        incidence = sapply(years, yearly_peak))
head(annual_inc)
```
### Inspection
A plot of the annual incidences:
```{r}
plot(annual_inc, type="p", xlab="Year", ylab="Annual incidence")
```
### Identification of the strongest epidemics
A list sorted by decreasing annual incidence makes it easy to find the most important ones:
```{r}
head(annual_inc[order(-annual_inc$incidence),])
```
Finally, a histogram clearly shows the few very strong epidemics, which affect about 10% of the French population, but are rare: there were three of them in the course of 35 years. The typical epidemic affects only half as many people.
```{r}
hist(annual_inc$incidence, breaks=10, xlab="Annual incidence", ylab="Number of observations", main="")
```