Commit ee212696 authored by Alexandre Jesus

Add data download to notebook

parent 276af950
@@ -41,7 +41,11 @@ We use only basic R functionality, so an earlier version might be OK, but we did
* Data preprocessing
The data on the incidence of influenza-like illness are available from
the Web site of the [[http://www.sentiweb.fr/][Réseau Sentinelles]]. We download them as a file in
CSV format, in which each line corresponds to a week in the
observation period. Only the complete dataset, starting in 1984 and
ending with a recent week, is available for download. The URL is:
#+NAME: data-url
http://www.sentiweb.fr/datasets/incidence-PAY-3.csv
@@ -60,15 +64,40 @@ This is the documentation of the data from [[https://ns.sentiweb.fr/incidence/cs
| ~geo_insee~ | Identifier of the geographic area, from INSEE https://www.insee.fr |
| ~geo_name~ | Geographic label of the area, corresponding to the INSEE code. This label is not an identifier and is only provided for human reading |
The [[https://en.wikipedia.org/wiki/ISO_8601][ISO-8601]] format is popular in Europe, but less so in North
America. This may explain why few software packages handle this
format. The Python language has supported it since version 3.6. We
therefore use Python for the pre-processing phase, which has the
advantage of not requiring any additional library. (Note: we will
explain in module 4 why it is desirable for reproducibility to use as
few external libraries as possible.)
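As a small illustration (this block is not part of the processing pipeline below), the ~strptime~ directives ~%G~ (ISO year), ~%V~ (ISO week) and ~%u~ (ISO weekday) introduced in Python 3.6 let us convert an ISO week label into a calendar date without any external library:
#+BEGIN_SRC python :results output
import datetime

# Parse the Monday ("-1") of ISO week 11 of 2020 using the ISO
# year/week/weekday directives available since Python 3.6.
monday = datetime.datetime.strptime("2020-W11-1", "%G-W%V-%u").date()
print(monday)  # prints 2020-03-09
#+END_SRC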
** Download
We first check if the data is already available in a local data file.
If it is not, we download it into the data file:
#+NAME: data-file
data.csv
#+BEGIN_SRC python :results output :var data_url=data-url :var data_file=data-file
from urllib.request import urlretrieve
from pathlib import Path
f = Path(data_file.strip())
if not f.exists():
    # Download the dataset only if the local copy does not exist yet.
    urlretrieve(data_url, f)
assert f.is_file()
#+END_SRC
#+RESULTS:
After that we extract the part of the data we are interested in. We
first split the file into lines, of which we discard the first one
that contains a comment. We then split the remaining lines into
columns.
#+BEGIN_SRC python :results silent :var data_file=data-file
# Read the raw bytes and decode them as Latin-1, the encoding used by
# this dataset.
data = open(data_file.strip(), 'rb').read()
lines = data.decode('latin-1').strip().split('\n')
# The first line is a comment that we discard.
data_lines = lines[1:]
# Split the remaining lines into comma-separated columns.
table = [line.split(',') for line in data_lines]
@@ -79,6 +108,13 @@ Let's have a look at what we have so far:
table[:5]
#+END_SRC
#+RESULTS:
| week | indicator | inc | inc_low | inc_up | inc100 | inc100_low | inc100_up | geo_insee | geo_name |
| 202011 | 3 | 101704 | 93652 | 109756 | 154 | 142 | 166 | FR | France |
| 202010 | 3 | 104977 | 96650 | 113304 | 159 | 146 | 172 | FR | France |
| 202009 | 3 | 110696 | 102066 | 119326 | 168 | 155 | 181 | FR | France |
| 202008 | 3 | 143753 | 133984 | 153522 | 218 | 203 | 233 | FR | France |
** Checking for missing data
Unfortunately there are many ways to indicate the absence of a data value in a dataset. Here we check for a common one: empty fields. For completeness, we should also look for non-numerical data in numerical columns. We don't do this here, but checks in later processing steps would catch such anomalies.
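The check itself appears further down in the notebook and is not shown in this diff. As a minimal sketch, assuming the ~table~ built above, one way to detect and drop rows with empty fields is:
#+BEGIN_SRC python :results output
# Illustrative sketch, not the notebook's own code: keep only the rows
# in which no field is empty, and report how many rows were dropped.
valid_table = [row for row in table if '' not in row]
print(len(table) - len(valid_table), "rows with empty fields removed")
#+END_SRC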