Commit 53fb8c4e authored by Antoine

feat(module3): exo1 done

parent 0b6cab29
@@ -26,11 +26,15 @@ if sys.version_info.major < 3 or sys.version_info.minor < 6:
print("Please use Python 3.6 (or higher)!") print("Please use Python 3.6 (or higher)!")
#+END_SRC #+END_SRC
#+RESULTS:
#+BEGIN_SRC emacs-lisp :results output
(unless (featurep 'ob-python)
  (print "Please activate python in org-babel (org-babel-do-languages)!"))
#+END_SRC
#+RESULTS:
** R 3.4
We use only basic R functionality, so an earlier version might be OK, but we did not test this.
@@ -39,6 +43,8 @@ We use only basic R functionality, so an earlier version might be OK, but we did
(print "Please activate R in org-babel (org-babel-do-languages)!")) (print "Please activate R in org-babel (org-babel-do-languages)!"))
#+END_SRC #+END_SRC
#+RESULTS:
* Data preprocessing
The data on the incidence of influenza-like illness are available from the Web site of the [[http://www.sentiweb.fr/][Réseau Sentinelles]]. We download them as a file in CSV format, in which each line corresponds to a week in the observation period. Only the complete dataset, starting in 1984 and ending with a recent week, is available for download. The URL is:
@@ -48,7 +54,7 @@ http://www.sentiweb.fr/datasets/incidence-PAY-3.csv
This is the documentation of the data from [[https://ns.sentiweb.fr/incidence/csv-schema-v1.json][the download site]]:
| Column name | Description                                                                                  |
|-------------+----------------------------------------------------------------------------------------------|
| ~week~      | ISO8601 Yearweek number as numeric (year*100 + week number)                                  |
| ~indicator~ | Unique identifier of the indicator, see metadata document https://www.sentiweb.fr/meta.json  |
| ~inc~       | Estimated incidence value for the time step, in the geographic level                         |
@@ -62,14 +68,29 @@ This is the documentation of the data from [[https://ns.sentiweb.fr/incidence/csv-schema-v1.json][the download site]]:
The [[https://en.wikipedia.org/wiki/ISO_8601][ISO-8601]] format is popular in Europe, but less so in North America. This may explain why few software packages handle this format. Python has handled it since version 3.6. We therefore use Python for the pre-processing phase, which has the advantage of not requiring any additional library. (Note: we will explain in module 4 why it is desirable for reproducibility to use as few external libraries as possible.)
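As a quick illustration (not part of the analysis below), plain Python can parse ISO 8601 week dates with the ~%G~/~%V~/~%u~ ~strptime~ directives introduced in version 3.6:

#+BEGIN_SRC python :results output
import datetime

# %G = ISO year, %V = ISO week number, %u = ISO weekday (1 = Monday).
# These directives are available since Python 3.6.
print(datetime.datetime.strptime("1984-W44-1", "%G-W%V-%u").date())  # 1984-10-29
#+END_SRC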
** Getting the data
First we get the data for our computational document into Org. We check for the existence of an ~incidence.csv~ file in the same directory and download it from the Sentinelles server only if it does not exist.
After obtaining the raw data, we extract the part we are interested in. We first split the file into lines and discard the first one, which contains a comment. We then split the remaining lines into columns.
#+BEGIN_SRC python :results silent :var data_url=data-url
from urllib.request import urlopen
import os

FILENAME = "incidence.csv"

data = None
if os.path.isfile(FILENAME):
    # Reuse the local copy if we have already downloaded the data.
    with open(FILENAME, 'r', encoding='utf-8') as f:
        data = f.read()
else:
    # Download the dataset and cache it for future runs.
    data = urlopen(data_url).read().decode('latin-1')
    with open(FILENAME, 'w', encoding='utf-8') as f:
        f.write(data)

assert data is not None
lines = data.strip().split('\n')
data_lines = lines[1:]
table = [line.split(',') for line in data_lines]
#+END_SRC
@@ -79,6 +100,13 @@ Let's have a look at what we have so far:
table[:5]
#+END_SRC
#+RESULTS:
| week | indicator | inc | inc_low | inc_up | inc100 | inc100_low | inc100_up | geo_insee | geo_name |
| 202542 | 3 | 86075 | 73875 | 98275 | 128 | 110 | 146 | FR | France |
| 202541 | 3 | 88482 | 79016 | 97948 | 132 | 118 | 146 | FR | France |
| 202540 | 3 | 79169 | 71180 | 87158 | 118 | 106 | 130 | FR | France |
| 202539 | 3 | 72930 | 64872 | 80988 | 109 | 97 | 121 | FR | France |
** Checking for missing data
Unfortunately there are many ways to indicate the absence of a data value in a dataset. Here we check for a common one: empty fields. For completeness, we should also look for non-numerical data in numerical columns. We don't do this here, but checks in later processing steps would catch such anomalies.
@@ -93,6 +121,9 @@ for row in table:
        valid_table.append(row)
#+END_SRC
#+RESULTS:
: ['198919', '3', '-', '', '', '-', '', '', 'FR', 'France']
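The paragraph above notes that we do not check for non-numerical data in numerical columns. As an illustration only, a minimal sketch of such a check could look like this; it assumes, as in the table preview above, that the 0-based columns 2 to 7 (~inc~ through ~inc100_up~) hold numeric values:

#+BEGIN_SRC python :results output
# Illustrative sketch, not part of the original analysis:
# flag non-numerical entries in the numerical columns.
# Assumption: valid_table still starts with the header row.
for row in valid_table[1:]:
    for value in row[2:8]:
        try:
            float(value)
        except ValueError:
            print("Non-numerical value in row:", row)
            break
#+END_SRC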
** Extraction of the required columns
There are only two columns that we will need for our analysis: the first (~"week"~) and the third (~"inc"~). We check the names in the header to be sure we pick the right data. We make a new table containing just the two columns required, without the header.
#+BEGIN_SRC python :results silent
@@ -100,7 +131,7 @@ week = [row[0] for row in valid_table]
assert week[0] == 'week'
del week[0]
inc = [row[2] for row in valid_table]
assert inc[0] == 'inc'
del inc[0]
data = list(zip(week, inc))
#+END_SRC
@@ -110,6 +141,21 @@ Let's look at the first and last lines. We insert ~None~ to indicate to org-mode
[('week', 'inc'), None] + data[:5] + [None] + data[-5:]
#+END_SRC
#+RESULTS:
| week | inc |
|--------+--------|
| 202542 | 86075 |
| 202541 | 88482 |
| 202540 | 79169 |
| 202539 | 72930 |
| 202538 | 61435 |
|--------+--------|
| 198448 | 78620 |
| 198447 | 72029 |
| 198446 | 87330 |
| 198445 | 135223 |
| 198444 | 68422 |
** Verification
It is always prudent to check whether the data look credible. A simple fact we can check is that weeks are given as six-digit integers (four digits for the year, two for the week), and that the incidence values are positive integers.
#+BEGIN_SRC python :results output
@@ -120,6 +166,8 @@ for week, inc in data:
print("Suspicious value in column 'inc': ", (week, inc)) print("Suspicious value in column 'inc': ", (week, inc))
#+END_SRC #+END_SRC
#+RESULTS:
No problem - fine!
** Date conversion
@@ -139,6 +187,21 @@ str_data = [(str(date), str(inc)) for date, inc in converted_data]
[('date', 'inc'), None] + str_data[:5] + [None] + str_data[-5:]
#+END_SRC
#+RESULTS:
| date | inc |
|------------+--------|
| 1984-10-29 | 68422 |
| 1984-11-05 | 135223 |
| 1984-11-12 | 87330 |
| 1984-11-19 | 72029 |
| 1984-11-26 | 78620 |
|------------+--------|
| 2025-09-15 | 61435 |
| 2025-09-22 | 72930 |
| 2025-09-29 | 79169 |
| 2025-10-06 | 88482 |
| 2025-10-13 | 86075 |
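As the results above show, each ~week~ value such as 198444 is mapped to the Monday of that ISO week (1984-10-29). A minimal sketch of such a conversion, using the ~strptime~ directives mentioned earlier (an illustration, not necessarily the document's exact code):

#+BEGIN_SRC python :results output
import datetime

def week_to_date(week):
    # 'week' is a string such as '198444': ISO year * 100 + ISO week number.
    year, week_number = divmod(int(week), 100)
    # Weekday 1 = Monday, the reference day used in the results above.
    return datetime.datetime.strptime(f"{year}-{week_number}-1", "%G-%V-%u").date()

print(week_to_date('198444'))  # -> 1984-10-29
#+END_SRC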
** Date verification
We do one more verification: our dates must be separated by exactly one week, except around the missing data point (week 198919, whose removal leaves a single 14-day gap).
#+BEGIN_SRC python :results output
@@ -148,6 +211,9 @@ for date1, date2 in zip(dates[:-1], dates[1:]):
print(f"The difference between {date1} and {date2} is {date2-date1}") print(f"The difference between {date1} and {date2} is {date2-date1}")
#+END_SRC #+END_SRC
#+RESULTS:
: The difference between 1989-05-01 and 1989-05-15 is 14 days, 0:00:00
** Transfer Python -> R
We switch to R for data inspection and analysis, because the code is more concise in R and requires no additional libraries.
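As a minimal sketch of the general mechanism, org-babel can pass a named Python result to an R block through a ~:var~ header argument. The name ~flu-data~ below is a hypothetical placeholder, not the document's real block name:

#+NAME: flu-data
#+BEGIN_SRC python :results value
# Org turns this list of tuples into a table, which R receives as a data frame.
[('date', 'inc')] + str_data
#+END_SRC

#+BEGIN_SRC R :results output :var data=flu-data
# 'data' arrives as a data frame; the column names depend on whether
# org-babel treats the first row as a header.
head(data)
#+END_SRC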