A couple of days ago The Guardian’s Data Blog published data about the location and status of nuclear plants around the globe. I used their data to create a Google Map that shows a 20km Fukushima evacuation zone and a 30km Chernobyl exclusion zone around each active plant. Look at the map here.
Another interesting map was made by Robert Johnston. He collected data on worldwide nuclear test zones. Look at the data on his website or look at this map generated from Johnston’s Google Earth KML file.
When I want to visualize data, I mostly use the ggplot2 library in R. My impression is that this package provides the most advanced data visualization environment in R. But there is one thing you can't do with ggplot2: create interactive graphics.
There is a fair number of tutorials on the web. I recommend getting your hands on Visualizing Data, written by one of the developers of Processing, Ben Fry. It has some very useful examples and is written in a super accessible way.
Other tutorials I found useful:
- Several adaptations of the Processing code from the book above to processing.js-compatible code
- A very good example of how to integrate processing.js with jQuery
There is one thing that took me a while to recognize: if you use processing.js, you need to put the Processing code inside the HTML body tag! If it is outside (in the head tag), it won't work.
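A minimal sketch of this placement (file names and the canvas id are placeholders; this assumes the standard processing.js setup where a `text/processing` script block is attached to a canvas):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Loading the library itself in the head is fine -->
    <script src="processing.js"></script>
  </head>
  <body>
    <!-- The Processing sketch must live inside the body tag -->
    <script type="text/processing" data-processing-target="sketch">
      void setup() { size(200, 200); }
      void draw()  { background(0); ellipse(100, 100, 50, 50); }
    </script>
    <canvas id="sketch"></canvas>
  </body>
</html>
```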
Frequently I scrape textual data from the Internet or from digital documents (e.g. PDFs) and combine it with other data for an analysis. I use not only Python for scraping and refining, but also R, and I usually switch between both environments during data collection, refining, and analysis. A typical workflow goes like this: scrape the data and do a basic cleanup (strip HTML etc.), send it to R for some aggregation and restructuring (merge with other data etc.), send it to Processing to make some nice interactive graphics (if necessary), and finally go back to R and run a model. The key question is: how to exchange the data between these three environments? The most obvious answer: use CSV files. But in my experience, a local MySQL database is more useful, since it is easier to:
- subset the data and selectively import/export data to/from each environment
- search the data directly without importing the full dataset (R, for example, is super slow at searching text vectors)
- make every refining step reversible by keeping each step in a separate table of the MySQL database
- migrate the data to the web
Of course, all these advantages only apply if you work with big datasets. On the other hand, if you work with textual data, you will quickly approach “big”.
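The database side of this workflow can be sketched in a few lines. This is a minimal illustration using Python's built-in sqlite3 as a stand-in for MySQL (the DB-API calls are nearly identical; with MySQL-python you would swap the connect call and use `%s` placeholders instead of `?`; the table and column names are made up):

```python
import sqlite3

# Stand-in for MySQLdb.connect(host="localhost", db="scraped");
# ":memory:" keeps the example self-contained
con = sqlite3.connect(":memory:")
cur = con.cursor()

# One table per refining step keeps each step reversible
cur.execute("CREATE TABLE raw_docs (id INTEGER PRIMARY KEY, text TEXT)")
docs = [(1, "Nuclear plants near Fukushima"),
        (2, "Election forecast Hamburg")]
cur.executemany("INSERT INTO raw_docs VALUES (?, ?)", docs)
con.commit()

# Search directly in the database instead of pulling the full
# dataset into R and grepping over text vectors
cur.execute("SELECT id FROM raw_docs WHERE text LIKE ?", ("%Hamburg%",))
hits = cur.fetchall()
print(hits)  # [(2,)]
```

The same SELECT can then be issued from R (RMySQL) or Processing (JDBC) to pull just the subset each environment needs.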
There are tons of tutorials on the web explaining how to set up MySQL and use it with Python, R, and Processing. Here is a list of those I found most helpful at the beginning:
- Getting started with MySQL
- Access MySQL from R (MAC)
- Connecting from Python to MySQL
- Using MySQL from Processing
- Install the 5.1 version of MySQL, not 5.5! It looks like the new version has some bugs (see also here); after installation my MySQL server didn’t start at all.
- If you get an error while installing MySQL-python (the driver to connect from Python to MySQL) via easy_install, use this (replace XYZ with your MySQL version!):

PATH=$PATH:/usr/local/mysql-XYZ/bin sudo easy_install -Z MySQL-python

No worries, this modifies your PATH only for this one command, not permanently! (Source)
- If you want to play around without installing MySQL, download XAMPP and create a socket using

sudo ln -s /tmp/mysql.sock /Applications/XAMPP/xamppfiles/var/mysql/mysql.sock

That way R/Python/Processing can connect to it.
DIE ZEIT, a German weekly newspaper, asked people to forecast the results of the state election in Hamburg. 810 people participated from Feb 16 to 19 via Facebook and Twitter. Below is the distribution of the forecasts for each party, along with the final election results from Feb 20 (red line) and the result of a telephone survey (green line) undertaken by GMS – a commercial public opinion polling service – between Feb 15 and 17. Data sources: here, here and here.
It is pretty amazing that the crowd forecast is almost identical to the GMS survey results. Although both failed to predict the election results with reasonable accuracy, the experiment shows that crowd forecasting might be an interesting method to poll the population at lower cost and with almost the same accuracy as conventional polling. I am somewhat surprised by the high correlation between the GMS results and the crowd forecast, since I had suspected they would diverge due to self-selection biases in the crowd forecast experiment.