Getting started with Python for Scientific Computing

So you’d like to do some data analysis or other scientific computing with Python. How do you start?

The Anaconda distribution

A Python ‘distribution’ is a bundle of Python goodies, typically Python itself, a set of Python libraries and possibly an integrated development environment.

Anaconda is a Python distribution specifically for data science. It includes the most popular data science and machine learning Python packages, Jupyter for quick exploratory data analysis and Spyder for creating and running Python scripts.

For more information and to install Anaconda, go to the Anaconda Distribution page.

Jupyter Notebook

A Jupyter notebook lets you try out different Python commands and create a story which shows your steps and the results. For instance:
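(As an illustration only – not from a real analysis – a cell might contain something like this; when you run it, the output appears directly beneath the cell.)

    # a quick look at some made-up numbers
    values = [3, 1, 4, 1, 5, 9, 2, 6]
    total = sum(values)
    average = total / len(values)
    print("total:", total, "average:", average)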

Once you have installed Anaconda, or otherwise installed Jupyter:

  1. Open a Terminal or Command Prompt
  2. Enter: jupyter notebook
  3. Jupyter will open in your browser
  4. Click on the ‘New’ button (right hand side), and select ‘Python 3’
  5. Start typing
  6. To execute a cell, hit Ctrl-Enter
  7. Jupyter automatically saves the notebook. Click on the title (top left hand corner, next to Jupyter logo) to give it a sensible name

Getting started with Python

So you’d like to give Python a go. How do you start?

(If you are going to be using Python for Scientific Computing, including Data Analysis, have a look at this article instead)

Installing Python

Make sure you install Python 3, which is the modern version of Python. There is also a legacy version of Python, Python 2.7, but this is being phased out and should not be used for new projects.

You can find installation files for Windows and Mac OSX at https://www.python.org/downloads/. When you start the installation on Windows there will be an option to add Python to the system path. I recommend you select this option, as it makes it easier to run your Python scripts. I have not tried this on Mac OSX; it may have the same option.

For Linux you can use your software package manager, such as aptitude, yum or zypper, to install the ‘python3’ package. This will give you Python 3.

Running Python – REPL/Console

For trying out some simple Python commands you can use the Python console, also called the REPL (Read, Evaluate, Print Loop). To start the Python console, just run Python. This will give you something like this:

Have a little play with this. For instance:
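(The exact prompt and output depend on your Python version; the lines below are just an illustration of the kind of thing to type at the >>> prompt.)

    >>> 2 + 2
    4
    >>> name = "Python"
    >>> print("Hello, " + name)
    Hello, Python
    >>> len(name)
    6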

When you are done, press Ctrl-Z then Enter (Windows) or Ctrl-D (Mac OSX and Linux), or enter ‘exit()’.

Running Python – IDLE editor

The console is great for quick experiments. For anything more permanent it is better to create a script, a text file which contains Python code. When you installed Python it came with IDLE, a very simple integrated development environment.

Start IDLE from your operating system’s menu. You will see something like this:

Now select File, New File. Enter some Python commands, like:
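(Anything simple will do; this is just an illustrative example.)

    # my first IDLE script
    for number in range(1, 6):
        print(number, "squared is", number ** 2)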

Hit ‘F5’ to run the program. You will be prompted to save the file first, so give it a name and save it. You will see the result of your script in the original (shell) window:

Running a Python script from the command line

Say you’ve written a Python script, or someone else has given you a script. How do you run it?

  1. Start a Terminal or (as Windows calls it) a Command Prompt.
  2. Use the ‘cd <path to folder>’ command to go to the folder which contains the script
  3. Enter: ‘python <scriptname>.py’. For instance: ‘python test.py’

Other editors

IDLE is great for getting you started quickly, but for any serious Python development I suggest you use a professional text editor or IDE (Integrated Development Environment). Both a text editor and an IDE let you create and edit text files. An IDE can also run, debug, test and more. For instance:

  • PyCharm. My favourite IDE. It gives you so much power to write, run, debug and test your scripts, I don’t know where to start. Just check it out at …. Start with the free Community edition.
  • Visual Studio Code. I hear good things about this IDE, and it recently became more popular than PyCharm, so it must be doing something right.
  • Sublime Text. An excellent text editor

Data Analysis with Python

Python is a very popular tool for data extraction, clean-up, analysis and visualisation. I’ve recently done some work in this area, and would love to do some more. I particularly enjoy using my maths background and creating pretty, clear and helpful visualisations.

  • Short client project, analysing sensor data. I took readings from two accelerometers and rotated the readings to get the relative movement between them. Using NumPy, Pandas and Matplotlib, I created a number of different charts, looking for a correlation between the equipment’s setting and the movement (a simplified sketch of this kind of processing follows below). Unfortunately the sensors aren’t sensitive enough to return usable information. Whilst not the outcome they were hoping for, the client told me “You’ve been really helpful and I’ve learned a lot”
  • At PyCon UK (Cardiff, September 2018) I attended 14 data analysis sessions. It was fascinating to see the range of tools and applications in Python data analytics. At a Bristol PyData meetup I summarised the sessions in a 5 minute lightning talk. This made me pay extra attention and keep useful notes during the conference.
  • Short client project, researching the best way to import a large data set, followed by implementation. The client regularly accesses large datasets, using a folder hierarchy to structure that data. They were looking to replace this with a professional database, i.e. PostgreSQL. I analysed their requirements, researched the different storage methods in PostgreSQL, reported my findings and created an import script.
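The client code is confidential, but the general shape of the sensor analysis mentioned above looks roughly like this – a simplified sketch with made-up data and a hypothetical fixed rotation angle, not the actual project code:

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    # Made-up readings for two accelerometers (x and y axes only)
    samples = 500
    df = pd.DataFrame({
        "a_x": np.random.normal(0, 1, samples),
        "a_y": np.random.normal(0, 1, samples),
        "b_x": np.random.normal(0, 1, samples),
        "b_y": np.random.normal(0, 1, samples),
    })

    # Rotate sensor B's readings into sensor A's frame of reference
    theta = np.radians(30)  # hypothetical mounting angle
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    rotated_b = df[["b_x", "b_y"]].to_numpy() @ rotation.T

    # Relative movement between the two sensors, charted over time
    relative = df[["a_x", "a_y"]].to_numpy() - rotated_b
    plt.plot(relative[:, 0], label="relative x")
    plt.plot(relative[:, 1], label="relative y")
    plt.legend()
    plt.title("Relative movement between sensors (illustrative data)")
    plt.show()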

Django Rest Framework API Microservice

I recently completed a small project for Zenstores. They simplify the shipping process for ecommerce sites. Their online service lets online businesses use multiple shipping companies for deliveries.

Each shipping company offers its own API for booking shipments and so on. My client uses a separate microservice for each shipping company. These microservices listen to requests from the main system and translate them to the shipping company’s standard.

My client asked me to use Django Rest Framework to create a microservice which supports a new shipping company. DRF is a popular and powerful library to create RESTful APIs using Django.
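The service itself is client-specific, but the basic DRF building blocks look something like this – a minimal sketch with hypothetical field and class names, not the client’s code:

    # serializers and views combined here for brevity
    from rest_framework import serializers, status
    from rest_framework.response import Response
    from rest_framework.views import APIView


    class ShipmentRequestSerializer(serializers.Serializer):
        # Hypothetical fields; the real service validates and maps many more
        order_id = serializers.CharField()
        weight_kg = serializers.FloatField(min_value=0)
        destination_postcode = serializers.CharField()


    class BookShipmentView(APIView):
        """Accept a shipment request and translate it for the carrier's API."""

        def post(self, request):
            serializer = ShipmentRequestSerializer(data=request.data)
            serializer.is_valid(raise_exception=True)
            # This is where the carrier's (sandbox) API would be called
            # with the translated payload.
            carrier_payload = {
                "reference": serializer.validated_data["order_id"],
                "weight": serializer.validated_data["weight_kg"],
                "postcode": serializer.validated_data["destination_postcode"],
            }
            return Response(carrier_payload, status=status.HTTP_201_CREATED)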

The supplier provided me with a sandbox API and extensive documentation. The documentation was somewhat incomplete and out of date. Fortunately their support contact was very helpful all along.

I used Test Driven Development for complex functions where I understood the functionality well. For the rest I used a more experimental approach and added unit tests afterwards. Test coverage was over 90%.
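As a flavour of the “tests afterwards” style – a hypothetical example, not one of the project’s actual tests:

    # test_units.py - run with pytest
    import pytest

    def grams_to_kilograms(grams):
        """The kind of small helper used when translating shipment payloads."""
        if grams < 0:
            raise ValueError("weight cannot be negative")
        return grams / 1000

    def test_grams_to_kilograms():
        assert grams_to_kilograms(1500) == 1.5

    def test_negative_weight_rejected():
        with pytest.raises(ValueError):
            grams_to_kilograms(-1)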

The client has integrated the microservice within their system and the first test shipments have gone through.

Teaching Python

Recently Learning Tree, a well-respected training company, invited me to teach Python for them. Last week I delivered my first course for them, their Advanced Python course.

A room full of people, nearly 500 slides, about 10 step-by-step practical exercises and four days to make sure everyone left with a better understanding of Python.

Even though I’ve been programming in Python for 6 years, I still don’t know it all. The language itself is constantly growing, there are 150,000+ open source Python packages, and only so many bytes of storage in my brain. In preparation I read through the slides, and looked up anything which I wasn’t fully clear on myself. I was pleasantly surprised by how much I do know.

And, on the flip side, I added some of my own experiences whilst delivering the slides, adding some depth and flavour to the course.

I made sure to regularly check the delegates’ understanding, and to fine tune my delivery. I’ve yet to receive a compilation of the feedback but, as far as I can tell, everyone made good progress and enjoyed it.

 

How I’m learning French

Or, how to learn without studying

Introduction

I love learning, but I don’t like studying. Take for instance learning a foreign language. There are many ways to do this, including “studying”: studying the grammar, rote learning words and reading literature. There is nothing wrong with studying, if that works for you. It just isn’t for me.

Instead I am learning French a bit like I learned my first languages (Dutch and English) – by a lot of natural exposure and use in my daily life, not as a separate activity. I have added a French “flavour” to many of my day to day activities.

Music

Listen to French music. Particularly with the internet, you should be able to find some music you like. I attribute my love of French music to a French musical we were shown on video in secondary school (Michel Fugain et le Big Bazar). I got the album and have played it over and over again. Little by little I’m picking out (and learning) more and more words.

I often listen to music whilst I’m working. Some days I’ll hear three or more hours of French music. I’ve collected quite a few French CDs and downloads, and listen to a French online radio station (e.g. Chante France) or to French musicians on Spotify or YouTube.

For a while I even collected (as downloads and as a playlist) French versions of songs I already knew in Dutch or English. Because I already know the lyrics, it is easier to make sense of the French lyrics.

There is a great website, LyricsTranslate, where people submit song lyrics in the original language and others translate them. So you can find many French song lyrics with an English translation. It also has YouTube videos, so you can listen to the French words and try to read along with the French or English lyrics.

Also on YouTube, you can find many French songs with French and English subtitles. Listen to the French lyrics and read the subtitles.

Movies

Watch movies with French audio. I love French movies. Many have a different pace, a bit slower and more thoughtful, than a Hollywood super hero blockbuster. This also makes it a bit easier to hear and understand the dialogue. My favourite French director, with lots of French dialogue, is Eric Rohmer.

When in France I look out for second hand DVDs, especially movies that I really want to watch. It shouldn’t become a chore. Ideally they should have subtitles. Some streaming services (e.g. Netflix) let you choose the subtitle and audio language for some of their movies and programmes.

I watch the following:

  • French movies with English subtitles – as I read the English subtitles I try to hear how you say it in French
  • French movies with French subtitles – I find it easier to understand written French than spoken French, so this way I can more or less follow the story whilst practising my listening skills
  • French movies without any subtitles – I still miss a lot whilst doing this, but it is good practice from time to time. And I may watch the movie a second time, with subtitles, to see what I’ve missed or misunderstood
  • English movies with French subtitles. Many of my favourite movies are in English. I listen to the English and see how the French say the same thing
  • For something really multinational I watch Ultimate Beastmaster on Netflix. Athletes from 6 different countries compete on an obstacle course. Each country has its own commentators. With French subtitles I get the US and UK commentators speaking in English with French subtitles, French commentators speaking in French with no subtitles, and some other languages I don’t understand but with French subtitles

Reading

Read French. I love reading – but it has to be something I’m interested in. Reading a boring French children’s book just to learn French doesn’t do it for me.

Looking up words as I read doesn’t excite me either. It kills the joy of reading for me. Sometimes I get curious and look up a few words.

So how do you read interesting French when you’re just getting started?

  • Follow some French people or groups on Facebook, like Topito, or the Facebook page of a French town you’re visiting on holiday. If, like me, you spend too much time on Facebook, at least you’ll start picking up some French words
  • Switch your computer and/or mobile phone to French. But write down how you did it, so you can switch back later. There are many different settings you can change: your browser (so Google will return French websites), your operating system (so things like “open” and “save” will be in French), your Facebook, Twitter, etc. settings (so your “wall” becomes your “mur”, French for “wall”), or your phone, so your GPS directions may now be in French – maybe not as helpful, but quite fun, in particular when the French lady starts mispronouncing the English road names
  • Read the French version of some of your favourite books. For instance, I’ve read The Lord of the Rings many times, and I know the story well. This helped when I started reading it in French. I don’t have to worry about losing the plot, and can just skip over words I don’t know or sentences I don’t understand
  • Try out different books. If you can’t get into a certain book, just put it aside and try another one. Again, second hand book shops and market stalls (in France) are very good for this. I’ve bought books for 1 euro. I’ve got over 50 unread books, which gives me plenty of choice
  • Or try your local library. Many libraries have a foreign literature section
  • Comic books are good too. The pictures help you to understand the story

Podcasts

Listen to podcasts. I’m a great fan of podcasts. I’ll listen to them whilst out running, doing the dishes and other chores, going off to sleep, doing some finger exercises on the guitar, and even whilst flossing my teeth. Here are some recommendations:

  • Coffee break French. I started with this one. They have an archive with four seasons, from absolute beginners to advanced, so pick your level
  • Learn French by Podcast. Their lessons pack a lot in a short podcast. They cover many practical topics (e.g. how to talk about yourself). 195 podcasts (and still going), some of them very topical (politics, science, society)
  • Journal en Français facile. The (French) news in easy French. 10 minutes daily news

More

And a few more ideas:

  • Visit the country
  • Immerse yourself in the culture
  • Make French friends, stay in touch on Facebook or whatever you use
  • If you play a musical instrument or sing, learn some French songs. I’ve even taken some French+guitar lessons with Cécile, a French singer/songwriter whose songs I really enjoy
  • Here in Bristol we’ve got some French singing workshops, which I’ve found very enjoyable – particularly because, as I’ve already mentioned, I love French songs
  • Find your local French Meetup groups

 

A couple of Python coding dojos

As Joe Wright puts it:

“A Coding Dojo is a programming session based around a simple coding challenge. Programmers of different skill levels are invited to engage in deliberate practice as equals. The goal is to learn, teach and improve with fellow software developers in a non-competitive setting.”

There is something quite satisfying about having a brief period to create something, by yourself or with others. So I recently went to a couple of coding dojos.

PyCon UK 2018, Cardiff, September 2018

On the third evening of the conference, about 60 people took on the challenge of using Pygame Zero to create something on the theme of “Four seasons”.

We hit on the idea of combining the four seasons of the year with a pizza quattro stagioni (four seasons pizza). This became an infinite scrolling background of the four seasons and a ‘rolling’ four seasons pizza in the foreground.
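Our actual entry is in the repo linked below, but as a rough illustration, an infinite scrolling background in Pygame Zero can be done along these lines (the image name is made up, would need to exist as images/seasons.png, and is assumed to be exactly WIDTH pixels wide):

    # seasons.py - a minimal Pygame Zero sketch of an infinite scrolling background
    import pgzrun

    WIDTH = 800
    HEIGHT = 400

    scroll_x = 0

    def update():
        global scroll_x
        scroll_x = (scroll_x - 2) % WIDTH  # wrap around for an endless scroll

    def draw():
        screen.clear()
        # Draw the (hypothetical) 'seasons' image twice so the wrap is seamless
        screen.blit("seasons", (scroll_x - WIDTH, 0))
        screen.blit("seasons", (scroll_x, 0))

    pgzrun.go()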

We used a peer coding approach, to simplify code sharing. And, with it being quite a simple concept to implement, we didn’t need to code in parallel. So, despite being one of the more experienced developers on the team, I sourced and prepared the assets (i.e. the pictures), whilst supporting my team mate who was behind the keyboard.

The end result was quite well received.

You can find all the submissions at https://github.com/PyconUK/dojo18. Ours is under “shaunsfinger”.

CodeHub Python Coding Dojo Meetup, October 2018

About 15 developers got together for this meetup, and took on the challenge of creating a “TypeRacer”.

As far as I could tell, this meant typing as fast as possible, and probably referred to the TypeRacer website. I had not seen it before, but did know something similar, the space-shooting typing game ZType.

I imagined our game as a car which moves when you type the next correct character. After a brief discussion, we agreed to use PyGame. I have used it for a number of personal projects, and my two team mates were interested in trying it out.

We roughly divided the tasks between us, and one team mate set up a shared GitHub repo. I quickly found an image of a racing track as the background and a couple of cool looking racing cars. Starting from some simple sample PyGame code, I created the first version – showing the background image, and a car which moved a little on every tick of the game loop. In the meantime, my team mates displayed the text and handled the keyboard input.
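For anyone curious, that first version was essentially along these lines – a simplified sketch with made-up image file names, not our exact dojo code:

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((800, 600))
    clock = pygame.time.Clock()

    # Hypothetical asset file names
    background = pygame.image.load("track.png").convert()
    car = pygame.image.load("car.png").convert_alpha()

    car_x = 0
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        car_x += 1  # in the finished game the car only moves on a correct keypress

        screen.blit(background, (0, 0))
        screen.blit(car, (car_x, 300))
        pygame.display.flip()
        clock.tick(60)

    pygame.quit()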

We brought this all together, did a bit more polishing, and finished just in time.

Our game worked very well, and was exactly as I’d envisaged it. Our fellow coding-dojo-ers seemed to like it too.

As it was an informal coding exercise, not for public consumption or publication, and because of the time constraints, I initially used copyrighted images. I have now replaced these with copyright-free images from CraftPix and OpenGameArt.

The final result is currently in a private repo. I have asked my team mate to make it public, and will update this post once this is done.

With thanks to Katja Durrani and Eleni Lixourioti for organising this. It was well organised, with plenty of snacks and drinks, and a friendly atmosphere. And thanks to my team mates Andrew Chan and Eleni Lixourioti. It was a pleasure working with both of them.

20 Raspberry Pis – one massive art installation

A couple of internationally renowned artists asked me for some help with their largest installation to date. As part of Hull’s City of Culture, Davy and Kristin McGuire created a large cardboard city and brought it to life with video projections.

They needed nearly 20 video players, so I created a bootable Linux image for the Raspberry Pi which automatically plays a video from a standard location. I copied this to 20 memory cards, and tested them all.
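The image itself was fairly standard; the key piece is a small script, started on boot, that plays the video from a fixed path and restarts it when it finishes. Roughly along these lines – a sketch assuming omxplayer and a hypothetical video path, not the exact script used:

    # autoplay.py - started on boot (for example from /etc/rc.local or a systemd unit)
    import subprocess

    VIDEO_PATH = "/home/pi/video.mp4"  # the 'standard location' on each card

    while True:
        # omxplayer is the Raspberry Pi's hardware-accelerated video player;
        # when it exits (end of video or error), simply start it again
        subprocess.call(["omxplayer", VIDEO_PATH])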

The installation looked amazing and was a great success.

Grafana, InfluxDB and Python, simple sample

I recently came across an interesting contract position which uses Grafana and InfluxDB. I’d had a play with ElasticSearch before, and done some work with KairosDB, so was already familiar with time series and JSON-based database connections. I had previously created dashboards by hand, so Grafana looked rather interesting. I thought I’d do a quick trial – generate some random data, store it in InfluxDB and show it with Grafana.

Starting with a clean virtual machine:

InfluxDB

  1. Set up InfluxDB
    1. I followed InfluxDB’s installation instructions, which worked first time without any problems
    2. Start it
      sudo /etc/init.d/influxdb start
      
  2. Test InfluxDB
    influx
    > create database mydb
    > show databases
    name: databases
    ---------------
    name
    _internal
    mydb
    
    > use mydb
    > INSERT cpu,host=serverA,region=us_west value=0.64
    > SELECT host, region, value FROM cpu
    name: cpu
    ---------
    time            host    region  value
    1466603916401121705 serverA us_west 0.64
    
  3. Set up and test influxdb-python, so we can access InfluxDB using Python
    sudo apt-get install python-pip
    pip install influxdb
    python
    >>> import influxdb
    >>>
    
  4. Run through this example of writing and reading some InfluxDB data using Python
    >>> from influxdb import InfluxDBClient
    >>> json_body = [
    ...     {
    ...         "measurement": "cpu_load_short",
    ...         "tags": {
    ...             "host": "server01",
    ...             "region": "us-west"
    ...         },
    ...         "time": "2009-11-10T23:00:00Z",
    ...         "fields": {
    ...             "value": 0.64
    ...         }
    ...     }
    ... ]
    >>> client = InfluxDBClient('localhost', 8086, 'root', 'root', 'example')
    >>> client.switch_database('mydb')
    >>> client.write_points(json_body)
    True
    >>> print client.query('select value from cpu_load_short;')
    ResultSet({'(u'cpu_load_short', None)': [{u'value': 0.64, u'time': u'2009-11-10T23:00:00Z'}]})
    
  5. Create some more data, using a slimmed down version of this tutorial script
    import argparse
    
    from influxdb import InfluxDBClient
    from influxdb.client import InfluxDBClientError
    import datetime
    import random
    import time
    
    
    USER = 'root'
    PASSWORD = 'root'
    DBNAME = 'mydb'
    
    
    def main():
        host='localhost'
        port=8086
    
        nb_day = 15  # number of day to generate time series
        timeinterval_min = 5  # create an event every x minutes
        total_minutes = 1440 * nb_day
        total_records = int(total_minutes / timeinterval_min)
        now = datetime.datetime.today()
        metric = "server_data.cpu_idle"
        series = []
    
        for i in range(0, total_records):
            past_date = now - datetime.timedelta(minutes=i * timeinterval_min)
            value = random.randint(0, 200)
            hostName = "server-{d34bf16ac7b745ad0d2811187511ec8954163ba9b5dbe9639d7e21cc4b3adbdb}d" {d34bf16ac7b745ad0d2811187511ec8954163ba9b5dbe9639d7e21cc4b3adbdb} random.randint(1, 5)
            # pointValues = [int(past_date.strftime('{d34bf16ac7b745ad0d2811187511ec8954163ba9b5dbe9639d7e21cc4b3adbdb}s')), value, hostName]
            pointValues = {
                    "time": past_date.strftime ("{d34bf16ac7b745ad0d2811187511ec8954163ba9b5dbe9639d7e21cc4b3adbdb}Y-{d34bf16ac7b745ad0d2811187511ec8954163ba9b5dbe9639d7e21cc4b3adbdb}m-{d34bf16ac7b745ad0d2811187511ec8954163ba9b5dbe9639d7e21cc4b3adbdb}d {d34bf16ac7b745ad0d2811187511ec8954163ba9b5dbe9639d7e21cc4b3adbdb}H:{d34bf16ac7b745ad0d2811187511ec8954163ba9b5dbe9639d7e21cc4b3adbdb}M:{d34bf16ac7b745ad0d2811187511ec8954163ba9b5dbe9639d7e21cc4b3adbdb}S"),
                    # "time": int(past_date.strftime('{d34bf16ac7b745ad0d2811187511ec8954163ba9b5dbe9639d7e21cc4b3adbdb}s')),
                    "measurement": metric,
                    'fields':  {
                        'value': value,
                    },
                    'tags': {
                        "hostName": hostName,
                    },
                }
            series.append(pointValues)
        print(series)
    
        client = InfluxDBClient(host, port, USER, PASSWORD, DBNAME)
    
        print("Create a retention policy")
        retention_policy = 'awesome_policy'
        client.create_retention_policy(retention_policy, '3d', 3, default=True)
    
        print("Write points #: {0}".format(total_records))
        client.write_points(series, retention_policy=retention_policy)
    
        time.sleep(2)
    
        query = 'SELECT MEAN(value) FROM "%s" WHERE time > now() - 10d GROUP BY time(500m);' % (metric)
        result = client.query(query, database=DBNAME)
        print (result)
        print("Result: {0}".format(result))
    
    if __name__ == '__main__':
        main()
    
  6. Save as create_sample_data.py, run and test it
    python create_sample_data.py
    ......
    influx
    Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
    Connected to http://localhost:8086 version 0.13.0
    InfluxDB shell version: 0.13.0
    > use mydb
    > SELECT MEAN(value) FROM "server_data.cpu_idle" WHERE time > now() - 10d GROUP BY time(500m)
    time			mean
    1466280000000000000	94.03846153846153
    1466310000000000000	98.47
    1466340000000000000	95.43
    1466370000000000000	104.3
    1466400000000000000	104.01
    1466430000000000000	114.18
    1466460000000000000	106.19
    1466490000000000000	96.67
    1466520000000000000	107.77
    1466550000000000000	103.08
    1466580000000000000	100.53
    1466610000000000000	94
    

Grafana

  1. Install Grafana using the installation instructions:
    $ wget https://grafanarel.s3.amazonaws.com/builds/grafana_3.0.4-1464167696_amd64.deb
    $ sudo apt-get install -y adduser libfontconfig
    $ sudo dpkg -i grafana_3.0.4-1464167696_amd64.deb
    
  2. Start the server and automatically start the server on boot up
    sudo service grafana-server start
    sudo systemctl enable grafana-server.service
    
  3. Test
    1. In your browser, go to localhost:3000
    2. Log in as (user) admin, (password) admin
  4. Connect to the InfluxDB database
    1. I followed the instructions at http://docs.grafana.org/datasources/influxdb/
    2. Click on the Grafana icon
    3. Select “Data Sources”
    4. Click on “+ Add data source”
      1. Name: demo data
      2. Type: InfluxDB
      3. URL: http://localhost:8086
      4. Database: mydb
      5. User: root
      6. Password: root
      7. Click on “Save and Test”
    5. Create a new Dashboard
      1. Click on the Grafana icon
      2. Select “Dashboards”
      3. Click on “New”
    6. Define a metric (graph)
      1. Click on the row menu, i.e. the green icon (vertical bar) to the left of the row
      2. Select “Add Panel”
      3. Select “Graph”
      4. On the Metrics tab (selected by default)
        1. Click on the row just below the tab, starting with “> A”
        2. Click on “select measurement” and select “server_data.cpu_idle”
          1. You should now see a chart
        3. Close this, by clicking on the cross, top right hand corner of the Metrics panel
    7. Save the dashboard
      1. Click on the save icon (top of the screen)
      2. Click on the yellow star, next to the dashboard name (“New dashboard”)
    8. Test it
      1. In a new browser tab or window, go to http://localhost:3000/
      2. Log in (admin, admin)
      3. The “New dashboard” will now show up in the list of starred dashboards (and probably also under “Recently viewed dashboards”)
      4. Click on “New dashboard” to see the chart

You should now see something like this:

[Screenshot: Grafana dashboard charting the InfluxDB demo data]

Namepy step 7 – Bringing it all together

(This is part of the namepy project. Start at Namepy – on the shoulders of giants)

Time to show some real results on a web page.

  1. Extend the API to show the letter scoring tables, without pagination. In __init__.py add:
    manager.create_api(models.Set, methods=['GET'], results_per_page=0) 
    
  2. Rename helloworld.html to index.html
  3. At the end of views.py, update the template name to index.html, stop passing in ‘names’ (since this is now done through the API), and rename the endpoint function to ‘index’:
    @app.route("/") 
    def index(): 
        return render_template('index.html') 
    

That’s it for the changes to the back end. The rest of the changes will all be in the front end, in index.html

  1. Rename the app from HelloWorldApp to NamePyApp
  2. Rename the controller from HelloWorldController to NamePyController
  3. Load the letter scoring table, and simplify it for faster lookup
    $scope.sets = [];
    angular.forEach(response.data.objects, function(set, index) {
        var scores = {};
        angular.forEach(set.scores, function(score, index) {
            scores[score.letter] = score.score;
        });
        $scope.sets.push({ name: set.name, scores: scores});
    });
    
  4. Calculate the score for each of the sets
    angular.forEach($scope.sets, function(set, index) {
        var total = 0;
        var error = false;
        angular.forEach(name.split(''), function(character, index2) {
            if (character in set.scores) {
                total += set.scores[character];
            } else {
                error = true;
            }
        });
    
        if (error == false) {
            result.push([set.name, total]);
        }
    
        $scope.sort_on_element(result, 1);
    
        $scope.scores = result;
    });
    
  5. Show the result on the page, using Highcharts. For the code see the source code, function “showLetterScores”

Show baby name distribution

  1. Get data for entered name
    var filters = [{ name: 'name', 
        op: 'ilike', 
        val: $scope.visitor_name}];
    
    $http({
        method: 'GET',
        url: 'api/name',
        params: {"q": JSON.stringify({"filters": filters})}
        })
        .then(
            $scope.show_name_distribution,  
            function(response) {            
                $('#babynames_container').hide();
            }
        );
    
  2. Restructure the results for Highcharts
    var boy_frequency = [];
    var girl_frequency = [];
    var boys_found = false;
    var girls_found = false;
    
    angular.forEach(response.data.objects[0].frequencies, 
        function(frequency) {
            boy_frequency.push([
                Date.UTC(frequency.year, 1, 1), 
                frequency.boys_count]);
    
            girl_frequency.push([ 
                Date.UTC(frequency.year, 1, 1), 
                frequency.girls_count]);
    
            if (frequency.boys_count) boys_found = true;
            if (frequency.girls_count) girls_found = true;
        });
    
    $scope.sort_on_element(boy_frequency, 0);
    $scope.sort_on_element(girl_frequency, 0);
    
  3. Show the results using Highcharts. See the source code, function “show_name_distribution”

Done

Done Done

This is the final blog post for this little project. I hope you found it useful.