
Data Analysis with Python

Python is a very popular tool for data extraction, clean-up, analysis and visualisation. I’ve recently done some work in this area, and would love to do some more. I particularly enjoy using my maths background and creating pretty, clear and helpful visualisations.

  • Short client project, analysing sensor data. I took readings from two accelerometers and rotated the readings to get the relative movement between them (a minimal sketch of this rotation step follows this list). Using NumPy, Pandas and Matplotlib, I created a number of different charts, looking for a correlation between the equipment’s settings and the movement. Unfortunately the sensors weren’t sensitive enough to return usable information. Whilst not the outcome they were hoping for, the client told me “You’ve been really helpful and I’ve learned a lot”.
  • At PyCon UK (Cardiff, September 2018) I attended 14 data analysis sessions. It was fascinating to see the range of tools and applications in Python data analytics. At a Bristol PyData meetup I summarised the sessions in a 5 minute lightning talk. This made me pay extra attention and keep useful notes during the conference.
  • Short client project, researching the best way to import a large data set, followed by implementation. The client regularly accesses large datasets, using a folder hierarchy to structure that data. They were looking to replace this with a professional database, specifically PostgreSQL. I analysed their requirements, researched the different storage methods in PostgreSQL, reported my findings and created an import script.
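
As a flavour of the rotation step in the first project above, here is a minimal sketch, assuming the readings arrive as x/y columns in CSV files; the file names, column names and angle are illustrative, not the client’s actual data.

    # Minimal sketch: rotate 2D accelerometer readings into a common frame,
    # then take the difference between the two sensors. Illustrative names only.
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd

    def rotate_xy(df, angle_rad):
        """Rotate the x/y columns of a DataFrame anti-clockwise by angle_rad."""
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        rotation = np.array([[c, -s],
                             [s,  c]])
        rotated = df[["x", "y"]].to_numpy() @ rotation.T
        return pd.DataFrame(rotated, columns=["x", "y"], index=df.index)

    # Hypothetical usage: relative movement between two sensors
    sensor_a = pd.read_csv("sensor_a.csv")   # columns: x, y
    sensor_b = pd.read_csv("sensor_b.csv")
    relative = rotate_xy(sensor_a, np.pi / 6) - rotate_xy(sensor_b, np.pi / 6)
    relative.plot(title="Relative movement")  # pandas plots via Matplotlib
    plt.show()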

Django Rest Framework API Microservice

I recently completed a small project for Zenstores. They simplify the shipping process for ecommerce sites. Their online service lets online businesses use multiple shipping companies for deliveries.

Each shipping company offers its own API for booking shipments and so on. My client uses a separate microservice for each shipping company. These microservices listen to requests from the main system and translate them into the shipping company’s standard.

My client asked me to use Django Rest Framework to create a microservice which supports a new shipping company. DRF is a popular and powerful library to create RESTful APIs using Django.
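
The client’s code isn’t public, but as a rough sketch of the kind of endpoint DRF makes easy, something like the following could accept a shipment booking; the fields, module name and URL wiring here are hypothetical.

    # Hypothetical, minimal DRF sketch of a shipment-booking endpoint.
    # The field names and response are illustrative, not the client's actual API.
    from rest_framework import serializers, status
    from rest_framework.response import Response
    from rest_framework.views import APIView

    class ShipmentSerializer(serializers.Serializer):
        order_reference = serializers.CharField()
        weight_kg = serializers.FloatField()
        destination_postcode = serializers.CharField()

    class ShipmentView(APIView):
        def post(self, request):
            serializer = ShipmentSerializer(data=request.data)
            serializer.is_valid(raise_exception=True)
            # In the real microservice this is where the request would be
            # translated into the shipping company's own API call.
            return Response({"status": "booked"}, status=status.HTTP_201_CREATED)

    # Wired up in urls.py with something like:
    # path('shipments/', ShipmentView.as_view())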

The supplier provided me with a sandbox API and extensive documentation, although the documentation was somewhat incomplete and out of date. Fortunately their support contact was very helpful all along.

I used Test Driven Development for complex functions where I understood the required functionality well. For the rest I used a more experimental approach and added unit tests afterwards. Test coverage was over 90%.
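
As a small illustration of the test-first side (not the client’s actual code), a test like the one below would be written before the function it exercises; the function name and the grams convention are made up.

    # Hypothetical pytest-style unit test, written before the implementation.
    # build_label_request and the grams convention are illustrative only.
    from myservice.labels import build_label_request

    def test_build_label_request_converts_weight_to_grams():
        payload = build_label_request(order_reference="ABC123", weight_kg=1.5)
        assert payload["weight"] == 1500  # pretend the supplier API expects grams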

The client has integrated the microservice within their system and the first test shipments have gone through.

Teaching Python

Recently Learning Tree, a well-respected training company, invited me to teach Python for them. Last week I delivered my first course for them, their Advanced Python course.

A room full of people, nearly 500 slides, about 10 step-by-step practical exercises and four days to make sure everyone left with a better understanding of Python.

Even though I’ve been programming in Python for 6 years, I still don’t know it all. The language itself is constantly growing, there are 150,000+ open source Python packages, and only so many bytes of storage in my brain. In preparation I read through the slides, and looked up anything which I wasn’t fully clear on myself. I was pleasantly surprised by how much I do know.

And, on the flip side, I added some of my own experiences whilst delivering the slides, adding some depth and flavour to the course.

I made sure to regularly check the delegates’ understanding, and to fine-tune my delivery. I’ve yet to receive a compilation of the feedback but, as far as I can tell, everyone made good progress and enjoyed it.

 

How I’m learning French

Or, how to learn without studying

Introduction

I love learning, but I don’t like studying. Take for instance learning a foreign language. There are many ways to do this, including “studying”: studying the grammar, rote learning words and reading literature. There is nothing wrong with studying, if that works for you. It just isn’t for me.

Instead I am learning French a bit like I learned my first languages (Dutch and English) – by a lot of natural exposure and use in my daily life, not as a separate activity. I have added a French “flavour” to many of my day-to-day activities.

Music

Listen to French music. Particularly with the internet, you should be able to find some music you like. I attribute my love of French music to a French musical we were shown on video in secondary school (Michel Fugain et le Big Bazar). I got the album and have played it over and over again. Little by little I’m picking out (and learning) more and more words.

I often listen to music whilst I’m working. Some days I’ll hear three or more hours of French music. I’ve collected quite a few French CDs and downloads, listen to a French online radio station (e.g. Chante France) or to French musicians on Spotify or YouTube.

For a while I even collected (as downloads and as a playlist) French versions of songs I already knew in Dutch or English. Because I already know the lyrics, it is easier to make sense of the French lyrics.

There is a great website, LyricsTranslate, where people submit song lyrics in the original language and others translate them. So you can find many French song lyrics with an English translation. It also has YouTube videos, so you can listen to the French words and try to read along with the French or English lyrics.

Also on YouTube, you can find many French songs with French and English subtitles. Listen to the French lyrics and read the subtitles.

Movies

Watch movies with French audio. I love French movies. Many have a different pace, a bit slower and more thoughtful, than a Hollywood super hero blockbuster. This also makes it a bit easier to hear and understand the dialogue. My favourite French director, with lots of French dialogue, is Eric Rohmer.

When in France I look out for second-hand DVDs, especially movies that I really want to watch. It shouldn’t become a chore. Ideally they should have subtitles. Some streaming services (e.g. Netflix) let you choose the subtitle and audio language for some movies and programmes.

I watch the following:

  • French movies with English subtitles – as I read the English subtitles I try to hear how you say it in French
  • French movies with French subtitles – I find it easier to understand written French than spoken French, so this way I can more or less follow the story whilst practicing my listening skills
  • French movies without any subtitles – I still miss a lot whilst doing this, but it is good practice from time to time. And I may watch the movie a second time, with subtitles, to see what I’ve missed or misunderstood
  • English movies with French subtitles. Many of my favourite movies are in English. I listen to the English and see how the French say the same thing
  • For something really multinational I watch Ultimate Beastmaster on Netflix. Athletes from 6 different countries compete on an obstacle course. Each country has its own commentators. With French subtitles I get the US and UK commentators speaking in English with French subtitles, French commentators speaking in French with no subtitles, and some other languages I don’t understand but with French subtitles

Reading

Read French. I love reading – but it has to be something I’m interested in. Reading a boring French children’s book just to learn French doesn’t do it for me.

Looking up words as I read doesn’t excite me either. It kills the joy of reading for me. Sometimes I get curious and look up a few words.

How do you read interesting French when you’re just getting started?

  • Follow some French people or groups on Facebook, like Topito, or the Facebook page of a French town you’re visiting on holiday. If, like me, you spend too much time on Facebook, at least you’ll start picking up some French words
  • Switch your computer and/or mobile phone to French. But write down how you did it, so you can switch back later. There are many different settings you can change: your browser (so Google will return French websites), your operating system (so things like “open” and “save” will be in French), your Facebook, Twitter, etc. settings (so your “wall” becomes your “mur”, French for “wall”), and your phone, so your GPS directions may now be in French – maybe not as helpful, but quite fun, in particular when the French lady starts mispronouncing the English road names
  • Read the French version of some of your favourite books. For instance, I’ve read The Lord of the Rings many times, and I know the story well. This helped when I started reading it in French. I don’t have to worry about losing the plot, and can just skip over words I don’t know or sentences I don’t understand
  • Try out different books. If you can’t get into a certain book, just put it aside and try another one. Again, second hand book shops and market stalls (in France) are very good for this. I’ve bought books for 1 euro. I’ve got over 50 unread books, which gives me plenty of choice
  • Or try your local library. Many libraries have a foreign literature section
  • Comic books are good too. The pictures help you to understand the story

Podcasts

Listen to podcasts. I’m a great fan of podcasts. I’ll listen to them whilst out running, doing the dishes and other chores, going off to sleep, doing some finger exercises on the guitar, and even whilst flossing my teeth. Here are some recommendations:

  • Coffee Break French. I started with this one. They have an archive with four seasons, from absolute beginner to advanced, so pick your level
  • Learn French by Podcast. Their lessons pack a lot into a short podcast. They cover many practical topics (e.g. how to talk about yourself). 195 podcasts (and still going), some of them very topical (politics, science, society)
  • Journal en français facile. The (French) news in easy French: 10 minutes of daily news

More

And a few more ideas:

  • Visit the country
  • Immerse yourself in the culture
  • Make French friends, stay in touch on Facebook or whatever you use
  • If you play a musical instrument or sing, learn some French songs. I’ve even taken some French+guitar lessons with Cécile, a French singer/songwriter whose songs I really enjoy
  • Here in Bristol we’ve got some French singing workshops, which I’ve found very enjoyable – particularly because, as I’ve already mentioned, I love French songs
  • Find your local French Meetup groups

 

A couple of Python coding dojos

As Joe Wright puts it:

“A Coding Dojo is a programming session based around a simple coding challenge. Programmers of different skill levels are invited to engage in deliberate practice as equals. The goal is to learn, teach and improve with fellow software developers in a non-competitive setting.”

There is something quite satisfying about having a brief period to create something, by yourself or with others. So I recently went to a couple of coding dojos.

PyCon UK 2018, Cardiff, September 2018

On the third evening of the conference, about 60 people took on the challenge of using Pygame Zero to create something on the theme of “Four seasons”.

We hit on the idea of combining the four seasons of the year with a pizza quattro stagioni (four-season pizza). This became an infinite scrolling background of the four seasons and a ‘rolling’ four-season pizza in the foreground.
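
Our actual submission is in the PyCon UK dojo repo linked below; purely as a flavour of the technique, here is a minimal Pygame Zero sketch of an infinitely scrolling background (run it with pgzrun; it assumes an images/seasons.png the same width as the window).

    # Minimal Pygame Zero sketch of an infinitely scrolling background.
    # Assumes images/seasons.png exists and is WIDTH pixels wide.
    WIDTH = 800
    HEIGHT = 400

    scroll_x = 0

    def update():
        global scroll_x
        scroll_x = (scroll_x - 2) % WIDTH  # move left and wrap around

    def draw():
        screen.clear()
        # Draw the image twice so the wrap-around is seamless
        screen.blit("seasons", (scroll_x - WIDTH, 0))
        screen.blit("seasons", (scroll_x, 0))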

We used a pair programming approach, to simplify code sharing. And, with it being quite a simple concept to implement, we didn’t need to code in parallel. So, despite being one of the more experienced developers on the team, I sourced and prepared the assets (i.e. the pictures), whilst supporting my teammate who was behind the keyboard.

The end result was quite well received.

You can find all the submissions at https://github.com/PyconUK/dojo18. Ours is under “shaunsfinger”.

CodeHub Python Coding Dojo meetup, October 2018

About 15 developers got together for this meetup, and took on the challenge of creating a “TypeRacer”.

As far as I could tell, this meant typing as fast as possible. It probably referred to the TypeRacer website. I hadn’t seen this before, but did know something similar, the space-shooting typing game ZType.

I imagined our game as a car which moves when you type the next correct character. After a brief discussion, we agreed to use Pygame. I have used it for a number of personal projects, and my two teammates were interested in trying it out.

We roughly divided the tasks between us, and my teammate set up a shared GitHub repo. I quickly found an image of a racing track as the background and a couple of cool-looking racing cars. Starting from some simple sample Pygame code, I created the first version – showing the background image, and a car which moved a little on every tick of the game loop. In the meantime, my teammates showed the text and responded to the keyboard.
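
The dojo code itself is in the private repo mentioned below; purely to illustrate the mechanic, here is a minimal Pygame sketch where the car advances on each correctly typed character (track.png and car.png are stand-in asset names).

    # Minimal Pygame sketch of the "type to race" mechanic.
    # "track.png" and "car.png" are hypothetical asset names.
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((800, 400))
    track = pygame.image.load("track.png")
    car = pygame.image.load("car.png")

    target_text = "the quick brown fox"
    typed = 0          # number of correctly typed characters
    car_x = 0
    clock = pygame.time.Clock()

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN and typed < len(target_text):
                # Advance the car only when the next expected character is typed
                if event.unicode == target_text[typed]:
                    typed += 1
                    car_x += 30

        screen.blit(track, (0, 0))
        screen.blit(car, (car_x, 300))
        pygame.display.flip()
        clock.tick(60)

    pygame.quit()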

We brought this all together, did a bit more polishing, and finished just in time.

Our game worked very well, and was exactly as I’d envisaged it. Our fellow coding-dojo-ers seemed to like it too.

As this was an informal coding exercise, not for public consumption or publication, and because of the time constraints, I decided to use copyrighted images. I have now replaced these with copyright-free images, from CraftPix and OpenGameArt.

The final result is currently in a private repo. I have asked my teammate to make it public, and will update this post once this is done.

With thanks to Katja Durrani and Eleni Lixourioti for organising this. It was well organised, with plenty of snacks and drinks, and a friendly atmosphere. And thanks to my teammates Andrew Chan and Eleni Lixourioti. It was a pleasure working with both of them.

20 Raspberry Pis – one massive art installation

A couple of internationally renowned artists asked me for some help with their largest installation to date. As part of Hull’s City of Culture, Davy and Kristin McGuire created a large cardboard city and brought it to life with video projections.

They needed nearly 20 video players, so I created a bootable Linux image for the Raspberry Pi which automatically plays a video from a standard location. I copied this to 20 memory cards, and tested them all.
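
The image itself isn’t published; roughly, the player side amounted to something like the Python script below, started at boot (for instance from a systemd unit) and looping a video from a fixed path with the Pi’s omxplayer. The path is an illustrative convention, not the exact one used.

    # Illustrative sketch only: loop a video from a standard location on a Raspberry Pi.
    # Assumes omxplayer is installed; the path is a made-up convention.
    import subprocess
    import time

    VIDEO_PATH = "/home/pi/video/projection.mp4"

    while True:
        # omxplayer exits when the video ends, so restart it for continuous playback
        subprocess.call(["omxplayer", VIDEO_PATH])
        time.sleep(1)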

The installation looked amazing and was a great success.

Grafana, InfluxDB and Python, simple sample

I recently came across an interesting contract position which uses Grafana and InfluxDB. I’d had a play with Elasticsearch before, and done some work with KairosDB, so I was already familiar with time series and JSON-based database connections. Having previously built dashboards by hand, I found Grafana rather interesting. So I thought I’d do a quick trial: generate some random data, store it in InfluxDB and show it with Grafana.

Starting with a clean virtual machine:

InfluxDB

  1. Set up InfluxDB
    1. I followed InfluxDB’s installation instructions, which worked first time without any problems
    2. Start it
      sudo /etc/init.d/influxdb start
      
  2. Test InfluxDB
    influx
    > create database mydb
    > show databases
    name: databases
    ---------------
    name
    _internal
    mydb
    
    > use mydb
    > INSERT cpu,host=serverA,region=us_west value=0.64
    > SELECT host, region, value FROM cpu
    name: cpu
    ---------
    time            host    region  value
    1466603916401121705 serverA us_west 0.64
    
  3. Set up and test influxdb-python, so we can access InfluxDB using Python
    sudo apt-get install python-pip
    pip install influxdb
    python
    >>> import influxdb
    >>>
    
  4. Run through this example of writing and reading some InfluxDB data using Python
    >>> from influxdb import InfluxDBClient
    >>> json_body = [
    ...     {
    ...         "measurement": "cpu_load_short",
    ...         "tags": {
    ...             "host": "server01",
    ...             "region": "us-west"
    ...         },
    ...         "time": "2009-11-10T23:00:00Z",
    ...         "fields": {
    ...             "value": 0.64
    ...         }
    ...     }
    ... ]
    >>> client = InfluxDBClient('localhost', 8086, 'root', 'root', 'example')
    >>> client.switch_database('mydb')
    >>> client.write_points(json_body)
    True
    >>> print client.query('select value from cpu_load_short;')
    ResultSet({'(u'cpu_load_short', None)': [{u'value': 0.64, u'time': u'2009-11-10T23:00:00Z'}]})
    
  5. Create some more data, using a slimmed down version of this tutorial script
    import argparse
    
    from influxdb import InfluxDBClient
    from influxdb.client import InfluxDBClientError
    import datetime
    import random
    import time
    
    
    USER = 'root'
    PASSWORD = 'root'
    DBNAME = 'mydb'
    
    
    def main():
        host='localhost'
        port=8086
    
        nb_day = 15  # number of day to generate time series
        timeinterval_min = 5  # create an event every x minutes
        total_minutes = 1440 * nb_day
        total_records = int(total_minutes / timeinterval_min)
        now = datetime.datetime.today()
        metric = "server_data.cpu_idle"
        series = []
    
        for i in range(0, total_records):
            past_date = now - datetime.timedelta(minutes=i * timeinterval_min)
            value = random.randint(0, 200)
            hostName = "server-%d" % random.randint(1, 5)
            # pointValues = [int(past_date.strftime('%s')), value, hostName]
            pointValues = {
                    "time": past_date.strftime ("%Y-%m-%d %H:%M:%S"),
                    # "time": int(past_date.strftime('%s')),
                    "measurement": metric,
                    'fields':  {
                        'value': value,
                    },
                    'tags': {
                        "hostName": hostName,
                    },
                }
            series.append(pointValues)
        print(series)
    
        client = InfluxDBClient(host, port, USER, PASSWORD, DBNAME)
    
        print("Create a retention policy")
        retention_policy = 'awesome_policy'
        client.create_retention_policy(retention_policy, '3d', 3, default=True)
    
        print("Write points #: {0}".format(total_records))
        client.write_points(series, retention_policy=retention_policy)
    
        time.sleep(2)
    
        query = 'SELECT MEAN(value) FROM "%s" WHERE time > now() - 10d GROUP BY time(500m);' % (metric)
        result = client.query(query, database=DBNAME)
        print (result)
        print("Result: {0}".format(result))
    
    if __name__ == '__main__':
        main()
    
  6. Save as create_sample_data.py, run and test it
    python create_sample_data.py
    ......
    influx
    Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
    Connected to http://localhost:8086 version 0.13.0
    InfluxDB shell version: 0.13.0
    > use mydb
    > SELECT MEAN(value) FROM "server_data.cpu_idle" WHERE time > now() - 10d GROUP BY time(500m)
    time			mean
    1466280000000000000	94.03846153846153
    1466310000000000000	98.47
    1466340000000000000	95.43
    1466370000000000000	104.3
    1466400000000000000	104.01
    1466430000000000000	114.18
    1466460000000000000	106.19
    1466490000000000000	96.67
    1466520000000000000	107.77
    1466550000000000000	103.08
    1466580000000000000	100.53
    1466610000000000000	94
    

Grafana

  1. Install Grafana using the installation instructions:
    $ wget https://grafanarel.s3.amazonaws.com/builds/grafana_3.0.4-1464167696_amd64.deb
    $ sudo apt-get install -y adduser libfontconfig
    $ sudo dpkg -i grafana_3.0.4-1464167696_amd64.deb
    
  2. Start the server and automatically start the server on boot up
    sudo service grafana-server start
    sudo systemctl enable grafana-server.service
    
  3. Test
    1. In your browser, go to localhost:3000
    2. Log in as (user) admin, (password) admin
  4. Connect to the InfluxDB database
    1. I followed the instructions at http://docs.grafana.org/datasources/influxdb/
    2. Click on the Grafana icon
    3. Select “Data Sources”
    4. Click on “+ Add data source”
      1. Name: demo data
      2. Type: InfluxDB
      3. URL: http://localhost:8086
      4. Database: mydb
      5. User: root
      6. Password: root
      7. Click on “Save and Test”
    5. Create a new Dashboard
      1. Click on the Grafana icon
      2. Select “Dashboards”
      3. Click on “New”
    6. Define a metric (graph)
      1. Click on the row menu, i.e. the green icon (vertical bar) to the left of the row
      2. Select “Add Panel”
      3. Select “Graph”
      4. On the Metrics tab (selected by default)
        1. Click on the row just below the tab, starting with “> A”
        2. Click on “select measurement” and select “server_data.cpu_idle”
          1. You should now see a chart
        3. Close this, by clicking on the cross, top right hand corner of the Metrics panel
    7. Save the dashboard
      1. Click on the save icon (top of the screen)
      2. Click on the yellow star, next to the dashboard name (“New dashboard”)
    8. Test it
      1. In a new browser tab or window, go to http://localhost:3000/
      2. Log in (admin, admin)
      3. The “New dashboard” will now show up in the list of starred dashboards (and probably also under “Recently viewed dashboards”)
      4. Click on “New dashboard” to see the chart

You should now see something like this:

(Screenshot: Grafana dashboard charting the InfluxDB demo data)

Namepy step 7 – Bringing it all together

(This is part of the namepy project. Start at Namepy – on the shoulders of giants)

Time to show some real results on a web page.

  1. Extend the API to expose the letter-scoring tables, without pagination; in __init__.py add:
    manager.create_api(models.Set, methods=['GET'], results_per_page=0) 
    
  2. Rename helloworld.html to index.html
  3. At the end of views.py, rename the endpoint function to ‘index’, update the template name to index.html, and stop passing in ‘names’, since this now comes through the API:
    @app.route("/") 
    def index(): 
        return render_template('index.html') 
    

That’s it for the changes to the back end. The rest of the changes will all be in the front end, in index.html.

  1. Rename the app from HelloWorldApp to NamePyApp
  2. Rename the controller from HelloWorldController to NamePyController
  3. Load the letter scoring table, and simplify it for faster lookup
    $scope.sets = [];
    angular.forEach(response.data.objects, function(set, index) {
        scores = {};
        angular.forEach(set.scores, function(score, index) {
            scores[score.letter] = score.score;
        });
        $scope.sets.push({ name: set.name, scores: scores});
    });
    
  4. Calculate the score for each of the sets
    angular.forEach($scope.sets, function(set, index) {
        var total = 0;
        var error = false;
        angular.forEach(name.split(''), function(character, index2) {
            if (character in set.scores) {
                total += set.scores[character];
            } else {
                error = true;
            }
        });
    
        if (error == false) {
            result.push([set.name, total]);
        }
    
        $scope.sort_on_element(result, 1);
    
        $scope.scores = result;
    });
    
  5. Show the result on the page, using Highcharts. For the code see the source code, function “showLetterScores”

Show baby name distribution

  1. Get data for entered name
    var filters = [{ name: 'name', 
        op: 'ilike', 
        val: $scope.visitor_name}];
    
    $http({
        method: 'GET',
        url: 'api/name',
        params: {"q": JSON.stringify({"filters": filters})}
        })
        .then(
            $scope.show_name_distribution,  
            function(response) {            
                $('#babynames_container').hide();
            }
        );
    
  2. Restructure the results for Highcharts
    var boy_frequency = [];
    var girl_frequency = [];
    var boys_found = false;
    var girls_found = false;
    
    angular.forEach(response.data.objects[0].frequencies, 
        function(frequency) {
            boy_frequency.push([
                Date.UTC(frequency.year, 1, 1), 
                frequency.boys_count]);
    
            girl_frequency.push([ 
                Date.UTC(frequency.year, 1, 1), 
                frequency.girls_count]);
    
            if (frequency.boys_count) boys_found = true;
            if (frequency.girls_count) girls_found = true;
        });
    
    $scope.sort_on_element(boy_frequency, 0);
    $scope.sort_on_element(girl_frequency, 0);
    
  3. Show the results using Highcharts. See the source code, function “show_name_distribution”

Done

Done Done

This is the final blog post for this little project. I hope you found it useful.

Namepy step 6 – Load the data into the database

(This is part of the namepy project. Start at Namepy – on the shoulders of giants)

We will need the following data in the database:

  • Name frequencies – baby names by year
  • Scrabble™ letter values, by (country) Scrabble™ set

Name frequencies

  1. Download the data from https://www.ssa.gov/oact/babynames/names.zip
  2. Unzip it into <project_root>/raw_data/, giving yob1880.txt, etc.
  3. You may want to add raw_data to .gitignore, so it doesn’t get stored in your git repo
  4. Create some code to read the files and store the data in PostgreSQL – read_name_frequencies.py:
    from os import listdir

    # Name, NameFrequency and db are the Flask-SQLAlchemy models and session
    # defined earlier in the project

    def read_frequencies_from_file(filename, names):
        print(filename)
        year = int(filename[3:7])
    
        year_frequencies = {}
        for name in names:
            year_frequencies[name] = {'F': 0, 'M': 0}
    
        with open('raw_data/%s' % filename) as file:
            for line in file.readlines():
                try:
                    name_text, sex, count = line.split(",")
                except ValueError:
                    print("Couldn't parse line:")
                    print(line)
                    continue
    
                if name_text not in names:
                    name = Name(name=name_text)
                    db.session.add(name)
                    db.session.commit()
                    names[name_text] = name.id
                    year_frequencies[name_text] = {'F': 0, 'M': 0}
    
                year_frequencies[name_text][sex] = int(count)
    
            for name, name_frequency in year_frequencies.items():
                if name_frequency['F'] + name_frequency['M']:
                    name_id = names[name]
                    frequency_record = NameFrequency(name_id=name_id,
                        year=year,
                        boys_count=name_frequency['M'],
                        girls_count=name_frequency['F'])
                    db.session.add(frequency_record)
                    db.session.commit()
    
    def read_name_frequencies():
        db.create_all()
    
        # Start with an empty list
        print("Deleting any previous data")
        db.session.query(NameFrequency).delete()
        db.session.query(Name).delete()
        db.session.commit()
        print("Done")
    
        names = {}
    
        # Get file list
        for filename in listdir('raw_data'):
            if filename[:3] == 'yob':
                read_frequencies_from_file(filename, names)
    
  5. Run the code. Note that this may take a while to run; on my development machine it took about 8 minutes.
  6. Check this in the database, for instance with phpPgAdmin or pgAdmin

Scrabble™ letter values

For a list of Scrabble™ letter values by Scrabble™ set, see this Wikipedia entry. The following code will grab that page, extract the letter values and save them in the database.

  1. Add the new tables to models.py
    class Set(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        name = db.Column(db.String())
        scores = db.relationship('LetterScore', backref='set', lazy='dynamic')
    
    class LetterScore(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        set_id = db.Column(db.Integer, db.ForeignKey('set.id'))
        score = db.Column(db.Integer)
        letter = db.Column(db.String(1))
    
  2. Write some code to parse this page and store the results in the database. See the source code in my GitHub repo
  3. Run the code
  4. Check the results in the database. See above (end of name frequencies section) for some suggested tools

Done

Next

Time to pull it all together and show some real charts

Continue to Step 7 – Bringing it all together

Namepy step 5 – Flask-Restless

(This is part of the namepy project. Start at Namepy – on the shoulders of giants)

We need Angular to request the data from Flask with an Ajax call to a REST-style API, and to show it in Highcharts.

The two main Flask libraries for creating a REST API are Flask-Restful and Flask-Restless. We will be using Flask-Restless, because it is particularly suited for what we’re trying to do: “Flask-Restless provides simple generation of ReSTful APIs for database models defined using SQLAlchemy (or Flask-SQLAlchemy)” – from the Flask-Restless documentation.

Create and test the REST API

  1. Install flask-restless
    (virtualenv) pip install flask-restless
  2. Import it at the start of __init__.py:
    import flask.ext.restless
  3. Create the API endpoint, add following to end of __init__.py:
    manager = flask.ext.restless.APIManager(app, flask_sqlalchemy_db=db) 
    manager.create_api(models.Name, methods=['GET']) 
    
  4. Test this – python index.py; point your browser to http://127.0.0.1:5000/api/name, which should show a JSON structure with the names and frequencies

Use Angular to request and process the REST data from the back end system

  1. Create a new function which takes a response object, extracts the json data, formats it, and passes it to Highcharts
    $scope.showChart = function(response_data) {

        chart_data = []
        angular.forEach(response_data.objects, function(name_object, key) {

            boys_count = []
            angular.forEach(name_object.frequencies, function(frequency, key) {
                boys_count.push(frequency.boys_count);
            });
            chart_data.push({ name: name_object.name, data: boys_count });
        });

        $('#container').highcharts({
            chart: {
                type: 'column'
            },
            title: {
                text: 'Name frequencies'
            },
            series: chart_data
        });
    };
    

    Note that this doesn’t quite make sense, for instance the year isn’t being shown in the chart. We’ll fix all of that later. For now the aim is to get the infrastructure set up – database, REST API, Angular, etc.

  2. Use Angular’s $http.get() function to call the api, and pass the response object to the showChart function upon completion
    $http.get('api/name')
        .then(function(response) {
            $scope.showChart(response.data);
        });
    
  3. Test: Make sure the Flask app is running and go to http://127.0.0.1:5000/. You should still see the name frequencies chart

Done

Next

That completes the technical set up, for now. We’re ready to do some real coding, starting with getting the data into the database.

Continue to Step 6 – Load the data into the database