It was announced yesterday that Fran, our CEO, will be on the new steering group of “digital and data visionaries” helping the UK government become more data-driven. We’re obviously very excited about this – both because it’s an honour to be asked to participate, and because we’re genuinely keen to put data to work in ways that change people’s lives for the better.
The full list of members is:
- Sir Nigel Shadbolt from the ODI
- Mustafa Suleyman from Google DeepMind
- Fran Bennett from Mastodon C
- Xavier Rolet from the London Stock Exchange
- Mark Thompson from Judge Business School
- Dame Fiona Caldicott, former Chair of the National Information Governance Board for Health and Social Care
We’re getting very close to launching the minimum viable product (MVP) version of Witan, the city modelling platform we’ve been working on since this summer.
The MVP will be focussed on running and evolving demographic models – a picture of how many people, of what kind, will be where in a city underpins all sorts of other critical services, so this is a good place to start in creating a data-driven, legible model of the city. Plus, it’s helping us to develop a really great user experience for the key parts of city modelling:
- Working with input data from many places – in this case, 33 London Boroughs, providing various housing scenarios as inputs
- Forecasting based on variable assumptions, and making those assumptions transparent – for example, how high migration is expected to be in future years
- Keeping some scenarios and data private, while also sharing scenarios with colleagues and tracking how they develop over time as policy or knowledge evolves
The interface is pretty clean and simple at the moment. We’re happy with how it’s coming together, and excited about having it live before Christmas – sneak preview below.
The team has also been busy on personal projects – while building Witan v1, we’ve also manufactured one and three-quarters babies between us.
If you’d like to keep up with Witan progress, request a demo, or request more very cute baby photos, please do get in touch.
[cross posted from the London DataStore blog]
There’s been a lot of discussion, on this blog and elsewhere, of what to actually do with city data once it’s out in the world. We think that, given the capital’s booming population and the consequent policy focus on addressing a wide range of infrastructure needs, one of the most important applications is using data (both open data, and the more private stuff) to gain insight into how the city is functioning right now and how that could change under a range of inputs, and ultimately to set out different scenarios for its future.
I’m very pleased to say that our company, Mastodon C, is going to be working with the GLA in 2015 and 2016 to try and do just this: to prototype a city data platform which will help the GLA’s modelling experts, data analysts, policy makers, and the public to integrate and make sense of different types of model and forecast, in order to explore scenarios for the future of London.
We plan to make use of the best of modern “big data” and web-based technologies, to make it easier for:
- experts to ‘look into’ and adjust the equations, assumptions and connections of their models (without the need for programming skills), and to take advantage of capabilities they don’t have at the moment, such as version control and the capacity to scale to very big or complex datasets and simulations
- policy makers to explore scenarios much more widely, without the need to engage directly with the equations
- all of us to see what’s being planned, and the thought process behind it, through a graphical interface
The platform will build on open source technology, will be open source itself, and will provide an open API for reading data from and publishing data to other systems.
This is all made possible by Innovate UK’s SBRI programme, which is funding the prototype development – a rare and exciting opportunity to tackle some genuinely important problems using the latest technology.
We are, of course, very excited about this whole thing, and will be blogging regularly as things unfold – our first job being to spend some quality time with the GLA’s experts to understand what they really need from such a platform. Watch this space for more news!
We just finished a piece of work for Nesta, looking at how to spot innovative software developers and firms using publicly available data. It was an intriguing exercise – and if you’re a developer, you might want to have a look and see if you can spot anyone you know (we certainly did!).
Longer explanation is here if you want to find out more.
We just published a new how-to to the Digital Catapult’s Open Health Data Platform, walking through how to get and use UK air quality data at scale.
The explanation and example code might be useful to you if you’re thinking of adding air quality data into your analysis.
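To give a flavour of the kind of processing involved: assuming hourly pollutant readings arrive in a simple CSV layout of site, date, hour, pollutant and value – a hypothetical format for illustration, not the platform’s actual schema – a daily-mean aggregation might be sketched like this:

```python
import csv
import io
from collections import defaultdict

# Hypothetical CSV layout -- illustrative only, not the platform's real format.
SAMPLE = """site,date,hour,pollutant,value
Marylebone Road,2014-11-01,0,NO2,61.2
Marylebone Road,2014-11-01,1,NO2,58.9
Marylebone Road,2014-11-01,2,NO2,55.4
"""

def daily_means(csv_text):
    """Average each pollutant per site per day across its hourly readings."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["site"], row["date"], row["pollutant"])
        sums[key] += float(row["value"])
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

means = daily_means(SAMPLE)
# One (site, date, pollutant) key, mapped to the mean of its hourly values.
```

At real scale the same group-and-average shape applies; it is just distributed across many machines rather than run in a single pass like this.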
We were on national radio recently, talking about being a disruptive UK technology company.
You can hear the whole thing at http://www.bbc.co.uk/radio/player/b04wtwfl, from 2:55.
Our latest superstar intern, Max, has just left us to go back for his final year at Queen Mary University of London.
He’s been working with us on server automation and on separating out our code base into better modules – heavy going, with a steep learning curve, but important work for the company. Happily, he’s done a brilliant job. We asked him to write up some thoughts on how he found it, and what he did and didn’t like – and we thought you’d like to read them too.
From the start, I felt very included. Even as an intern I was never left out of anything, and that was my first big impression of the team at Mastodon C. I remember in my first week, when everyone was moving to the meeting room for democake, saying that I had nothing to show – but Anna explained that didn’t matter, and that I could just talk about what I had learned.
During my internship, I was given the opportunity to work on many different and interesting tasks. From initially learning the basics of Clojure and getting the chance to do some ClojureScript too, all the way to creating virtual machines to allow certain tasks to be run locally for testing. I learned a new way of thinking with Clojure which was different to all of my previous OO experience, and using virtual machines locally to run servers was completely new to me too. In the end I eventually managed to conquer setting up a virtual FTP server, which took me far too long!
A lot of my work over the summer involved writing tests for existing code, checking whether given inputs would match a schema. The team wanted to test these in massive numbers, though, not just with a couple of hard-coded cases, so I got to learn about generative testing using Clojure’s test.check library. Through this I got the chance to write my own small Clojure library: the team wanted to generate test data from existing schemas built with the Prismatic Schema library, and after a lot of research I couldn’t find anything that did this well enough, so I had a go at creating my own. It is completely open source and can be found (and added to!) on GitHub. Doing this showed me how friendly and helpful the Clojure community is: when I had problems, I could ask questions on specific Google Groups or the more general London Clojurians list, as well as on various IRC channels.
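For readers unfamiliar with generative testing, the idea Max describes – deriving many random test inputs from a schema and checking each one against it – can be sketched with nothing but the standard library. (The original work used Clojure’s test.check and Prismatic Schema; this is an analogous Python sketch, and the schema and record shape here are hypothetical stand-ins.)

```python
import random
import string

# Hypothetical stand-in for a Prismatic-style schema: field name -> required type.
PERSON_SCHEMA = {"name": str, "age": int}

def valid_person(record):
    """Validate a record against PERSON_SCHEMA (plus a non-negative age)."""
    return (set(record) == set(PERSON_SCHEMA)
            and all(isinstance(record[k], t) for k, t in PERSON_SCHEMA.items())
            and record["age"] >= 0)

def generate_person(rng):
    """Generate a random record that the schema says should be valid."""
    name = "".join(rng.choices(string.ascii_letters, k=rng.randint(1, 12)))
    return {"name": name, "age": rng.randint(0, 120)}

def quick_check(n=1000, seed=0):
    """Run the property over n generated inputs, like test.check's quick-check."""
    rng = random.Random(seed)
    return all(valid_person(generate_person(rng)) for _ in range(n))
```

The payoff over hand-written cases is coverage: a thousand generated inputs exercise edge cases (empty-ish names, boundary ages) that a couple of hard-coded examples would miss.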
Now that my internship has finished, I definitely miss everyone at Mastodon C, and I would happily work there again if I get the chance. Thank you so much for a fun, interesting and rewarding time!
Thank you Fran for being a great boss, who I could talk to about any questions I had. Thank you Bruce, for making me into a person who paredit is for. Thank you Neale for helping me with all of my Git mishaps and showing off ridiculous Emacs commands. Thank you Anna for coercing me into going to Clojure Dojos.
Thanks very much Max for spending time with us – it’s been a great experience, and we hope to get you back someday.