API wrangling

Following on from my Building 3.0 blog post, here’s another update on Ushahidi 3.0.

We’re currently still pushing to finish the 3.0 API, or at least the first cut at it. The original due date for this was March 30, but that’s slipped to mid-April. Conversations have quieted down a little as we get to work on getting things done. That’s good, but it means we need to keep working hard to keep the community up to date on our progress.

API design primer

Designing an API seemed simple… but once you get into the weeds and start building things, there are a lot of questions to answer. I’ve done a chunk of reading as I tried to figure out: 1. what a proper RESTful approach would look like, and 2. what’s normal practice, i.e. where do developers often cut corners? Why? Is this good or bad?

A few valuable resources:


The last couple of weeks have been dominated by the thousands of tiny decisions made at each step of building the API. I’ve tried to keep the wiki updated with the general patterns of our API, which gives a few guidelines:

  • What methods to use, and how to accommodate more complex queries (e.g. search)
  • What HTTP response codes to use
  • What to cover in functional tests
  • How to expose relations and links between resources
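To make those guidelines concrete, here’s a minimal sketch of the kind of conventions involved. This is illustrative only, not the actual Ushahidi code or API schema: the mapping, status-code choices, and resource fields are all hypothetical examples of the general pattern.

```python
# Hypothetical mapping of HTTP methods to CRUD actions on a resource.
METHOD_ACTIONS = {
    "GET": "read",
    "POST": "create",
    "PUT": "update",
    "DELETE": "delete",
}

def status_for(action, ok=True, found=True):
    """Pick an HTTP status code for the outcome of an action (illustrative)."""
    if not found:
        return 404   # resource does not exist
    if not ok:
        return 400   # validation or request error
    if action == "create":
        return 201   # a new resource was created
    return 200       # read/update/delete succeeded

# Relations exposed as URLs in the response body, so clients can follow
# links rather than constructing URLs themselves (field names invented):
post = {
    "id": 1,
    "url": "https://example.com/api/posts/1",
    "form": {"id": 2, "url": "https://example.com/api/forms/2"},
}
```

The point of the link-style relations is that a client only ever needs to know one entry point; everything else is discoverable from the responses.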

We’ve got the basics in now: forms, with their attributes and groups, and posts. This gives us enough to start experimenting, but there are still a lot of loose ends. The next things we’re looking at are: sets, tags, and users (along with their roles and permissions), plus extensions to posts, adding revisions, translations and updates. I’ve built out most of the translations and revisions support already, but it still needs polish and documentation.


API authentication

This is probably the next major decision: how do we handle API authentication? Do we use OAuth? 1.0 or 2.0? Where do we handle actual user login and registration? Remember, the entire app is going to be built on the API.

At the moment I’m leaning towards OAuth 2.0, primarily because it’s what Swiftriver uses, and consistency between products is important. However, OAuth 2.0 has some issues:

  • the editor of the spec withdrew, saying OAuth 2 had failed
  • there are a few ways to mess up an implementation, security-wise
  • one implementation isn’t guaranteed to be interoperable with another

That said, the many almost-OAuth APIs out there probably have just as many (and similar) problems: rolling our own is not an option.

OAuth 2 is probably still going to get a lot of use, there are good resources appearing on how to do it right, and there are a couple of good OAuth 2 client libraries.

One recommendation that stuck out was: “If your API can require HTTPS, use OAuth 2.0 with bearer tokens. Otherwise, use OAuth 1.0a.” I need to dig into the details of this, but given that deployers won’t always use SSL, this is worth considering.
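To show why bearer tokens and HTTPS go together, here’s a sketch of the bearer-token check on the server side. Everything here is hypothetical (the token store, function names, and the way the request is represented); a real implementation would lean on a vetted OAuth 2 library rather than hand-rolled code.

```python
# Illustrative token store: token -> user id. In reality tokens would be
# issued by the OAuth flow and stored server-side with expiry times.
VALID_TOKENS = {"abc123": "user-42"}

def authenticate(headers, is_https):
    """Return the user id for a request, or None if authentication fails."""
    if not is_https:
        # A bearer token is a plain secret: anyone who sees it can use it,
        # so it must only ever travel over an encrypted connection.
        return None
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return None
    token = auth[len("Bearer "):]
    return VALID_TOKENS.get(token)
```

The `is_https` check is the crux of the recommendation: without transport encryption a bearer token leaks trivially, which is where OAuth 1.0a’s request signing still has an edge.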

Stumbling blocks

In my last post I mentioned adding support for other databases, particularly PostgreSQL/PostGIS. I’ve already hit the first snag with this: the minion migrations module doesn’t handle other database engines at all. There are a few forks on GitHub that try to solve this, but the fix is mixed in with some other major changes. This isn’t a showstopper, but there’s some unpicking to be done, and for now we’re concentrating on building a working API.
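The portability problem in a nutshell: the same logical migration needs different DDL on each engine. The sketch below is illustrative (a made-up table, not our real migrations, and the migration tool itself is PHP, not Python), but it shows the kind of branching a multi-engine migrations module has to do.

```python
def create_posts_table(engine):
    """Return CREATE TABLE DDL for a trivial posts table (illustrative)."""
    if engine == "mysql":
        pk = "id INT AUTO_INCREMENT PRIMARY KEY"
        geom = "location POINT"                   # MySQL spatial type
    elif engine == "postgresql":
        pk = "id SERIAL PRIMARY KEY"
        geom = "location GEOMETRY(Point, 4326)"   # PostGIS type, WGS 84
    else:
        raise ValueError("unsupported engine: %s" % engine)
    return "CREATE TABLE posts (%s, title VARCHAR(150), %s)" % (pk, geom)
```

Auto-increment keys and spatial columns are exactly the places where MySQL-only migration tooling tends to bake in assumptions, which is why the existing module needs unpicking.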

If anyone has working PostGIS knowledge and wants to have a go at getting this working, grab the code, leave a note, and hopefully send a pull request. I’m happy to walk you through the existing code and answer questions.