Posted on July 30, 2011 by Max Ogden in Dispatches
Some experiments in collecting images + location from mobile devices

Over the last couple of years I've been fascinated with technology that collects and aggregates mappable information. Specifically, I have been searching for or creating tools that have these properties:
- Lets users discover things near them; allows submission of location, photos and text
- Built on an open data API so that others can access and build on the data
- Easy for new developers to clone and deploy

A big roadblock on this front has been the Apple App Store. $100 and weeks of back and forth just to publish a public data app that uploads photos? Rubbish.
My first foray into “civic web” applications was in the form of a project called Portland Smells, which was a map-wiki that collected smelly places (both good and bad) and threw them on a map.
Portland Smells was meant to be more of an art project than a public utility. That is, until a local government representative contacted me regarding a project that was being built out (now available at whatsinourair.org) to help report and monitor illegal chemical dump sites in the Portland area. Apparently the functionality created for Portland Smells was greatly needed for social and environmental justice. Needless to say I got hooked on the idea of open data.
From smells to carts and cats

After mapping smells I took a stab at an application to map food carts that also allowed users to keep food cart locations, menus and hours of operation up to date. The app was made in early 2010 (code here) using the Titanium Mobile framework and used PDX API as the backend. Unfortunately, Microsoft then released a closed-source food cart application (with a huge marketing budget), which effectively killed my excitement about the cart mapping project.
The OpenElm Project is a great open source PhoneGap application built using jQuery Mobile for the UI and CouchDB on the server. Andrew Gleave, lead developer on the project, even went the extra mile and wrote an attachment uploader plugin for iOS PhoneGap devices that makes uploading files and photos super smooth.
Email Submissions

Many smartphones can interpret mailto: links and launch email composition forms pre-loaded with the recipient address from the mailto:. On Android devices users are given the option of adding attachments to an existing message, but on iOS there is no way to add an attachment to a message being composed – you have to start the interaction by visiting the photo browser and choosing 'Share', and there is no way to initialize the photo sharing interface from a mobile web app. This severely complicates the UX for iOS users. Regardless of the UX complications, email can be a nice and simple way to offer photo submissions without going through the rigmarole of registering native developer credentials. There are two ways that I've tried initiating email submissions.

Visiting a mobile web app first and being given a mailto: link

Catmapper.com (source), a prototype application, is an example of first collecting location and then providing an email address where photos can be submitted later. This requires that users remember to visit catmapper.com when they are walking down the street and encounter a cat, and also that they are able to take an email address from a website and successfully email photos to it.
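A mailto: link can pre-fill more than the recipient: per the mailto URI scheme (RFC 6068), subject and body can be passed as percent-encoded query parameters. A minimal sketch of building such a link from a web app; the address and field values here are placeholders, not catmapper's actual ones:

```javascript
// Build a mailto: URL that pre-fills recipient, subject and body.
// RFC 6068 requires header values to be percent-encoded.
function buildMailto(address, subject, body) {
  return 'mailto:' + address +
    '?subject=' + encodeURIComponent(subject) +
    '&body=' + encodeURIComponent(body);
}

// Example (hypothetical address):
// buildMailto('cats@example.com', 'New cat', 'Photo attached')
```

The link still cannot attach a photo for the user, which is why the iOS flow described above has to start from the photo browser instead.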
Sending photos to a particular email address first and receiving instructions in a reply email

Pictured below is another now-defunct prototype that was meant to demonstrate how defibrillator locations might be submitted by thoughtful citizens who encounter them. First, the smartphone must have GPS enabled so that the photos contain coordinates in the embedded EXIF data. Photos are emailed in, and users receive a response email containing a visualization of the GPS coordinates from the EXIF information. Upon tapping the map they are taken to a catmapper-style mobile interface that lets them update the specific coordinates for their submission.
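The EXIF GPS tags store latitude and longitude as degrees/minutes/seconds rationals (GPSLatitude, GPSLongitude) plus hemisphere references (GPSLatitudeRef, GPSLongitudeRef). Whatever library parses the JPEG, the conversion to the signed decimal degrees a map needs looks like this sketch:

```javascript
// Convert an EXIF GPS value — [degrees, minutes, seconds] plus a
// hemisphere reference ('N'/'S' for latitude, 'E'/'W' for longitude) —
// into signed decimal degrees. South and West are negative.
function dmsToDecimal(dms, ref) {
  var dec = dms[0] + dms[1] / 60 + dms[2] / 3600;
  return (ref === 'S' || ref === 'W') ? -dec : dec;
}

// e.g. 45° 31' 12" N  ->  45.52
// e.g. 122° 40' 30" W -> -122.675
```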
This particular email processing workflow was implemented using a Ruby server that accessed a Gmail account through IMAP. You can get the code by visiting the older commits in this repository. Warning: trying to write a realtime IMAP client for Gmail was pretty painful and prompted me to write a realtime node.js email processor, haraka-couchdb.

Tweets with geolocation and photos

There are tons of native Twitter applications that allow uploading to third party image sharing services like yfrog or twitpic. Some colleagues of mine at Code for America took a Node.js daemon from civic hacker Mark Headd and extended it to save images plus latitude/longitude from tweets into CouchDB. This approach requires users to enable geolocation in their Twitter client. It also has an interesting social property that can boost photo sharing engagement.
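The essential step in such a daemon is filtering the tweet stream down to tweets that carry both a location and a photo, and reshaping them into documents for CouchDB. A sketch under the assumption that tweets arrive in the Twitter API's JSON shape of the time — GeoJSON `coordinates` ([longitude, latitude]) and photo URLs under `entities.media`; the exact fields Headd's daemon used may have differed:

```javascript
// Reduce a tweet object to a CouchDB-style document, keeping only
// tweets that have both a geotag and at least one attached photo.
function tweetToDoc(tweet) {
  if (!tweet.coordinates || !tweet.entities || !tweet.entities.media) {
    return null; // skip tweets missing a location or a photo
  }
  var lngLat = tweet.coordinates.coordinates; // GeoJSON: [lng, lat]
  return {
    text: tweet.text,
    longitude: lngLat[0],
    latitude: lngLat[1],
    photos: tweet.entities.media.map(function (m) { return m.media_url; })
  };
}
```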
MMS Gateways

This would be super cool for bridging the gap from smartphones to feature phones but unfortunately is not supported by the major telephony API providers (Twilio and Tropo).

Desktop uploading

Photos often don't need to be uploaded in the field and can be synced and uploaded at a later time. This broadens the range of cameras that can be used to capture images and lets users without smartphones participate in your application. Here is another prototype, aedmapper.com (code here), that is meant to show how to collect accurate information about defibrillator locations using address autocompletion, web based mapping and drag and drop photo upload forms.
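A drag-and-drop upload form hangs off the browser's `dragover`/`drop` events and posts the dropped files with FormData. A minimal sketch, not the actual aedmapper code — the `dropzone` element id and `/upload` endpoint are placeholders (the DOM wiring is guarded so the helper also loads outside a browser):

```javascript
// Accept only common photo extensions before uploading.
function isImage(name) {
  return /\.(jpe?g|png|gif)$/i.test(name);
}

// Browser-only wiring: guarded so this file can load under Node too.
if (typeof document !== 'undefined') {
  var zone = document.getElementById('dropzone'); // placeholder id
  zone.addEventListener('dragover', function (e) { e.preventDefault(); });
  zone.addEventListener('drop', function (e) {
    e.preventDefault();
    Array.prototype.forEach.call(e.dataTransfer.files, function (file) {
      if (!isImage(file.name)) return;
      var form = new FormData();
      form.append('photo', file);
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/upload'); // placeholder endpoint
      xhr.send(form);
    });
  });
}
```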
Using the same code I created two separate upload forms for another project, Open211, a directory of social services. One form collects locations while the other offers Dropbox-like attachment uploading functionality.
Pen + Paper

The most ubiquitous mobile crowdsourcing interface comes in the form of Walking Papers, a project by Michal Migurski. Designed as a tool for collecting data for OpenStreetMap, the computer vision technology that powers Walking Papers could also prove useful for large scale, low technology data collection deployments. I haven't yet had a chance to use this tech but am itching for the opportunity.
OpenStreetMap contributors have gathered for mapping parties: treks through unmapped neighborhoods where participants document what they see by writing down landmarks, roads, parks and playgrounds on a special map. The maps can then be scanned and uploaded to walking-papers.org, where they are georectified and traced directly onto OpenStreetMap.

(Note: This was originally published on maxogden.com.)