As part of my job in the global energy industry I meet a lot of geoscientists. Highly passionate about all aspects of earth science, they’re geologists, geophysicists, or environmental scientists. They use GIS daily but don’t consider themselves to be GIS professionals any more than they are Excel or software professionals. For them, GIS is a means to an end.
When one geoscientist recently said that “GIS is not as simple as it used to be”, it pretty much summed up the mood I’m picking up in a lot of places. The state of GIS, data and IT is a big frustration. The geoscience community has been using geospatial tools for decades, but the issues they face with GIS have persisted – in fact, they’re getting worse.
The technology has advanced dramatically in recent years, so what’s going on?
Geoscience ain’t Google or Facebook
We all know how to fire up Google Maps and look for the nearest coffee shop, but geoscientists do not have that luxury. Their data comes from everywhere: digitised from analogue sources, scraped from literature or intelligence reports, imported from spreadsheets, dug up from archives, copied from network drives, extracted from databases, downloaded from web feeds.
As geoscientists forensically piece together their story – say, an environmental impact study, or the potential of a gas field hidden under a mile of sand or water – there are as many data formats as datasets. This is quite different to internet companies whose businesses are built on data that is natively digital: they can simply plug in the firehose and suck up whatever comes out.
So when geoscientists have painstakingly assembled their data, the last thing they need is a GIS that doesn’t do what they need, or is like brain surgery to operate.
What has GIS ever done for geoscientists?
From the perspective of a geoscientist, there has been little progress in the world of GIS. If you put yourself in their shoes, this is what we’ve given them over the past decade or so:
First, there were the delusional aspirations of standardizing everything, so that data conversion would no longer be required. The world would consist of spatial data infrastructures and portals, seamlessly joined together by open standards. We all know what happened to these, because no GIS professional today can survive without FME or GDAL. Most industries do not operate like the public sector. The “spatial is special” mantra merely reinforced data silos, hampering integration with other domains.
Coupled with SDIs, web-based GIS also proved that the GIS industry can take a perfectly simple web browser and turn it into a fiendishly slow and complex map system. This came in different incarnations until Google Maps finally put us out of our misery in 2005.
Meanwhile, desktop GIS has gone full-circle from being the professional’s tool to “everyone should use it”, and back to being the professional’s tool. Well, at least it is no longer pretending to be simple, and vendors can now freely praise its complexity as sophistication.
More recently arrived the mobile mapping apps, which – at last! – were simple to use (probably because they were not designed by GIS people). But alas, it is quite a leap from finding the nearest Starbucks to doing geological data interpretation. And so again, the geoscientists fell through the cracks. GIS vendors, forced to respond to the threat of internet mapping start-ups, spread themselves very thinly and ended up pleasing neither their new nor their core user bases.
Then finally came Open Source, the saviour who was going to deliver long-suffering GIS users from the evil clutches of proprietary vendor lock-in. Great. Except that open source GIS is still GIS. It’s still growing arms and legs, just like any other GIS. It tries to cater for every audience, just like any other GIS. Progress is measured in added functionality. In this model it simply does not occur to developers to take something away to make it more usable. Steve Jobs would be horrified.
What happened to GIS empowering people?
Professional mapping can now be done in hundreds of ways including proprietary, open source, desktop, web/cloud-hosted, and mobile solutions. Faced with such an array of generic options, where everything is possible but nothing does what you need straight off the shelf, we’ll have to forgive the geoscientists if they’re not ecstatic about what’s on offer.
The task of making GIS usable for a particular purpose falls to the users themselves who, of course, have neither the time nor the skills to do it. Sure, there are partner developers who can provide add-ons or customisations, but that’s not the point.
GIS was always about empowering people. But GIS tools defeat their purpose when you need a whole GIS department to support basic workflows, and a whole IT department to support the GIS stack. People end up working for the technology, not the other way round.
So when the ever-quotable Brian Timoney recently tweeted that most enterprise GIS requirements would be satisfied with Google Earth and networked KML files on a shared drive, he was not far off the truth. Such a pragmatic architecture would certainly have its drawbacks, but in some scenarios it might be easier to work around these than create what you need with a ‘proper’ GIS stack.
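For the curious, the pragmatic architecture above can be sketched in a few lines. The following is a minimal, illustrative Python snippet (not from the original tweet) that generates a KML document containing an auto-refreshing NetworkLink pointing at a team dataset on a shared drive – open the generated file once in Google Earth and it keeps pulling the current data. The function name and the file-server path are hypothetical examples.

```python
# Sketch of the "networked KML on a shared drive" pattern:
# one small KML file whose NetworkLink points at a shared dataset
# and refreshes on an interval, so everyone sees current data.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)  # serialise KML as the default namespace


def network_link_kml(name: str, href: str, refresh_seconds: int = 300) -> str:
    """Build a KML document with a single auto-refreshing NetworkLink."""
    kml = ET.Element(f"{{{KML_NS}}}kml")
    link = ET.SubElement(kml, f"{{{KML_NS}}}NetworkLink")
    ET.SubElement(link, f"{{{KML_NS}}}name").text = name
    target = ET.SubElement(link, f"{{{KML_NS}}}Link")
    ET.SubElement(target, f"{{{KML_NS}}}href").text = href
    ET.SubElement(target, f"{{{KML_NS}}}refreshMode").text = "onInterval"
    ET.SubElement(target, f"{{{KML_NS}}}refreshInterval").text = str(refresh_seconds)
    return ET.tostring(kml, encoding="unicode")


if __name__ == "__main__":
    # Hypothetical shared-drive location; adjust to your environment.
    doc = network_link_kml(
        "Field sites (team)", "file:////fileserver/geo/field_sites.kml"
    )
    print(doc)
```

No server, no database, no GIS stack: the “infrastructure” is a file share and a free viewer, which is precisely Timoney’s point.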
It is not surprising that some geoscientists still feel nostalgic for ArcView 3. Sure, this might show their age, but in their eyes ArcView 3 represented everything that a GIS should be: simple enough to use, advanced and flexible enough to do something useful with. Of course, in a modern connected environment ArcView 3 would be woefully inadequate. But in today’s world nothing seems to have taken its place in terms of usability. QGIS comes close but unfortunately it’s beginning to look more and more like ArcMap, which it hopes to displace. For many geoscientists this is barking up the wrong tree.
First the cake, then the icing
To this day GIS has not truly offered geoscientists what they need. Where are the innovative solutions for dealing with the variety of geoscience data? Where are the productivity tools to simply assemble digital scrapbooks of georeferenced information? Where are the flexible data models that enable thematic harvesting and analysis irrespective of data type? Where are the analytical tools that can handle dirty and incomplete data without hours of pre-processing? Where are the predictive user interfaces that only show relevant options?
These tools are emerging, of course… but not in the GIS world. GIS software still assumes that the data is just there, nice and clean. And so it requires a lot of sweat, mud and tears before you even get there – many don’t.
Meanwhile, data mining and analytics packages are slowly absorbing GIS functions into their tools. From the open-source R to the proprietary SAS, many now come with mapping functionality as standard. It is the same with specialist geoscience packages, such as those from Schlumberger or Landmark. None of these may ever reach the full geospatial specification of a GIS, but why should they? Once you’ve got all the data in one place, with decent analytics tools mapping just becomes one data representation out of many. For illustration, just look at the SAS page above.
Spatial is not special. We in the GIS community have always assumed that because geoscientists like working with maps, they will always like GIS. This is not the case. If GIS is the icing, we first need to help our users bake the cake. If we don’t do it, somebody else will. And if the icing is too complex or expensive, they will just eat the cake without it.
UPDATE 09OCT13: This post has now received over 1,000 hits and 30+ re-tweets. Thank you. But there have been relatively few comments… surely you don’t all simply agree? Agree or not, feel free to add more thoughts below.