Archive for Google

Should You Be Making Maps?


A couple of recent blog discussions reminded me of an age-old controversy around computers. Computers automate tasks and widen access to information, making it easier for more people to do more things with more information. The tools continue to improve as more data goes online, accelerating this ongoing trend. Clearly, this has changed many common human activities and given the masses the tools to do things once done by small circles of specialists.

Activity in the world of maps, with the rapid growth of online mapping technologies and geographic data, reflects this trend. Along with the automation, however, come some heated discussions about the role of professionals.

Google’s Ed Parsons, in “Cartography is dead, long live the map makers,” argues that because the display mechanism for maps is now usually a computer screen rather than paper, the skill is becoming less relevant. As I commented on his blog, I think this broad-brush treatment of a complex subject does it some disservice. Do we need cartographers to make all maps? Absolutely not. Do we need them for some maps? Absolutely yes. We also need maps, online or on paper, to reflect sound cartographic principles, because those principles are based on years of research. Ed’s definition limiting cartography to print is erroneous.

Importantly, and often overlooked, just because it is easy to make maps online does not mean that it is easy to make good maps online. Anyone can use a word processor to write, yet much of what is written is useless to most people.


Much about online mapping is problematic, not only for cartographers but for many disciplines. So-called mashups can combine data that is, yes, geographically overlapping. Yet the data often comes from sources of different accuracy, vintage, and scale, and the sources vary in reliability as well. So what results from the mashups? Without proper oversight and discipline, they are often meaningless or, worse, misleading.
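To make the accuracy problem concrete, here is a minimal sketch of the kind of positional disagreement a mashup quietly absorbs. The coordinates below are made up for illustration (two hypothetical sources digitized at different scales placing the "same" landmark), and the distance function is the standard haversine formula:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical coordinates for the "same" landmark from two sources;
# both points are invented for this sketch.
src_a = (38.8895, -77.0352)   # e.g., digitized from a large-scale source
src_b = (38.8913, -77.0365)   # e.g., digitized from a small-scale source

offset = haversine_m(*src_a, *src_b)
print(f"Positional disagreement: {offset:.0f} m")
```

Overlay these two points on one map and they look like two different features a couple of hundred meters apart; that is exactly the kind of silent error a mashup with no oversight produces.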

I’m all for the explosion of maps and wider uses of geographic information, online and off. But to cast aside cartography, a discipline that was partly responsible for getting us here in the first place and is still actively improving geographic visualization, is simply wrong.

Along these same lines, Sean Gorman recently wrote “The Professional vs. the Amateur: Thoughts on the ESRI UC” about the delineations between “professionals” and “amateurs” made at the user conference. Sean thinks ESRI and other vendors define GIS professionals as those who know how to use their software rather than those with expertise in the field of study. This may be true, yet I’ve heard Jack Dangermond discuss this topic, and his main issue seems to be on the data side: people with questionable authority providing geodata to be used by others. There is risk in the map-making, for sure, but if the data sources are unreliable, the resulting visualization will be questionable regardless of the map maker’s expertise.

Simply put, good maps come from good data combined with sound cartographic and geographic analysis principles. Both are necessary; whether they come from certified professionals is a side issue.

3D Cities to Virtual Worlds

Berlin Molkenmarkt

Recently, the members of the Open Geospatial Consortium, Inc. (OGC) adopted version 1.0.0 of the OpenGIS® CityGML Encoding Standard as an official OGC standard. According to the OGC, CityGML is an open data model framework and XML-based encoding standard for the storage and exchange of virtual 3D urban models. CityGML is also an application schema of the OpenGIS Geography Markup Language 3 (GML3) Encoding Standard, an international standard for spatial data exchange and encoding approved by the OGC and ISO.

According to the CityGMLWiki, “targeted application areas explicitly include urban and landscape planning; architectural design; tourist and leisure activities; 3D cadastres; environmental simulations; mobile telecommunications; disaster management; homeland security; vehicle and pedestrian navigation; training simulators; and mobile robotics.”
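Because CityGML is XML, its models are readable with ordinary XML tooling. Below is a sketch that parses a tiny, hand-written fragment written in the spirit of CityGML 1.0. The fragment and its contents are mine, not from the standard: a real city model carries full GML geometry, and building attributes such as function are normally drawn from code lists. The namespace URIs follow the CityGML 1.0 modules as I understand them.

```python
import xml.etree.ElementTree as ET

# Hand-written illustrative fragment, not a conformant real-world model.
CITYGML = """<?xml version="1.0" encoding="UTF-8"?>
<CityModel xmlns="http://www.opengis.net/citygml/1.0"
           xmlns:bldg="http://www.opengis.net/citygml/building/1.0"
           xmlns:gml="http://www.opengis.net/gml">
  <cityObjectMember>
    <bldg:Building gml:id="BLDG_0001">
      <bldg:function>residential</bldg:function>
      <bldg:measuredHeight uom="m">11.5</bldg:measuredHeight>
    </bldg:Building>
  </cityObjectMember>
</CityModel>
"""

NS = {
    "core": "http://www.opengis.net/citygml/1.0",
    "bldg": "http://www.opengis.net/citygml/building/1.0",
    "gml": "http://www.opengis.net/gml",
}

root = ET.fromstring(CITYGML)
# Walk every Building element and pull out a few simple attributes.
for b in root.iterfind(".//bldg:Building", NS):
    bid = b.get("{http://www.opengis.net/gml}id")
    func = b.findtext("bldg:function", namespaces=NS)
    height = b.findtext("bldg:measuredHeight", namespaces=NS)
    print(bid, func, height)
```

The point is not the parser but the design: because the schema is standardized, a planning tool, a navigation system, and a simulation package can all read the same building record without private conversion agreements.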

CityGML grew out of efforts in Germany to integrate and link building information with the surrounding land. Traditionally, this integration has been weak, creating challenges for the building industry as well as for planners. And the gaps are not only technological: the entire building and GIS industries have kept each other at arm’s length for decades. The hope is that CityGML can provide the standards necessary to bridge those gaps, so that models can more accurately reflect the real-world juxtaposition of and interrelationships between buildings and land.

In my opinion, all of this leads to virtual worlds. Today, virtual worlds are primarily the domain of gamers and socializers, but they are no passing fad. According to a recent Technology Intelligence Group report, Virtual World Industry Outlook 2008-2009, “Over one billion dollars were spent by the venture community on startups directly within or supporting virtual worlds between August 2007 and August 2008, and according to virtual world vendors and developers …”

What excites me is that with the inevitable merger of real-world models and virtual-world technologies, sometimes called the Metaverse, geography and geographic information will be critical. According to the Metaverse Roadmap Overview, the Metaverse is the convergence of 1) virtually-enhanced physical reality and 2) physically persistent virtual space. It is a fusion of both, while allowing users to experience it as either.

I’ve written about The Business Relevance of Virtual Worlds. Others have discussed 3D models in the context of the GeoWeb, which is happening now and will be the precursor to geographically accurate virtual worlds. All of the big players are in this – Autodesk, Bentley, ESRI, Google, and Microsoft, as are some smaller companies such as Galdos Systems and Onuma. The Metaverse requires standards for interoperability, and CityGML is an important standard for now and the future of geographic information online.

GPS Going Into Orbit, Where 2.0 2008

Where 2.0 2008

There has been lots of news lately about GPS-enabled applications, data, and devices, some of it tied to the Where 2.0 2008 conference last week. ABI Research said that by 2012 more than 550 million GPS-enabled handsets would ship. Navteq announced updates to its North American traffic database, adding Puerto Rico and Canada as well as expanded coverage of high-volume surface roads. Meanwhile, Nokia said its Maps on Ovi service would allow customers to save map information on the Internet and then sync it to their phones.

Oh, and Google not only opened its API to geospatial data but also shook hands with ESRI around the idea of Google searches finding ESRI data and pulling it into Google Earth. Google is not the only company focusing on geographic search; FortiusOne announced the beta release of its Finder! search service. In addition, Where 2.0 hosted a dozen new companies finding ways to better address the need for geospatial information.

Also, after six months of review, navigation device maker TomTom finally got EU holy water sprinkled on its deal to acquire Tele Atlas for $4.5 billion. Trimble announced new rugged handheld devices for difficult environments and high-accuracy needs. Lastly, the U.S. Air Force awarded Lockheed Martin a contract for building the first eight GPS III satellites. GPS III is supposed to have enhanced military coverage and civilian capabilities.

There are still many people who don’t see the importance of location in business and consumer worlds. However, these announcements and events are indicative of the movement of the industry toward improved data accessibility and accuracy. What that means is that geographic data will be increasingly available as a framework for decisions of many types, existing and new.

Jack Dangermond: “This is no longer a dream. It is actually starting to work”

2008 ESRI Federal User Conference

Today in frigid Washington D.C., the 2008 ESRI Federal User Conference started. I attended and share here some observations on the opening presentation by ESRI’s President, Jack Dangermond.

This is the 20th federal user conference. ESRI officials told me that 2,500 people pre-registered, an increase of 600 over 2007. That is incredible growth for a technology conference these days.

DC Convention Center

The nice new Washington Convention Center is the venue; the rooms are, well, roomy. The food is decent … but let’s move to the good stuff.

Jack Dangermond kicked off the plenary discussing how his audience is “working on the nation’s problems.” He showed dozens of maps covering about 20 categories of applications, including humanitarian programs, emergency management, environment, energy, defense, homeland security, and facility management. The heart of his message was that …

Super Tuesday, Lousy Maps?

NY Times Democratic Primary Results

The U.S. primary election yesterday took place in 24 states on what is called Super Tuesday. Of course the day is super important for those trying to become the next president. The event begs for maps to show us what’s going on before, during, and after. Unfortunately, the popular news Web sites as a group do a rather poor job. Here’s my quick take on what’s out there.

Elections are a great time for people to learn about places and their differences. Elections bring out not only the political differences in people, but the differences in people located in different places. Only maps can adequately portray these differences. There is additional detail beyond who wins and loses that someone should map – breakdowns by gender, age, nationality, income, and other demographics. It is impossible to understand what is happening politically in this country without good maps. If only there were more of them.

The Good

The Wall Street Journal puts the map at the top of its main page, with one tab each for the Democrats and the Republicans. States are colored in shades of blue on the Democratic map and shades of red on the Republican map, with the different shades indicating the winners. On both maps, hovering over a colored state brings up a simple text label of the winner or projected winner from both parties. No numbers appear; sometimes simple is better. Nicely done, with a lot of information in a small space, presented in a way that makes visual sense and gives people what they need to know.
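The Journal's shading scheme boils down to a tiny lookup: party selects the hue family, winner selects the shade. A sketch of that logic, with hex values and a fallback color that are my own inventions rather than the Journal's actual palette:

```python
# Hypothetical shades keyed by winning candidate; the real WSJ palette
# and candidate list are not reproduced here.
DEM_SHADES = {"Clinton": "#9ecae1", "Obama": "#3182bd"}
GOP_SHADES = {"McCain": "#de2d26", "Romney": "#fc9272", "Huckabee": "#fee0d2"}

def state_color(party, winner):
    """Fill color for a state: party picks the hue, winner picks the shade."""
    palette = DEM_SHADES if party == "D" else GOP_SHADES
    return palette.get(winner, "#f0f0f0")  # light gray when no result yet

print(state_color("D", "Obama"))
```

Two small maps and one function's worth of logic, which is part of why the result reads so cleanly.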

The New York Times has a few maps hanging off its “Election Guide 2008” page. One links to results details, with separate U.S. maps for each party’s contest. These maps are large and loaded with information, both on the maps themselves and in tables beside them. Click on a state and you zoom into a state view with county-level results. The colors are pleasing and the information detailed. A second map shows primary dates, using maps and other graphics to display their distribution. Again, lots of information and great design work. A third set of maps shows campaign finances by candidate and by location, and one can view an animation showing how financial contributions changed over time. This is fascinating material not found elsewhere. The Times has by far the best maps I came across.

The Bad

CNN focused on …