I’ve been invited to participate in the W3C/OGC Joint Workshop Series For Maps On The Web AKA “let’s work on MapML”, starting with a position statement.

Just so we’re clear here: I’m a former board member of the OpenStreetMap Foundation, a charter member of OSGeo, and one of the Leaflet maintainers; yet, this statement is my own, and does not (necessarily) represent the position of the entities I consider myself affiliated with.

(Addendum 2020-07-29: The text below contains a few instances of expletives, words that a native English reader might consider impolite. They are to be interpreted within my cultural framework: we Spaniards swear when we convey strong emotion. These expletives are to be understood not as a lack of respect, but rather as strong emphasis, as written proof of the raging passion flowing through my veins when I originally wrote this text. I ask of you, the reader, to try and be empathetic with my experience and cultural framework, instead of pushing your cultural framework onto me.)

I don’t like the idea of MapML.

And the reason is not technical: it’s political. Not political in the sense of “what goes on in a city/state” (from the Greek polis). We are in the realm of the internet and of distributed struggles for governance across many fields, so it’s better to understand “political” as per Carl Schmitt’s “concept of the political”: who is your friend and who is your enemy.

For me, in this case, the question is: who will this benefit, and who will this harm?

You see, in the call for participation one can find this statement:

Adding a native map viewer for the Web platform, similar to how HTML <video> was added for video content

I do remember the times before <video> (it’s OK to call me old). It happens that I worked with MJPG and crappy video encoding back in 2002, along with ActiveX embedded video players. Yes, ActiveX. And the VLC plugin for Mozilla. And then the Flash-based JW Player came and provided a modicum of comfort. Those were the times.

You see, the user experience for video back in the day, both for the user (“I have to download what and install it where?”) and the web developer (user agent sniffing, plus an <embed> nested inside an <object> with an MJPG fallback in the form of an <img>), was quite abysmal. A <video> element back then made sense.
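For the youngsters, the pre-<video> dance looked roughly like the sketch below. This is a from-memory reconstruction, not any particular site’s actual code; the function name is mine and the ActiveX clsid is deliberately elided.

```javascript
// Hypothetical reconstruction of pre-<video> embedding: sniff the user
// agent, then emit plugin markup with an <embed> nested inside an
// <object>, keeping an MJPG <img> as the fallback of last resort.
function legacyVideoMarkup(userAgent, videoUrl, mjpgUrl) {
  var fallbackImg = '<img src="' + mjpgUrl + '" alt="video fallback">';
  if (/MSIE/.test(userAgent)) {
    // Internet Explorer wanted an ActiveX <object> with a classid
    return '<object classid="clsid:...">' +
      '<param name="src" value="' + videoUrl + '">' +
      fallbackImg +
      '</object>';
  }
  // Everybody else got the Netscape-style <embed> (e.g. the VLC
  // plugin), nested inside an <object> for good measure
  return '<object data="' + videoUrl + '">' +
    '<embed src="' + videoUrl + '" type="video/x-msvideo">' +
    fallbackImg +
    '</object>';
}
```

And that snippet is the polite version: real pages would `document.write()` one branch or the other at load time.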

So let’s fast-forward ten years, and see what <video> has really brought us. For me, two things spring to mind:

  • Chunked video, so a video player will perform an HTTP(S) request to load a few seconds of video through incomprehensible URLs and opaque proprietary APIs, then another, then another. FSM bless youtube-dl (and its mpv integration).

    We’ll get told that the reason is adaptive streaming, but that’s bullshit, since the same lobby could have pushed to leverage the <source> element for that very same purpose.

  • Fucking DRM (in the form of EME). It was a shitstorm for the W3C, and with good reason.

    Remember: if a system which claims to be about digital rights is unable to enforce the share-alike provisions of GPL, LGPL or CC-BY-SA works, then it’s not about “rights”.

So we web devs were promised fewer headaches when dealing with video, but the end result is that a few media giants got all of the cake (think YouTube, Twitch, Netflix). The decisions that led to the current status of the <video> and EME specs have deeply political implications. It was a fucking textbook example of tweaking the standards process to embrace, extend, and extinguish.


So here we are today, discussing standards for maps with the excuse that maps on the web are too difficult today, and that they’re not interoperable.

Let me say this out loud and clear: interoperability is not a technical issue, it’s a political issue. Exhibit A, Terms and Conditions for Google Maps API:

3.2.3 Restrictions Against Misusing the Services.

[…]

(e) No Use With Non-Google Maps. To avoid quality issues and/or brand confusion, Customer will not use the Google Maps Core Services with or near a non-Google Map in a Customer Application. For example, Customer will not (i) display or use Places content on a non-Google map, (ii) display Street View imagery and non-Google maps on the same screen, or (iii) link a Google Map to non-Google Maps content or a non-Google map.

Exhibit B, Esri’s Click-Through Master Agreement (the one that applies to its basemap tiles AFAIK):

3.0 DATA

[…]

3.2 Permitted Uses.

a. Unless otherwise authorized in writing, Customer may only use Data with the Products for which Esri has provided the Data.

[…]

c. Customer may take ArcGIS Online basemaps offline through Esri Content Packages and subsequently deliver (transfer) them to any device for use with licensed ArcGIS Runtime applications and ArcGIS Desktop. Customer may not otherwise scrape, download, or store Data.

I could go on and quote ToU from Apple, TomTom, HERE or what-have-you, but I think readers get the point by now.

So: what is the point of having a standard/interoperable way of loading geographical data in a web browser, if the major players have shown themselves unwilling to interoperate?

I have a firm belief that the Internet should benefit people, not corporations. To that end, I fully endorse the IAB’s «The Internet is for End Users». And all the scenarios I run in my head related to the MapML proposal end up with an embrace-extend-extinguish situation benefitting Google, Apple, Esri et al.

This is how my head thinks: any piece of in-browser technology that can benefit web map viewers is gonna be leveraged by the Google Maps API, Apple’s MapKit JS, Esri’s JavaScript API et al, while the freelance maintainers of Leaflet and OpenLayers struggle to keep up. And in the meantime, ab-so-fuck-ing-lu-te-ly nobody will type <mapml> in a text editor (remember the transition from Flash video players to <video>); and after a few years ab-so-fuck-ing-lu-te-ly nobody will be using <mapml>, because they will all be using one of the aforementioned high-level libraries, which will have managed to become mutually incompatible.

And to think that the decisions leading to this are not political is to think that you are already friends with the ones benefitting from the outcome of those decisions.

Convince me otherwise. Please. I dare you.


So, assuming that the risk of an E-E-E scenario is real, and assuming that we want to prevent the industry giants from getting a bigger piece of the cake, what can we do about it? I propose the following radical approach (“radical” from the Latin “radix”, meaning “root”, as in “targeting the root of the problem”):

  1. Acknowledge that value to the end user shall be above private profit
    • This should be, like, basic ethical behaviour in computer science these days
    • Am I suggesting that implementing parties should release an ethical/political statement? Heck yes I am. This is why the proposal is radical.
  2. In order for an implementation to be compliant, it must be developed under strong copyleft (i.e. GPL share-alike)
    • This would negate incentives for an E-E-E scenario
    • Any non-standard extensions would have a readily available reference implementation, prompting a review of the standard
    • Am I suggesting that the adherence to the standard would depend on licensing conditions of the implementation? Heck yes I am.
  3. Any implementation by a party shall be feature-complete when exclusively using data from a different implementing party
    • This negates (most?) incentives to become an only provider of a specific kind of data
    • Am I suggesting that time-to-market shall be capped by other parties? Heck yes I am. User choice and implementation consistency are more important than your time-to-market.
    • Am I asking Google to develop Chromium’s code in such a way that Google Maps would work no better than Esri’s or Apple’s or OSM’s? Heck yes I am.

Please note that this proposal would not prevent any party from making an implementation: it just asks for some conditions. This has similarities to how some standards bodies (e.g. OGC) force parties involved in the standardization process to waive some patent rights; only, this proposal goes way (waaaaaaaaaaaay) further.

(Addendum 2020-07-29) This would also not prevent a party from creating an implementation that does not adhere to this: it would just be a non-compliant, non-certifiable implementation.


And now, let me briefly comment on most of the topics listed for the joint workshop. This is not part of my position statement, but rather a brain dump on all the topics about which I have a strong opinion:

Adding a native map viewer for the Web platform, similar to how HTML <video> was added for video content; including

  • new scripting APIs for Web authors to enhance map viewers

Yay, let’s hint at an embrace-extend strategy!

  • CSS integration for styling maps and map features

Nope. A map feature is represented by zero or more symbols. Doable, but it’d be a pseudo-elements mess, not to mention the selectors. See some parts of the Mapbox GL style spec and despair.

  • other integration / relationship of maps with existing browser APIs (e.g., geolocation API, geo/map URL protocols, image maps, SVG and canvas graphics)

Oh, the list is longer than my arm. I’ve worked with mutation observers, fullscreen, IndexedDB and service workers for offline caches, the sensor APIs (pan by tilt), the pointer events nightmare, and, of course, the mind-boggling thing that WebGL is. And I should look at resize observers some day. Making sure that all these standards play nice with each other is fun, I tell you.

Standardizing how a browser-based map viewer fetches data from map services and how that data should be formatted; including

  • benefits and limitations of existing map data sources for Web use

This is political, not technical.

  • working with offline cached map data

Service workers. No need to do anything here.
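(A minimal sketch of what I mean, assuming a cache-first strategy. The cache name and the z/x/y tile URL pattern are assumptions of mine, since every provider uses its own URL scheme:)

```javascript
// Minimal sketch of offline map-tile caching with a service worker.
// TILE_CACHE and the z/x/y URL pattern below are illustrative
// assumptions, not any provider's actual scheme.
const TILE_CACHE = 'map-tiles-v1';

// Pure helper: does this URL look like a slippy-map tile request?
function isTileRequest(url) {
  return /\/\d+\/\d+\/\d+\.(png|jpg|pbf)(\?.*)?$/.test(url);
}

// Cache-first: serve tiles from the cache when present, otherwise
// fetch them and stash a copy for later offline use.
if (typeof self !== 'undefined' && self.addEventListener) {
  self.addEventListener('fetch', (event) => {
    if (!isTileRequest(event.request.url)) return;
    event.respondWith(
      caches.open(TILE_CACHE).then((cache) =>
        cache.match(event.request).then(
          (hit) =>
            hit ||
            fetch(event.request).then((response) => {
              // Only cache successful responses
              if (response.ok) cache.put(event.request, response.clone());
              return response;
            })
        )
      )
    );
  });
}
```

Whether a tile provider’s terms of use let you cache like this is, again, a political question, not a technical one.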

  • integrating geographic data and (hyper)text annotations or other semantic information about spatial things
  • supporting discovery of Web-based geospatial resources by crawlers, indexes, and search engines

Force implementors to make each geographical feature have its own URL, just as OSM does (every node, way and relation gets a URL of the form openstreetmap.org/node/<id>). One cannot have semantics or discovery if one cannot identify and point to features.

  • working with map data in different projections or coordinate reference systems

Send cash to the PROJ maintainers. That should give you an idea of the workload involved in dealing with projections (also, I’m not a fan of embedding PROJ in every web browser).

  • federating map services with links between providers

They won’t wanna. They won’t letcha.

Creating accessible Web map experiences that adapt to the different ways people interact with the Web; including

  • best-practice interaction patterns for manipulating Web (and other interactive/slippy) maps using different input methods (mouse, touch, keyboard, etc.)

A <slippy> or overflow: slippy spec entices me, but then I go back to the whole embrace-extend stuff. If it were map-agnostic I could get behind it (but only if the medical imaging folks got involved).

Limiting privacy and security impacts of a more geo-enhanced Web; including

  • identifying both obvious and indirect ways malicious actors could misuse Web maps to expose personal data or fingerprintable patterns

Google is a malicious actor in this regard. Convince me otherwise.

  • creating options to support user-friendly functionality while limiting exposure of personal location and geographic data

Explicitly allow web authors to sync a portion of the basemaps to their own web servers. Of course this is a P2P implementation nightmare, and basemap providers won’t wanna because of their SaaS revenue streams.

  • validation and verification of map data sources, and avoiding misinformation

Nothing to do with web maps. Misinformation can be subjective (contested territories, OSM’s “results-oriented vandalism” concept). Purely political.

  • cross-origin security risks when integrating map data sources (some of which may be personalized, or contain confidential business information)

Nothing to do here. CORS already takes care of everything.
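(To illustrate: all it takes for a tile server to be consumable cross-origin is the bog-standard CORS response header. A hypothetical tile response might look like this; the cache lifetime is made up.)

```http
HTTP/1.1 200 OK
Content-Type: image/png
Access-Control-Allow-Origin: *
Cache-Control: public, max-age=86400
```

The providers that don’t send it aren’t missing a standard; they’re making a choice.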


Whoa. That was quite the rant.