
Evacuation

Wikimedia Commons is a great project. Over 21 million freely licensed files speak a clear language. But, like all projects of such a magnitude, there are some issues that somewhat dampen the joy. The major source of conflict is the duality of Commons: on one hand, it is a stand-alone repository of free files; on the other hand, it is the central repository for files used on many other projects of the Wikimedia family, first and foremost Wikipedia. Some Wikipedias, the Spanish one for example, have completely deactivated local file storage and rely completely on Commons; others prefer Commons, but keep local file storage around for special cases, like the “fair use” files on English Wikipedia, and “Freedom of Panorama” images on German Wikipedia.

The ExCommons interface on the deletion page.

Commons admins (myself among them, albeit more in a technical capacity) want to keep Commons “clean”, and remove non-free images. While this is only proper, it can lead to issues with the projects depending on Commons, when files that are used e.g. on Wikipedia get deleted on Commons. The issue becomes aggravated if the deletion appears to be “overzealous”; some admins interpret “only free images” as “there must not be a shadow of a doubt”. When files that are very likely to be free, and will (in all likelihood) never see a takedown notice from a third party, are deleted because of a vague feeling of unease, and thus vanish from a dozen Wikipedias without warning, it is bound to raise the ire of someone. Yet, Commons admins do have a point when they want to keep their project in excellent legal shape, no matter the consequences.

One of my most popular tools is the CommonsHelper, which helps to transfer suitable images from other projects to Commons. Today, I try to at least reduce the impact of trigger-happy file deletions on Commons by throwing some tech at the admins there: I present ExCommons. It is a small JavaScript tool that reverses the direction of CommonsHelper: it can transfer files from Commons to other Wikimedia projects, using OAuth.

This is what an "evacuated" file looks like.

This is what an “evacuated” file looks like.

It presents a list of projects that use the file; the list loads automatically on the special page for deletion (Commons admins can try it here; but don’t actually delete the image!).

  • Sites that use the file in one or more articles are marked in bold; other sites use the file in other namespaces, where a loss might not be as critical
  • Sites that are known to have local upload disabled are shown but unavailable
  • Sites that are known to allow a wider range of files and use the file in an article are automatically checked; others can be checked manually

If you have authorized OAuth file upload, you can click the “Copy file to selected wikis” button, and the tool will attempt to copy the file there. The upload will be done under your user name. Any Commons deletion template will be removed to avoid confusion, and a {{From Commons}} template will be added. I created that template for the English Wikipedia; it states that the file was deleted on Commons and might not be suitable on en.wp either, includes a No-Commons template to prevent re-uploading to Commons, adds a tracking category, etc.
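
In case you are curious about the mechanics: conceptually, the wikitext rewrite during the copy boils down to a few lines. This is only a sketch for illustration, not the actual ExCommons code; the deletion-template pattern below is an assumption and far simpler than what the tool really has to handle.

// Sketch only; NOT the actual ExCommons code. The template names matched here are illustrative.
function prepareEvacuatedWikitext ( wikitext ) {
  // Strip (simplistically) common Commons deletion templates to avoid confusion on the target wiki
  wikitext = wikitext.replace ( /\{\{\s*(delete|deletion request|copyvio)[^{}]*\}\}\s*/gi , '' ) ;
  // Prepend the tracking template described above
  return '{{From Commons}}\n' + wikitext ;
}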

For this tool to be permanently enabled, add the line

importScript('MediaWiki:ExCommons.js') ;

to your common.js page.

This tool should allow Commons admins to quickly and painlessly “rescue” files to Wikipedias that use them, prior to their deletion on Commons. It is said that social problems cannot be fixed by technology; but sometimes they can be alleviated.

The Games Continue

Two weeks after releasing the first version of The Wikidata Game, I feel a quick look at the progress is in order.

First, thank you, everyone, for trying, playing, and giving feedback! The response has been overwhelming; sometimes quite literally so, thus I ask your forgiveness if I can’t quite keep up with the many suggestions coming in through the issue tracker, email, twitter, and various Wikidata talk pages. Also, my apologies to the Wikidata admins, especially those patrolling RfD, for the flood of deletion requests that the merge sub-game can cause at times.

Now, some numbers. There are now six sub-games you can play, and play you do. At the time of writing, 643 players have made an astonishing 352,710 decisions through the game, many of which improve Wikidata directly, or at least keep other players from having to make the same decision over again.

Let’s look at a single game as an example. The merge game has a total of ~200K candidate item pairs, selected by identical labels; one of these pairs, selected at random, is presented to the user to decide if the items describe the same “object” (and should thus be merged), or if they just happen to have the same name, and should not be shown in the game again. ~20% of item pairs have such a decision so far, which comes to ~3,000 item pairs per day in this game alone. At that speed, all candidates could be checked in two months’ time; realistically, a “core” of pairs having only articles in smaller languages is likely to linger much longer.

Of the item pairs with decisions, ~30% were judged to be identical (and thus merged), while ~31% were found to be different. But wait, what about the other 39%? Well, there are automatic cleanup operations going on while you play! ~26% of item pairs, when loaded for presentation to the user, were found to contain at least one deleted item; most likely, someone merged them “by hand” (these probably include a few thousand species items that I accidentally created earlier…). ~6% contained at least one item that was marked as a disambiguation page since the candidate list was created. And ~9% were automatically discarded because one of the items had a link to the other, which implies a relation and, therefore, that the items are not identical.

As with the other sub-games, new candidates are automatically added every day. At the same time, users and automated filters resolve the candidate status. So far, resolving happens much quicker than addition of new candidates, which means there is light at the end of the tunnel.

Gender property assignments over time.

Merging is a complex and slow decision. Some “quicker” games look even better in terms of numbers: The “gender” game, assigning a male or female tag to person items, has completed 42% of its ~390K candidates, a rate of almost 12K per day. The “sex ratio” is ~80% male to ~18% female (plus ~2% items already tagged or deleted on Wikidata). This is slightly “better” for women than the Wikidata average (85% vs. 15%), maybe because it does “solve” rare and ambiguous names as well, which are usually not tagged by bots, or because it has no selection bias when presenting candidates.

The disambiguation game is already running out of candidates (at 82% of ~23K candidates). Even the “birth/death date” game, barely a day old, already has over 10K decisions made (with over 84% resulting in the addition of one or two dates to Wikidata).

In closing, I want to thank everyone involved again, and encourage you to keep playing, or help this effort in other ways: by helping out on Wikidata RfD, by fixing items flagged as potentially problematic, by submitting code patches, or even by becoming a co-maintainer for The Game.

The Game Is On

Game main page.

Gamification. One of those horrible buzzwords that are thrown around by everyone these days, between “cloud computing” and “the internet of things” (as opposed to the internet of people fitted with ethernet jacks, or those who get a good WiFi signal on their tooth fillings). Sure enough, gamification in the Wiki-verse has not been met with resounding success yet, unless you consider Wikipedia itself a MMORPG, or count the various vandal-fighting tools. Now, editing Wikipedia is a complex issue, which doesn’t really lend itself to game-like activity. But there’s a (not quite so) new player in town: Wikidata. I saw an opening, and I went for it. Without further ado, I give you Wikidata – The Game (for desktop and mobile)! (Note for the adventurous: Tools Labs is slightly shaky at the moment of writing this, if the page doesn’t load, try again in an hour…)

So what’s the approach here? I feel the crucial issue for gamification is breaking complicated processes down into simple actions, which themselves are just manifest decisions – “A”, “B”, or “I don’t want to decide this now!”. I believe the third option to be of essential importance; it is, unfortunately, mostly absent from Real Life™, and the last thing people want in a game is feeling pressured into making a decision. My Wikidata game acts as a framework of sub-games, all of which use that three-option approach. The framework takes care of things like the landing page, high scores, communication, etc., so the individual game modules can focus on the essentials (a rough sketch of such a module follows the list below). For this initial release, I have:

The "merge items" game.

The “merge items” game.

  • Merge items shows you two items that have the same label or alias. Are they the same topic, and should thus be merged? One button will merge them on Wikidata (and leave a deletion request), the other will mark the pair as “not the same” within the game, not showing this specific combination again.
  • Person shows you an item that has no “instance of” property, but might be a person based on its label (the first word of the label is also the first word in another item, which is a person). One button sets “instance of:person” on Wikidata, the other prevents it from being offered in this game again.
  • Gender shows you an item that is a person, but has no gender property set. Set the property on Wikidata to “male” or “female”, or skip this item (like you can do with the other games – skipped items will show up again eventually).
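
To make the division of labour concrete, here is a rough sketch of what such a sub-game module could look like. The function and object names are hypothetical, chosen for illustration only – this is not the actual game code.

// Hypothetical sketch of the three-option pattern; not the actual game code.
// A sub-game only supplies a candidate source and a way to apply a decision;
// the framework owns the buttons, the high scores, and the always-available "skip".
function ThreeOptionGame ( name , getNextCandidate , applyDecision ) {
  this.name = name ;
  this.current = getNextCandidate () ;
  this.decide = function ( choice ) { // choice is 'a', 'b', or 'skip'
    if ( choice != 'skip' ) applyDecision ( this.current , choice ) ; // OAuth edit on Wikidata
    this.current = getNextCandidate () ; // skipped items will turn up again eventually
  } ;
}

// Example: a gender sub-game maps 'a'/'b' to male/female (property P21 on Wikidata)
var gender = new ThreeOptionGame ( 'Gender' ,
  function () { return { q : 'Q42' } ; } , // placeholder candidate source
  function ( item , choice ) {
    var value = ( choice == 'a' ) ? 'Q6581097' /* male */ : 'Q6581072' /* female */ ;
    console.log ( 'Would add P21=' + value + ' to ' + item.q ) ;
  }
) ;
gender.decide ( 'a' ) ;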

There is also an option to randomly pick one game each time you press a button in the previous one – slightly more “challenging” than the single-game mode, which one can play at quite high speed. Of course, this simplification misses a lot of “fine-tuning” – what if you are asked to decide the gender of an item that has been accidentally tagged as “person”? What if the gender of this person is something other than “male” or “female”? Handling all these special cases would, of course, be possible – but it would destroy the simplicity of the three-button interface. The games always leave you a “way out” – when in doubt, skip the decision. Someone else will take care of it, eventually, probably on Wikidata proper.

Another point worth mentioning is the speed of the game. I took some measures to ensure the user never, ever, has to wait for the game. First, all the potential decisions are made server-side, and written into a database; for example, there are ~290K people waiting for “gender assignment”, and candidates are updated once a day. Upon loading the game website, a single candidate entry from each game is loaded in the background, so one will be ready for you instantaneously, no matter which game you choose. Upon opening a specific game, the cache is loaded with four more candidates, and kept at that level; at no point will you have to wait for a new page to appear once you have made a decision on the current one (I actually had to add a brief fade-out-fade-in sequence, so that the user can notice that a new page has been loaded – it’s that fast). Actions (merging items, requesting deletions, adding statements, remembering not to show items again) are done in the background as well, so no waiting for that either.
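
The caching does not need much machinery. As a rough sketch (the endpoint and field names are made up for illustration; jQuery assumed, as elsewhere in the tool):

// Rough sketch of the client-side candidate cache; endpoint and field names are illustrative.
var CACHE_SIZE = 4 ;
var candidate_cache = [] ;
var pending = 0 ;

// Keep the cache topped up in the background, so a candidate is always ready.
function topUpCache ( game ) {
  while ( candidate_cache.length + pending < CACHE_SIZE ) {
    pending++ ;
    $.getJSON ( '/api/random_candidate' , { game : game } , function ( candidate ) { // hypothetical endpoint
      pending-- ;
      candidate_cache.push ( candidate ) ;
    } ) ;
  }
}

// Called when the user presses a button: show the next cached candidate immediately,
// while the actual edit (merge, statement, "never show again") runs asynchronously.
function nextCandidate ( game ) {
  var next = candidate_cache.shift () ; // no waiting; the cache was filled ahead of time
  topUpCache ( game ) ;
  return next ;
}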

What else is there to say? The tool requires the user to allow OAuth edits, for both high-score keeping and accountability for the edits through the game. The game interface is English-only at the moment, but at least the main page has been designed with i18n in mind. The games are designed to work on desktop and mobile alike; passing time on the bus has never been that world-knowledge-improving! As a small additional incentive, there are high-score lists per game, and the overall progress players have made in improving Wikidata. Finally, the code for the individual games is quite small; ~50 lines of code for the Person game, plus the updating code to find more candidates, run daily.

Finally, I hope some of you will enjoy playing Wikidata – The Game, and maybe some of you would like to work with me, either as programmers to share the tool (maybe even the good folks of WMF?), or with ideas for new games. I already have a few of those; I’m thinking images…

Is it a bot? Is it a user?

So I was recently blocked on Wikidata. That was one day after I passed a quarter million edits there. These two events are related, in an odd way. I have been using one of my tools to perform some rudimentary mass-adding of information; specifically, the tool was adding “instance of:human” to all Wikidata items in the English Wikipedia category “living people”. I had been running this for over a day before (there were ~50K items missing this basic fact!), but eventually, someone got annoyed with me for flooding Recent Changes, and I was blocked when I didn’t reply on-Wiki quickly enough.

I’ve since been un-blocked, and I throttled the tool, now waiting 10 seconds between every (!) edit. No harm done, but I believe it is an early sign of a larger controversy: Was I running an “unauthorized bot”, as the message on my talk page was titled? I don’t think I was. Let me explain.

Bots have been with Wikipedia and other Wikimedia projects almost since the beginning. Almost as old are complaints about bots, the best known probably Rambot‘s addition of >30K auto-generated stubs on English Wikipedia. Besides making every second article on Wikipedia about a town in the U.S. no one ever heard of (causing exceptional dull-age in the “random page” function), it also flooded Recent Changes, which eventually led to bot policies and the bot flag, hiding bot edits from the default Recent Changes view. These days, bots make up a large share of Wikipedia editing; I seem to remember that most Wikipedia edits are actually done by bots, ranging from talk page archiving to vandal fighting.

So how was my mass-adding of information different from Rambot’s? Rambot was written to perform a very specific purpose: construct plain-text descriptions of towns from a dataset, then add these to Wikipedia. It was run once, for that specific purpose, by its creator. Other bots, like those that automatically revert certain types of vandalism, run without any supervision at the time (which is the whole point, in that case).

Herein lies the separation: yes, I did write the tool, and I did operate it, but those are two different roles. That is, anyone with a Wikidata user name can use that tool, under his own user name, via OAuth. Also, while the tool does perform an algorithmically defined function, it is not really constrained to a purpose, as a “classic” bot would be. That alone would most likely disqualify it from getting a “bot permission” on Wikidata (unless the mood has really changed for the better there since the last time I tried). Certainly, there are overlaps between what a bot does and what my tool does; that does not justify putting the “bot” label on it, just because it’s the only label you’ve got.

To be sure, no one (as far as I know) disputed that the edits were actually correct (unlike Rambot, which added a few thousand “broken” articles initially). And the fact that ~50K Wikidata items about living people were not even “tagged” as being about people surely highlights the necessity for such edits. Certainly, no one would object to me getting a list of items that need the “instance of:human” statement, and adding them manually. All the tool does is make such editing easier and faster for me.

Now, there is the issue of me “flooding” the Recent Changes page. I do agree that this is an issue (which is why I’m throttling the tool at the moment). I have filed a bug report to address it, so I can remove the throttling again eventually, and users, bots, and OAuth-based tools can live in harmony again.

Post scriptum

I am running a lot of tools on Labs. As with most software, the majority of feedback I get for those tools falls into one of two categories: bug reports and feature requests, the latter often in the form “can the tool get input from/filter on/output to…”. In many cases, that is quick to implement; others are more tricky. Besides increasing the complexity of tools, and filling up the interface with rarely-used buttons and input fields, the combinations (“…as you did in that other tool…”) would eventually exceed my coding bandwidth. And with “eventually”, I mean some time ago.

Wouldn’t it be better if users could “connect” tools on their own? Take the output of tool X and use it as the input of tool Y? About two years ago, I tried to let users pipeline some tools on their own; the uptake, however, was rather underwhelming, which might have been due to the early stage of this “meta-tool”, and its somewhat limited flexibility.

A script and its output.

So today, I present a new approach to the issue: scripting! Using toolscript, users can now take results from other tools such as category intersection and Wikidata Query, filter and combine them, display the outcome, or even use tools like WiDaR to perform on-wiki actions. Many of these actions come “packaged” with this new tool, and the user has almost unlimited flexibility in operating on the data. This flexibility, however, is bought with the scary word “programming” (for which “scripting” is the euphemism). In essence, the tool runs JavaScript code that the user types or pastes into a text box.

Still here? Good! Because, first, there are some examples you can copy, run, and play with; if people can learn MediaWiki markup this way, JavaScript should pose little challenge. Second, I am working on a built-in script storage, which should add many more example scripts, ready to run (in the meantime, I recommend a wiki or pastebin). Third, all built-in functions use synchronous data access (no callbacks!), which makes JavaScript a lot more … scriptable, as in “logical linear flow”.

The basic approach is to generate one or more page lists (on a single Wikimedia project), and then operate on those. One can merge lists, filter them, “flip” from Wikipedia to associated Wikidata items and back, etc. Consider this script, which I wrote for my dutiful beta tester Gerard:

// Start with an empty item list on Wikidata; results will be collected here
all_items = ts.getNewList('','wikidata');
// The "died in 2014" category on Italian Wikipedia
cat = ts.getNewList('it','wikipedia').addPage('Category:Morti nel 2014') ;
// Its Wikidata item, with the full item data (including sitelinks) loaded
cat_item = cat.getWikidataItems().loadWikidataInfo();
$.each ( cat_item.pages[0].wd.sitelinks , function ( site , sitelink ) {
  var s = ts.getNewList(site).addPage(sitelink.title);
  if ( s.pages[0].page_namespace != 14 ) return ; // skip sitelinks that are not categories
  // All pages in that category tree, on that site
  var tree = ts.categorytree({language:s.language,project:s.project,root:s.pages[0].page_title,redirects:'none'}) ;
  // Their Wikidata items, minus those that already have a death date (P570)
  var items = tree.getWikidataItems().hasProperty("P570",false);
  all_items = all_items.join(items);
} )
all_items.show();

This short script will display a list of all Wikidata items that are in a “died 2014” category tree on any Wikipedia and that do not have a death date yet. The steps are as follows:

  • Takes the “Category:Morti nel 2014” from it.wikipedia
  • Finds the associated Wikidata item
  • Gets the item data for that item
  • For all of the site links into different projects on this item:
    • Checks if the link is a category
    • Gets the pages in the category tree for that category, on that site
    • Gets the associated Wikidata items for those pages
    • Removes those items that already have a death date
    • Adds the ones without a death date to a “collection list”
  • Finally, displays that list of Wikidata items with missing death dates

Thus, with a handful of straightforward functions (like “get Wikidata items for these pages”), one can ask complex questions of Wikimedia sites. A slight modification could, for example, create Wikidata items for the pages in these categories. All functions are documented in the tool. Many more can be added on request; and, as with adding Wikidata labels, a single added function can enable many more use-cases.
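
As a toy example of such a modification – using only functions the script above already demonstrates, and assuming hasProperty() calls can be chained like the other list operations – one could restrict the idea to the Italian tree alone and list items that lack both a death date (P570) and a birth date (P569):

// Variation on the script above; assumes hasProperty() calls can be chained.
var tree = ts.categorytree ( { language:'it' , project:'wikipedia' , root:'Morti nel 2014' , redirects:'none' } ) ;
var items = tree.getWikidataItems().hasProperty("P570",false).hasProperty("P569",false) ;
items.show() ;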

I hope that this tool can become a hub for users who want more than the “simple” tools, to answer complex questions, or automate tedious actions.

The OAuth haters’ FAQ

Shortly after the dawn of the toolserver, the necessity for authentication in some tools became apparent. We couldn’t just let anyone use tools to upload files to Commons, doing so anonymously, hiding behind the tools’ user account; that would be like allowing anyone to edit Wikipedia anonymously. Crazy talk! But toolserver policies forbade tools asking for users’ Wikipedia/Commons passwords. So, different tool authors came up with different solutions; I created TUSC to have a single authentication mechanism across my various tools.

As some of you may have noticed, some of my new tools that require authentication are using OAuth instead of TUSC. Not only that, but I have been busy porting some of my long-standing tools, like flickr2commons and now commonshelper, to OAuth. This has been met with … unease by various parties. I hope that this post can alleviate some concerns, or at least answer some questions.

Q: Why did you switch from TUSC to OAuth?
A: TUSC was a crutch, and always has been. Not only is OAuth a standard technology, it is now also the “official” way to use tools that require user rights on Wikimedia sites.

Q: So I’m not uploading as your bot, but as myself?
A: Yes! You took the time and interest to run the tool; that effort should be rewarded, by having the upload associated with your user account. It will be so much easier to see who uploaded a file, to assign glory and blame alike. Also, the army of people who have been haunting me for re-use rights and image sources, just because my name was linked to the uploading bot account, will now come after YOU! Progress!!

Q: OK, maybe for new tools. But the old tools were working fine!
A: No, they were not. Years ago, when the WMF changed the API login procedure, I switched my tools to use the Peachy toolkit, so I would not have to do the “fiddly bits” myself for every tool. However, a few months ago, something changed again, causing the Peachy uploads to fail. It turned out that Peachy was no longer developed, so I had to go back and hack my own upload code. Something was wrong with that as well, leading to a flurry of bug reports across my Commons-uploading tools. The subsequent switch to OAuth uploads wasn’t exactly smooth for me either, but now that it’s running, it should work nicely for a while. Yeah, right.

Q: But now all the tools are using JavaScript to upload. I hate JavaScript!
A: That pesky JavaScript is all over the web these days. So you probably have installed NoScript in your browser (if you don’t, do!). You can easily set an exception for the tools server (or specific tools), so you’re safe from the evil JavaScript out there, while the tools “just work”.

Q: But now it won’t work in my text browser!
A: Text browser? In 2014? Really? You seem to be an old-school 1337 hax0r to be using that. All tools on tools.wmflabs.org are multi-maintainer; I’ll be happy to add you as a co-maintainer, so you can add a text browser mode to any tool you like. You don’t have time, you say? Fancy that. In the meantime, you could use elinks, which supports JavaScript.

Q: But you changed my favourite tool! I don’t like change.
A: Sorry, Sheldon. The only constant is change.

In closing, the good old bot has clocked over 1.2 million uploads on Commons, or ~6% of all files there. I think it deserves some well-earned rest.

Qualified answers

Wikidata Query (WDQ), one of my pet projects I have blogged about before, has gained a new functionality: search by qualifier. That is, a query for items with specific statements can now be fine-tuned by querying the qualifiers as well.

Too abstract? Let’s see: Who won the Royal Medal in 1853? Turns out it was Charles Darwin. Or how about this: Who received any award, medal, etc. in 1835? According to Wikidata, it’s just one other person besides Darwin; Patrice de Mac-Mahon was awarded a rank in the French Legion of Honour.

This might seem like a minor gimmick, but I think it’s huge. It is, to the best of my knowledge, the only way to query Wikidata on qualifiers; and with qualifiers becoming more important (groups of properties being replaced by simpler, “qualified” ones, for example in taxonomy) and more numerous, such queries will become more important to the “ecosystem” around Wikidata.

With the main point delivered, here are some points for those interested in using this system, or the development of WDQ:

  • Currently, you can only use qualifier queries in combination with the CLAIM command, though I’ll add it to string, time, location, and quantity commands next.
  • A subquery in {curly brackets} directly after the CLAIM command is used on the qualifiers of the statements that match the “main” command
  • Within the qualifier subquery, you can use all other commands, ranges, trees etc.
  • Adding qualifiers to the in-memory database required some major retrofitting of WDQ. It started with using “proper” statement representations (C++ template classes, anyone?), followed by creating a qualifier data type (to re-use all the existing commands within the subquery, each qualifier set is basically its own Wikidata!), extending the dump parsing, binary storage and loading, and a zillion other small changes.
  • While I believe the code quality has improved significantly, some optimization remains to be done. For example, memory requirements have skyrocketed from <1GB to almost 4GB. The current machine has 16GB RAM, so that should hold for a while, but I believe I can do better.
  • There is an old memory leak, which appears to have become worse with the increased RAM usage. WDQ will restart automatically if the process dies, but queries will be delayed for a minute or so every time that happens.
  • I have prepared the code to use ranks, but neither parsing nor querying are implemented yet.
  • I need to update the API documentation to reflect the qualifier queries. Also, I need to write a better query builder. In due time.

Points of view

For many years, Henrik has single-handedly (based on data by Domas) done what one of the world’s top 5 websites has consistently failed to provide: page view information, per page, per month/day. Requested many times, repeatedly promised, page view data has remained the proverbial vaporware, the Duke Nukem Forever of the Wikimedia Foundation (except DNF was delivered in 2011). A quick search of my mail found a request for this data from 2007, but I would be surprised if that is the oldest instance of such a query.

Now, it would be ridiculous to assume the Foundation does not actually have the data; indeed they do, and they are provided as unwieldy files for download. So what’s all the complaining about? First, the download data cannot be queried in any reasonable fashion; if I want to know how often Sochi was viewed in January 2014, I will have to parse an entire file. Just kidding; it’s not one file. Actually, it’s one file for every single hour. With the page titles URL-encoded as requested, that is, not normalized; a single page can have dozens of different “keys”, have fun finding them all!

But I can get that information from Henrik’s fantastic page, right? Right. Unless I want to query a lot of pages. Which I have to request one by one. Henrik has done a great job, and single queries are fast, but it adds up. Especially if you do it for thousands of pages. And try to be interactive about it. (My attempt to run queries in parallel ended with Henrik temporarily blocking my tools for DDOSing his server. And rightly so. Sorry about that.)

GLAMs (Galleries, Libraries, Archives, and Museums) are important partners of the Wikimedia Foundation in the realm of free content, and increasingly so. Last month, the Wellcome Trust released 100,000 images under the CC-BY license. Wikimedia UK is working on a transfer of these images to Wikimedia Commons. Like other GLAMs, the Wellcome Trust would surely like to know if and how these images are used in the Wikiverse, and how many people are seeing them. I try to provide a tool for that information, but, using Henrik’s server, it runs for several days to collect data for a single month, for some of the GLAM projects we have. And, having to hit a remote server with hundreds of thousands of queries via http each month, things sometimes go wrong, and then people write me asking why their view count is waaaay down this month, and I’ll go and fix it. Currently, I am fixing data from last November. By re-running that particular subset and crossing my fingers it will run smoothly this time.

Like others, I have tried to get the Foundation to provide the page view data in a more accessible and local (as in toolserver/Labs) way. Like others, I failed. The last iteration was a video meeting with the Analytics team (newly restarted, as the previous Analytics team didn’t really work out for a reason; I didn’t inquire too deeply), which ended with a promise to get this done Real Soon Now™, and the generous offer to use the page view data from their hadoop cluster. Except the cluster turned out to be empty; I then was encouraged to import the view data myself. (No, this is not a joke. I have the emails to prove it.) As much as I enjoy working with and around the Wikiverse, I have neither the time, the bandwidth, nor the inclination to do your paid jobs for you, thank you very much.

As the sophisticated reader might have picked up at this point, the entire topic is rather frustrating for myself and others, and being able to offer only a patchy, error-prone data set to GLAMs who have released hundreds of thousands of files under a free license into Commons is, quite frankly, disgraceful. The requirement for the Foundation is not unreasonable; providing what Henrik has been doing for years on his own would be quite sufficient. Not even that is required; myself and others have volunteered to write interfaces if the back-end data is provided in a usable form.

Of the tools I try to provide in the GLAM realm, some don’t really work at the moment due to the constraints described above; some work so-so, kept running with a significant amount of manual fixing. Adding 100,000 Wellcome Trust images may be enough for them to come to a grinding halt. And when all the institutions who so graciously have contributed free content to the Wikiverse come a-running, I will make it perfectly clear that there is only the Foundation to blame.

Modest doubt is called the BEACON of the wise

Now that I got Shakespeare out of the way, I picked up chatter (as they say in the post-Snowden world) about the BEACON format again, which seemed to have fallen quiet for a while. In short, BEACON is a simple text format linking two items in different catalogs, usually web sites. There are many examples “in the wild”, often Wikipedia-to-X, with X being VIAF, GND, or the like.
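
For readers who have never seen one: a BEACON file is just a few “#”-prefixed header lines followed by one mapping per line. The snippet below is only an illustration of the format as I understand it, not the output of any tool; the header fields and the VIAF number are examples and may not match the spec exactly.

#FORMAT: BEACON
#PREFIX: http://www.wikidata.org/entity/
#TARGET: http://viaf.org/viaf/
Q1339|12304462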

Now, one thing has changed in the last year, and that is the emergence of Wikidata. Among other things, it allows us to have a single identifier for an item in the Wikiverse, without having to point at a Wikipedia in one language, with a title that is prone to change. When I saw the BEACON discussion starting up again, I realized that, not only do we now have such identifiers, but that I was sitting on the mother lode of external catalog linkage: Wikidata Query. So, I did what I always do: Write a tool.

I have not checked if there are any other Wikidata BEACON mappings out there, but my solution has several points going for it that would be hard to replicate otherwise:

  • An (almost) up-to-date representation of the state of Wikidata, usually no more than ~10-15 minutes behind the live site.
  • Quick on-the-fly data generation.
  • A large number of supported catalogs (76 at the moment).
  • The option to link two external catalogs via Wikidata items that have identifiers for both.

This service is pretty new (just hacked it together while waiting for other code to run…), and I may have gotten some of the BEACON header wrong. If so, please tell me!

The Reason For It All

Gerard has blogged tirelessly about improvements to Reasonator, my attempt at making Wikidata a little more accessible. Encouraged by constant feedback, suggestions, and increasing view numbers, it has grown from “that thing that shows biography data” into a versatile and, more importantly, useful view of Wikidata. This is my attempt at a summary of the state of Reasonator.

Johann Sebastian Bach (Q1339). Reasonator and Wikidata side-by-side comparison.

The Cake

Reasonator attempts to show a Wikidata item in a way that is easy for humans to access. By focusing on display rather than editing, a lot of screen real estate can be used to get the information to the reader.

Besides the main body of text, there is a sidebar reminiscent of the Wikipedia infoboxes, containing images (of the item itself, as well as signature, coat of arms, seal, audio, video, etc.), simple string-based information (such as postal codes), links to external sources via their ID (such as VIAF), links to other Wikimedia projects associated with the item, and miscellaneous tidbits such as a QRpedia code (by special request).

The main text consists of a header (with title, Wikidata item number and link, aliases, manual and automatic description, where available), a body mostly made of linked property-value lists, and a footer with links to an associated Commons category, and images from related items. Qualifiers are shown beneath the respective claims. So far, not much more than what you get on Wikidata, except some images and pretty layout.

The Icing

Cambridge (Q350) with Wikivoyage banner, location hierarchy, and maps.

One fundamental difference between the Wikidata and the Reasonator displays is that the latter is context-sensitive; it supports specialized information rendering for certain item types, and only uses the generic one as a fall-back. Thus, a biographical item gets the relatives (if any) and a link to the family tree display; a location gets a Wikivoyage banner, OpenStreetMap displays (two zoom levels), a map image, and a hierarchy list; a species gets an automatically generated taxonomy list.

Even the generic item display can generate a hierarchy via the “subclass of” property. Such hierarchical lists are generated by Wikidata Query for speed reasons, but can also be calculated on the “live” data, if you want to check your latest Wikidata edit. There are already requests for other specialized item types, such as books and book editions.

The bespoke display variations and hierarchical lists hint at the “reason” part of the Reasonator name: it does not “just” display the item, but also looks at the item type, as well as related items. Statues depicting Bach, things that are named after him, items for music he has composed – all of this is displayed on the item page, even though the item itself “knows” nothing about those related items. Reasonator also has some built-in inferences: any son of Bach’s parents is his brother, even if neither item has the “brother of” property.
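
That particular inference is little more than a set operation. As a rough sketch (hypothetical data shapes, not Reasonator’s actual code):

// Hypothetical sketch of the sibling inference; not Reasonator's internals.
// "items" maps Q-numbers to objects with a list of parent Q-numbers and a gender.
function inferBrothers ( items , q ) {
  var parents = items[q].parents ;
  var brothers = [] ;
  $.each ( items , function ( other_q , other ) {
    if ( other_q == q ) return ; // an item is not its own brother
    var shares_parent = other.parents.some ( function ( p ) { return parents.indexOf ( p ) != -1 } ) ;
    if ( shares_parent && other.gender == 'male' ) brothers.push ( other_q ) ;
  } ) ;
  return brothers ;
}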

Hover box for J.C.F. Bach.

Every link to another Wikidata item (which will point to the appropriate Reasonator page for browsing continuity) has a “hover box” that will pop up when you move the mouse over the link. The box contains the label of the item; a link to Wikidata; links to Wikipedia, Wikisource, and Wikivoyage in the current language; a manual description; an automatic description (with appropriate Reasonator links); and a thumbnail image, all subject to availability.

Strange Ingredients

Reasonator has a faux “Universal Language Selector” to easily switch languages. (I decided to just steal the ULS button and roll my own, with a few lines of JavaScript, rather than including no less than 15 additional files into the HTML page.) Item and property labels will show in your preferred language, with a fallback to the “big ones” (as determined by Wikipedia sizes) first, and a random “minor language” as a last resort. Items using such a “fallback label” are marked with a dotted red underline, to alert native speakers to missing labels in their language. (A “translate this label now” utility, built into the hover box, is currently held up by a Wikidata bug.) The interface text is translatable as well via a wiki page.
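
The fallback chain itself is straightforward; roughly (the language list and data shapes are illustrative, not the actual Reasonator code):

// Rough sketch of the label fallback; the language list and data shapes are illustrative.
var fallback_languages = [ 'en' , 'de' , 'fr' , 'es' , 'ru' , 'ja' ] ; // "big ones" by Wikipedia size

function getLabel ( labels , user_language ) {
  if ( labels[user_language] !== undefined ) return { label : labels[user_language] , fallback : false } ;
  for ( var i = 0 ; i < fallback_languages.length ; i++ ) {
    if ( labels[fallback_languages[i]] !== undefined ) return { label : labels[fallback_languages[i]] , fallback : true } ;
  }
  var remaining = Object.keys ( labels ) ;
  if ( remaining.length == 0 ) return { label : '' , fallback : true } ;
  var random = remaining[ Math.floor ( Math.random() * remaining.length ) ] ; // random "minor language" as last resort
  return { label : labels[random] , fallback : true } ; // fallback labels get the dotted red underline
}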

A few candles on top

One simple Reasonator function I personally use a lot is “random page”. The results can be both astonishing (the “richness” of some items is remarkable) and depressing (“blank” items, with no statements, or maybe just a Commons category). Non-content items, such as category or template pages, are not shown by this function, if it knows about them. If you load a statement-less page whose label contains a “:”, Reasonator offers you a one-click “tagging” of the item as template, category, etc., if you have WiDaR enabled. (If you don’t, you really should!) In a similar manner, Reasonator suggests items to use as a “parent taxon” for species without such a statement, based on the species’ taxonomic name.

There is also a built-in search function, based on the Wikidata search, enriching it with automatic descriptions and hover boxes. As usual, missing labels are highlighted, and a missing auto-description hints at items lacking fundamental statements. (You can go and edit the item on Wikidata via the link in the hover box, without having to load the Reasonator page first.)

A calendar example (showing the Tunguska event).

The latest addition is a calendar function. This special type of page is linked to from all date values within Reasonator, using the values’ accuracy/resolution. It is inspired by the “year” pages on Wikipedia but, unlike those, requires no additional effort by humans, besides the addition of date values to Wikidata (which should be done anyway!). The page uses Wikidata Query to aggregate items with events inside the set time period (day, month, year). At this moment, events (“point in time”), foundation/creation/discovery dates, births and deaths are shown. Additionally, there is a list of “ongoing events”, which started less than 10 years before and ended less than 10 years after the viewed date. A sidebar allows navigation by date, and hover boxes give a quick insight into the listed items.

Here as well, Reasonator uses more than a simple “list of stuff”. Relevant dates are shown beside the item, e.g., death dates in the birthday list. For a specific day, births and deaths are shown side-by-side to avoid blank space; for month/year displays, both birth and death dates are shown (as they are not specifically implied by the date), and the display wraps after the births list.

For a whole, “recent” year (such as 1900), it can take a while to show the page. Counter-intuitively, this is not because it takes time to find the items; all the items for a year are returned by Wikidata Query in less than half a second. It is the loading of several thousand (in this example, over 3,000) items, their labels, descriptions, Wikipedia sites, etc. from the Wikidata API (at a maximum of 50 apiece) that takes a while.

Serving suggestion

This leads nicely to another perk of Reasonator: speed. Recently, I optimized Reasonator to load as fast as possible. This leads to the odd situation that the J.S. Bach entry takes, on my laptop, 43 seconds to load on Wikidata (logged out, force-reload), but shows all text in Reasonator in 8 seconds, and finishes loading all images after another 7 seconds. This is despite Reasonator loading over 600 additional items to provide the context described above. This was achieved by reducing the number of serial requests (where one request depends on the result of the previous one) to the Wikidata API, parallel loading of scripts and interface texts, grouping and parallelizing Wikidata API requests, and “outsourcing” complex queries, such as the hierarchical lists and the calendar, to Wikidata Query (while providing a native JavaScript fall-back for most).
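
The grouping part is easy to sketch: wbgetentities accepts up to 50 ids per call, so a few thousand items become a few dozen parallel requests. A simplified version (error handling and the rest of the loading logic omitted; not the actual Reasonator code):

// Simplified sketch of grouped, parallel item loading via the Wikidata API.
// wbgetentities takes up to 50 ids per request; responses may arrive in any order.
function loadItemsInBatches ( q_numbers , callback ) {
  var results = {} ;
  var open = 0 ;
  for ( var i = 0 ; i < q_numbers.length ; i += 50 ) {
    open++ ;
    $.getJSON ( 'https://www.wikidata.org/w/api.php?callback=?' , {
      action : 'wbgetentities' ,
      ids : q_numbers.slice ( i , i + 50 ).join ( '|' ) ,
      props : 'labels|descriptions|sitelinks' ,
      format : 'json'
    } , function ( data ) {
      $.extend ( results , data.entities ) ;
      open-- ;
      if ( open == 0 ) callback ( results ) ; // all batches done
    } ) ;
  }
}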

A wafer-thin mint

So what’s the point of all of this? For most people who come across Wikidata, it presents itself either as a back-end to Wikipedia (for language links, and infobox statements, on some wikis), or as a somewhat dull editing interface for form fetishists. I have tried to breathe some life into aspects of Wikidata with other tools, but Reasonator lets you delve into all of its information.

And while a human-written English Wikipedia will always be infinitely more readable than any Wikidata interface, I hope that Reasonator could one day be a useful addition to small-language Wikipedias, and scale for such a community; on Wikidata/Reasonator, you only have to translate “singer” once to have everyone with that occupation labelled as such; you don’t have to manually include hundreds of thousands of images from Commons; and you don’t have to manually curate birthday lists.

Finally, I hope that browsing Reasonator will not only be interesting, but will also encourage people to participate and improve Wikidata. A claim-less page can be a big motivator to improve it, and even a showcase item is more fun if it doesn’t take a minute to load…