
Match point

Mix’n’match is one of my more popular tools. It contains a number of catalogs, each in turn containing hundreds or even millions of entries, that could (and often should!) have a corresponding Wikidata item. The tool offers various ways to make it easier to match an entry in a catalog to a Wikidata item.

While the user-facing end of the tool does reasonably well, the back-end has become a bit of an issue. It is a bespoke, home-grown MySQL database that has changed a lot over the years to incorporate more (and more complex) metadata to go with the core data of the entries. Entries, birth and death dates, coordinates, and third-party identifiers are all stored in separate, dedicated tables. So is the full-text search index, which is not exactly performant these days.

Perhaps the biggest issue, however, is the bottleneck in maintaining that data – myself. As I am the only person with write access to the database, all maintenance operations have to run through me. And even though I have added import functions for new catalogs, and run various automatic update and maintenance scripts on a regular basis, the simple task of updating an existing catalog still depends on me, and it is rather tedious work.

At the 2017 Wikimania in Montreal, I was approached by the WMF about Mix’n’match; the idea was that they would start their own version of it, in collaboration with some of the big providers of what I call catalogs. My recommendation to the WMF representative was to use Wikibase, the data management engine underlying Wikidata, as the back-end, to allow for community-based maintenance of the catalogs, and to put a task-specific interface on top of that, to make the matching as easy as possible.

As it happens with the WMF, a good idea vanished somewhere in the mills of bureaucracy, and was never heard from again. I am not a system administrator (or, let’s say, it is not the area where I traditionally shine), so setting up such a system myself was out of the question at that time. However, these days, there is a Docker image by the German chapter that incorporates MediaWiki, Wikibase, Elasticsearch, the Wikibase SPARQL service, and QuickStatements (so cool to see one of my own tools in there!) in a single package.

Long story short, I set up a new Mix’n’match using Wikibase as the back-end.

Automatic matches

The interface is similar to the current Mix’n’match (I’ll call it V1, and the new one V2), but a complete re-write. It does not support all of the V1 functionality – yet. I have set up a single catalog in V2 for testing, one that is also in V1. Basic functionality in V2 is complete, meaning you can match (and unmatch) entries in both Mix’n’match and Wikidata. Scripts can import matches from Wikidata, and do (preliminary) auto-matches of entries to Wikidata, which need to be confirmed by a user. This, in principle, is similar to V1.

There are a few interface perks in V2. There can be more than one automatic match for an entry, and they are all shown as a list; one can set the correct one with a single click. And manually setting a match will open a full-text Wikidata search drop-down inline, often sparing one the need to search on Wikidata and then copy the QID over to Mix’n’match. Also, the new auto-matcher takes the type of the entry (if any) into account: given a type Qx, only Wikidata items with matching names that are either “instance of” (P31) Qx (or one of its subclasses), or that have no P31 at all, are used as matches; that should improve auto-matching quality.
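For the technically curious, the type filter can be expressed as a single query. The following is a minimal sketch of that logic, not the actual V2 code: it assumes exact English label matches (the real auto-matcher uses full-text search), and the endpoint and example IDs are the public Wikidata ones:

import requests

# Minimal sketch of the type-aware candidate filter (not the actual V2 code).
# Given a name and a type Qx, keep only Wikidata items whose label matches and
# that are an instance of Qx (or of a subclass of Qx), or that have no P31 at all.
SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

def candidate_items(name, type_qid, lang="en"):
    query = """
    SELECT DISTINCT ?item WHERE {
      ?item rdfs:label "%s"@%s .
      FILTER(
        EXISTS { ?item wdt:P31/wdt:P279* wd:%s } ||
        NOT EXISTS { ?item wdt:P31 ?anyType }
      )
    }
    """ % (name.replace('"', '\\"'), lang, type_qid)
    r = requests.get(SPARQL_ENDPOINT,
                     params={"query": query, "format": "json"},
                     headers={"User-Agent": "mixnmatch-v2-sketch/0.1"})
    r.raise_for_status()
    return [b["item"]["value"] for b in r.json()["results"]["bindings"]]

# Example: name matches for "Berlin" that are (subclasses of) city (Q515), or untyped
print(candidate_items("Berlin", "Q515"))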

Manual matching with inline Wikidata search

But the real “killer app” lies in the fact that everything is stored in Wikibase items. All of Mix’n’match can be edited directly in MediaWiki, just like Wikidata. Everything can be queried via SPARQL, just like Wikidata. Mass edits can be done via QuickStatements, just like… well, you get the idea. But users will just see the task-specific interface, hiding all that complexity, unless they really want to peek under the hood.

So much for the theory; sadly, I have run into some real-world issues that I do not know how to fix on my own (or do not have the time and bandwidth to figure out; same effect). First, as I know from bitter experience, MediaWiki installations attract spammers. Because I really don’t have time to clean up after spammers on this one, I have locked account creation and editing; that means only I can run QuickStatements on this wiki (let me know your Wikidata user name and email, and I’ll create an account for you, if you are interested!). Of course, this somewhat defeats the purpose of having the community maintain the back-end, but what can I do? Since the WMF has bowed out in silence, the wiki isn’t using the WMF single sign-on. The OAuth extension, which was originally developed for that specific purpose, ironically doesn’t work for MediaWiki as a client.

But how can people match entries without an account, you ask? Well, for the Wikidata side, they have to use my Widar login system, just like in V1. And for the V2 wiki, I have … enabled anonymous editing of the item namespace. Yes, seriously. I just hope that Wikibase data spamming is still some way off, for now. Your edits will still be credited to your Wikidata user name in edit summaries and statements. Yes, I log all edits as Wikibase statements! (Those are also used for the V2 Recent Changes, but since Wikibase only stores day-precision timestamps, Recent Changes looks a bit odd at the moment…)

I also ran into a few issues with the Docker system, and I have no idea how to fix them. These include:

  • Issues with QuickStatements (oh the irony)
  • SPARQL linking to the wrong server
  • Fulltext search is broken (this also breaks the V2 search function; I am using prefix search for now)
  • I have no idea how to back up/restore any of this (bespoke configuration, MySQL)

None of the above are problems with Mix’n’match V2 in principle, but rather engineering issues to fix. Help would be most welcome.

Other topics that would need work and thought include:

  • Syncing back to Wikidata (probably easy to do).
  • Importing of new catalogs, and updating of existing ones. I am thinking about a standardized interchange format, so I can convert from various input formats (CSV files, auto-scrapers, MARC 21, SPARQL interfaces, MediaWiki installations, etc.).
  • Metadata handling. I am thinking of a generic way to store a Wikidata property ID (Px) and a corresponding value as Wikibase statements, possibly with a reference for the source. That would allow the most flexibility for storage, matching, and import into Wikidata (one possible shape is sketched right after this list).
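To make those last two points a bit more concrete, here is one possible shape for such an interchange record, purely as a sketch; the field names are my own working assumptions, not a finished specification. Entry-level metadata is kept generic as Wikidata property/value pairs, each optionally carrying a source URL:

# Sketch of a possible catalog interchange record (field names are assumptions,
# not a finished format): core entry data plus generic Wikidata property/value
# pairs, each optionally carrying a source reference.
example_entry = {
    "catalog": "example-catalog",        # hypothetical catalog identifier
    "external_id": "ABC123",             # the entry's ID in the source catalog
    "name": "Jane Doe",
    "description": "fictional British botanist",
    "type": "Q5",                        # suggested Wikidata type (human)
    "statements": [
        {"property": "P569",             # date of birth, QuickStatements date format
         "value": "+1901-02-03T00:00:00Z/11",
         "source_url": "https://example.org/ABC123"},
        {"property": "P625",             # coordinate location
         "value": {"lat": 52.2, "lon": 0.12}},
    ],
}

def to_quickstatements(entry, item="LAST"):
    """Very rough conversion to QuickStatements-style rows; real value
    formatting depends on the datatype and is only hinted at here."""
    rows = []
    for st in entry["statements"]:
        value = st["value"]
        if isinstance(value, dict):      # coordinates use the @LAT/LON notation
            value = "@%s/%s" % (value["lat"], value["lon"])
        rows.append("%s\t%s\t%s" % (item, st["property"], value))
    return rows

print("\n".join(to_quickstatements(example_entry)))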

I would very much like to hear what you think about this approach, and this implementation. I would like to go ahead with it, unless there are fundamental concerns. V1 and V2 would run in parallel, at least for the time being. Once V2 has more functionality, I would import new catalogs into V2 rather than V1. Suggestions for test catalogs (maybe something with interesting metadata) are most welcome. And every bit of technical advice, or, better, hands-on help, would be greatly appreciated. And if the WMF or WMDE want to join in, or take over, let’s talk!

The blind referee

A quick blog post, before the WordPress editor dies on me again…

Wikidata is great. Wikidata with references is even better. So I have written a little tool called Referee. It checks a Wikidata item, collects the web pages that are linked via its external-ID statements and via the associated Wikipedia pages, and checks them for potential matches to statements (birth dates, locations, etc.).
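The URL collection relies on a standard Wikidata mechanism: most external-ID properties carry a formatter URL (P1630), into which the ID value is substituted. A rough sketch of that step (my own simplification, not the actual Referee code; the Wikipedia-page side is not shown) could look like this:

import requests

API = "https://www.wikidata.org/w/api.php"

def get_entities(ids):
    """Fetch entities (items or properties) with their claims, in chunks of 50."""
    entities, ids = {}, list(ids)
    for i in range(0, len(ids), 50):     # wbgetentities accepts at most 50 IDs per call
        r = requests.get(API, params={"action": "wbgetentities",
                                      "ids": "|".join(ids[i:i + 50]),
                                      "props": "claims", "format": "json"})
        r.raise_for_status()
        entities.update(r.json()["entities"])
    return entities

def external_id_urls(qid):
    """Turn an item's external-ID statements into candidate web page URLs,
    using each property's formatter URL (P1630). Sketch only."""
    claims = get_entities([qid])[qid]["claims"]
    ext_ids = []
    for prop, statements in claims.items():
        for st in statements:
            snak = st["mainsnak"]
            if snak.get("datatype") == "external-id" and snak.get("snaktype") == "value":
                ext_ids.append((prop, snak["datavalue"]["value"]))
    if not ext_ids:
        return []
    prop_entities = get_entities(sorted({p for p, _ in ext_ids}))
    urls = []
    for prop, value in ext_ids:
        for fmt in prop_entities[prop]["claims"].get("P1630", []):
            if fmt["mainsnak"].get("snaktype") == "value":
                urls.append(fmt["mainsnak"]["datavalue"]["value"].replace("$1", value))
    return urls

print(external_id_urls("Q42"))  # e.g. VIAF, IMDb, ... pages for Douglas Adams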

If you add a little invocation to your common.js page on Wikidata:

importScript( 'User:Magnus_Manske/referee.js' );

it will automatically check, on every Wikidata item load, whether there are potential references for that item in the cache, and display them next to the appropriate statement, with one-click add/reject links.

If you put this line:

referee_mode = 'manual' ;

before the importScript invocation, it will not check automatically, but wait for you to click the “Referee” link in the toolbox sidebar. However, in manual mode, it will force a check (which might take a few seconds) if there are no reference candidates yet; the toolbox link will remain highlighted while the check is running.

I made a brief video demonstrating the interface. Enjoy.

Addendum: I forgot to mention that the tool does not work on items that are instances (P31) of certain classes, namely “taxon” and “scholarly article”. This is to keep the load on the scraper low; besides, these types are special cases and would likely benefit more from dedicated scraper systems.

Wikipedia, Wikidata, and citations

As part of an exploratory census of citations on Wikipedia, I have generated a complete (yeah, right) list of all scientific publications cited on Wikispecies and on the English and German Wikipedias. This is based on the rendered HTML of the respective articles, in which I try to find DOIs, PubMed IDs, and PubMed Central IDs. The list is kept up to date (with only a few minutes of lag). I also continuously match the publications I find to Wikidata, and create the missing items, most-cited ones first.
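The ID extraction itself is conceptually simple; a stripped-down sketch (with my own simplified patterns, not the scraper’s exact ones) looks roughly like this:

import re

# Simplified publication-ID extraction from rendered article HTML (sketch only;
# the real scraper's patterns and edge-case handling are more involved).
DOI_RE = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+')
PMID_RE = re.compile(r'ncbi\.nlm\.nih\.gov/pubmed/(\d+)')
PMC_RE = re.compile(r'\bPMC(\d+)\b')

def extract_publication_ids(html):
    return {
        "doi":   set(DOI_RE.findall(html)),
        "pmid":  set(PMID_RE.findall(html)),
        "pmcid": set(PMC_RE.findall(html)),
    }

# Made-up snippet of rendered citation HTML
html = ('<a href="https://doi.org/10.1234/example.doi">paper</a> '
        '<a href="https://www.ncbi.nlm.nih.gov/pubmed/12345678">PubMed</a>')
print(extract_publication_ids(html))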

A bit about the dataset (“citation” here means that an article mentions/links to a publication ID) at this point in time:

  • 476,560 distinct publications, of which
    • 261,486 have a Wikidata item
    • 214,425 have no Wikidata match yet
    • 649 cannot be found or created as a Wikidata item (parsing error, or the DOI does not exist)
  • 1,968,852 articles tracked across the three Wikimedia projects (some of them citing publications)
  • 717,071 total citations (~1.5 citations per publication)
  • The most cited publication is Generation and initial analysis of more than 15,000 full-length human and mouse cDNA sequences (used 3,403 times)
  • Publications with a Wikidata item are cited 472,793 times, those without 244,191 times
  • 266 publications are cited from all three Wikimedia sites (263 of these have a Wikidata item)

There is no interface for this project yet. If you have a Toolforge (formerly known as Labs) account, you can look at the database as s52680__science_source_p.

Judgement Day

Wikipedia label by Gmhofmann on Commons

At the dawn of Wikidata, I wrote a tool called “Terminator”. Not just because I wanted to have one of my own, but as a pun on the term “term”, used in the database table name (“wb_term”) where Wikidata labels, descriptions, and aliases are stored. The purpose of the tool is to find important (by some definition) Wikidata items that lack a label in a specific language. This can be very powerful, especially in languages with low Wikidata participation; setting the label for teacher (Q37226) in a language will immediately allow all Wikidata items using that item (as an occupation, perhaps) to show that label. A single edit can improve hundreds or thousands of items, and make them more accessible in that language.

Well, Wikidata has grown a lot since I started that tool, and the Terminator didn’t cope well with the growth; it was limited to a handful of languages, and the daily update was compute intensive. Plus, the interface was slow and ugly. Time for a rewrite!

So without further ado, I present version 2 of the Terminator tool. Highlights:

  • Now covers all Wikidata languages
  • Get the top items with missing labels, descriptions, or Wikipedia articles
  • Sort items by total number of claims, external IDs, sitelinks, or a compound score
  • The database currently contains the top (by compound score) ~4.1 million items on Wikidata
  • Updated every 10 minutes
  • Search for missing labels in multiple languages (e.g. German, Italian, or Welsh)
  • Only show items that have labels in languages you know
  • Automatically hides “untranslatable” items (scientific articles, humans, Wikipedia-related pages such as templates and categories), unless you want those as well
  • Can use a SPARQL query to filter items (only items that match all of the above and appear in the SPARQL result are shown; works for results of up to ~10K items or so)
  • Game mode (single, unsorted random result, more details, re-flows on mobile)

Please let me know through the usual channels about bugs and feature requests. I have dropped some functionality from the old version, such as data download; but that version is still linked from the new main page. Enjoy!

More topics

After my recent blog post about the TopicMatcher tool, I had quite a few conversations about the general area of “main topic”, especially relating to the plethora of scientific publications represented on Wikidata. Here’s a round-up of related things I did since:

As a first attempt, I queried all subspecies items from Wikidata, searched for scientific publications, and added them to TopicMatcher.

That worked reasonably well, but didn’t yield a lot of results, and they need to be human-confirmed. So I came at the problem the other way: start with a scientific publication, try to find a taxon (species etc.) name, and then add the “main subject” match. Luckily, many such publications put taxon names in parentheses in the title. Once I have the text in between, I can query P225 for an exact match (excluding cases where there is more than one!), and then add “main subject” directly to the paper item, without a user having to confirm it. I am aware that this will cause a few wrong matches, but I imagine those are few and far between, can be easily corrected when found, and are dwarfed by the usefulness of having publications annotated this way.
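Here is a stripped-down sketch of that logic (assumptions: the parenthesised text is taken verbatim as a taxon-name candidate, and only an exact, unique P225 match counts; actually writing the “main subject” (P921) statement via the API is omitted):

import re
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"
PAREN_RE = re.compile(r'\(([^()]+)\)')

def unique_taxon_item(candidate):
    """Return the single Wikidata item whose taxon name (P225) is exactly
    `candidate`, or None if there are zero or several matches."""
    query = 'SELECT ?item WHERE { ?item wdt:P225 "%s" }' % candidate.replace('"', '\\"')
    r = requests.get(SPARQL_ENDPOINT,
                     params={"query": query, "format": "json"},
                     headers={"User-Agent": "main-subject-sketch/0.1"})
    r.raise_for_status()
    items = [b["item"]["value"].rsplit("/", 1)[-1]
             for b in r.json()["results"]["bindings"]]
    return items[0] if len(items) == 1 else None

def main_subject_candidates(paper_title):
    """Yield (parenthesised text, QID) pairs for unique taxon-name matches;
    non-taxon or ambiguous candidates would be recorded for later analysis."""
    for candidate in PAREN_RE.findall(paper_title):
        qid = unique_taxon_item(candidate.strip())
        if qid:
            yield candidate, qid

# Made-up example title
print(list(main_subject_candidates("A new deep-sea worm (Osedax) from whale falls")))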

There are millions of publications to check, so this is running as a cron job, slowly going through all the scientific publications on Wikidata. I find quite a few topics in parentheses that are not taxa, or that have some issue with the taxon name; I am recording those, to run some analysis (and maybe other, more advanced auto-matching) at a later date. So far, I see mostly disease names, which seem to be precise enough to match in many cases.

Someone suggested using Mix’n’match sets to find e.g. chemical substances in titles that way, but that requires both the “common name” and the ID to be present in the title for a sufficient degree of reliability, which is rarely the case. Some edits have been made for E numbers, though. I have since started a similar mechanism running directly off Wikidata (initial results).

Then I discovered some special cases of publications that lend themselves to automated matching, especially obituaries, as they often contain the name, birth date, and death date of a person, which is precise enough to set the “main subject” property automatically. For cases where no match is found, I add them to TopicMatcher, for manual resolution.
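The obituary case can be sketched along the same lines; assuming the person’s name and the birth and death years have already been parsed from the title, a uniqueness-checked lookup might be (again my own simplified version, not the production script):

import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

def unique_person(name, birth_year, death_year, lang="en"):
    """Return the single human item with this label and these birth/death years,
    or None if the match is missing or ambiguous (sketch only)."""
    query = """
    SELECT ?person WHERE {
      ?person wdt:P31 wd:Q5 ;
              rdfs:label "%s"@%s ;
              wdt:P569 ?born ;
              wdt:P570 ?died .
      FILTER(YEAR(?born) = %d && YEAR(?died) = %d)
    }
    """ % (name.replace('"', '\\"'), lang, birth_year, death_year)
    r = requests.get(SPARQL_ENDPOINT,
                     params={"query": query, "format": "json"},
                     headers={"User-Agent": "obituary-match-sketch/0.1"})
    r.raise_for_status()
    hits = r.json()["results"]["bindings"]
    return hits[0]["person"]["value"] if len(hits) == 1 else None

print(unique_person("Douglas Adams", 1952, 2001))  # should resolve to a single item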

I have also added “instance of: erratum” to ~8,000 papers whose titles indicate this. This might be better placed in “genre”, but at least we have a better handle on those now.

Both the errata and the obituary matching will run regularly, to update any new publications accordingly.

As always, I am happy to get more ideas to deal with this vast realm of publications versus topics.

On Topic

Wikidata already contains a lot of information about topics – people, places, concepts etc. It also contains items about works that have a topic of their own, e.g. a painting of a person, a biographical article about someone, or a scientific publication about a species. Ideally, Wikidata also describes the connection between the work and the subject. Such connections can be tremendously useful in many contexts, including GLAM and scientific research.

This kind of work can generally not be done by bots, as that would require machine-readable, reliable source data to begin with. However, manually finding items about works, and then finding the matching subject item on Wikidata, is somewhat tedious. Thus, I give you TopicMatcher!

In a nutshell, I prepare a list of Wikidata items that are about creative works – paintings, biographical articles, books, scientific publications. Then, I try to guesstimate what Wikidata item they are (mainly) about. Finally, you can get one of these “work items” and their potential subject, with buttons to connect them. At the moment, I have biographical articles looking for a “main subject”, and paintings lacking a “depicts” statement. That comes to a total of 13,531 “work items”, with 54,690 potential matches.

You will get the expected information about the work item and the potential matches, using my trusty AutoDesc. You also get a preview of the painting (if there is an image) and a search function. Below that is a page preview that differs with context; depending on the work item, you could get

  • a WikiSource page, for biographical articles there
  • a GLAM page, if the item has a statement with an external reference that can be used to construct a URL
  • a publication page, using PMC, DOI, or PubMed IDs
  • the Wikidata page of the item, if nothing else works

The idea of the page preview is to find more information about the work, which will allow you to determine the correct match. If there are no suggested subjects in the database, a search is performed automatically, in case new items have been created since the last update.

Once you are done with the item, you can click “Done” (which marks the work item as finished, so it is not shown again), or “Skip”, to keep the item in the pool. Either way, you will get another random item; the reward for good work is more work….

At the top of the page are some filtering options, if you prefer to work on a specific subset of work items. The options are a bit limited for now, but should improve when the database grows to encompass new types of works and subjects.

Alternatively, you can also look for potential works that cover a specific subject. George Washington is quite popular.

I have chosen the current candidates because they are computationally cheap and reasonably accurate to generate. However, I hope to expand to more work and subject areas over time. Scientific articles that describe species come to mind, but the queries to generate candidate matches are quite slow.

If you have ideas for queries, or just work/subject areas, or even some candidate lists, I would be happy to incorporate those into the tool!

The Quickening

My QuickStatements tool has been quite popular, in both version 1 and 2. It appears to be one of the major vectors for adding large amounts of prepared information to Wikidata. All good and well, but, as with all well-used tools, some wrinkles have appeared over time. So, time for a do-over! It has been a long time coming, and while most of it concerns the interface and the interaction with the rest of the world, the back-end learned a few new tricks too.

For the impatient, the new interface is the new default at QuickStatements V2. There is also a link to the old interface, for now. Old links with parameters should still work, let me know if not.

What has changed? Quite a lot, actually:

  • Creating a new batch in the interface now lets you choose a site. Right now, only Wikidata is on offer, but Commons (and others?) will join it in the near future. Cross-site configuration has already been tested with FactGrid
  • You can now load an existing batch and see the commands
  • If a batch that has (partially) run threw errors, some of them can be reset and re-run. This works for most statements, except the ones that use LAST as the item, after item creation
  • You can also filter for “just errors” and “just commands that didn’t run yet”
  • The above limitation is mostly of historic interest, as QuickStatements will now automatically group a CREATE command and the following LAST item commands into a single, more complex command. So no matter how many statements, labels, sitelinks etc. you add to a new item, it will just run as a single command, which means item creation just got a lot faster, and it goes easier on the Wikidata database/API too
  • The STOP button on both the batch list and the individual batch page should work properly now (let me know if not)! There are instructions for admins to stop a batch
  • Each server-side (“background”) batch now gets a link to the EditGroups tool, which lets you discuss an entire batch, gives information about the batch details, and most importantly, lets you undo an entire batch at the click of a button
  • A batch run directly from the browser now gets a special, temporary ID as well, that is added to the edit summary. Thanks to the quick work of the EditGroups maintainers, even browser-based batch runs are now one-click undo-able
  • The numbers for a running batch are now updated automatically every 5 seconds, on both the batch list and the individual batch page
  • The MERGE command, limited to QuickStatements V1 until now, now works in V2 as well. Also, it will automatically merge into the “lower Q number” (=older item), no matter the order of the parameters

I have changed my token by now…

For the technically inclined:

  • I rewrote the interface using vue.js, re-using quite a few pre-existing components from other projects
  • You can now get a token to use with your user name, to programmatically submit batches to QuickStatements. They will show up and be processed as if you had pasted them into the interface yourself. The token can be seen on your new user page (a rough sketch of such a call follows below this list)
  • You can also open a pre-filled interface, using GET or POST, like so
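For illustration, a programmatic submission could look roughly like the sketch below. Fair warning: the endpoint and parameter names here are assumptions from memory rather than documented fact; your QuickStatements user page shows the exact call to use, and that is where the token comes from. The example batch also shows the CREATE/LAST grouping mentioned above.

import requests

# Hedged sketch of a server-side batch submission. The endpoint URL and the
# parameter names below are assumptions; check your QuickStatements user page
# for the exact call and for your personal token.
QS_API = "https://tools.wmflabs.org/quickstatements/api.php"   # assumed endpoint

# A small V1-format batch: the CREATE and following LAST commands are now
# grouped into a single item-creation command by QuickStatements.
commands = "\n".join([
    "CREATE",
    'LAST\tLen\t"QuickStatements API test"',
    'LAST\tDen\t"test item, please delete"',
])

r = requests.post(QS_API, data={
    "action": "import",                      # assumed parameter names
    "submit": 1,
    "format": "v1",
    "username": "YourWikidataUserName",      # placeholder
    "token": "your-quickstatements-token",   # placeholder
    "batchname": "API test batch",
    "data": commands,
})
print(r.status_code, r.text[:200])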

One thing, however, got lost from the previous interface, and that is the ability to edit commands directly in the interface. I do not know how often that was used in practice, but I suspect it was not often, as it is not really suited for a mass-edit tool. If there is huge demand, I can look into retro-fitting that. Later.

 

Why I didn’t fix your bug

Many of you have left me bug reports, feature requests, and other issues relating to my tools in the WikiVerse. You have contacted me through the BitBucket issue tracker (and apparently I’m on phabricator as well now), Twitter, various emails, talk pages (my own, other users, content talk pages, wikitech, meta etc.), messaging apps, and in person.

And I haven’t done anything. I haven’t even replied. No indication that I saw the issue.

Frustrating, I know. You just want that tiny thing fixed. At least you believe it’s a tiny change.

Now, let’s have a look at the resources available, which, in this case, is my time. Starting with the big stuff (general estimates, MMMV [my mileage may vary]):

24h per day
-9h work (including drive)
-7h sleep (I wish)
-2h private (eat, exercise, shower, read, girlfriend, etc.)
=6h left

Can’t argue with that, right? Now, 6h left is a high estimate, obviously; work and private can (and do) expand on a daily, fluctuating basis, as they do for all of us.

So then I can fix your stuff, right? Let’s see:

6h
-1h maintenance (tool restarts, GLAM pageview updates, mix'n'match catalogs add/fix, etc.)
-3h development/rewrite (because that's where tools come from)
=2h left

Two hours per day is a lot, right? In reality, it’s a lot less, but let’s stick with it for now. A few of my tools have no issues, but many of them have several open, so let’s assume each tool has one:

2h=120min
/130 tools (low estimate)
=55 sec/tool

That’s enough time to find and skim the issue, open the source code file(s), and … oh time’s up! Sorry, next issue!

So instead of dealing with all of them, I deal with one of them. Until it’s fixed, or I give up. Either may take minutes, hours, or days. And during that time, I am not looking at the hundreds of other issues. Because I can’t do anything about them at the time.

So how do I pick an issue to work on? It’s an intricate heuristic computed from the following factors:

  • Number of users affected
  • Severity (“security issue” vs. “wrong spelling”)
  • Opportunity (meaning, I noticed it when it got filed)
  • Availability (am I focused on doing something else when I notice the issue?)
  • Fun factor and current mood (yes, I am a volunteer. Deal with it.)

No single event prompted this blog post. I’ll keep it around to point to, when the occasion arises.


The File (Dis)connect

I’ll be going on about Wikidata, images, and tools. Again. You have been warned.

I have written a few image-related Wikimedia tools over the years (such as FIST and WD-FIST, to name two big ones), because I believe that images in articles and Wikidata items are important, beyond their adorning effect. But despite everyone’s efforts, images on Wikidata (the Wikimedia site with the most images) are still few and far between. For example, less than 8% of taxa have an image; across all of Wikidata, it is closer to 5% of items.

On the other hand, Commons now has ~45M files, and other sites like Flickr also have vast amounts of freely licensed files. So how to bring the two of them together? One problem is that, lacking prior knowledge, matching an item to an image means full-text searching the image site, which even these days takes time for thousands of items (in addition to potential duplication of searches, stressing APIs unnecessarily). A current example for “have items, need images” is the Craig Newmark Pigeon Challenge by the WMF.

The answer (IMHO) is to prepare item-file-matches beforehand; take a subset of items (such as people or species) which do not have images, and search for them on Commons, Flickr, etc. Then store the results, and present them to the user upon request. I had written some quick one-off scans like that before, together with the odd shoe-string interface; now I have consolidated the data, the scan scripts, and the interface into a new tool, provisionally called File Candidates. Some details:

  • Already seeded with plenty of candidates, including >85K species items, >53K humans, and >800 paintings (more suggestions welcome)
  • Files are grouped by topic (e.g. species)
  • Files can be located on Commons or Flickr; more sites are possible (suggestions welcome)
  • One-click transfer of files from Flickr to Commons (with a does-it-already-exist-on-Commons check)
  • One-/Two-click (one for image, two for other properties) adding of the file to the Wikidata item
  • Some configuration options (click the “⚙” button)
  • Can use a specific subset of items via SPARQL (example: Pigeon Challenge for species)
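To give an idea of what the pre-matching scans described above do, here is a much-reduced sketch: full-text search Commons for files mentioning an item’s label, and keep the hits as candidates for later review. This is a toy version under my own simplifying assumptions (no result filtering, no Flickr, no storage back-end), not the actual scan script:

import requests

COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def commons_file_candidates(label, limit=5):
    """Full-text search the Commons File: namespace for a label and return the
    file titles. Toy version of the pre-matching scan; the real scripts do far
    more filtering before anything is stored."""
    r = requests.get(COMMONS_API, params={
        "action": "query",
        "list": "search",
        "srsearch": label,
        "srnamespace": 6,        # File: namespace
        "srlimit": limit,
        "format": "json",
    })
    r.raise_for_status()
    return [hit["title"] for hit in r.json()["query"]["search"]]

# Candidate files for an item, here the lion (Q140) as a stand-in example
print({"Q140": commons_file_candidates("Panthera leo")})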

There is another aspect of this tool I am excited about: Miriam is looking into ranking images by quality through machine learning, and has given me a set of people-file-matches, which I have already incorporated into my “person” set, including the ranking. From the images that users add through this tool, we can then see how much the ranking algorithm agrees with the human decision. This can set us on a path towards AI-assisted editing!

ADDENDUM: Video with demo usage!

Playing cards on Twitter

So this happened.

Yesterday, Andy Mabbett asked me on Twitter for a new feature of Reasonator: Twitter cards, for small previews of Wikidata items on Twitter. After some initial hesitation (for technical reasons), I started playing with the idea in a test tweet (and several replies to myself), using Andy as the guinea pig item:

Soon, I was contacted by Erika Herzog, who I had worked with before on Wikidata projects:

That seemed like an odd thing to request, but I try to be a nice guy, and if there are some … personal issues between Wikidata users, I have no intention of causing unnecessary upset. So, after some more testing (meanwhile, I had found a Twitter page to do the tests on), I announced the new feature to the world, using what would undoubtedly be a suitable test subject for the Wikipedia/Wikidata folk:

Boy was I wrong:

I basically woke up to this reply. Under-caffeinated, I saw someone tell me what to (not) tweet. Twice. No reason. No explanation. Not a word on why Oprah would be a better choice as a test subject in a tweet about a new feature for a Wikidata-based tool. Just increasing aggressiveness, going from “problematic” to “Ugh” and “Gads” (whatever that is).

Now, I don’t know much about Oprah. All I know is, basically, what I heard characters in U.S. sit-coms say about her, none of which was very flattering. I know she is (was?) a U.S. TV talk show host, and that she recently gave some speech in the #metoo context. I never saw one of her talk shows. She is probably pretty good at what she does. I don’t really care about her, one way or the other. So far, Oprah has been a distinctively unimportant figure in my life.

Now, I was wondering why Erika kept telling me what to (not) tweet, and why it should be Oprah, of all people. But at that time, all I had the energy to muster as a reply was “Really?”. To that, I got a reply with more Jimbo-bashing:

At which point I just had about enough of this particular jewel of conversation to make my morning:

What follows is a long, horrible conversation with Erika (mostly), with me guessing what, exactly, she wants from me. Many tweets down, it turns out that, apparently, her initial tweets were addressing a “representation issue”. At my incredulous question whether she seriously demanded a “women’s quota” for my two original tweets (honestly, I have no idea what else this could be about by now), I am finally identified as the misogynist cause of all women’s peril in the WikiVerse:

Good thing we finally found the problem! And it was right in front of us the whole time! How did we not see this earlier? I am a misogynist pig (no offence to Pigsonthewing)! What else could it be?

Well, I certainly learned my lesson. I now see the error of my ways, and will try to better myself. The next time someone tries to tell me what to (not) tweet, I’ll tell them to bugger off right away.