
Papers on Rust

I have written about my attempt with Rust and MediaWiki before. This post is an update on my progress.

I started out writing a MediaWiki API crate to be able to talk to MediaWiki installations from Rust. I was then pointed to a wikibase crate by Tobias Schönberg and others, to which I subsequently contributed some code: improvements to the existing codebase, an “entity cache” struct (to retrieve and manage larger numbers of entities from Wikibase/Wikidata), and an “entity diff” struct.

The latter is something I had started in PHP before, but never really finished. The idea is that, when creating or updating an entity, instead of painstakingly testing if each statement/label/etc. exists, one simply creates a new, blank item, fills it with all the data that should be in there, and then generates a “diff” to a blank (for creating) or existing (for updating) entity. That diff can then be passed to the wbeditentity API action. The diff generation can be fine-tuned, e.g. only add English labels, or add/update (but not remove) P31 statements.
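
As a toy illustration of the idea (deliberately not the wikibase crate’s actual API), a diff over labels alone might look like this:

    // Toy illustration of the "entity diff" idea, not the wikibase crate's API:
    // fill a blank entity with the desired state, diff it against the existing
    // one, and send only the difference to wbeditentity.
    use std::collections::HashMap;

    #[derive(Default)]
    struct Entity {
        labels: HashMap<String, String>, // language code => label
    }

    // Labels present in `wanted` but missing or different in `current`.
    // Removals are ignored, mirroring the "add/update but do not remove" mode.
    fn diff_labels(current: &Entity, wanted: &Entity) -> HashMap<String, String> {
        wanted
            .labels
            .iter()
            .filter(|(lang, label)| current.labels.get(*lang) != Some(*label))
            .map(|(lang, label)| (lang.clone(), label.clone()))
            .collect()
    }

    fn main() {
        let current = Entity::default(); // blank, i.e. "create a new item"
        let mut wanted = Entity::default();
        wanted.labels.insert("en".into(), "Example publication".into());
        let patch = diff_labels(&current, &wanted);
        println!("{:?}", patch); // in practice, serialized and sent to wbeditentity
    }

The real thing does this across labels, descriptions, statements, and so on, with the per-group fine-tuning described above.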

Armed with these two crates, I set out to re-create functionality that I had written in PHP before: creation and updating of items for scientific publications, mainly used in my SourceMD tool. The code underlying that tool has grown over the years, meaning it’s a mess, and it has also developed some idiosyncrasies that lead to unfortunate edits.

For a rewrite in Rust, I also wanted to make the code more modular, especially regarding the data sources. I did find a crate to query CrossRef, but not much else. So I wrote new crates to query PubMed, ORCID, and Semantic Scholar. All these crates are completely independent of MediaWiki/Wikibase; they can be re-used in any kind of Rust code related to scientific publications. I consider them a sound investment in the Rust crate ecosystem.

With these crates in a basic but usable state, I went on to write papers, Rust code (not a crate just yet) that gathers data from the above sources and injects it into Wikidata. I wrote a Rust trait to represent a generic source, and then wrote adapter structs for each of the sources. Finally, I added some wrapper code to take a list of adapters, query them about a paper, and update Wikidata accordingly. It can already

  • iteratively gather IDs (supply a PubMed ID, PubMed might get a DOI, which then can get you data from ORCID)
  • find item(s) for these IDs on Wikidata
  • gather information about the authors of the paper
  • find items for these authors on Wikidata
  • create new author items, or update existing ones, on Wikidata
  • create new paper items, or update existing ones, on Wikidata (no author statement updates yet)

The adapter trait is designed both to unify data across sources (e.g. use standardized author information) and to allow updating paper items with source-specific data (e.g. publication dates, MeSH terms). The system is open to adding more adapters for different sources. It is also flexible enough to extend to other, similar “publication types”, such as books, or maybe even artworks. My test example shows how easy it is to use in other code; indeed, I am already using it in Rust code I developed for my work (publication in progress).
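
To give a feel for the shape of that adapter trait (illustrative names, not the ones actually used in papers):

    // Illustrative sketch only; the real trait and struct names differ.
    #[derive(Default)]
    struct GenericAuthor {
        name: String,
        orcid_id: Option<String>,
    }

    #[derive(Default)]
    struct GenericWork {
        doi: Option<String>,
        pubmed_id: Option<String>,
        authors: Vec<GenericAuthor>,
    }

    trait PublicationSource {
        /// Try to derive this source's own ID from the IDs we already have
        /// (e.g. PubMed can turn a PubMed ID into a DOI).
        fn find_own_id(&self, work: &GenericWork) -> Option<String>;

        /// Fill in unified, source-independent data: title, authors, further IDs.
        fn update_generic_data(&self, work: &mut GenericWork);

        /// Add source-specific statements (publication dates, MeSH terms, ...)
        /// to the item under construction; serde_json::Value stands in for
        /// whatever entity representation the wrapper code uses.
        fn add_specific_statements(&self, work: &GenericWork, item: &mut serde_json::Value);
    }

The wrapper code then just walks a list of boxed adapters (think Vec<Box<dyn PublicationSource>>), letting each one contribute what it knows.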

I see all of this as seeding the Rust crate ecosystem with easily reusable, MediaWiki-related code. I will add more such code in the future, and hope this will help the adoption of Rust in the MediaWiki programmer community. Watch this space.

Update: Now available as a crate!

Deleted gender wars

After reading the excellent analysis of AfD vs. gender by Andrew Gray, where he writes about the articles that faced and survived the “Articles for Deletion” process, I couldn’t help but wonder what happened to the articles that were not kept, that is, where AfD was “successful”.

So I quickly took all article titles from English Wikipedia, article namespace, that were marked as “deleted” in the logging table on the database replica (this may include cases where the article was deleted but later re-created under the same name), a total of 4,263,370 at the time of writing.

I filtered out some obvious non-name article titles, but which of the remaining ones were about humans? And which of those were about men, and which about women? If only there were an easily accessible online database of male and female first names… oh wait…

So I took male and female first names, as well as last (family) names, from Wikidata. Titles that end with a known family name are classified as male, female, or unknown, based on their first name.
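
The classification step is roughly this (a sketch that ignores title normalisation and multi-part names):

    use std::collections::HashSet;

    #[derive(Debug)]
    enum Gender {
        Male,
        Female,
        Unknown,
    }

    // Classify a deleted article title like "Jane Doe": only titles ending in a
    // known family name are treated as people, and the gender is decided by the
    // first name.
    fn classify(
        title: &str,
        male_first: &HashSet<&str>,
        female_first: &HashSet<&str>,
        family: &HashSet<&str>,
    ) -> Option<Gender> {
        let parts: Vec<&str> = title.split(' ').collect();
        if parts.len() < 2 || !family.contains(*parts.last()?) {
            return None; // not counted as a person
        }
        Some(if male_first.contains(parts[0]) {
            Gender::Male
        } else if female_first.contains(parts[0]) {
            Gender::Female
        } else {
            Gender::Unknown
        })
    }

    fn main() {
        // In the real run, these sets come from Wikidata (given names and family names).
        let male: HashSet<&str> = ["John"].iter().copied().collect();
        let female: HashSet<&str> = ["Jane"].iter().copied().collect();
        let family: HashSet<&str> = ["Doe"].iter().copied().collect();
        println!("{:?}", classify("Jane Doe", &male, &female, &family));
    }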

Male: 289,036 (50%)
Female: 86,281 (16%)
Unknown: 196,562 (34%)
Total: 571,879

Of the (male+female) articles, 23% are about women, which almost exactly matches Andrew’s ratio of women in BLP (Biographies of Living People). That would indicate no significant gender bias in actually deleted articles.

Of course, this was just a quick calculation. The analysis code and the raw data, plus instructions on how to re-create it, are available here, in case you want to play further.

Dealing with the Rust

Rust is an up-and-coming programming language, developed by Mozilla, and is used in the Firefox rendering engine as well as the Node Package Manager, amongst others. There is a lot to say about Rust; suffice it to say that it is designed to be memory-safe and fast (think: C or better), it can compile to WebAssembly, and it has been voted “most loved language” on StackOverflow in 2017 and 2018. As far as new-ish languages go, this is one to keep an eye on.

Rust comes with a fantastic package manager, Cargo, which, by default, uses crates (aka libraries) from crates.io, the central repository for Rust code. As part of my personal and professional attempt at grasping Rust (it has a bit of a learning curve), I wrote a tiny crate to access the API of MediaWiki installations. Right now, it can (see the short usage sketch below)

  • run the usual API queries, and return the result as JSON (using the Rust serde_json crate)
  • optionally, continue queries for which there are more results available, and merge all the results into a single JSON result
  • log in your bot, and edit
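
The promised usage sketch — method names are from memory, so treat them as approximations and check the crate’s own examples for the real API:

    // Rough usage sketch of the mediawiki crate; names like get_query_api_json
    // are approximations - the repository's examples are authoritative.
    use std::collections::HashMap;

    fn main() {
        let api = mediawiki::api::Api::new("https://en.wikipedia.org/w/api.php")
            .expect("could not initialise the API object");

        // A plain "query" action; the result comes back as a serde_json::Value
        let mut params: HashMap<String, String> = HashMap::new();
        params.insert("action".into(), "query".into());
        params.insert("list".into(), "search".into());
        params.insert("srsearch".into(), "Rust programming language".into());

        // An "_all" variant follows "continue" and merges everything into one result
        let result = api.get_query_api_json(&params).expect("query failed");
        println!("{}", result["query"]["searchinfo"]["totalhits"]);
    }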

This represents a bare-bones approach, but it would already be quite useful for simple command-line tools and bots. Because Rust compiles into stand-alone binaries, such tools are easy to run; no issues with Composer and PHP versions, node.js version/dependency hell, Python virtualenvs, etc.

The next functionality to implement might include OAuth logins, and a SPARQL query wrapper.

If you know Rust, and would like to play, the crate is called mediawiki, and the initial version is 0.1.0. The repository has two code examples to get you started. Ideas, improvements, and Pull Requests are always welcome!

Update: A basic SPARQL query functionality is now in the repo (not the crate – yet!)

Inventory

The Cleveland Museum of Art recently released 30,000 images of art under CC-Zero (~public domain). Some of the good people on Wikimedia Commons have begun uploading them there, to be used, amongst others, by Wikipedia and Wikidata.

But how to find the relevant Wikipedia article (if there is one), or the Wikidata item, for such a picture? One way would be via titles, names, etc. But the filenames are currently something like Clevelandart_12345.jpg, and while the title is in the wiki text, it is neither easily found nor overly precise. Thankfully, the uploader also includes the accession number (aka inventory number). That one is, in conjunction with the museum, a lot more precise, and it can be found in the rendered HTML reasonably reliably.

But who wants to go through 30K files, extract the inventory numbers, and check if they are on, say, Wikidata? And what about other museums/GLAM institutions? That’s too big a task to do manually. So, I wrote some code.

First, I need to get museums and their categories on Commons. But how to find them? Wikidata comes to the rescue, yielding (at the time of writing) 9,635 museums with a Commons category.
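
The underlying query is short; here it is as a small Rust program, using P373 (Commons category) and Q33506 (museum):

    // Roughly the query behind "museums with a Commons category": instances of
    // museum (Q33506), including subclasses, that have a Commons category (P373).
    const MUSEUM_QUERY: &str = r#"
    SELECT ?museum ?commonsCategory WHERE {
      ?museum wdt:P31/wdt:P279* wd:Q33506 ;
              wdt:P373 ?commonsCategory .
    }"#;

    fn main() {
        // Paste MUSEUM_QUERY into https://query.wikidata.org/ to reproduce the
        // ~9,600 museum categories mentioned above.
        println!("{}", MUSEUM_QUERY);
    }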

Then, I need to check each museum’s category tree for files, and try to extract the inventory number. Because this requires HTML rendering on tens (hundreds?) of thousands of files, I don’t want to repeat the exercise at a later date. So, I keep a log in a database (s51203__inventory_p, if you are on Labs). It logs

  • the files that were found with an inventory number (file, number, museum)
  • the number of files with an inventory number per museum

That way, I can skip museum categories that do not have inventory numbers in their file descriptions. If one of them is updated with such numbers, I can always remove the entry that says “no inventory numbers”, thus forcing a re-index.

My initial scan is now running, slowly filling the database. It will be run on a daily basis after that, just for categories that have inventory numbers, and new categories (as per Wikidata query above).

That’s the first part. The second part is to match the files against Wikidata items. I have that as well, to a degree; I am using the collection and inventory number properties to identify a work of art. Initial tests did generate some matches, for example, this file was automatically added to this item on Wikidata. This automatic matching will run daily as well.
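
The matching itself boils down to a query over collection (P195) and inventory number (P217); a sketch, with made-up example values:

    // Given the museum's Wikidata item and an accession number extracted from
    // the Commons file page, look for the item that has collection (P195) =
    // that museum and inventory number (P217) = that number.
    fn match_query(museum_qid: &str, accession_number: &str) -> String {
        format!(
            "SELECT ?work WHERE {{ ?work wdt:P195 wd:{} ; wdt:P217 \"{}\" . }}",
            museum_qid, accession_number
        )
    }

    fn main() {
        // Both values are placeholders, not a real museum/accession pair.
        println!("{}", match_query("Q12345", "1921.1234"));
    }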

A third part is possible, but not implemented yet. Assuming that a work of art has an image on Commons, as well as a museum and an inventory number, but no Wikidata item, such an item could be created automatically. This would pose little difficulty to code, but might generate duplicate items, where an item exists but does not have the inventory number set. Please let me know if this is something worth doing. I’ll give you some numbers once the initial run is through.

The Hand-editor’s Tale

Disclaimer: I am the author of Listeria, and maintainer of ListeriaBot.

In January 2016, User:Emijrp had an idea. Why not use that newfangled Listeria tool, a bot that generates lists based on Wikidata, and puts them on Wikipedia pages, to maintain a List of Women Linguists on English Wikipedia? It seemed that a noble cause had met with (at the time) cutting edge technology to provide useful information for both readers and editors (think: red links) on Wikipedia.

The bot thus began its work, and continued dutifully for almost a year (until January 2, 2017). At that time, a community decision was made to deactivate ListeriaBot edits on the page, but to keep the list it had last generated, for manual curation. No matter what motivated that decision, it is interesting to evaluate the progress of the page in its “manual mode”.

Since the bot was deactivated, 712 days have passed (at the time of writing, 2018-12-17). Edit frequency dropped from one edit every 1-2 days by the bot, to one edit every 18 days on average.

In that time, the number of entries increased from 663 (last bot edit) to 673 (adding one entry every 71 days on average). The query (women, linguists, but no translators) used to generate the Listeria list now yields 1,673 entries. This means the list on English Wikipedia is now exactly 1,000 entries (or 148%) out of date. A similar lag for images and birth/death dates is to be expected.

The manual editors kept the “Misc” section, which was used by ListeriaBot to group entries of unknown or “one-off” (not warranting their own section) nationalities. It appears that few, if any, have been moved into appropriate sections.

It is unknown if manual edits to the list (example), the protection of which was given as a main reason to deactivate the bot, were propagated to the Wikidata item, or to the articles on Wikipedia where such exist (here, es and ca), or if they are destined to wither on the list page.

The list on Wikipedia links to 462 biography pages (likely a slight overestimate) on English Wikipedia. However, there are 555 Wikidata items from the original query that have a sitelink to English Wikipedia. The list on Wikipedia thus fails to link to (at least) 93 women linguists that exist on the same site. One example of a missing entry would be Antonella Sorace, a Fellow of the Royal Society. She is, of course, linked from another, ListeriaBot-maintained page.

Humans are vastly superior to machines in many respects. Curating lists is not necessarily one of them.

What else?

Structured Data on Commons is approaching. I have done a bit of work on converting Infoboxes into statements, that is, to generate structured data. But what about using it? What could that look like?

Taxon hierarchy for animals (image page)

Inspired by a recent WMF blog post, I wrote a simple demo on what you might call “auto-categorisation”. You can try it out by adding the line

importScript('User:Magnus Manske/whatelse.js') ;

to your common.js script.

It works for files on Commons that are used in a Wikidata item (so, ~2.8M files at the moment), though that could be expanded (e.g. scanning for templates with Qids, “depicts” in Structured data, etc.). The script then investigates the Wikidata item(s), and tries to find ways to get related Wikidata items with images.

The Night Watch (image page)

That could be simple things such as “all items that have the same creator (and an image)”, but I also added a few bespoke ones.

If the item is a taxon (e.g. the picture is of an animal), it finds the “taxon tree” by following the “parent taxon” property. It even follows branches, and constructs the longest path possible, to get as many taxon levels as possible (I stole that code from Reasonator).

A similar thing happens for all P31 (“instance of”) values, where it follows the subclass hierarchy; the London Eye is “instance of: Ferris wheel”, so you get “Ferris wheel”, its super-class “amusement ride”, etc.

The same, again, for locations, all the way up to country. If the item has a coordinate, there are also some location-based “nearby” results.

Finally, some date fields (birthdays, creation dates) are harvested for the years.

The London Eye (image page)

Each of these, if applicable, gets its own section in a box floating on the right side of the image. They link to a gallery-type SPARQL query result page, showing all items that match a constraint and have an image. So, if you look at The Night Watch on Commons, the associated Wikidata item has “Creator: Rembrandt”. Therefore, you get a “Creator” section, with a “Rembrandt” link, that opens a page showing all Wikidata items with “Creator: Rembrandt” that have an image.
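
The query behind such a link is essentially this, using creator (P170) and image (P18):

    // Roughly the query behind the "Rembrandt" link: every item with creator
    // (P170) = Rembrandt (Q5598) that also has an image (P18), displayed as an
    // image grid on the query service.
    const REMBRANDT_GALLERY: &str = r#"
    #defaultView:ImageGrid
    SELECT ?item ?image WHERE {
      ?item wdt:P170 wd:Q5598 ;
            wdt:P18  ?image .
    }"#;

    fn main() {
        println!("{}", REMBRANDT_GALLERY);
    }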

In a similar fashion, there are links to “all items with inception year 1642”, to items with “movement: baroque”, and so on. You get the idea.

Now, this is just a demo, and there are several issues with it. First, it uses Wikidata, as there is no Structured Data on Commons yet. That limits it to files used in Wikidata items, and to the property schema and tree structure used on Wikidata. Some of the links that are offered lead to ridiculously large queries (all items that are an instance of a subclass of “entity”, anyone?), some just return the same file you came from (because it is the only item with an image created by Painter X), and some look useful but time out anyway. And, as it is, the way I query the APIs would likely not be sustainable for use by everyone by default.

But then, this is what a single guy can hack together in a few hours, using a “foreign” database that was never intended to make browsing files easy. Given these limitations, I think about what the community can do with bespoke, fit-for-purpose Structured Data and some well-designed code, and I am very hopeful.

Note: Please feel free to work with the JS code; it also contains my attempt to show the results in a dialog box on the File Page, but I couldn’t get it to look nice, so I keep using external links.

Match point

Mix’n’match is one of my more popular tools. It contains a number of catalogs, each in turn containing hundreds or even millions of entries, that could (and often should!) have a corresponding Wikidata item. The tool offers various ways to make it easier to match an entry in a catalog to a Wikidata item.

While the user-facing end of the tool does reasonably well, the back-end has become a bit of an issue. It is a bespoke, home-grown MySQL database that has changed a lot over the years to incorporate more (and more complex) metadata to go with the core data of the entries. Entries, birth and death dates, coordinates, third-party identifiers are all stored in separate, dedicated tables. So is full-text search, which is not exactly performant these days.

Perhaps the biggest issue, however, is the bottleneck in maintaining that data: myself. Since I am the only person with write access to the database, all maintenance operations have to run through me. And even though I have added import functions for new catalogs, and run various automatic update and maintenance scripts on a regular basis, the simple task of updating an existing catalog depends on me, and it is rather tedious work.

At the 2017 Wikimania in Montreal, I was approached by the WMF about Mix’n’match; the idea was that they would start their own version of it, in collaboration with some of the big providers of what I call catalogs. My recommendation to the WMF representative was to use Wikibase, the data management engine underlying Wikidata, as the back-end, to allow for a community-based maintenance of the catalogs, and use a task specific interface on top of that, to make the matching as easy as possible.

As it happens with the WMF, a good idea vanished somewhere in the mills of bureaucracy, and was never heard from again. I am not a system administrator (or, let’s say, it is not the area where I traditionally shine), so setting up such a system myself was out of the question at that time. However, these days there is a Docker image by the German chapter that incorporates MediaWiki, Wikibase, Elasticsearch, the Wikibase SPARQL service, and QuickStatements (so cool to see one of my own tools in there!) in a single package.

Long story short, I set up a new Mix’n’match using Wikibase as the back-end.

Automatic matches

The interface is similar to the current Mix’n’match (I’ll call it V1, and the new one V2), but a complete re-write. It does not support all of the V1 functionality – yet. I have set up a single catalog in V2 for testing, one that is also in V1. Basic functionality in V2 is complete, meaning you can match (and unmatch) entries in both Mix’n’match and Wikidata. Scripts can import matches from Wikidata, and do (preliminary) auto-matches of entries to Wikidata, which need to be confirmed by a user. This, in principle, is similar to V1.

There are a few interface perks in V2. There can be more than one automatic match for an entry, and they are all shown as a list; one can set the correct one with a single click. Manually setting a match will open a full-text Wikidata search drop-down inline, often sparing one the need to search on Wikidata and then copy the QID to Mix’n’match. Also, the new auto-matcher takes the type of the entry (if any) into account: given a type Qx, only name-matched Wikidata items that are either “instance of” (P31) Qx (or one of its subclasses), or that have no P31 at all, are used as matches; that should improve auto-matching quality.
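
As a sketch, that type filter amounts to the following (assuming the subclass tree of Qx has already been resolved into a flat list):

    // Keep a name-match candidate if it is an instance (P31) of the entry's
    // type Qx or of one of its subclasses, or if it has no P31 at all.
    struct Candidate {
        qid: String,
        p31: Vec<String>, // "instance of" values of the candidate item
    }

    fn keep_candidate(candidate: &Candidate, allowed_types: &[String]) -> bool {
        // allowed_types = Qx plus all of its subclasses, resolved beforehand
        candidate.p31.is_empty()
            || candidate.p31.iter().any(|q| allowed_types.contains(q))
    }

    fn main() {
        let douglas = Candidate { qid: "Q42".to_string(), p31: vec!["Q5".to_string()] };
        let human_types = vec!["Q5".to_string()]; // "human" and its subclasses
        println!("keep {}? {}", douglas.qid, keep_candidate(&douglas, &human_types));
    }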

Manual matching with inline Wikidata search

But the real “killer app” lies in the fact that everything is stored in Wikibase items. All of Mix’n’match can be edited directly in MediaWiki, just like Wikidata. Everything can be queried via SPARQL, just like Wikidata. Mass edits can be done via QuickStatements, just like… well, you get the idea. But users will just see the task-specific interface, hiding all that complexity, unless they really want to peek under the hood.

So much for the theory; sadly, I have run into some real-world issues that I do not know how to fix on my own (or do not have the time and bandwidth to figure out; same effect). First, as I know from bitter experience, MediaWiki installations attract spammers. Because I really don’t have time to clean up after spammers on this one, I have locked account creation and editing; that means only I can run QuickStatements on this wiki (let me know your Wikidata user name and email, and I’ll create an account for you, if you are interested!). Of course, this somewhat defeats the purpose of having the community maintain the back-end, but what can I do? Since the WMF has bowed out in silence, the wiki isn’t using the WMF single sign-on. The OAuth extension, which was originally developed for that specific purpose, ironically doesn’t work with MediaWiki as a client.

But how can people match entries without an account, you ask? Well, for the Wikidata side, they have to use my Widar login system, just like in V1. And for the V2 wiki, I have … enabled anonymous editing of the item namespace. Yes, seriously. I just hope that Wikibase data spamming is still a bit in the future, for now. Your edits will be credited to your Wikidata user name in edit summaries and statements. Yes, I log all edits as Wikibase statements! (Those are also used for V2 Recent Changes, but since Wikibase only stores day-precision timestamps, Recent Changes looks a bit odd at the moment…)

I also ran into a few issues with the Docker system, and I have no idea how to fix them. These include:

  • Issues with QuickStatements (oh the irony)
  • SPARQL linking to the wrong server
  • Fulltext search is broken (this also breaks the V2 search function; I am using prefix search for now)
  • I have no idea how to back up/restore any of this (bespoke configuration, MySQL)

None of the above are problems with Mix’n’match V2 in principle, but rather engineering issues to fix. Help would be most welcome.

Other topics that would need work and thought include:

  • Syncing back to Wikidata (probably easy to do).
  • Importing of new catalogs, and updating of existing ones. I am thinking about a standardized interchange format, so I can convert from various input formats (CSV files, auto-scrapers, MARC 21, SPARQL interfaces, MediaWiki installations, etc.).
  • Meta-data handling. I am thinking of a generic method of storing Wikidata property Px ID and a corresponding value as Wikibase statements, possibly with a reference for the source. That would allow most flexibility for storage, matching, and import into Wikidata.

I would very much like to hear what you think about this approach and this implementation. I would like to go ahead with it, unless there are fundamental concerns. V1 and V2 would run in parallel, at least for the time being. Once V2 has more functionality, I would import new catalogs into V2 rather than V1. Suggestions for test catalogs (maybe something with interesting metadata) are most welcome. And every bit of technical advice, or better, hands-on help, would be greatly appreciated. And if the WMF or WMDE want to join in, or take over, let’s talk!

The blind referee

A quick blog post, before the WordPress editor dies on me again… Wikidata is great. Wikidata with references is even better. So I have written a little tool called Referee. It checks a Wikidata item, collects web pages that are linked via external-ID statements and via associated Wikipedia pages, and checks them for potential matches of statements (birth dates, locations, etc.). If you add a little invocation to your common.js page on Wikidata:
importScript( 'User:Magnus_Manske/referee.js' );
it will automatically check, on every Wikidata item load, if there are potential references for that item in the cache, and display them with the appropriate statement, with one-click add/reject links. If you put this line:
referee_mode = 'manual' ;
before the importScript invocation, it will not check automatically, but wait for you to click the “Referee” link in the toolbox sidebar. However, in manual mode, it will force a check (which might take a few seconds) if there are no cached reference candidates; the toolbar link will remain highlighted while the check is running. I made a brief video demonstrating the interface. Enjoy. Addendum: I forgot to mention that the tool does not work on certain instances (P31) of items, namely “taxon” and “scholarly article”. This is to keep the load on the scraper low; plus, these types are special cases and would likely profit more from dedicated scraper systems.

Wikipedia, Wikidata, and citations

As part of an exploratory census of citations on Wikipedia, I have generated a complete (yeah, right) list of all scientific publications cited on Wikispecies and on English and German Wikipedia. This is based on the rendered HTML of the respective articles, and tries to find DOIs, PubMed IDs, and PubMed Central IDs. The list is kept up to date (with only a few minutes’ lag). I also continuously match the publications I find to Wikidata, and create the missing items, most cited ones first.
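
The ID extraction is conceptually simple; a minimal sketch for DOIs, using the regex crate (trailing punctuation, HTML entities, and the PubMed/PubMed Central IDs need extra handling):

    use regex::Regex;

    // Pull DOI-like strings out of the rendered HTML of an article, using the
    // common "10.NNNN/suffix" form.
    fn extract_dois(html: &str) -> Vec<String> {
        let re = Regex::new(r#"\b10\.\d{4,9}/[^\s"'<>]+"#).unwrap();
        re.find_iter(html).map(|m| m.as_str().to_string()).collect()
    }

    fn main() {
        let html = r#"<a href="https://doi.org/10.1000/xyz123">A cited paper</a>"#;
        println!("{:?}", extract_dois(html)); // ["10.1000/xyz123"]
    }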

A bit about the dataset (“citation” here means that an article mentions/links to a publication ID) at this point in time:

  • 476,560 distinct publications, of which
    • 261,486 have a Wikidata item
    • 214,425 have no Wikidata match
    • 649 cannot be found or created as a Wikidata item (parsing error, or the DOI does not exist)
  • 1,968,852 articles tracked across three Wikimedia projects (some citing publications)
  • 717,071 total citations (~1.5 citations per publication)
  • The most cited publication is Generation and initial analysis of more than 15,000 full-length human and mouse cDNA sequences (used 3,403 times)
  • Publications with a Wikidata item are cited 472,793 times, those without 244,191 times
  • 266 publications are cited from all three Wikimedia sites (263 have a Wikidata item)

There is no interface for this project yet. If you have a Toolforge (formerly known as Labs) account, you can look at the database as s52680__science_source_p.

Judgement Day

Wikipedia label by Gmhofmann on Commons

At the dawn of Wikidata, I wrote a tool called “Terminator”. Not just because I wanted to have one of my own, but as a pun on the term “term”, used in the database table name (“wb_term”) where Wikidata labels, descriptions, and aliases are stored. The purpose of the tool is to find important (by some definition) Wikidata items that lack a label in a specific language. This can be very powerful, especially in languages with low Wikidata participation; setting the label for teacher (Q37226) in a language will immediately allow all Wikidata items using that item (as an occupation, perhaps) to show that label. A single edit can improve hundreds or thousands of items, and make them more accessible in that language.
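
For illustration, the underlying question can be phrased as a SPARQL query; the tool works from its own database rather than running something like this live (which would time out on the full dataset):

    // Illustration only: items that lack a Welsh ("cy") label, most sitelinks
    // first. The Terminator uses its own tables and scoring instead.
    const MISSING_WELSH_LABELS: &str = r#"
    SELECT ?item ?sitelinks WHERE {
      ?item wikibase:sitelinks ?sitelinks .
      FILTER NOT EXISTS { ?item rdfs:label ?label . FILTER(LANG(?label) = "cy") }
    }
    ORDER BY DESC(?sitelinks)
    LIMIT 100"#;

    fn main() {
        println!("{}", MISSING_WELSH_LABELS);
    }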

Well, Wikidata has grown a lot since I started that tool, and the Terminator didn’t cope well with the growth; it was limited to a handful of languages, and the daily update was compute-intensive. Plus, the interface was slow and ugly. Time for a rewrite!

So without further ado, I present version 2 of the Terminator tool. Highlights:

  • Now covers all Wikidata languages
  • Get the top items with missing labels, descriptions, or Wikipedia articles
  • Sort items by total number of claims, external IDs, sitelinks, or a compound score
  • The database currently contains the top (by compound score) ~4.1 million items on Wikidata
  • Updated every 10 minutes
  • Search for missing labels in multiple languages (e.g. German, Italian, or Welsh)
  • Only show items that have labels in languages you know
  • Automatically hides “untranslatable” items (scientific articles, humans, Wikipedia-related pages such as templates and categories), unless you want those as well
  • Can use a SPARQL query to filter items (only shows items that match all the above, plus are in the SPARQL result, for results with <10K items or so)
  • Game mode (single, unsorted random result, more details, re-flows on mobile)

Please let me know through the usual channels about bugs and feature requests. I have dropped some functionality from the old version, such as data download, but that version is still linked from the new main page. Enjoy!