
A Scanner Rusty

One of my most-used WikiVerse tools is PetScan. It is a complete re-write of several other PHP-based tools, in C++ for performance reasons. PetScan has turned into the Swiss Army Knife of doing things with Wikipedia, Wikidata, and other projects.

But PetScan has also developed a few issues over time. It is suffering from the per-tool database connection limit of 10, enforced by the WMF. It also has some strange bugs, one of them creating weirdly named files on disk, which generally does not inspire confidence. Finally, from a development/support POV, it is the “odd man out”, as none of my other WikiVerse tools are written in C++.

So I went ahead and re-wrote PetScan, this time in Rust. If you read this blog, you’ll know that Rust is my recent go-to language. It is fast, safe, and comes with a nice collection of community-maintained libraries (called “crates”). The new PetScan:

  • uses MediaWiki and Wikibase crates, which simplifies coding considerably
  • automatically chunks database queries, which should improve reliability, and could later be developed into multi-threaded queries (see the sketch after this list)
  • pools replica database access from several of my other HTML/JS-only tools (which do not use the database themselves, but still get allocated connections)
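
The chunking idea itself is simple; a minimal sketch, using only the standard library (the actual PetScan code, its SQL layer, and the chunk size differ):

    // Minimal sketch of query chunking; names and sizes are illustrative only.
    const CHUNK_SIZE: usize = 5_000;

    /// Build one "WHERE page_id IN (...)" query per chunk of page IDs,
    /// instead of a single huge query that may time out or hit limits.
    fn chunked_queries(page_ids: &[u64]) -> Vec<String> {
        page_ids
            .chunks(CHUNK_SIZE)
            .map(|chunk| {
                let id_list: Vec<String> = chunk.iter().map(|id| id.to_string()).collect();
                format!(
                    "SELECT page_id, page_title FROM page WHERE page_id IN ({})",
                    id_list.join(",")
                )
            })
            .collect()
    }

    fn main() {
        let ids: Vec<u64> = (1..=12_345).collect();
        let queries = chunked_queries(&ids);
        println!("{} chunked queries generated", queries.len());
        // Each chunk can be run sequentially for now, or handed to worker threads later.
    }

Each chunk keeps the individual query small, and the partial results are merged afterwards.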

Long story short, I want to replace the C++ version with the Rust version. Most of the implementation is done, but I can’t think of all possible corner cases myself. So I ask interested community members to give the test instance of PetScan V2 a whirl. I re-used the web interface from PetScan V1, so it should look very familiar. If your query works, it should do so much more reliably than in V1. The code is on GitHub, and so is the new issue tracker, where you can file bug reports and feature requests.

Batches of Rust

QuickStatements is a workhorse for Wikidata, but it has had a few problems of late.

One of those is bad performance with batches. Users can submit a batch of commands to the tool, and these commands are then run on the Labs server. This mechanism has been bogged down for several reasons:

  • Batch processing written in PHP
  • Each batch running in a separate process
  • A limit of 10 database connections per tool (web interface, batch processes, testing etc. combined) on Labs
  • A limit of (16? observed, but not validated) simultaneous processes per tool on the Labs cloud
  • No good way to auto-start a batch process when a batch is submitted (currently, a PHP process auto-starts every 5 minutes and exits if there is nothing to do)
  • Large backlog developing

Amongst continued bombardment on Wiki talk pages, Twitter, Telegram etc. that “my batch is not running (fast enough)”, I set out to mitigate the issue. My approach is to run all the batches in a new processing engine, written in Rust. This has several advantages:

  • Faster and easier on the resources than PHP
  • A single process running on Labs cloud
  • Each batch is a thread within that process (see the sketch after this list)
  • Checking for a batch to start every second (if you submit a new batch, it should start almost immediately)
  • Use of a database connection pool (the individual thread might have to wait a few milliseconds to get a connection, but the system never runs out)
  • Limiting simultaneous batch processing for batches from the same user (currently: 2 batches max) to avoid the MediaWiki API “you-edit-too-fast” error
  • Automatic handling of maxlag, bot/OAuth login etc. by using my mediawiki crate
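
The thread-per-batch design mentioned above, as a heavily simplified, standard-library-only sketch (the real engine reads batches from the tool database, uses a proper connection pool, and edits via the MediaWiki API):

    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};
    use std::thread;
    use std::time::Duration;

    const MAX_BATCHES_PER_USER: usize = 2; // avoid the "you edit too fast" API error

    struct Batch {
        id: u64,
        user: String,
    }

    fn process_batch(batch: Batch) {
        // Placeholder for the actual work: read commands, perform edits, update status.
        println!("Processing batch {} for {}", batch.id, batch.user);
        thread::sleep(Duration::from_secs(2));
    }

    fn main() {
        // Pretend queue of submitted batches; in reality this comes from the database.
        let queue = vec![
            Batch { id: 1, user: "Alice".into() },
            Batch { id: 2, user: "Alice".into() },
            Batch { id: 3, user: "Alice".into() },
            Batch { id: 4, user: "Bob".into() },
        ];
        let queue = Arc::new(Mutex::new(queue));
        let running: Arc<Mutex<HashMap<String, usize>>> = Arc::new(Mutex::new(HashMap::new()));

        // Check for startable batches once per second (a real daemon loops forever).
        for _ in 0..10 {
            let mut q = queue.lock().unwrap();
            let mut i = 0;
            while i < q.len() {
                let user = q[i].user.clone();
                let user_count = *running.lock().unwrap().get(&user).unwrap_or(&0);
                if user_count < MAX_BATCHES_PER_USER {
                    let batch = q.remove(i);
                    *running.lock().unwrap().entry(user.clone()).or_insert(0) += 1;
                    let running = Arc::clone(&running);
                    // One thread per batch, within the single engine process.
                    thread::spawn(move || {
                        process_batch(batch);
                        *running.lock().unwrap().get_mut(&user).unwrap() -= 1;
                    });
                } else {
                    i += 1;
                }
            }
            drop(q);
            thread::sleep(Duration::from_secs(1));
        }
    }

The per-user counter is what enforces the “2 batches max” limit mentioned above.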

This is now running on Labs, processing all (~40 at the moment) open batches simultaneously. Grafana shows the spikes in edits, but no increased lag so far. The process is given 4GB of RAM, but could probably do with a lot less (for comparison, each individual PHP process used 2GB).

A few caveats:

  • This is a “first attempt”. It might break in new, fun, unpredicted ways
  • It will currently not process batches that deal with Lexemes. This is mostly a limitation of the wikibase crate I use, and will likely get solved soon. In the meantime, please run Lexeme batches only within the browser!
  • I am aware that I now have code duplication (the PHP and the Rust processing). For me, the solution will be to implement QuickStatements command parsing in Rust as well, and to replace PHP completely. I am aware that this will impact third-party use of QuickStatements (e.g. the Wikibase Docker container), but the PHP and Rust sources are independent, so there will be no breakage; of course, the Rust code will likely evolve away from PHP in the long run, possibly causing incompatibilities

So far, it seems to be running fine. Please let me know if you encounter any issues (unusual errors in your batch, weird edits etc.)!

Bad credentials

So, there was an issue with QuickStatements on Friday.

As users of that tool will know, you can run QuickStatements either from within your browser, or “in the background” from a Labs server. Originally, these “batch edits” were performed as QuickStatementsBot, mentioning the batch and the user who submitted it in the edit summary. Later, through a pull request, QuickStatements gained the ability to run batch edits as the user who submitted the batch. This is done by storing the user’s OAuth information, and playing it back to the Wikidata API for the edits. So far, so good.

However, as with many of my databases on Labs, I made the QuickStatements database open for “public reading”, that is, any Labs tool account could see its contents. Including the OAuth login credentials. Thus, since the introduction of the “batch edit as user” feature, up until last Friday, anyone with a login on Labs could, theoretically, have performed edits as anyone who had submitted a QuickStatements batch, by copying their OAuth credentials.

We (WMF staff and volunteers, including myself) are not aware that any such user account spoofing has taken place (security issue). If you suspect that this has happened, please contact WMF staff or myself.

Once the issue was reported, the following actions were taken:

  • deactivation of the OAuth credentials of QuickStatements, so no more edits via spoofed user OAuth information could take place
  • removal of the “publicly” (Labs-internally) visible OAuth information from the database
  • deactivation of the QuickStatements bots and the web interface

Once spoofed edits were no longer possible, I went ahead and moved the OAuth storage to a new database that only the QuickStatements “tool user” (the instance of the tool that is running, and myself) can see. I then got a new OAuth consumer for QuickStatements, and restarted the tool. You can now use QuickStatements as before. Your OAuth information will be secure now. Because of the new OAuth consumer for QuickStatements, you will have to log in again once.

This also means that all the OAuth information stored prior to Friday is no longer usable, and has been deleted. As a consequence, batches you submitted before Friday will now fall back on the aforementioned QuickStatementsBot, and no longer edit as your user account. If it is very important to you that those edits appear under your user account, please let me know. All new batches will run edits under your user account, as before.

My apologies for this incident. Luckily, there appears to be no actual damage done.

The Corfu Projector

I recently spent a week on Corfu. I was amazed by the history, the culture, the traditions, and, of course, the food. I was, however, appalled by the low coverage of Corfu localities on Wikidata. While I might be biased, living in the UK where every postbox is a historic monument, a dozen or so items for Corfu’s capital still seemed a bit thin. So I thought a bit about how to change this. Not just for Corfu, but in general for a geographic entity. I know I can’t do this all by myself, even for “just” Corfu, but I can make it easier and more appealing for anyone who might have an interest in helping out.

An important first step is to get an idea of what items there are. For a locality, this can be done in two ways: by coordinates, or by administrative unit. Ideally, relevant items should have both, which makes a neat first to-do-list (items with coordinates but without administrative unit, and vice versa). Missing images are also a low-hanging fruit, to use the forbidden management term. But there are more items that relate to a locality, besides geographical objects. People are born, live, work, and die there. A region has typical, local food, dresses, traditions, and events. Works are written about it, paintings are painted, and songs are sung. Plants and animals can be specific to the place. The list goes on.

All this does not sound like just a list of buildings; it sounds like its own project. A WikiProject. And so I wrote a tool specifically to support WikiProjects on Wikidata that deal with a locality. I created a new one for Corfu, and added some configuration data as a sub-page. Based on that data, the Projector tool can generate a map and a series of lists with items related to the project (Corfu example).

The map shows all items with coordinates (duh), either via the “administrative unit” property tree, or via coordinates within pre-defined areas. You can draw new areas on the map, and then copy the area definition into the configuration page. Items without an administrative unit will have a thick border, and items without an image will be red.

There are also lists of items, including the locations from the map (plus the ones in the administrative unit, but without coordinates), people related to these locations, creative works (books, paintings etc.), and “things” (food, organizations, events, you name it). All of this is generated on page load via SPARQL. The lists can be filtered (e.g. only items without an image), and the items in the list can be opened in other tools, including WD-FIST to find images, Terminator to find missing labels, Recent Changes within these items over the last week, PetScan (all items, or just the filtered ones), and Tabernacle. And, of course, WikiShootMe for the entire region. I probably forgot a few tools; suggestions are (as always) welcome.

Adopting this should be straightforward for other WikiProjects: just copy my Corfu example configuration (without the areas), adapt the “root regions”, and it should work (using “X” for “WikiProject X”). I am looking forward to growing this tool in functionality, and maybe extending it to other, non-location-based projects.

Papers on Rust

I have written about my attempt with Rust and MediaWiki before. This post is an update on my progress.

I started out writing a MediaWiki API crate to be able to talk to MediaWiki installations from Rust. I was then pointed to a wikibase crate by Tobias Schönberg and others, to which I subsequently contributed some code, including improvements to the existing codebase, but also an “entity cache” struct (to retrieve and manage larger amounts of entities from Wikibase/Wikidata), as well as an “entity diff” struct.

The latter is something I had started in PHP before, but never really finished. The idea is that, when creating or updating an entity, instead of painstakingly testing if each statement/label/etc. exists, one simply creates a new, blank item, fills it with all the data that should be in there, and then generates a “diff” to a blank (for creating) or existing (for updating) entity. That diff can then be passed to the wbeditentity API action. The diff generation can be fine-tuned, e.g. only add English labels, or add/update (but not remove) P31 statements.
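
As a toy illustration of the diff principle, restricted to labels and using plain serde_json values (the wikibase crate works on proper entity structs, and the real diff also covers descriptions, aliases, claims, and sitelinks):

    // Toy label diff; requires the serde_json crate.
    use serde_json::{json, Map, Value};

    /// Compare the labels of the desired entity state against the existing entity,
    /// and return only the labels that need to be added or changed.
    fn label_diff(existing: &Value, desired: &Value) -> Value {
        let empty = Map::new();
        let old = existing["labels"].as_object().unwrap_or(&empty);
        let mut changed = Map::new();
        if let Some(new) = desired["labels"].as_object() {
            for (lang, label) in new {
                if old.get(lang) != Some(label) {
                    changed.insert(lang.clone(), label.clone());
                }
            }
        }
        // This is the shape expected by the "data" parameter of wbeditentity.
        json!({ "labels": changed })
    }

    fn main() {
        let existing = json!({
            "labels": { "en": { "language": "en", "value": "Douglas Adams" } }
        });
        let desired = json!({
            "labels": {
                "en": { "language": "en", "value": "Douglas Adams" },
                "de": { "language": "de", "value": "Douglas Adams" }
            }
        });
        // Only the German label ends up in the diff.
        println!("{}", label_diff(&existing, &desired));
    }

If the diff comes out empty, there is simply nothing to send to the API, so no edit happens.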

Armed with these two crates, I went on to re-create functionality that I had written in PHP before: creation and updating of items for scientific publications, mainly used in my SourceMD tool. The code underlying that tool has grown over the years, meaning it’s a mess, and has also developed some idiosyncrasies that lead to unfortunate edits.

For a rewrite in Rust, I also wanted to make the code more modular, especially regarding the data sources. I did find a crate to query CrossRef, but not much else. So I wrote new crates to query pubmed, ORCID, and Semantic Scholar. All these crates are completely independent of MediaWiki/Wikibase; they can be re-used in any kind of Rust code related to scientific publications. I consider them a sound investment into the Rust crate ecosystem.

With these crates in a basic but usable state, I went on to write papers, Rust code (not a crate just yet) that gathers data from the above sources and injects it into Wikidata. I wrote a Rust trait to represent a generic source, and then wrote adapter structs for each of the sources. Finally, I added some wrapper code to take a list of adapters, query them about a paper, and update Wikidata accordingly. It can already

  • iteratively gather IDs (supply a PubMed ID, PubMed might get a DOI, which then can get you data from ORCID)
  • find item(s) for these IDs on Wikidata
  • gather information about the authors of the paper
  • find items for these authors on Wikidata
  • create new author items, or update existing ones, on Wikidata
  • create new paper items, or update existing ones, on Wikidata (no author statement updates yet)

The adapter trait is designed to unify data across sources (e.g. use standardized author information), but also to allow updating paper items with source-specific data (e.g. publication dates, MeSH terms). This system is open to adding more adapters for different sources. It is also flexible enough to extend to other, similar “publication types”, such as books, or maybe even artworks. My test example shows how easy it is to use in other code; indeed, I am already using it in Rust code I developed for my work (publication in progress).
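
As a rough idea of the shape of that adapter trait (names and types here are illustrative, not the actual papers code, which works on wikibase entities):

    // Illustrative sketch of a generic "publication source" adapter trait.
    #[derive(Debug, Default)]
    struct GenericAuthor {
        name: String,
        orcid: Option<String>,
    }

    #[derive(Debug, Default)]
    struct GenericPaper {
        title: Option<String>,
        doi: Option<String>,
        pubmed_id: Option<String>,
        authors: Vec<GenericAuthor>,
    }

    /// One implementation per data source (CrossRef, PubMed, ORCID, Semantic Scholar...).
    trait PublicationSource {
        /// Human-readable name of the source, for edit summaries and logging.
        fn name(&self) -> &str;
        /// Try to find additional IDs for the paper (e.g. a PubMed ID leads to a DOI).
        fn resolve_ids(&self, paper: &mut GenericPaper);
        /// Fill in unified data (title, authors, ...) from this source.
        fn update_paper(&self, paper: &mut GenericPaper);
    }

    struct DummySource;

    impl PublicationSource for DummySource {
        fn name(&self) -> &str { "dummy" }
        fn resolve_ids(&self, paper: &mut GenericPaper) {
            if paper.doi.is_none() {
                paper.doi = Some("10.1234/example".to_string()); // pretend lookup
            }
        }
        fn update_paper(&self, paper: &mut GenericPaper) {
            if paper.title.is_none() {
                paper.title = Some("An example paper".to_string());
            }
        }
    }

    fn main() {
        let sources: Vec<Box<dyn PublicationSource>> = vec![Box::new(DummySource)];
        let mut paper = GenericPaper { pubmed_id: Some("12345678".to_string()), ..Default::default() };
        // The wrapper code iterates over the adapters (here: a single pass).
        for source in &sources {
            source.resolve_ids(&mut paper);
            source.update_paper(&mut paper);
            println!("After {}: {:?}", source.name(), paper);
        }
        // The unified paper data would then be turned into Wikidata statements.
    }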

I see all of this as seeding the Rust crate ecosystem with easily reusable, MediaWiki-related code. I will add more such code in the future, and hope this will help the adoption of Rust in the MediaWiki programmer community. Watch this space.

Update: Now available as a crate!

Deleted gender wars

After reading the excellent analysis of AfD vs. gender by Andrew Gray, where he writes about the articles that faced and survived the “Articles for Deletion” process, I couldn’t help but wonder what happened to the articles that were not kept, that is, where AfD was “successful”.

So I quickly took all article titles from English Wikipedia, article namespace, that were in the logging table in the database replica as “deleted” (this may include cases where the article was deleted but later re-created under the same name), a total of 4,263,370 at the time of writing.

I filtered out some obvious non-name article titles, but which of the remaining ones were about humans? And which of those were about men, which about women? If there only were an easily accessible online database that has male and female first names… oh wait…

So I took male and female first names, as well as last (family) names, from Wikidata. For titles that end with a known family name, I grouped the title as male, female, or unknown, based on its first name.
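
The grouping itself is just a lookup against those name sets; a sketch (the actual analysis code is linked at the end of this post):

    // Sketch of the grouping by first/family name; the name sets come from Wikidata.
    use std::collections::HashSet;

    #[derive(Debug)]
    enum Group {
        Male,
        Female,
        Unknown,
    }

    fn classify(title: &str, male: &HashSet<&str>, female: &HashSet<&str>, family: &HashSet<&str>) -> Option<Group> {
        let parts: Vec<&str> = title.split(' ').collect();
        if parts.len() < 2 {
            return None;
        }
        // Only consider titles that end with a known family name.
        if !family.contains(parts[parts.len() - 1]) {
            return None;
        }
        let first = parts[0];
        if male.contains(first) && !female.contains(first) {
            Some(Group::Male)
        } else if female.contains(first) && !male.contains(first) {
            Some(Group::Female)
        } else {
            Some(Group::Unknown) // ambiguous or unknown first name
        }
    }

    fn main() {
        let male: HashSet<&str> = ["John", "Andrew"].iter().copied().collect();
        let female: HashSet<&str> = ["Mary", "Andrea"].iter().copied().collect();
        let family: HashSet<&str> = ["Smith", "Gray"].iter().copied().collect();
        println!("{:?}", classify("Andrew Gray", &male, &female, &family)); // Some(Male)
        println!("{:?}", classify("Mary Smith", &male, &female, &family));  // Some(Female)
    }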

Group     Titles    Share
Male      289,036   50%
Female     86,281   16%
Unknown   196,562   34%
Total     571,879

Of the (male+female) articles, 23% are about women, which almost exactly matches Andrew’s ratio of women in BLP (Biographies of Living People). That would indicate no significant gender bias in actually deleted articles.

Of course, this was just a quick calculation. The analysis code and the raw data, plus instructions how to re-create it, are available here, in case you want to play further.

Dealing with the Rust

Rust is an up-and-coming programming language, developed by Mozilla, and is used in the Firefox rendering engine as well as in the Node Package Manager, amongst others. There is a lot to say about Rust; suffice it to say that it is designed to be memory-safe and fast (think: C or better), it can compile to WebAssembly, and it has been voted “most loved language” on StackOverflow in 2017 and 2018. As far as new-ish languages go, this is one to keep an eye on.

Rust comes with a fantastic package manager, Cargo, which, by default, uses crates (aka libraries) from crates.io, the central repository for Rust code. As part of my personal and professional attempt at grasping Rust (it has a bit of a learning curve), I wrote a tiny crate to access the API of MediaWiki installations. Right now, it can

  • run the usual API queries, and return the result as JSON (using the Rust serde_json crate)
  • optionally, continue queries for which there are more results available, and merge all the results into a single JSON result
  • log in your bot, and edit

This represents a bare-bones approach, but it would already be quite useful for simple command-line tools and bots. Because Rust compiles into stand-alone binaries, such tools are easy to run; no issues with Composer and PHP versions, node.js version/dependency hell, Python virtualenvs, etc.
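
A rough sketch of what a small command-line query can look like (the method names here are illustrative and may change in such an early version; the examples in the repository are the authoritative reference):

    // Hedged sketch of a command-line query against the English Wikipedia API.
    // ASSUMPTION: the method names (Api::new, params_into, get_query_api_json_all)
    // may differ in crate version 0.1.0; see the repository examples for real usage.
    extern crate mediawiki;

    fn main() {
        let api = mediawiki::api::Api::new("https://en.wikipedia.org/w/api.php")
            .expect("could not reach the API");
        // Query the categories of one article; continuation is handled automatically.
        let params = api.params_into(&[
            ("action", "query"),
            ("prop", "categories"),
            ("titles", "Albert Einstein"),
            ("cllimit", "max"),
        ]);
        let result = api.get_query_api_json_all(&params).expect("query failed");
        println!("{}", result);
    }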

The next functionality to implement might include OAuth logins, and a SPARQL query wrapper.

If you know Rust, and would like to play, the crate is called mediawiki, and the initial version is 0.1.0. The repository has two code examples to get you started. Ideas, improvements, and Pull Requests are always welcome!

Update: A basic SPARQL query functionality is now in the repo (not the crate – yet!)

Inventory

The Cleveland Museum of Art recently released 30,000 images of art under CC-Zero (~public domain). Some of the good people on Wikimedia Commons have begun uploading them there, to be used, amongst others, by Wikipedia and Wikidata.

But how to find the relevant Wikipedia article (if there is one) or Wikidata item for such a picture? One way would be via titles, names etc. But the filenames are currently something like Clevelandart_12345.jpg, and while the title is in the wiki text, it is neither easily found, nor overly precise. But, thankfully, the uploader also includes the accession number (aka inventory number). That one is, in conjunction with the museum, a lot more precise, and it can be found in the rendered HTML reliably (to a degree).

But who wants to go through 30K of files, extract the inventory numbers, and check if they are on, say, Wikidata? And what about other museums/GLAM institutions? That’s too big a task to do manually. So, I wrote some code.

First, I need to get museums and their categories on Commons. But how to find them? Wikidata comes to the rescue, yielding (at the time of writing) 9,635 museums with a Commons category.

Then, I need to check each museum’s category tree for files, and try to extract the inventory number. Because this requires HTML rendering on tens (hundreds?) of thousands of files, I don’t want to repeat the exercise at a later date. So, I keep a log in a database (s51203__inventory_p, if you are on Labs). It logs

  • the files that were found with an inventory number (file, number, museum)
  • the number of files with an inventory number per museum

That way, I can skip museum categories that do not have inventory numbers in their file descriptions. If one of them is updated with such numbers, I can always remove the entry that says “no inventory numbers”, thus forcing a re-index.

My initial scan is now running, slowly filling the database. It will be run on a daily basis after that, just for categories that have inventory numbers, and new categories (as per Wikidata query above).

That’s the first part. The second part is to match the files against Wikidata items. I have that as well, to a degree; I am using the collection and inventory number properties to identify a work of art. Initial tests did generate some matches, for example, this file was automatically added to this item on Wikidata. This automatic matching will run daily as well.
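
The matching boils down to “collection = this museum, inventory number = this string”; a sketch of how such a lookup query can be built (P195 is “collection”, P217 is “inventory number”; the QID and accession number below are just examples, and the tool itself may construct its queries differently):

    // Sketch: build a SPARQL query that finds the Wikidata item for a work of art,
    // given the museum's item and the inventory number from the Commons file page.
    fn matching_query(museum_item: &str, inventory_number: &str) -> String {
        // Very naive escaping; enough for plain accession numbers like "1921.1239".
        let inv = inventory_number.replace('"', "\\\"");
        format!(
            "SELECT ?item WHERE {{ ?item wdt:P195 wd:{} . ?item wdt:P217 \"{}\" . }}",
            museum_item, inv
        )
    }

    fn main() {
        // QID and number are illustrative only.
        println!("{}", matching_query("Q657415", "1921.1239"));
    }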

A third part is possible, but not implemented yet. Assuming that a work of art has an image on Commons, as well as a museum and an inventory number, but no Wikidata item, such an item could be created automatically. This would pose little difficulty to code, but might generate duplicate items, where an item exists but does not have the inventory number set. Please let me know if this is something worth doing. I’ll give you some numbers once the initial run is through.

The Hand-editor’s Tale

Disclaimer: I am the author of Listeria, and maintainer of ListeriaBot.

In January 2016, User:Emijrp had an idea. Why not use that newfangled Listeria tool, a bot that generates lists based on Wikidata, and puts them on Wikipedia pages, to maintain a List of Women Linguists on English Wikipedia? It seemed that a noble cause had met with (at the time) cutting edge technology to provide useful information for both readers and editors (think: red links) on Wikipedia.

The bot thus began its work, and continued dutifully for almost a year (until January 2, 2017). At that time, a community decision was made to deactivate ListeriaBot edits on the page, but to keep the list it had last generated, for manual curation. No matter what motivated that decision, it is interesting to evaluate the progress of the page in its “manual mode”.

Since the bot was deactivated, 712 days have passed (at the time of writing, 2018-12-17). Edit frequency dropped from one edit every 1-2 days by the bot, to one edit every 18 days on average.

In that time, the number of entries increased from 663 (last bot edit) to 673 (adding one entry every 71 days on average). The query (women, linguists, but no translators) used to generate the Listeria list now yields 1,673 entries. This means the list on English Wikipedia is now exactly 1,000 entries (or 148%) out of date. A similar lag for images and birth/death dates is to be expected.

The manual editors kept the “Misc” section, which was used by ListeriaBot to group entries of unknown or “one-off” (not warranting their own section) nationalities. It appears that few, if any, have been moved into appropriate sections.

It is unknown if manual edits to the list (example), the protection of which was given as a main reason to deactivate the bot, were propagated to the Wikidata item, or to the articles on Wikipedia where such exist (here, es and ca), or if they are destined to wither on the list page.

The list on Wikipedia links to 462 biography pages (likely a slight overestimate) on English Wikipedia. However, there are 555 Wikidata items from the original query that have a sitelink to English Wikipedia. The list on Wikipedia thus fails to link to (at least) 93 women linguists that exist on the same site. One example of a missing entry would be Antonella Sorace, a Fellow of the Royal Society. She is, of course, linked from another, ListeriaBot-maintained page.

Humans are vastly superior to machines in many respects. Curating lists is not necessarily one of them.

What else?

Structured Data on Commons is approaching. I have done a bit of work on converting Infoboxes into statements, that is, to generate structured data. But what about using it? What could that look like?

Taxon hierarchy for animals (image page)

Inspired by a recent WMF blog post, I wrote a simple demo on what you might call “auto-categorisation”. You can try it out by adding the line

importScript('User:Magnus Manske/whatelse.js') ;

to your common.js script.

It works for files on Commons that are used in a Wikidata item (so, ~2.8M files at the moment), though that could be expanded (e.g. scanning for templates with Qids, “depicts” in Structured data, etc.). The script then investigates the Wikidata item(s), and tries to find ways to get related Wikidata items with images.

The Night Watch (image page)

These could be simple things, such as “all items that have the same creator (and an image)”, but I also added a few bespoke ones.

If the item is a taxon (e.g. the picture is of an animal), it finds the “taxon tree” by following the “parent taxon” property. It even follows branches, and constructs the longest path possible, to get as many taxon levels as possible (I stole that code from Reasonator).

A similar thing happens for all P31 (“instance of”) values, where it follows the subclass hierarchy; the London Eye is “instance of:Ferris wheel”, so you get “Ferris wheel”, its super-class “amusement ride” etc.

The same, again, for locations, all the way up to the country level. If the item has a coordinate, there are also some location-based “nearby” results.

Finally, some date fields (birthdays, creation dates) are harvested for the years.

The London Eye (image page)

Each of these, if applicable, get their own section in a box floating on the right side of the image. They link to a gallery-type SPARQL query result page, showing all items that match a constraint and have an image. So, if you look at The Night Watch on Commons, the associated Wikidata item has “Creator:Rembrandt”. Therefore, you get a “Creator” section, with a “Rembrandt” link, that opens a page showing all Wikidata items with “Creator:Rembrandt” that have an image.

In a similar fashion, there are links to “all items with inception year 1642”, or items with “movement: baroque”. You get the idea.

Now, this is just a demo, and there are several issues with it. First, it uses Wikidata, as there is no Structured Data on Commons yet. That limits it to files used in Wikidata items, and to the property schema and tree structure used on Wikidata. Some links that are offered lead to ridiculously large queries (all items that are an instance of a subclass of “entity”, anyone?), some that just return the same file you came from (because it is the only item with an image created by Painter X), and some that look useful but time out anyway. And, as it is, the way I query the APIs would likely not be sustainable for use by everyone by default.

But then, this is what a single guy can hack together in a few hours, using a “foreign” database that was never intended to make browsing files easy. Given these limitations, I think about what the community will be able to do with bespoke, purpose-built Structured Data and some well-designed code, and I am very hopeful.

Note: Please feel free to work with the JS code; it also contains my attempt to show the results in a dialog box on the File Page, but I couldn’t get it to look nice, so I keep using external links.