More topics

After my recent blog post about the TopicMatcher tool, I had quite a few conversations about the general area of “main topic”, especially relating to the plethora of scientific publications represented on Wikidata. Here’s a round-up of related things I have done since then:

As a first attempt, I queried all subspecies items from Wikidata, searched for scientific publications mentioning them, and added the candidate matches to TopicMatcher.

That worked reasonably well, but didn’t yield a lot of results, and they need to be human-confirmed. So I came at the problem the other way: Start with a scientific publication, try to find a taxon (species etc.) name, and then add the “main subject” match. Luckily, many such publications put taxon names in () in the title. Once I have the text in between, I can query P225 (taxon name) for an exact match (excluding cases where there is more than one match!), and then add “main subject” directly to the paper item, without having to confirm it by a user. I am aware that this will cause a few wrong matches, but I imagine those are few and far between, can be easily corrected when found, and are dwarfed by the usefulness of having publications annotated this way.
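
To illustrate the approach (this is just a sketch, not the actual cronjob code; the example title and the User-Agent string are made up), the core of it could look something like this in Python:

```python
import re
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

def parenthesized_phrases(title):
    """Return the text of all (...) groups in a paper title."""
    return re.findall(r"\(([^()]+)\)", title)

def items_with_taxon_name(name):
    """All Wikidata items whose taxon name (P225) matches exactly."""
    query = 'SELECT ?item WHERE { ?item wdt:P225 "%s" . }' % name.replace('"', '\\"')
    r = requests.get(SPARQL_ENDPOINT,
                     params={"query": query, "format": "json"},
                     headers={"User-Agent": "taxon-title-matcher-sketch/0.1"})
    r.raise_for_status()
    return [b["item"]["value"].rsplit("/", 1)[-1]
            for b in r.json()["results"]["bindings"]]

def main_subject_candidate(title):
    """Return a QID only if exactly one item carries the taxon name."""
    for phrase in parenthesized_phrases(title):
        items = items_with_taxon_name(phrase.strip())
        if len(items) == 1:
            return items[0]  # safe enough to add as "main subject" (P921)
    return None  # no taxon found, or ambiguous: record for later analysis

print(main_subject_candidate("Morphology of the larval stages (Aedes aegypti)"))
```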

There are millions of publications to check, so this is running on a cronjob, slowly going through all the scientific publications on Wikidata. I find quite a few topics in () that are not taxa, or that have some issue with the taxon name; I am recording those, to run some analysis (and maybe other, advanced auto-matching) at a later date. So far, I see mostly disease names, which seem to be precise enough to match in many cases.

Someone suggested using Mix’n’match sets to find e.g. chemical substances in titles that way, but this requires both the “common name” and the ID to be present in the title for a sufficient degree of reliability, which is rarely the case. Some edits have been made for E numbers, though. I have since started a similar mechanism running directly off Wikidata (initial results).

Then, I discovered some special cases of publications that lend themselves to automated matching, especially obituaries, as they often contain the name, birth date, and death date of a person, which is precise enough to automatically set the “main subject” property. Where no match is found, I add the publication to TopicMatcher, for manual resolution.
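
For illustration only, here is a crude sketch of that heuristic; the title format assumed in the regular expression is just one made-up example, and anything that does not match cleanly would go to TopicMatcher anyway:

```python
import re

# One assumed obituary title format; real titles vary a lot.
OBITUARY = re.compile(
    r"^Obituary[:.]?\s+(?P<name>.+?)\s*\((?P<born>\d{4})\s*[-–]\s*(?P<died>\d{4})\)"
)

m = OBITUARY.match("Obituary: Jane K. Doe (1931-2016)")
if m:
    # Name plus birth and death year is usually specific enough to find
    # exactly one human item (P31:Q5 with matching P569/P570) on Wikidata.
    print(m.group("name"), m.group("born"), m.group("died"))
```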

I have also added “instance of: erratum” to ~8,000 papers indicating this in the title. This might be better placed in “genre”, but at least we have a better handle on those now.

Both errata and obituaries will run regularly, to update any new publications accordingly.

As always, I am happy to hear more ideas for dealing with this vast realm of publications versus topics.

On Topic

Wikidata already contains a lot of information about topics – people, places, concepts etc. It also contains items about works that have a topic, e.g., a painting of a person, a biographical article about someone, a scientific publication about a species. Ideally, Wikidata also describes the connection between the work and the subject. Such connections can be tremendously useful in many contexts, including GLAM and scientific research.

This kind of work can generally not be done by bots, as it would require machine-readable, reliable source data to begin with. However, manually finding items about works, and then finding the matching subject item on Wikidata, is somewhat tedious. Thus, I give you TopicMatcher!

In a nutshell, I prepare a list of Wikidata items that are about creative works – paintings, biographical articles, books, scientific publications. Then, I try to guesstimate what Wikidata item they are (mainly) about. Finally, you can get one of these “work items” and their potential subject, with buttons to connect them. At the moment, I have biographical articles looking for a “main subject”, and paintings lacking a “depicts” statement. That comes to a total of 13,531 “work items”, with 54,690 potential matches.

You will get the expected information about the work item and the potential matches, using my trusty AutoDesc. You also get a preview of the painting (if there is an image) and a search function. Below that is a page preview that differs with context; depending on the work item, you could get

  • a WikiSource page, for biographical articles there
  • a GLAM page, if the item has a statement with an external reference that can be used to construct a URL
  • a publication page, using PMC, DOI, or PubMed IDs
  • the Wikidata page of the item, if nothing else works

The idea of the page preview is to find more information about the work, which will allow you to determine the correct match. If there are no suggested subjects in the database, a search is performed automatically, in case new items have been created since the last update.

Once you are done with the item, you can click “Done” (which marks the work item as finished, so it is not shown again), or “Skip”, to keep the item in the pool. Either way, you will get another random item; the reward for good work is more work….

At the top of the page are some filtering options, if you prefer to work on a specific subset of work items. The options are a bit limited for now, but should improve when the database grows to encompass new types of works and subjects.

Alternatively, you can also look for potential works that cover a specific subject. George Washington is quite popular.

I have chosen the current candidates because they are computationally cheap and reasonably accurate to generate. However, I hope to expand to more work and subject areas over time. Scientific articles that describe species come to mind, but the queries to generate candidate matches are quite slow.

If you have ideas for queries, or just work/subject areas, or even some candidate lists, I would be happy to incorporate those into the tool!

The Quickening

My QuickStatements tool has been quite popular, in both version 1 and 2. It appears to be one of the major vectors of adding large amounts of prepared information to Wikidata. All good and well, but, as with all well-used tools, some wrinkles appear over time. So, time for a do-over! It has been a long time coming, and while most of it concerns the interface and interaction with the rest of the world, the back-end learned a few new tricks too.

For the impatient, the new interface is the new default at QuickStatements V2. There is also a link to the old interface, for now. Old links with parameters should still work; let me know if not.

What has changed? Quite a lot, actually:

  • Creating a new batch in the interface now allows you to choose a site. Right now, only Wikidata is on offer, but Commons (and others?) will join it in the near future. Cross-site configuration has already been tested with FactGrid
  • You can now load an existing batch and see the commands
  • If a batch that has (partially) run threw errors, some of them can be reset and re-run. This works for most statements, except the ones that use LAST as the item, after item creation
  • You can also filter for “just errors” and “just commands that didn’t run yet”
  • The above limitation is mostly of historic interest, as QuickStatements will now automatically group a CREATE command and the following LAST item commands into a single, more complex command. So no matter how many statements, labels, sitelinks etc. you add to a new item, it will just run as a single command, which means item creation just got a lot faster, and it goes easier on the Wikidata database/API too
  • The STOP button on both the batch list and the individual batch page should work properly now (let me know if not)! There are instructions for admins to stop a batch
  • Each server-side (“background”) batch now gets a link to the EditGroups tool, which lets you discuss an entire batch, gives information about the batch details, and most importantly, lets you undo an entire batch at the click of a button
  • A batch run directly from the browser now gets a special, temporary ID as well, that is added to the edit summary. Thanks to the quick work of the EditGroups maintainers, even browser-based batch runs are now one-click undo-able
  • The numbers for a running batch are now updated automatically every 5 seconds, on both the batch list and the individual batch page
  • The MERGE command, limited to QuickStatements V1 until now, now works in V2 as well. Also, it will automatically merge into the “lower Q number” (=older item), no matter the order of the parameters

I have changed my token by now…

For the technically inclined:

  • I rewrote the interface using vue.js, re-using quite a few pre-existing components from other projects
  • You can now get a token to use with your user name, to programmatically submit batches to QuickStatements. They will show up and be processed as if you had pasted them into the interface yourself. The token can be seen on your new user page
  • You can also open a pre-filled interface, using GET or POST, like so
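
As a minimal sketch of such a programmatic submission (the endpoint and parameter names are my reading of the help page, so treat them as assumptions; the token placeholder is obviously not a real one), which also shows how a CREATE command plus the following LAST lines form one grouped item creation:

```python
import requests

API = "https://tools.wmflabs.org/quickstatements/api.php"

# V1 commands, tab-separated. The CREATE plus the following LAST lines
# are grouped into a single item-creation command by the new back-end.
commands = "\n".join([
    "CREATE",
    'LAST\tLen\t"Example item created via the API"',
    "LAST\tP31\tQ5",
])

response = requests.post(API, data={
    "action": "import",
    "submit": 1,
    "format": "v1",
    "username": "YourUserName",   # your Wikidata user name
    "token": "YOUR_API_TOKEN",    # placeholder; see your QuickStatements user page
    "batchname": "api-demo",
    "data": commands,
})
print(response.text)  # contains the batch status information
```

The resulting batch shows up and is processed as if you had pasted it into the interface yourself.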

One thing, however, got lost from the previous interface, and that is the ability to edit commands directly in the interface. I do not know how often that was used in practice, but I suspect it was not often, as it is not really suited for a mass-edit tool. If there is huge demand, I can look into retro-fitting that. Later.

 

Why I didn’t fix your bug

Many of you have left me bug reports, feature requests, and other issues relating to my tools in the WikiVerse. You have contacted me through the BitBucket issue tracker (and apparently I’m on phabricator as well now), Twitter, various emails, talk pages (my own, other users, content talk pages, wikitech, meta etc.), messaging apps, and in person.

And I haven’t done anything. I haven’t even replied. No indication that I saw the issue.

Frustrating, I know. You just want that tiny thing fixed. At least you believe it’s a tiny change.

Now, let’s have a look at the resources available, which, in this case, is my time. Starting with the big stuff (general estimates, MMMV [my mileage may vary]):

24h per day
-9h work (including drive)
-7h sleep (I wish)
-2h private (eat, exercise, shower, read, girlfriend, etc.)
=6h left

Can’t argue with that, right? Now, 6h left is a high estimate, obviously; work and private can (and do) expand on a daily, fluctuating basis, as they do for all of us.

So then I can fix your stuff, right? Let’s see:

6h
-1h maintenance (tool restarts, GLAM pageview updates, mix'n'match catalogs add/fix, etc.)
-3h development/rewrite (because that's where tools come from)
=2h left

Two hours per day is a lot, right? In reality, it’s a lot less, but let’s stick with it for now. A few of my tools have no issues, but many of them have several open, so let’s assume each tool has one:

2h=120min
/130 tools (low estimate)
=55 sec/tool

That’s enough time to find and skim the issue, open the source code file(s), and … oh time’s up! Sorry, next issue!

So instead of dealing with all of them, I deal with one of them. Until it’s fixed, or I give up. Either may take minutes, hours, or days. And during that time, I am not looking at the hundreds of other issues. Because I can’t do anything about them at the time.

So how do I pick an issue to work on? It’s an intricate heuristic computed from the following factors:

  • Number of users affected
  • Severity (“security issue” vs. “wrong spelling”)
  • Opportunity (meaning, I noticed it when it got filed)
  • Availability (am I focused on doing something else when I notice the issue?)
  • Fun factor and current mood (yes, I am a volunteer. Deal with it.)

No single event prompted this blog post. I’ll keep it around to point to, when the occasion arises.




The File (Dis)connect

I’ll be going on about Wikidata, images, and tools. Again. You have been warned.

I have written a few image-related Wikimedia tools over the years (such as FIST, WD-FIST, to name two big ones), because I believe that images in articles and Wikidata items are important, beyond their adorning effect. But despite everyone’s efforts, images on Wikidata (the Wikimedia site with the most images) are still few and far between. For example, less than 8% of taxa have an image; across all of Wikidata, it’s rather around 5% of items.

On the other hand, Commons now has ~45M files, and other sites like Flickr also have vast amounts of freely licensed files. So how to bring the two of them together? One problem is that, lacking prior knowledge, matching an item to an image means full-text searching the image site, which even these days takes time for thousands of items (in addition to potential duplication of searches, stressing APIs unnecessarily). A current example of “have items, need images” is the Craig Newmark Pigeon Challenge by the WMF.

The answer (IMHO) is to prepare item-file matches beforehand: take a subset of items (such as people or species) which do not have images, and search for them on Commons, Flickr, etc. Then store the results, and present them to the user upon request (a rough sketch of this pre-matching step follows the list below). I had written some quick one-off scans like that before, together with the odd shoe-string interface; now I have consolidated the data, the scan scripts, and the interface into a new tool, provisionally called File Candidates. Some details:

  • Already seeded with plenty of candidates, including >85K species items, >53K humans, and >800 paintings (more suggestions welcome)
  • Files are grouped by topic (e.g. species)
  • Files can be located on Commons or Flickr; more sites are possible (suggestions welcome)
  • One-click transfer of files from Flickr to Commons (with does-it-exist-on-Commons check)
  • One- or two-click adding of the file to the Wikidata item (one click for an image, two clicks for other properties)
  • Some configuration options (click the “⚙” button)
  • Can use a specific subset of items via SPARQL (example: Pigeon Challenge for species)
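
As promised above, here is a back-of-the-envelope sketch of that pre-matching step (not the tool’s actual code; the User-Agent string and the limits are made up):

```python
import requests

WDQS = "https://query.wikidata.org/sparql"
COMMONS_API = "https://commons.wikimedia.org/w/api.php"
HEADERS = {"User-Agent": "file-candidates-sketch/0.1"}

def humans_without_image(limit=10):
    """A tiny sample of human items (P31:Q5) that have no image (P18)."""
    query = """
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P31 wd:Q5 .
      FILTER NOT EXISTS { ?item wdt:P18 ?image }
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    } LIMIT %d""" % limit
    r = requests.get(WDQS, params={"query": query, "format": "json"}, headers=HEADERS)
    r.raise_for_status()
    return [(b["item"]["value"].rsplit("/", 1)[-1], b["itemLabel"]["value"])
            for b in r.json()["results"]["bindings"]]

def commons_candidates(label, limit=5):
    """Full-text search on Commons (File namespace) for the item label."""
    r = requests.get(COMMONS_API, params={
        "action": "query", "list": "search", "srsearch": label,
        "srnamespace": 6, "srlimit": limit, "format": "json",
    }, headers=HEADERS)
    r.raise_for_status()
    return [hit["title"] for hit in r.json()["query"]["search"]]

# Pre-compute these matches in bulk and store them; the tool then just
# presents the stored candidates to the user on request.
for qid, label in humans_without_image():
    print(qid, label, commons_candidates(label))
```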

There is another aspect of this tool I am excited about: Miriam is looking into ranking images by quality through machine learning, and has given me a set of people-file-matches, which I have already incorporated into my “person” set, including the ranking. From the images that users add through this tool, we can then see how much the ranking algorithm agrees with the human decision. This can set us on a path towards AI-assisted editing!

ADDENDUM: Video with demo usage!

Playing cards on Twitter

So this happened.

Yesterday, Andy Mabbett asked me on Twitter for a new feature of Reasonator: Twitter cards, for small previews of Wikidata items on Twitter. After some initial hesitation (for technical reasons), I started playing with the idea in a test tweet (and several replies to myself), using Andy as the guinea pig item:

Soon, I was contacted by Erika Herzog, whom I had worked with before on Wikidata projects:

That seemed like an odd thing to request, but I try to be a nice guy, and if there are some … personal issues between Wikidata users, I have no intention of causing unnecessary upset. So, after some more testing (meanwhile, I had found a Twitter page to do the tests on), I announced the new feature to the world, using what would undoubtedly be a suitable test subject for the Wikipedia/Wikidata folk:

Boy was I wrong:

I basically woke up to this reply. Under-caffeinated, I saw someone tell me what to (not) tweet. Twice. No reason. No explanation. Not a word on why Oprah would be a better choice as a test subject in a tweet about a new feature for a Wikidata-based tool. Just increasing aggressiveness, going from “problematic” to “Ugh” and “Gads” (whatever that is).

Now, I don’t know much about Oprah. All I know is, basically, what I heard characters in U.S. sit-coms say about her, none of which was very flattering. I know she is (was?) a U.S. TV talk show host, and that she recently gave some speech in the #metoo context. I never saw one of her talk shows. She is probably pretty good at what she does. I don’t really care about her, one way or the other. So far, Oprah has been a distinctly unimportant figure in my life.

Now, I was wondering why Erika kept telling me what to (not) tweet, and why it should be Oprah, of all people. But at that time, all I had the energy to muster as a reply was “Really?”. To that, I got a reply with more Jimbo-bashing:

At which point I just had about enough of this particular jewel of conversation to make my morning:

What follows is a long, horrible conversation with Erika (mostly), with me guessing what, exactly, she wants from me. Many tweets down, it turns out that, apparently, her initial tweets were addressing a “representation issue”. At my incredulous question whether she seriously demanded a “women’s quota” for my two original tweets (honestly, I have no idea what else this could be about by now), I am finally identified as the misogynist cause of all women’s peril in the WikiVerse:

Good thing we finally found the problem! And it was right in front of us the whole time! How did we not see this earlier? I am a misogynist pig (no offence to Pigsonthewing)! What else could it be?

Well, I certainly learned my lesson. I now see the error of my ways, and will try to better myself. The next time someone tries to tell me what to (not) tweet, I’ll tell them to bugger off right away.

Recommendations

Reading “Recommending Images to Wikidata Items” by Miriam, which highlights missing areas of image coverage in Wikidata (despite being the most complete site in the WikimediaVerse, image-wise), and strategies to address the issue, I was reminded of an annoying problem I have run into a few times.

My WD-FIST tool uses (primarily) SPARQL to find items that might require images, and that usually works well. However, some larger queries do time out, either on SPARQL, or the subsequent image discovery/filtering steps. Getting a list of all items about women with image candidates occasionally works, but not reliably so; all humans is out of the question.

So I started an extension to WD-FIST: A caching mechanism that would run some large queries in a slightly different way, on a regular basis, and offer the results in the well-known WD-FIST interface. My first attempt is “humans”, and you can see some results here. As of now, there are 275,500 candidate images for 160,508 items; the link shows you all images that are used on three or more Wikipedias associated with the same item (to improve signal-to-noise ratio).
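
To illustrate the “three or more Wikipedias” filter, here is a sketch against the public APIs; the actual update code (linked below) works off the database replicas instead, and this simplified version glosses over non-standard site codes:

```python
import collections
import requests

HEADERS = {"User-Agent": "wdfist-cache-sketch/0.1"}

def wikipedia_sitelinks(qid):
    """(site code, article title) pairs for one Wikidata item."""
    r = requests.get("https://www.wikidata.org/w/api.php", params={
        "action": "wbgetentities", "ids": qid,
        "props": "sitelinks", "format": "json",
    }, headers=HEADERS)
    links = r.json()["entities"][qid].get("sitelinks", {})
    # Simplified: assumes <lang>wiki codes map to <lang>.wikipedia.org
    return [(site, d["title"]) for site, d in links.items()
            if site.endswith("wiki")
            and site not in ("commonswiki", "specieswiki", "metawiki")]

def files_in_article(site, title):
    """File names (without namespace prefix) embedded in one article."""
    domain = site[:-4].replace("_", "-") + ".wikipedia.org"
    r = requests.get(f"https://{domain}/w/api.php", params={
        "action": "query", "prop": "images", "titles": title,
        "imlimit": "max", "format": "json",
    }, headers=HEADERS)
    pages = r.json().get("query", {}).get("pages", {})
    return {img["title"].split(":", 1)[-1]
            for page in pages.values() for img in page.get("images", [])}

def image_candidates(qid, min_wikis=3):
    """Files used in articles about the item on at least min_wikis Wikipedias."""
    counts = collections.Counter()
    for site, title in wikipedia_sitelinks(qid):
        counts.update(files_in_article(site, title))
    return [f for f, n in counts.items() if n >= min_wikis]

print(image_candidates("Q23"))  # George Washington, as an example
```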

One drawback of this system is that it has some “false positive” items; because it bypasses SPARQL, it gets some items that link to “human” (Q5), but not via “instance of” (P31). Also, matching an image to an item, or using “ignore” on the images, might not be reflected immediately on reload, but the daily update should take care of that.

Update code is here.

Everybody scrape now!

If you like Wikidata and working on lists, you probably know my Mix’n’match tool, to match entries in external catalogs to Wikidata. And if you are really into these things, you might have tried your luck with the import function, to add your own catalog.

But the current import page has some drawbacks: You need to adhere to a strict format which you can’t really test except by importing, your data is static and will never update, and, most importantly, you need to get the data in the first place. Sadly, many great sets of data are only exposed as web pages, and rescuing the data from a fate as mere tag filler is not an easy task.

I have imported many catalogs into Mix’n’match, some from data files, but most scraped from web pages. For a long time, I wrote bespoke scraper code for every website, and I still do that for some “hard cases” occasionally. But some time ago, I devised a simple (yeah, right…) JSON description to specify the scraping of a website. This includes the construction of URLs (a list of fixed keys, like letters? Numerical? Letters with numerical subpages? A start page to follow all links from?), as well as regular expressions to find entries on these pages (yes, I am using RegEx to parse HTML. So sue me.), including IDs, names, and descriptions. The beauty is that only the JSON changes for each website, but the scraping code stays the same.
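
To give a flavour of the mechanism: the URL pattern, keys, and regular expression below are invented for illustration (they are not the actual Mix’n’match JSON format), but the point is that only this description changes per site, while the scraping code stays the same:

```python
import re
import string
import requests

description = {
    # URL construction: here, one catalog page per letter of the alphabet
    "url_pattern": "https://example.org/catalog/{key}",
    "keys": list(string.ascii_uppercase),
    # One regex per entry, capturing external ID, name, and description
    "entry_regex": r'<a href="/entry/(\d+)">([^<]+)</a>\s*<span class="desc">([^<]*)</span>',
}

def scrape(desc):
    entries = []
    for key in desc["keys"]:
        html = requests.get(desc["url_pattern"].format(key=key), timeout=30).text
        for ext_id, name, text in re.findall(desc["entry_regex"], html):
            entries.append({"id": ext_id, "name": name.strip(), "desc": text.strip()})
    return entries

# Run periodically; entries new on the website become new Mix'n'match entries.
for entry in scrape(description)[:10]:
    print(entry)
```

The real thing adds multi-level URL construction (subpages, start pages to follow links from) and writes straight into the Mix’n’match database, but the division of labour is the same.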

This works surprisingly well, and I have over 70 Mix’n’match catalogs generated through this generic scraping mechanism. But it gets better: For smaller catalogs, with relatively few pages to scrape, I can just run the scraping again periodically, and add new entries to Mix’n’match, as they are added to the website.


But there is still a bottleneck in this approach: me. Because I am the only one who can create the JSON, add it to the Mix’n’match database, and run the scraping. It does take some time to devise the JSON, and even more testing to get it right. Wouldn’t it be great if everyone could create the JSON through a simple interface, test it, add it to Mix’n’match to a new (or existing) catalog, and have it scrape a website, then run automatic matching with Wikidata on top, and get automatic, periodic updates to the catalog for free?

Well, now you can. This new interface offers all options I am using for my own JSON-based scraping; and you don’t even have to see the JSON, just fill out a form, click on “Test”, and if the first page scrapes OK, save it and watch the magic happen.

I am aware that regular expressions are not everyone’s cup of decaffeinated, gluten-free green tea, and neither will be the idea of multi-level pattern-based URL construction. But you don’t get an (almost) universal web scraping mechanism for free, and the learning curve is the price to pay. I have included an example setup, which I did use to create a new catalog.

Testing will get you the HTML of the first web page that your URL schema generated, plus all scraped entries. If there are too few or wrong entries, you can fiddle with the regular expressions in the form, and it will tell you live how many entries would be scraped by that. Once it looks all right, test again to see the actual results. When everything looks good, save it, done!

I do have one request: If the test does not look perfectly OK, do not save the scraper. Because if the results are not to your liking, you will have to come to me to fix it. And fixing these things usually takes me a lot longer than doing them myself in the first place. So please, switch that underused common sense to “on”!

The flowering ORCID

As part of my Large Datasets campaign, I have now downloaded and processed the latest data from ORCID. This yielded 655,706 people (47,435 or 7% in Wikidata), and 13,438,786 publications (1,079,305 or 8% in Wikidata) with a DOI or PubMed ID (to be precise, these are publications-per-person, so the same paper might be counted multiple times; however, that’s still 1,033,146 unique Wikidata items, so not much of a difference).

[Table: number of papers, ORCID, first, and last name]

Looking at the data, there are 14,883 authors with ten or more papers already on Wikidata that either do not have an item, or whose item does not have an ORCID ID associated with it. So I am now setting a bot (my trusted Reinheitsgebot) to work at creating items for those authors, and then changing the appropriate “author name string” statements to proper “author” statements, preserving qualifiers and references, and adding the original name string as a new qualifier (like so).
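
Purely as an illustration of that conversion (this is not the bot’s actual code, and the qualifier property for the original name – P1932, “stated as” – is my assumption), a pywikibot sketch might look like this:

```python
import pywikibot

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

def name_string_to_author(paper_qid, name_string, author_qid):
    """Replace one 'author name string' (P2093) claim on a paper with a
    proper 'author' (P50) claim pointing at the author's item."""
    paper = pywikibot.ItemPage(repo, paper_qid)
    paper.get()
    for claim in paper.claims.get("P2093", []):
        if claim.getTarget() != name_string:
            continue
        author = pywikibot.Claim(repo, "P50")
        author.setTarget(pywikibot.ItemPage(repo, author_qid))
        paper.addClaim(author, summary="ORCID-based author match")
        # Keep the original name string as a qualifier
        stated_as = pywikibot.Claim(repo, "P1932")
        stated_as.setTarget(name_string)
        author.addQualifier(stated_as)
        # Copy existing qualifiers, e.g. 'series ordinal' (P1545)
        for quals in claim.qualifiers.values():
            for q in quals:
                copy = pywikibot.Claim(repo, q.getID())
                copy.setTarget(q.getTarget())
                author.addQualifier(copy)
        # References would be copied similarly; omitted for brevity
        paper.removeClaims([claim], summary="replaced by author (P50)")
        break
```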

By chance, one of the most prolific authors of scientific publications not yet on Wikidata turned out to be a (distant) colleague of mine, Rick Price, who is now linked as the author of ~100 papers.

I have now set the bot to create the author items for the authors with >=10 papers on Wikidata. I am aware that ORCID authorships are essentially “self-reported”, but I do check that a paper is not claimed by two people with the same surname in the ORCID dataset (in which case I pass it over). Please report any systematic (!) bot malfunctions to me through the usual channels.

Update: This will create up to 263,893 new author (P50) links on Wikidata.

In my last blog post, “The Big Ones”, I wrote about my attempts to import large, third-party datasets, and to synchronize those with Wikidata. I have since imported three datasets (BNF, VIAF, GND), and created a status page to keep a public record of what I did, and try to do.

I have run a few bots by now, mainly syncing identifiers back and forth. I have put a few security measures (aka “data paranoia”) into the code, so if there is a collision between the third-party dataset and Wikidata, no edit takes place. But these conflicts can highlight problems: Wikidata is wrong, the third-party data supplier is wrong, there is a duplicate Wikidata item, or there is some other, more complex issue. So it would be foolish to throw away such findings!
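
The decision rule boils down to something like this (a simplified sketch of the paranoia, not the actual bot code):

```python
def sync_identifier(wd_value, ext_value):
    """Decide what to do with one identifier, given the value on Wikidata
    and the value in the third-party dataset (either may be None)."""
    if wd_value is None and ext_value is not None:
        return ("add_to_wikidata", ext_value)       # no conflict possible
    if wd_value == ext_value:
        return ("nothing_to_do", None)              # already in sync (or both missing)
    if ext_value is None:
        return ("nothing_to_do", None)              # Wikidata knows more than the source
    return ("record_issue", (wd_value, ext_value))  # collision: no edit, log it
```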


But how to use these findings? I had started with a bot updating a Wikidata page, but that has problems: most importantly, there is no way of marking an issue as “resolved”, but it also means lots of sustained edits, overwriting of Wikidata user edits, lists too long for wikitext pages, and so on.

So I started collecting the issue reports in a new database table, and now I have written a small tool around that. You can list and filter issues by catalog, property, issue type, status, etc. Most importantly, you can mark an issue as “done” (OAuth login required), so that it will not show up for other users again (unless they want it to). Through some light testing, I have already found and merged two duplicated Wikidata item pairs.

There is much to do and improve in the tool, but I am about to leave for WikidataCon, so further work will have to wait a few days. Until then, enjoy!