Similarity Check, a service provided by Crossref and powered by iThenticate, gives editors a user-friendly tool to help detect plagiarism.
Our Similarity Check service helps Crossref members prevent scholarly and professional plagiarism by providing immediate feedback regarding a manuscript’s similarity to other published academic and general web content, through reduced-rate access to the iThenticate text comparison software from Turnitin.
Only Similarity Check members benefit from this tailored iThenticate experience that includes read-only access to the full text of articles in the Similarity Check database for comparison purposes, discounted checking fees, and unlimited user accounts per organization.
With editors under increased pressure to assess higher volumes of manuscript submissions each year, it’s important to find a fast, cost-effective solution that can be embedded into your publishing workflows. Similarity Check allows editors to upload a paper, and instantly produces a report highlighting potential matches and indicating if and how the paper overlaps with other work. This report enables editors to assess the originality of the work before they publish it, providing confidence for publishers and authors, and evidence of trust for readers. And as the iThenticate database contains over 78 million full-text scholarly content items, editors can be confident that Similarity Check will provide a comprehensive and reliable addition to their workflow.
Making sure only original research is published provides:
peace of mind for publishers and authors that their content is identified and protected,
a way for editors to educate their authors and ensure the reputation of their publication, and
clarity for readers around who produced the work.
Benefits of Similarity Check
Similarity Check participants enjoy use of iThenticate at reduced cost because they contribute their own published content into Turnitin’s database of full-text literature. This means that as the number of participants grows, so too does the size of the database powering the service. More content in the database means greater peace of mind for editors looking to determine a manuscript’s originality.
If you participate in Similarity Check, not only do you get reduced rate access to iThenticate, but you also have the peace of mind of knowing that any similarity between your published content and manuscripts checked by other publishers will be flagged as a potential issue too.
As a Similarity Check user, you also see extra features in iThenticate, such as enhanced text-matches within the Document Viewer.
How the Similarity Check service works
To participate in Similarity Check, you need to be a Crossref member. Similarity Check subscribers allow Turnitin to index their full catalogue of current and archival published content into the iThenticate database. This means the service is only available to members who are actively publishing DOI-assigned content and who include full-text URLs specifically for Similarity Check in their metadata.
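Concretely, these full-text URLs are deposited alongside the work's other metadata. The fragment below is illustrative only (the DOI and URLs are placeholders), showing how a Similarity Check URL is typically carried in Crossref deposit XML, inside a crawler-based collection with the iParadigms crawler attribute:

```xml
<!-- Illustrative fragment of a Crossref deposit; DOI and URLs are placeholders. -->
<doi_data>
  <doi>10.5555/12345678</doi>
  <resource>https://www.example.org/article</resource>
  <!-- Full-text URL deposited for Similarity Check indexing by Turnitin -->
  <collection property="crawler-based">
    <item crawler="iParadigms">
      <resource>https://www.example.org/article/fulltext.pdf</resource>
    </item>
  </collection>
</doi_data>
```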
Turnitin indexes members’ content directly via its Content Intake System (CIS). The CIS checks our metadata daily to collect the full-text content links our members provide, then follows these URLs and indexes the content found at each location.
When you apply for the Similarity Check service, Turnitin will check that they can access your existing content via the full-text URLs in your Crossref metadata. Once confirmed, you’ll be provided with access to the iThenticate tool where you will be able to submit manuscripts to compare against the corpus of published academic and general web content in Turnitin’s database. You can do this in the iThenticate tool, or through your manuscript submission system using an API. iThenticate provides a Similarity Report containing a Similarity Score and a highlighted set of matches to similar text. Editors can then further review matches in order to make their own decision regarding a manuscript’s originality.
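These deposited full-text links are also visible in Crossref's public REST API as `link` entries whose `intended-application` is `similarity-checking`. As a rough sketch (the record below is a made-up example shaped like an `api.crossref.org/works/{doi}` response, not real data), extracting those URLs might look like:

```python
# Sketch: pull Similarity Check full-text URLs out of a Crossref REST API
# works record. The example record below is illustrative, not real data.

def similarity_check_urls(work: dict) -> list[str]:
    """Return the full-text URLs flagged for similarity checking."""
    return [
        link["URL"]
        for link in work.get("link", [])
        if link.get("intended-application") == "similarity-checking"
    ]

# Example works record, shaped like api.crossref.org/works/{doi} output.
example_work = {
    "DOI": "10.5555/12345678",
    "link": [
        {"URL": "https://example.com/article.pdf",
         "intended-application": "similarity-checking",
         "content-type": "application/pdf"},
        {"URL": "https://example.com/article.xml",
         "intended-application": "text-mining",
         "content-type": "application/xml"},
    ],
}

print(similarity_check_urls(example_work))
# → ['https://example.com/article.pdf']
```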
Similarity Check fees have two parts: an annual service fee and a per-document checking fee.
The annual service fee is 20% of your Crossref annual membership fee and is included in the renewal invoices you receive each January. When you first join Similarity Check, you’ll receive a prorated invoice for the remainder of that calendar year.
Per-document checking fees are also paid annually in January. Volume discounts apply, and your first 100 documents are free of charge.
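The fee structure above can be put into a small sketch. The 20% service fee and the first-100-documents-free rule come straight from the text; the per-document rate used below is a hypothetical placeholder, since the actual volume-discounted rates aren't listed here.

```python
# Sketch of the Similarity Check fee structure described above.
# The 20% service fee and the first-100-free rule come from the text;
# the per-document rate is a HYPOTHETICAL flat placeholder standing in
# for the real volume-discounted schedule.

FREE_DOCUMENTS = 100

def annual_service_fee(membership_fee: float) -> float:
    """Annual service fee: 20% of the Crossref membership fee."""
    return 0.20 * membership_fee

def document_checking_fee(documents_checked: int,
                          rate_per_document: float = 0.75) -> float:
    """Per-document checking fee; the first 100 documents are free.

    rate_per_document is a made-up flat rate for illustration only.
    """
    billable = max(0, documents_checked - FREE_DOCUMENTS)
    return billable * rate_per_document

print(annual_service_fee(1650))    # 20% of a $1,650 membership fee
print(document_checking_fee(100))  # → 0.0 (first 100 documents are free)
print(document_checking_fee(350))  # → 187.5 (250 billable documents)
```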