Call for Resources Track Papers



Resources are of paramount importance as they foster scientific advancement. These resources include, among others, datasets, benchmarks, workflows, and software. Sharing them is key to ensuring reproducibility, allowing other researchers to compare results and methods, and exploring new lines of research, in accordance with the FAIR principles for scientific data management.


The ISWC 2021 Resources Track aims to promote the sharing of resources that support, enable, or utilise Semantic Web research. Resources include, but are not limited to: datasets, ontologies/vocabularies, ontology design patterns, evaluation benchmarks or methods, software tools/services, APIs and software frameworks, workflows, crowdsourcing task designs, protocols, methodologies and metrics that have contributed or may contribute to the generation of novel scientific work. In particular, we encourage the sharing of such resources following best and well-established practices within the Semantic Web community. As such, this track calls for contributions that provide a concise and clear description of a resource and its usage.


A typical Resources Track paper focuses on reporting on one of the following categories:

  • Datasets produced
    o to support specific evaluation tasks (for instance, labeled ground truth data);
    o to support novel research methods;
    o by novel algorithms;
  • Ontologies, vocabularies and ontology design patterns, with a focus on describing the modelling process underlying their creation;
  • Benchmarking activities focusing on datasets and algorithms for comprehensible and systematic evaluation of existing and future systems;
  • Reusable research software, e.g. prototypes/services supporting a given research hypothesis and enabling specific data processing and engineering tasks;
  • Community-shared software frameworks that can be extended or adapted to support scientific study and experimentation;
  • Scientific and experimental workflows used and reused in practical studies;
  • Crowdsourcing task designs that have been used and can be (re)used for building resources such as gold standards and the like;
  • Protocols for conducting experiments and studies;
  • Novel evaluation methodologies and metrics, and their demonstration in an experimental study.


Differentiation From the Other Tracks


We strongly recommend that prospective authors carefully check the calls of the other main tracks of the conference in order to identify the optimal track for their submission. Papers that propose new algorithms and architectures should continue to be submitted to the regular Research Track, whilst papers that describe the use of Semantic Web technologies in practical settings should be submitted to the In-Use Track. When new reusable resources, e.g. datasets, ontologies, or workflows, are produced in the process of achieving those results, they are a suitable subject for a submission to the Resources Track.


Review Criteria


The program committee will consider the quality of both the resource and the paper in its review process. Therefore, authors must ensure unfettered access to the resource during the review process by citing the resource at a permanent location. For example, data should be available in a repository such as FigShare, Zenodo, or a domain-specific repository, and software code should be available in a public code repository such as GitHub or Bitbucket. In exceptional cases, when it is not possible to make the resource public, authors must provide anonymous access to the resource for the reviewers.


We welcome the submission of established resources, which already have a community of users beyond the authors, as well as new resources, which may not yet demonstrate established reuse but provide sufficient evidence and motivation for claiming potential adoption. In the first case, authors are required to provide evidence of adoption, e.g., usage statistics for the resource. In the second case, authors should support the claim of potential adoption by providing evidence of discussion in fora, mailing lists, and the like.


All resources will be evaluated according to the following review criteria:


Impact:

• Does the resource break new ground?

• Does the resource fill an important gap?

• How does the resource advance the state of the art?

• Has the resource been compared to other existing resources (if any) of similar scope?

• Is the resource of interest to the Semantic Web community?

• Is the resource of interest to society in general?

• Will the resource have, or has it already had, an impact, especially in supporting the adoption of Semantic Web technologies?


Reusability:

• Is there evidence of usage by a wider community beyond the resource creators or their project? Alternatively (for new resources), what is the resource’s potential for being (re)used, for example, based on the activity volume on discussion fora, mailing lists, issue trackers, support portals, etc.?

• Is the resource easy to (re)use? For example, does it have high-quality documentation? Are there tutorials available?

• Is the resource general enough to be applied in a wider set of scenarios, not just for the originally designed use? If it is specific, is there substantial demand?

• Is there potential for extensibility to meet future requirements?

• Does the resource include a clear explanation of how others use the data and software? Or (for new resources), how are others expected to use the data and software?

• Does the resource description clearly state what the resource can and cannot do, and the rationale for the exclusion of some functionality?


Design & Technical quality:

• Does the design of the resource follow resource-specific best practices?

• Did the authors perform an appropriate reuse or extension of suitable high-quality resources? For example, in the case of ontologies, authors might extend upper ontologies and/or reuse ontology design patterns.

• Is the resource suitable for solving the task at hand?

• Does the resource provide an appropriate description (both human- and machine-readable), thus encouraging the adoption of FAIR principles? Is there a schema diagram? For datasets, is the description available in terms of VoID/DCAT/Dublin Core? (A minimal sketch of such a machine-readable description is given after this list.)
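
To illustrate the last criterion in the list above, the following is a minimal sketch, assuming Python and the rdflib library, of a combined VoID/DCAT description for a dataset. Every IRI, title, statistic, and licence below is a hypothetical placeholder used purely for illustration; it is not a template mandated by the track.

    # A minimal, hypothetical VoID/DCAT self-description of a dataset, built with rdflib.
    # All IRIs and literal values are placeholders.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCAT, DCTERMS, RDF, XSD

    VOID = Namespace("http://rdfs.org/ns/void#")

    g = Graph()
    g.bind("dcat", DCAT)
    g.bind("dcterms", DCTERMS)
    g.bind("void", VOID)

    # Placeholder persistent identifiers (e.g. w3id IRIs) for the dataset and its distribution.
    dataset = URIRef("https://w3id.org/example/my-dataset")
    distribution = URIRef("https://w3id.org/example/my-dataset/download")

    # Describe the dataset itself: type, title, description, licence, basic statistics.
    g.add((dataset, RDF.type, DCAT.Dataset))
    g.add((dataset, RDF.type, VOID.Dataset))
    g.add((dataset, DCTERMS.title, Literal("Example evaluation dataset", lang="en")))
    g.add((dataset, DCTERMS.description,
           Literal("Labelled ground-truth data for an example evaluation task.", lang="en")))
    g.add((dataset, DCTERMS.license, URIRef("https://creativecommons.org/licenses/by/4.0/")))
    g.add((dataset, VOID.triples, Literal(123456, datatype=XSD.integer)))
    g.add((dataset, DCAT.distribution, distribution))

    # Describe one downloadable distribution of the dataset.
    g.add((distribution, RDF.type, DCAT.Distribution))
    g.add((distribution, DCAT.downloadURL, URIRef("https://example.org/my-dataset.nt.gz")))
    g.add((distribution, DCAT.mediaType,
           URIRef("https://www.iana.org/assignments/media-types/application/n-triples")))

    print(g.serialize(format="turtle"))

The same description could equally be authored directly in Turtle or JSON-LD and published alongside the data; what matters for this criterion is that the resource carries a machine-readable self-description using established vocabularies.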


Availability:

• Mandatory: Is the resource (and related results) published at a persistent URI (PURL, DOI, w3id)?

• Mandatory: Is there a canonical citation associated with the resource?

• Mandatory: Does the resource provide a licence specification? (See creativecommons.org, opensource.org for more information)

• Is the resource publicly available? For example, as an API, as Linked Open Data, as a download, or in an open code repository.

• Is the resource publicly findable? Is it registered in (community) registries (e.g. Linked Open Vocabularies, BioPortal, or DataHub)? Is it registered in generic repositories such as FigShare, Zenodo or GitHub?

• Is there a sustainability plan specified for the resource? Is there a plan for the medium and long-term maintenance of the resource?

• Does the resource adopt open standards, when applicable? Alternatively, does it have a good reason not to adopt standards?


For specific resource types, checklists of their quality attributes are available in a presentation. Both authors and reviewers can make use of these checklists when assessing the quality of a particular resource.


Submission Details

• Pre-submission of abstracts is a strict requirement. All papers and abstracts have to be submitted electronically via EasyChair.

• Papers describing a resource must be between 8 and 16 pages long (including references). Papers must describe the resource and focus on its sustainability and the community surrounding it. Benchmark papers are expected to include evaluations and provide a detailed description of the experimental setting. Papers that exceed the page limit will be rejected without review.

• All submissions must be in English.

• Submissions must be either in PDF or HTML, formatted in the style of the Springer Publications format for Lecture Notes in Computer Science (LNCS). For details on the LNCS style, see Springer’s Author Instructions. For HTML submission guidance, please see the HTML submission guide.

• ISWC 2021 submissions for the Resources Track are not anonymous. We encourage embedding metadata in the PDF or HTML to provide a machine-readable link from the paper to the resource.

• Authors will have the opportunity to submit a rebuttal to the reviews to clarify the issues raised by program committee members.

• Authors of accepted papers will be required to provide semantic annotations for the abstract of their submission, which will be made available on the conference web site. Details will be provided at the time of acceptance.

• Accepted papers will be distributed to conference attendees and also published by Springer in the printed conference proceedings, as part of the Lecture Notes in Computer Science series.

At least one author of each accepted paper must register for the conference and present the paper there. As in previous years, students will be able to apply for travel support to attend the conference. Preference will be given to students that are first authors of papers accepted to the main conference or the doctoral consortium, followed by those who are first authors of papers accepted to ISWC workshops and the Poster & Demo session.


Prior Publication And Multiple Submissions


ISWC 2021 will not accept resource papers that, at the time of submission, are under review for or have already been published or accepted for publication in a journal, another conference, or another ISWC track. The conference organisers may share information on submissions with other venues to ensure that this rule is not violated.


Important Dates - All deadlines are 23:59 AoE (anywhere on Earth)



Activity                    Due Date
Abstracts Due               12 April 2021
Full Papers Due             19 April 2021
Author Rebuttals            2 – 6 June 2021
Notifications               23 June 2021
Camera-ready Papers Due     12 July 2021


Program Chairs


Stefan Dietze

GESIS, Cologne & Heinrich-Heine-University Düsseldorf, Germany

stefan.dietze@hhu.de


Achille Fokoue

IBM Research, Yorktown Heights, NY, USA

achille@us.ibm.com