Trip Report: NSF Software & Data Citation Workshop

Last week (Jan 29 & 30), I was at the NSF & Sloan Foundation workshop: Supporting Scientific Discovery through Norms and Practices for Software and Data Citation and Attribution. The workshop was held in the context of the NSF’s Dear Colleague Letter on the subject. It brought together a range of backgrounds and organizations, from Mozilla to the NIH and NASA. I got to catch up with several friends and meet some new folks as well. Check out the workshop’s GitHub page with a list of 22 use cases submitted to the workshop.

I was pleased to see the impact of the work of FORCE11 on helping drive this space. In particular, the Joint Declaration of Data Citation Principles and Research Resource Identifiers (RRIDs) seem to be helping the community focus on citing other forms of scholarly output, and both were brought up several times in the meeting.

I think there were two main takeaways from the workshop:

  1. We have much of the infrastructure we need.
  2. Sustainability remains the hard problem.

Infrastructure

It was clear that we have much of the infrastructure in place to enable the citation and referencing of outputs such as software and data.

In terms of software, piggybacking on existing infrastructure seems to be the most likely approach. The versioning/release mindset built into software development means that hosting infrastructure such as GitHub or Google Code provides a strong start. These platforms can then be integrated with existing scholarly attribution systems. My colleague Sweitze Roffel presented Elsevier’s work on Original Software Publications. This approach leverages the existing journal-based ecosystem to provide the permanence and context associated with entries in the scientific record. Another approach is to use the data hosting/citation infrastructure to give code a DOI, e.g. by using Zenodo. Both approaches work with GitHub.
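
One nice downstream consequence: once Zenodo has minted a DOI for a GitHub release, standard DOI content negotiation can turn that DOI into a ready-made citation. Here is a minimal Python sketch; the DOI in the example is a placeholder, not a real record:

```python
import requests

def fetch_bibtex(doi):
    """Resolve a DOI to a BibTeX citation via DOI content negotiation."""
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/x-bibtex"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text

# Placeholder DOI: substitute the one Zenodo minted for your release.
print(fetch_bibtex("10.5281/zenodo.XXXXXX"))
```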

The biggest challenge will be promoting the actual use of proper citations. James Howison of the University of Texas at Austin presented interesting deep-dive results on how people refer to software in the scientific literature (slide set below; GitHub). It shows that people want to cite software but often don’t know how. His study was focused on a sample of the literature; I’d like to do this same study in an automatic fashion on the whole of the literature. I know he’s working with others on training machine learning models for finding software mentions, so that would be quite cool. Maybe it would be possible to back-fill the software citation graph this way?
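
To give a flavor of why trained models are needed, here is a toy Python heuristic for flagging candidate software mentions. This is my own illustrative sketch, not Howison’s method; patterns like these are exactly the kind of noisy, incomplete cues that motivate proper machine learning approaches:

```python
import re

# Two naive cues for informal software mentions: "using the X software/
# package/..." phrasing, and "X version 1.2"-style version strings.
MENTION_PATTERNS = [
    r"(?:using|with|in)\s+(?:the\s+)?([A-Za-z][\w.+-]*)\s+(?:software|package|toolkit|library)",
    r"([A-Za-z][\w.+-]*)\s+version\s*\d[\w.]*",
]

def candidate_mentions(text):
    """Return candidate software names matched by the regex heuristics."""
    found = set()
    for pattern in MENTION_PATTERNS:
        found.update(m.group(1) for m in re.finditer(pattern, text))
    return sorted(found)

sample = ("Alignments were computed using the MAFFT software and "
          "plotted with ggplot2 version 1.0.0.")
print(candidate_mentions(sample))  # ['MAFFT', 'ggplot2']
```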

In terms of data citation, we are much farther along, because many of the existing data repositories support minting identifiers (e.g. DOIs) for data. Many of the questions asked were about cases with changing data or mash-ups of data. These are important edge cases to look at. I think progress will be made here by leveraging the landing pages for data to provide additional metadata. Indeed, Joan Starr from the California Digital Library is going to bring this back to the DataCite working group to talk about how to enable this. I was also impressed with the PLOS-led Making Data Count project and Martin Fenner’s continued development of the Lagotto altmetrics platform. In particular, there was discussion about getting a supplementary guideline for software and data downloads included in COUNTER. This would be a great step toward getting data and software usage properly counted.
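
The metadata behind a DOI is already machine-readable, which is what makes the landing-page idea plausible. As a rough sketch, assuming the DataCite REST API at api.datacite.org (the exact response fields may differ from what I show here):

```python
import requests

def datacite_metadata(doi):
    """Fetch the metadata DataCite holds for a DOI (sketch; assumes the
    api.datacite.org REST API and its current response layout)."""
    resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=30)
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    return {
        "title": attrs["titles"][0]["title"],
        "landing_page": attrs["url"],  # where the human-readable page lives
        "publisher": attrs.get("publisher"),
    }
```

Extra context such as versioning or provenance could then sit alongside these fields on the landing page itself.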

Sustainability

Sustainability is one of the key questions circulating in the larger discussion: how do we fund the software and data resources the community needs? I think the distinction that arose was the need to differentiate between:

  • software as an infrastructure; and
  • software as an experiment/method.

This seems rather obvious, but the tendency is for the latter to become the former, and this causes issues, in particular for sustainability.

Issues include:

  1. It’s difficult to identify which software will become key to the community, and thus where to direct investment.
  2. Scientific infrastructure software tends to be funded on a project-to-project basis, or sometimes as a sideline of a lab.
  3. Software that begins as an experiment is often not engineered for long-term use.
  4. As Luis Ibanez from Google pointed out, we often lose the original developers over time, and there’s a need to involve new contributors.

The Software Sustainability Institute in the UK has begun to tackle some of these problems, but there is still a lack of clear avenues for aggregating the necessary funding. One of the popular models is the creation of a non-profit foundation to support a piece of software, but this leads to “foundation fatigue.” Other approaches shift the responsibility to university libraries, but libraries may not have the required organizational capabilities. Katherine Skinner’s recent talk at FORCE 2015 covered some of the same ground here.

One of the interesting ideas that came up at the workshop was using other parts of the university to tap into different funding streams (e.g. the IPR office or the university development office). An example of this is Internet2, which is sponsored directly by universities. However, as Dan Katz pointed out, supporting this sort of sustainability requires insight into the deeper impact of this kind of software on the scientific community.

Conclusion

You can see a summary of the outcomes here. In particular, take a look at the critical asks. These concrete requests were formulated by the workshop attendees to address some of the identified issues. I’ll be interested to see the report that comes out of the workshop and how that can help move us forward.
