Archive


Two weeks ago, I had the pleasure of attending the 17th International Semantic Web Conference held at the Asilomar Conference Grounds in California, a tremendously beautiful setting in a state park along the ocean. This trip report is somewhat later than normal because I took the opportunity to hang out for another week along the coast of California.

Before getting into the content of the conference, I think it’s worth saying: if you don’t believe that there are capable, talented, smart and awesome women in computer science at every level of seniority, the ISWC 2018 organizing committee + keynote speakers is the mic drop of counterexamples:

Now some stats:

  •  438 attendees
  •  Papers
    •  Research Track: 167 submissions – 39 accepted – 23% acceptance rate
    •  In Use: 55 submissions – 17 accepted – 31% acceptance rate
    •  Resources: 31 submissions – 6 accepted – 19% acceptance rate
  •  38 Posters & 39 Demos
  • 14 industry presentations
  • Over 1000 reviews

These are roughly the same as the last time ISWC was held in the United States. So on to the major themes I took away from the conference plus some asides.

Knowledge Graphs as enterprise assets

It was hard to walk away from the conference without being convinced that knowledge graphs are becoming fundamental to delivering modern information solutions in many domains. The enterprise knowledge graph panel was a demonstration of this idea. A big chunk of the majors were represented:

The stats are impressive. Google’s Knowledge Graph has 1 billion things and 70 billion assertions. Facebook’s knowledge graph, which they distinguish from their social graph and which has just ramped up this year, has 50 million entities and 500 million assertions. More importantly, these are critical assets for applications: at eBay the KG is central to creating product pages; at Google and Microsoft, KGs are key to entity search and assistants; and at IBM the KG is part of their corporate offerings. But you know it’s really in use when knowledge graphs are used for emoji:

It wasn’t just the majors who have or are deploying knowledge graphs. The industry track in particular was full of good examples of knowledge graphs being used in practice. Some that stood out were: Bosch’s use of knowledge graphs for question answering in DIY; multiple use cases for digital twin management (Siemens, Aibel); use in a healthcare chatbot (Babylon Health); and helping to regulate the US finance industry (FINRA). I was also very impressed with Diffbot’s platform for creating KGs from the Web. I contributed to the industry session, presenting how Elsevier is using knowledge graphs to drive new products in institutional showcasing and healthcare.

Beyond the wide use of knowledge graphs, there were a number of things I took away from this thread of industrial adoption.

  1. Technology heterogeneity is really the norm. All sorts of storage, processing and representation approaches were being used. It’s good we have the W3C Semantic Web stack but it’s even better that the principles of knowledge representation for messy data are being applied. This is exemplified by Amazon Neptune’s support for TinkerPop & SPARQL.
  2. It’s still hard to build these things. Microsoft said it was hard at scale. IBM said it was hard for unique domains. I had several people come to me after my talk about Elsevier’s H-Graph to discuss similar challenges faced in other organizations that are trying to bring their data together, especially for machine learning-based applications. Note, McCusker’s work is some of the better publicly available thinking on trying to address the entire KG construction lifecycle.
  3. Identity is a real challenge. I think one of the important moves in the success of knowledge graphs was not to over-ontologize. However, record linkage and deciding when to unify an entity is still not a solved problem. One common approach is to move the creation of an identifiable entity closer to query time to deal with the query context, but that removes the shared conceptualization that is one of the benefits of a knowledge graph (a toy illustration of the underlying linkage decision follows this list). Indeed, the clarion call by Google’s Jamie Taylor to teach knowledge representation was an outcome of the need for people who can think about these kinds of problems.
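To make that identity problem a bit more concrete, here is a toy sketch (my own illustration, not any system presented at the conference) of the decision a record linkage step has to make: given two entity records from different sources, should they be unified into a single node in the graph? The records, normalization rules and threshold below are all made up for the example.

```python
from difflib import SequenceMatcher

# Two entity records from different source systems (illustrative data only).
record_a = {"name": "Intl. Business Machines", "country": "US"}
record_b = {"name": "International Business Machines Corp.", "country": "US"}

def normalize(name: str) -> str:
    """Cheap normalization: lowercase, strip punctuation and common corporate noise words."""
    name = name.lower().replace(".", "").replace(",", "")
    for noise in ("international", "intl", "corp", "inc"):
        name = name.replace(noise, "")
    return " ".join(name.split())

def should_unify(a: dict, b: dict, threshold: float = 0.8) -> bool:
    """Unify two records when a cheap blocking attribute (country) agrees
    and their normalized names are similar enough."""
    if a["country"] != b["country"]:
        return False
    similarity = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()
    return similarity >= threshold

print(should_unify(record_a, record_b))  # True for this toy pair
```

Real systems of course use much richer features and learned matchers; the panel’s point was that even with good matchers, deciding whether to materialize the merge in the shared graph or defer it to query time is a design choice with real costs either way.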

In terms of research challenges, much of what was discussed reflects the same kinds of ideas that were discussed at the recent Dagstuhl Knowledge Graph Seminar so I’ll point you to my summary from that event.

Finally, for most enterprises, their knowledge graph(s) were considered a unique asset to the company. This led to an interesting discussion about how to share “common knowledge” and the need to be able to merge such knowledge with local knowledge. This leads to my next theme from the conference.

Wikidata as the default option

When discussing “common knowledge”, Wikidata has become a focal point. In the enterprise knowledge graph panel, it was mentioned as the natural place to collaborate on common knowledge. The mechanics of the contribution structure (e.g. open to all, provenance on statements) and institutional attention/authority (i.e. the Wikimedia Foundation) help with this. An example of Wikidata acting as a default is its use to help collate data on genes.

Fittingly enough, Markus Krötzsch and team won the best in-use paper with a convincing demonstration of how well semantic technologies have worked as the query environment for Wikidata. Furthermore, Denny Vrandečić (one of the founders of Wikidata) won the best blue sky paper with the idea of rendering Wikipedia articles directly from Wikidata.
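If you haven’t tried it, the Wikidata Query Service exposes a public SPARQL endpoint that you can hit from a few lines of Python. Below is a minimal sketch using the requests library; the endpoint URL is the standard query.wikidata.org one, but the specific item and property IDs in the query (gene, found in taxon, Homo sapiens) are from memory, so verify them against Wikidata before relying on the results.

```python
import requests

# The public Wikidata Query Service SPARQL endpoint.
ENDPOINT = "https://query.wikidata.org/sparql"

# Item/property IDs are my recollection (Q7187 = gene, P31 = instance of,
# P703 = found in taxon, Q15978631 = Homo sapiens) -- double-check on Wikidata.
query = """
SELECT ?gene ?geneLabel WHERE {
  ?gene wdt:P31 wd:Q7187 ;          # instance of: gene
        wdt:P703 wd:Q15978631 .     # found in taxon: Homo sapiens
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

response = requests.get(
    ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "iswc-trip-report-example/0.1"},
)
response.raise_for_status()

for binding in response.json()["results"]["bindings"]:
    print(binding["gene"]["value"], binding["geneLabel"]["value"])
```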

Deep Learning diffusion

As with practically every other conference I’ve been to this year, deep learning as a technique has really been taken up. It’s become just part of the semantic web researcher’s toolbox. This was particularly clear in the knowledge graph construction area. Papers I liked with DL as part of the solution:

While not DL per se, I’ll lump embeddings in this section as well. Papers I thought were interesting:

The presentation of the above paper was excellent. I particularly liked their slide on related work:


As an aside, the work on learning rules and the complementarity of rules to other forms of prediction was an interesting thread in the conference. Besides the above paper, see the work from Heiner Stuckenschmidt’s group on evaluating rules and embedding approaches for knowledge graph completion. The work of Fabian Suchanek’s group on the representativeness of knowledge bases is applicable as well in order to tell whether rule learning from knowledge graphs is coming from a representative source and is also interesting in its own right. Lastly, I thought the use of rules in Beretta et al.’s work to quantify the evidence of an assertion in a knowledge graph to help improve reliability was neat.
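For readers less familiar with the embedding side of that rules-versus-embeddings comparison, here is a minimal numpy sketch of TransE-style scoring, a standard knowledge graph completion model (not the specific method of any paper above): a triple (head, relation, tail) is considered plausible when the head vector translated by the relation vector lands close to the tail vector. The vectors below are random, so the ranking is meaningless; the point is only the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy embeddings; in a real system these are learned from the training triples.
entities = {name: rng.normal(size=dim) for name in ("berlin", "germany", "paris", "france")}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(head: str, relation: str, tail: str) -> float:
    """TransE plausibility: negative distance between (head + relation) and tail.
    Higher (closer to zero) means more plausible."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return -float(np.linalg.norm(h + r - t))

# Knowledge graph completion query: (berlin, capital_of, ?) -- rank candidate tails.
candidates = sorted(entities, key=lambda e: transe_score("berlin", "capital_of", e), reverse=True)
print(candidates)
```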

Information Quality and Context of Use

The final theme is a bit harder for me to solidify and articulate but it lies at the intersection of information quality and how that information is being used. It’s not just knowing the provenance of information but it’s knowing how information propagates and was intended to be used. Both the upstream and downstream need to be considered. As a consumer of information I want to know the reliability of the information I’m consuming. As a producer I want to know if my information is being used for what it was intended for.

The latter problem was demonstrated by the keynote from Jennifer Golbeck on privacy. She touched on a wide variety of work, but in particular it’s clear that people don’t know what is happening to their data but are concerned about it.

There was also quite a bit of discussion going on about the decentralized web and Tim Berners-Lee’s Solid project throughout the conference. The workshop on decentralization was well attended. Something to keep your eye on.

The keynote by Natasha Noy also touched more broadly on the necessity of quality information this time with respect to scientific data.

The notion of propagation of bias through our information systems was also touched on and is something I’ve been thinking about in terms of data supply chains:

That being said I think there’s an interesting path forward for using technology to address these issues. Yolanda Gil’s work on the need for AI to address our own biases in science is a step forward in that direction. This is a slide from her excellent keynote at SemSci Workshop:


All this is to say that this is an absolutely critical topic and one where the standard “more research is needed” is very true. I’m happy to see this community thinking about it.

Final Thought

The Semantic Web community has produced a lot (see this slide from Natasha’s keynote):


ISWC 2018 definitely added to that body of knowledge but more importantly I think did a fantastic job of reinforcing and exciting the community.

Random Notes

I had the pleasure of attending the Web Conference 2018 in Lyon last week along with my colleague Corey Harper. This was the 27th edition of the largest conference on the World Wide Web. I have tremendous difficulty not calling it WWW but I’ll learn! Instead of doing two trip reports, the rest of this is a combination of Corey’s and my thoughts. Before getting to what we took away as the main themes of the conference, let’s look at the stats and organization:

It’s also worth pointing out that this is just the research track. There were 27 workshops, 21 tutorials, 30 demos (Paul was co-chair), 62 posters, four collocated conferences/events, 4 challenges, a developer track, a programming track, a project track, an industry track, and… we are probably missing something as well. Suffice to say, even with the best work of the organizers it was hard to figure out what to see. Organizing an event with 2200+ attendees is a massive task – over 80 chairs were involved, not to mention the PC and the local heavy lifting. Congrats to Fabien, Pierre-Antoine, Lionel and the whole committee for pulling it off. It’s also great to see that the proceedings are open access and available on the web.

Given the breadth of the conference, we obviously couldn’t see everything but from our interests we pulled out the following themes:

  • Dealing with a Polluted Web
  • Tackling Tabular Data
  • Observational Methods
  • Scientific Content as a Driver

Dealing with a Polluted Web

The Web community is really owning its responsibility to help mitigate the destructive uses to which the Web is put. From the “Recoding Black Mirror” workshop, which we were sad to miss, through the opening keynote and the tracks on Security and Privacy and Fact Checking, this was a major topic throughout the conference.

Oxford professor Luciano Floridi gave an excellent first keynote  on “The Good Web” which addressed this topic head on. He introduced a number of nice metaphors to describe what’s going on:

  •  Polluting agents in the Web ecosystem are like extremophiles, making the environment hostile to all but themselves
  • Democracy in some contexts can be like antibiotics: too much gives growth to antibiotic resistant bacteria.
  • His takeaway is that we need a bit of paternalism in this context now.

His talk was pretty compelling; you can check out the full video here.

Additionally, Corey was able to attend the panel discussion that opened the “Journalism, Misinformation, and Fact-Checking” track, which included representation from the Credibility Coalition, the International Fact Checking Network, MIT, and WikiMedia. There was a discussion of how to set up economies of trust in the age of attention economies, and while some panelists agreed with Floridi’s call for some paternalism, there was also a warning that some techniques we might deploy to mitigate these risks could lead to “accidental authoritarianism.” The Credibility Coalition also provided an interesting review of how to define credibility indicators for news looking at over 16 indicators of credibility.

We were able to see parts of the “Web and Society track”, which included a number of papers related to social justice oriented themes. This included an excellent paper that showed how recommender systems in social networks often exacerbate and amplify gender and racial disparity in social network connections and engagement. Additionally, many papers addressed the relationship between the mainstream media and the web. (e.g. political polarization and social media, media and public attention using the web).

Some more examples: the best demo award went to a system that automatically analyzed the privacy policies of websites and summarized them with respect to GDPR.

More generally, it seems the question is how do we achieve quality assessment at scale?

Tackling Tabular Data

Knowledge graphs and heterogeneous networks (there was a workshop on that) were a big part of the conference. Indeed, the test of time paper award went to the original Yago paper. There were a number of talks about improving knowledge graphs, for example for improving on question answering tasks, determining attributes that are needed to complete a KG, or improving relation extraction. While tables have always been an input to knowledge graph construction (e.g. Wikipedia infoboxes), an interesting turn was towards treating tabular data as a focus area.

As Natasha Noy from Google noted in her keynote at the SAVE-SD workshop, this is an area with a number of exciting research challenges:

There was a workshop on data search with a number of papers on the theme. In that workshop, Maarten de Rijke gave a keynote on the work his team has been doing in the context of data search project with Elsevier.

In the main track, there was an excellent talk on Ad-Hoc Table Retrieval using Semantic Similarity. They looked at finding semantically central columns to provide a ranked list of columns. More broadly, they are looking at spreadsheet compilation as the task (see smarttables.cc and the dataset for that task). Furthermore, the paper Towards Annotating Relational Data on the Web with Language Models looked at enriching tables through linking into a knowledge graph.
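To illustrate the flavor of the “semantically central columns” idea (this is my own toy sketch, not the authors’ actual method or data): embed each column header, then rank columns by their average similarity to the other headers, so that headers fitting the table’s overall topic rank high and outliers rank low.

```python
import numpy as np

# Made-up 3-dimensional "embeddings" for column headers; in practice these would
# come from word embeddings or a language model.
columns = {
    "country":    np.array([0.9, 0.1, 0.0]),
    "capital":    np.array([0.8, 0.2, 0.1]),
    "population": np.array([0.7, 0.3, 0.0]),
    "notes":      np.array([0.0, 0.1, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def centrality(name: str) -> float:
    """Average similarity of one column header to all the others."""
    sims = [cosine(columns[name], vec) for other, vec in columns.items() if other != name]
    return sum(sims) / len(sims)

# Headers most similar to the rest of the table rank highest; "notes" ranks low.
for name in sorted(columns, key=centrality, reverse=True):
    print(f"{name:12s} {centrality(name):.3f}")
```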

Observational Methods

Observing user behavior has been a part of research on the Web; any web search engine is driven by that notion. What did seem to be striking is the depth of the observational data being employed. Prof. Lorrie Cranor gave an excellent keynote on the user experience of web security (video here). Did you know that if you read all the privacy policies of all the sites you visit it would take 244 hours per year? Also, the idea of privacy as nutrition labels is pretty cool:

But what was interesting was her lab’s use of an observatory of 200 participants who allowed their Windows home computers to be instrumented. This kind of instrumentation gives deep insight into how users actually use their browsers and security settings.

Another example of deep observational data, was the use of mouse tracking on search result pages to detect how people search under anxiety conditions:

In the paper by Wei Sui and co-authors on Computational Creative Advertisements, presented at the HumL workshop, they use in-home facial and video tracking to measure volunteers’ emotional responses to ads.

The final example was the use of fMRI scans to track the brain activity of participants during web search tasks. All these examples provide amazing insights into how people use these technologies, but as these sorts of methods are more broadly adopted, we need to make sure to adopt the kinds of safeguards adopted by these researchers – e.g. consent, IRBs, anonymization.

Scientific Content as a Driver

It’s probably our bias, but we saw a lot of work tackling scientific content, probably because it’s both interesting and provides a number of challenges. For example, the best paper of the conference (HighLife) was about extracting n-ary relations for knowledge graph construction, motivated by the need for such types of relations in creating biomedical knowledge graphs. The aforementioned work on tabular data is often motivated by the needs of research. Obviously SAVE-SD covered this in detail:

In the demo track, the etymo.io search engine was presented for summarizing and visualizing scientific papers. Kuansan Wang at the BigNet workshop talked about Microsoft Academic Search and the difficulties and opportunities in processing so much scientific data.


Paul gave a keynote at the same workshop also using science as the motivation for new methods for building out knowledge graphs. Slides below:

In the panel, Structured Data on the Web 7.0, Google’s Evgeniy Gabrilovich – creator of the Knowledge Vault – noted the challenges of getting highly correct data for Google’s medical knowledge graph and that doing this automatically is still difficult.

Finally, there was work using DOIs to study persistent identifier use over time on the Web.

Wrap-up

Overall, we had a fantastic web conference. Good research, good conversations and good food:

Random Thoughts

 

Last week, I was at Provenance Week 2016. This event happens once every two years and brings together a wide range of researchers working on provenance. You can check out my trip report from the last Provenance Week in 2014.  This year Provenance Week combined:

For me, Provenance Week is like coming home, lots of old friends and a favorite subject of mine. It’s also a good event to attend because it crosses the subfields of computer science, everything from security in operating systems to scientific workflows on to database theory. In one day, I went from a discussion on the role of indirection in data citation to staring at the C code of a database. Marta, Boris and Sarah really put together a solid program. There were about 60 attendees across the four days:


So what was I doing there? Having served as co-chair of the W3C PROV working group, I thought it was important to be at the PROV: Three Years Later event, where we reflected on the status of PROV, its uptake and usage. I presented some ongoing work on measuring the usage of provenance on the web of data. Additionally, I gave the presentation of joint work led by my student Manolis Stamatogiannakis and done in conjunction with Ashish Gehani’s group at SRI. The work focused on using benchmarks to help inform decisions on which provenance capture system to use. Slides:

I’ll now walk through my 3 big takeaways from the event.

Provenance to attack Advanced Persistent Threats

DARPA’s $60 million Transparent Computing program explicitly calls out the use of provenance to address the problem of what are called Advanced Persistent Threats (APTs). APTs are attacks that are long term, look like standard business processes, and involve the attacker knowing the system well. This has led to a number of groups exploring the use of system-level provenance capture techniques (e.g. SPADE and OPUS) and then integrating that from multiple distributed sources using PROV-inspired data models. This was well described by David Archer in his talk as assembling multiple causal graphs from event streams. James Cheney’s talk on provenance segmentation also addressed these issues well. This reminded me somewhat of the work on distributed provenance capture using structured logs that the Netlogger and Pegasus teams do; however, they leverage the structure of a workflow system to help with the assembly.

I particularly liked Yang Ji, Sangho Lee and Wenke Lee’s work on using user-level record and replay to track and replay provenance. This builds upon some of our work that used system-level record and replay as a mechanism for separating provenance capture and instrumentation, but now in user space using the nifty rr tool from Mozilla. I think this thread of being able to apply provenance instrumentation after the fact on an execution trace holds a lot of promise.

Overall, it’s great to see this level of attention on the use of provenance for security and, more broadly, on using long-term records of provenance to do analysis.

PROV as the starting point

Given that this was the ten year anniversary of IPAW, it was appropriate that Luc Moreau gave one of the keynotes. As really one of the drivers of the community, Luc gave a review of the development of the community and its successes. One of those outcomes was the W3C PROV standards.

Overall, it was nice to see the variety of uses of PROV and the tools built around it. It’s really become the jumping off point for exploration. For example, Pete Edwards’ team combined PROV and a number of other ontologies (including P-Plan) to create a semantic representation of what’s going on within a professional kitchen in order to check food safety compliance.


Another example is the use of PROV as a jumping off point for the investigation into the provenance model of HL7 FHIR (a new standard for electronic healthcare records interchange).

As a whole, I think the attendees felt that what was missing was an active central point to see what is going on with PROV and pointers to resources for implementation. The aim is to make sure that the W3C PROV wiki is up-to-date and is a better resource overall.

Provenance as lens: Data Citation, Documents & Versioning

An interesting theme was the use of provenance concepts to give a frame for other practices. For example, Susan Davidson gave a great keynote on data citation and how using a variant of provenance polynomials can help us understand how to automatically build citations for various parts of curated databases. The keynote was based on her work with James Frew and Peter Buneman that will appear in CACM (preprint). Another good example of provenance to support data citation was Nick Car’s work for Geoscience Australia.

Furthermore, the notion of provenance as the substructure for complex documents appeared several times. For example, the Impacts on Human  Health of Global Climate Change report from globalchange.gov uses provenance as a backbone. Both the OPUS and PoeM systems are exploring using provenance to generate high-level experiment reports.

Finally, I thought David Koop‘s work on versioning version trees showed how using provenance as a lens can help us better understand versioning itself. (I have to give David credit for presenting a super recursive concept so well.)

Overall, another great event and I hope we can continue to attract new CS researchers focusing on provenance.

Random Notes

  • PROV in JSON-LD – good for streaming
  • Theoretical provenance paper recipe = extend provenance polynomials to deal with new operators. Prove nice result. e.g. now for Linear Algebra.
  • Prefixes! R-PROV, P-PROV, D-PROV, FS-PROV, SC-PROV, — let me know if I missed any..
  • Intel Secure Guard Extensions (SGX) – interesting
  • Surprised how dependent I’ve become on taking pictures at conferences for note taking. Not being able to really impacted my flow. Plus, there are fewer pictures for this post.
  • Thanks to Adriane for hosting!
  • A provenance based data science environment
  • 👍Learning Health Systems – from Vasa Curcin

I was in southern California for essentially a big chunk of August. I had a day visit to the Information Sciences Institute (slides here), some nice discussions with friends and also a chance to hang out at the ocean. So here are 10 observations:

  1. I still think hooking up  Abstract Meaning Representation to linked data semantics is something worth trying out.
  2. What is data? I like Christine Borgman’s definition: “Data refers to entities used as evidence of phenomena for the purposes of research or scholarship” (p. 29).
  3. Silicon Beach is like a thing. Overheard in Venice, literally: “Tech dude: We need to iterate and test our MVP. Product dude: Steve Jobs didn’t ask what the market wanted. We need vision!”
  4. “a future incarnation of Siri, Cortana or other digital companions will be more like a knowledgeable colleague than a personal assistant.” 
  5. JSON-LD + PROV + Elastic Search + lots of other stuff is awesome. I DIG it. Looking forward to hearing more at ISWC.
  6. Something to check out for altmetrics fans: Media Impact Project
  7. UCSB has a sweet campus….
  8. A nice ontology for software metadata: OntoSoft.
  9. AirBnB is great but this is the first trip where I encountered negative responses from neighbors / neighborhood.
  10. You can predict transformative scientific research

 

This past week I was asked to attend an offsite meeting of a local research group where they were discussing ethics. They asked me to present a topic around ethics within science and scholarship. This gave me an opportunity to try to condense some of my recent thoughts. Roughly, I’ve been playing around with the idea that there is a growing conflict between how those outside of scholarship view the practice of scholarship (“an ideal”) and how the actual, messy practice of it works (“the norms”). In the slides below, I try to make a start of an argument that we should be clear about the norms that we have: articulate them and embrace them. I try to boil this down into two:

  1. be transparent,
  2. embrace the iterative nature of scholarship

I’d love to hear your thoughts on this line of thinking.

This past week I attended a workshop on the Evolution and Variation of Classification Systems organized by the Knowescape EU project. The project studies how knowledge evolves and makes cool maps like this one:

The aim of the workshop was to discuss how knowledge organization systems and classification systems change. By knowledge organization systems, we mean things like the Universal Decimal Classification or the Wikipedia category structure. My interest here is the interplay between change in data and change in the organization system used for that data. For example, I may use a certain vocabulary or ontology to describe a dataset (i.e. the columns); how does that impact data analysis procedures when that organization system’s meaning changes? Many of our visualization decisions and analyses are based on how we categorize (whether manually or automatically) data according to such organizational structures. Albert Meroño-Peñuela gave an excellent example of that with his work on Dutch historical census data. Furthermore, the organization system used may impact the ability to repurpose and combine data.

Interestingly, even though we’ve seen highly automated approaches emerge for search and other information analysis tasks, knowledge organization systems (KOSs) still often provide extremely useful information. For example, we’ve seen how schema.org and the Wikipedia structure have been central to the emergence of knowledge graphs. Likewise, extremely adaptable organization systems such as hashtags have been foundational for other services.

At the workshop, I particularly enjoyed Joseph Tennis’ keynote on the diversity and stability of KOSs. His work on ontogeny is starting to measure that change. He demonstrated this by looking at the Dewey Decimal System, but others have shown that the change is apparent in other KOSs (1, 2, 3, 4). Understanding this change could help in constructing better and more applicable organization systems.

From both Joseph’s talk as well as the talk by Richard Smiraglia (one of the leaders in knowledge organization), it’s clear that, as with many other sciences, our ability to understand information systems can now become much more deeply empirical. Because the objects of study (e.g. vocabularies, ontologies, taxonomies, dictionaries) are available on the Web in digital form, we can now analyze them. This is the promise of Web Observatories. Indeed, an interesting outcome of the workshop was that the construction of a KOS observatory is not that far-fetched and could be done using aggregators such as Linked Open Vocabularies and Taxonomy Warehouse. I’ll be interested to see if this gets built.

Finally, it occurred to me that there is a major lack of studies on the evolution of Urban Dictionary as a KOS. Somebody ought to do something about it 🙂

Random Notes

Welcome to a massive multimedia extravaganza trip report from Provenance Week, held earlier this month, June 9-13.

Provenance Week brought together two workshops on provenance plus several co-located events. It had roughly 65 participants. It’s not a huge event but it’s a pivotal one for me as it brings together all the core researchers working on provenance from a range of computer science disciplines. That means you hear the latest research on the topic ranging from great deployments of provenance systems to the newest ideas on theoretical properties of provenance. Here’s a picture of the whole crew:

Given that I’m deeply involved in the community, it’s going to be hard to summarize everything of interest because… well… everything was of interest. It also means I had a lot of stuff going on. So what was I doing there?

Activities


 

PROV Tutorial

Together with Luc Moreau and Trung Dong Huynh, I kicked off the week with a tutorial on the W3C PROV provenance model. The tutorial was based on my recent book with Luc. From my count, we had ~30 participants for the tutorial.

We’ve given tutorials on PROV in the past but we made a number of updates as PROV is becoming more mature. First, as the audience had a more diverse technical background, we came from a conceptual model (UML) point of view instead of starting with a Semantic Web perspective. Furthermore, we presented both tools and recipes for using PROV. The number of tools we now have for PROV is growing – ranging from conversion of PROV from various version control systems to neuroimaging workflow pipelines that support PROV.

I think the hit of the show was Dong’s demonstration of interacting with PROV using his Prov python module (pypi) and Southampton’s Prov Store.
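If you want a feel for what that demo looked like, the prov package on PyPI lets you build and serialize a small PROV document in a handful of lines. A minimal sketch (the example entities and relationships are mine, not Dong’s):

```python
from prov.model import ProvDocument

# Build a tiny PROV document with the prov package (pip install prov).
doc = ProvDocument()
doc.add_namespace("ex", "http://example.org/")

# An article (entity) written by an author (agent) during a writing activity.
article = doc.entity("ex:trip-report")
author = doc.agent("ex:paul")
writing = doc.activity("ex:writing-it-up")

doc.wasAttributedTo(article, author)
doc.wasGeneratedBy(article, writing)
doc.wasAssociatedWith(writing, author)

# Serialize to PROV-JSON (the default format).
print(doc.serialize(indent=2))
```

The same package can also serialize to PROV-N and PROV-XML, which makes it handy for quickly testing other PROV tooling.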

Papers & Posters

I had two papers in the main track of the International Provenance and Annotation Workshop (IPAW) as well as a demo and a poster.

Manolis Stamatogiannakis presented his work with me and Herbert Bos – Looking Inside the Black-Box: Capturing Data Provenance using Dynamic Instrumentation. In this work, we looked at applying dynamic binary taint tracking to capture high-fidelity provenance on desktop systems. This work solves what’s known as the n-by-m problem in provenance systems. Essentially, it allows us to see how data flows within an application without having to instrument that application up-front. This lets us know exactly which output of a program is connected to which inputs. The work was well received and we got a bunch of different questions, both around the speed of the approach and whether we can track high-level application semantics. A demo video is below and you can find all the source code on GitHub.

We also presented our work on converting PROV graphs to IPython notebooks for creating scientific documentation (Generating Scientific Documentation for Computational Experiments Using Provenance). Here we looked at how to create documentation from provenance that is gathered in a distributed setting and put it together in an easy-to-use fashion. This work was part of a larger discussion at the event on the connection between provenance gathered in these popular notebook environments and that gathered on more heterogeneous systems. Source code again on GitHub.

I presented a poster on our (with Marcin Wylot and Philippe Cudré-Mauroux) recent work on instrumenting a triple store (i.e. graph database) with provenance. We use a long-standing technique, provenance polynomials, from the database community but applied to large-scale RDF graphs. It was good to be able to present this to those from the database community who were at the conference. I got some good feedback, in particular on some efficiencies we might implement.

 

I also demoed (see above) the really awesome work by Rinke Hoekstra on his PROV-O-Viz provenance visualization service (paper, code). This was a real hit, with a number of people wanting to integrate it with their provenance tools.

Provenance Reconstruction + ProvBench

At the end of the week, we co-organized with the ProvBench folks an afternoon about challenge tasks and benchmark datasets. In particular, we looked at the challenge of provenance reconstruction – how do you recreate provenance from data when you didn’t track it in the first place? Together with Tom De Nies we produced a number of datasets for use with this task. It was pretty cool to see that Hazeline Asuncion used these datasets in one of her classes, where her students used a wide variety of off-the-shelf methods.

From the performance scores, precision was OK but very dataset dependent and relies a lot on knowledge of the domain. We’ll be working with Hazeline to look at defining different aspects of this problem going forward.

Provenance reconstruction is just one task where we need datasets. ProvBench is focused on gathering those datasets and also defining new challenge tasks to go with them. Check out this GitHub repository for a number of datasets. The PROV standard is also making it easier to consume benchmark datasets because you don’t need to write a new parser to get a hold of the data. The dataset I liked most was the Provenance Capture Disparities dataset from the Mitre crew (paper). They provide a gold standard provenance dataset capturing everything that goes on in a desktop environment, plus two different provenance traces from different kinds of capture systems. This is great for testing not only provenance reconstruction but also how to merge independent capture sources to achieve a full picture of provenance.

There is also a nice tool to convert Wikipedia edit histories to PROV.

Themes


I think I picked out three large themes from provenance week.

  1. Transparent collection
  2. Provenance aggregation, slicing and dicing
  3. Provenance across sources

Transparent Collection

One issue with provenance systems is getting people to install provenance collection systems in the first place, let alone installing new, modified provenance-aware applications. A number of papers reported on techniques aimed at making capture more transparent.

A couple of approaches tackled this for programming languages. One system focused on R (RDataTracker) and the other on Python (noWorkflow). I particularly enjoyed the noWorkflow system, as they provided not only transparent capture but also a number of utilities for working with the captured provenance, including a diff tool and a conversion from provenance to Prolog rules (I hope Jan reads this). The Prolog conversion includes rules that allow provenance-specific queries to be formulated (on GitHub). noWorkflow is similar to Rinke’s PROV-O-Matic tool for tracking provenance in Python (see video below). I hope we can look into sharing work on a really good Python provenance solution.

An interesting discussion point that arose from this work was: how much should we expose provenance to the user? Indeed, the team that did RDataTracker specifically inserted simple on/off statements in their system so the scientific user could control the capture process in their R scripts.

Tracking provenance by instrumenting at the operating system level has long been an approach to provenance capture. Here, we saw a couple of techniques that tried to reduce that tracking to simply launching a background process in user space while improving the fidelity of provenance. This was the approach of our system DataTracker and Cambridge’s OPUS (specific challenges in dealing with interposition on the standard library were discussed). Ashish Gehani was nice enough to work with me to get his SPADE system set up on my Mac. It was pretty much just a checkout, build, and run to start capturing reasonable provenance right away – cool.

Databases have consistently been a central place for provenance research. I was impressed by Boris Glavic’s vision (paper) of a completely transparent way to report provenance for database systems by leveraging two common database functions – time travel and an audit log. Essentially, through the use of query rewriting and query replay, he’s able to capture and report provenance for database query results. Talking to Boris, they have a lot of it implemented already in collaboration with Oracle. Based on prior history (PostgreSQL with provenance), I bet it will happen shortly. What’s interesting is that his approach requires no modification of the database and instead sits as middleware above the database.

Finally, in the discussion session after the TaPP practice session, I asked the presenters, who represented the range of these systems, to ballpark what kind of overhead they saw for capturing provenance. The conclusion was that we could get between 1% and 15% overhead. In particular, for deterministic replay style systems you can really press down the overhead at capture time.

Provenance aggregation, slicing and dicing

I think Susan Davidson said it best in her presentation on provenance for crowdsourcing – we are at the OLAP stage of provenance. How do we make it easy to combine, recombine, summarize, and work with provenance? What kinds of operators, systems, and algorithms do we need? Two interesting applications came to the fore for this kind of need – crowdsourcing and security. Susan’s talk exemplified this, but at the Provenance Analytics event there were several other examples (Huynh et al., Dragon et al.).

The other area was security. Roly Perera presented his impressive work with James Cheney on cataloging various mechanisms for transforming provenance graphs for the purposes of obfuscating or hiding sensitive parts of the provenance graph. This paper is great reference material for various mechanisms to deal with provenance summarization. One technique for summarization that came up several times, in particular with respect to this domain, was the use of annotation propagation through provenance graphs (e.g. see ProvAbs by Missier et al. and work by Moreau’s team).

Provenance across sources

The final theme I saw was how to connect provenance across sources; one could also call this provenance integration. Both Chapman and the Mitre crew with their Provenance Plus tracking system and Ashish with his SPADE system are experiencing this problem of provenance coming from multiple different sources and needing to integrate these sources to get a complete picture of provenance, both within a system and spanning multiple systems. I don’t think we have a solution yet, but they both (Ashish, Chapman) articulated the problem well and have some good initial results.

This is not just a systems problem, it’s fundamental that provenance extends across systems. Two of the cool use cases I saw exemplified the need to track provenance across multiple sources.

The Kiel Center for Marine Science (GEOMAR) has developed a provenance system to track their data throughout their entire organization, stemming from data collected on their boats all the way through to a data publication. Yes, you read that right: provenance gathered on awesome boats! This involves digital pens, workflow systems and data management systems.

The other was the recently released US National Climate Change Assessment. The findings of that report stem from 13 different institutions within the US Government. The data backing those findings is represented in a structured fashion, including the use of PROV. Curt Tilmes presented more about this amazing use case at Provenance Analytics.

In many ways, the W3C PROV standard was created to help solve these issues. I think it does help but having a common representation is just the start.


Final thoughts

I didn’t mention it, but I was heartened to see that the community has taken to using PROV as a mechanism for interchanging data and for having discussions. My feeling is that if you can talk provenance polynomials and PROV graphs, you can speak with pretty much anybody in the provenance community no matter which “home” they have – whether systems, databases, scientific workflows, or the semantic web. Indeed, one of the great things about Provenance Week is that one was able to see diverse perspectives on this cross-cutting concern of provenance.

Lastly, there seemed to be many good answers at Provenance Week but, more importantly, lots of good questions. Now, I think as a community we should really expose more of the problems we’ve found to a wider audience.

Random Notes

  • It was great to see the interaction between a number of different services supporting PROV (e.g. git2prov.org, prizims , prov-o-viz, prov store,  prov-pings, PLUS)
  • ProvBench on datahub – thanks Tim
  • DLR did a fantastic job of organizing. Great job Carina, Laura and Andreas!
  • I’ve never had happy birthday sung to me by 60 people at a conference dinner – surprisingly in tune – Kölsch is pretty effective. Thanks everyone!
  • Stefan Woltran’s keynote on argumentation theory was pretty cool. Really stepped up to the plate to give a theory keynote the night after the conference dinner.
  • Speaking of theory, I still need to get my head around Bertram’s work on Provenance Games. It looks like a neat way to think about the semantics of provenance.
  • Check out Daniel’s trip report on provenance week.
  • I think this is long enough…..

A couple of weeks ago, I was at the European Data Forum in Athens talking about the Open PHACTS project. You can find a video of my talk with slides here. Slides are embedded below.

In April, we launched the Open PHACTS Discovery Platform with a corresponding API allowing developers to create drug discovery applications without having to worry about the complexities and pain of integrating multiple databases. We’ve had some great applications being developed on top of this API. If you’re a developer in this space, I encourage you to take a look and see what you can create.  Below is a slide set and a webinar about getting started with the API. You can also check out https://dev.openphacts.org for developer documentation and getting an account.
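For developers curious what a call looked like, here is a rough Python sketch. Note that the base URL, API version, endpoint path and parameter values below are placeholders from memory and may well have changed; the authoritative documentation (and the required app_id/app_key) is at https://dev.openphacts.org.

```python
import requests

# Rough sketch of calling the Open PHACTS API. The base URL, version, endpoint
# path and compound URI are placeholders -- consult https://dev.openphacts.org
# for the actual, current documentation.
BASE_URL = "https://beta.openphacts.org/1.3"   # assumed; check the docs
APP_ID = "your-app-id"                          # obtained from dev.openphacts.org
APP_KEY = "your-app-key"

params = {
    "uri": "http://www.conceptwiki.org/concept/example-compound",  # placeholder compound URI
    "app_id": APP_ID,
    "app_key": APP_KEY,
    "_format": "json",
}

response = requests.get(f"{BASE_URL}/compound", params=params)
response.raise_for_status()
print(response.json())
```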


I was invited to do a webinar for Elsevier journal editors on altmetrics this past Tuesday. You can find the complete recording here. 260 people attended live. The best part probably was the Q/A session starting 33 minutes in. Broadly, I would characterize the questions as: “I really would like to use these but can you give me some assurances that they are ok.” Anyway, have a listen for yourself. Hannah Foreman did a great job of directing (she also brought me a cake for my birthday!). I also thought it was great to see Mike Taylor doing a demo of ImpactStory. I felt this was an important webinar to do as it reaches a traditional journal editor audience that may have not yet fully gotten on board. A final note: thanks to Steve Pettifier for letting me use him as an example.

 

 
