Wednesday, December 9, 2015

SWAT4LS 2015 Industry stream

It's been a great first day at SWAT4LS. I bought a few books in the lovely Cambridge University bookstore and had a nice conference dinner.

I'm now preparing for tomorrow's task: chairing the industry stream at SWAT4LS (see my previous blog post for more information about this event). So, here's the list of the six abstracts, companies and projects/tools that I'll introduce tomorrow morning:
  1. The BioHub Knowledge Base: Ontology and Repository for Sustainable Biosourcing. Text Mining/NLP research group, School of Computer Science, University of Manchester, together with Unilever; tool: BioHub Knowledge Base (BioHubKB)
  2. Customizing “General SPARQL” for visualisation of in-house data in Cytoscape. General Bioinformatics; tool: General SPARQL
  3. GraphScope – smart data access for the life sciences. SearchHaus; tool: GraphScope
  4. Semantic Technologies Make Sense for Life Sciences. SmartLogic
  5. Advancing Knowledge Discovery for Alzheimer’s Disease: The Alzforum Experience. Alzforum
  6. Everybody a Translational Data Scientist. Ontoforce; tool: DISQOVER

Tuesday, December 1, 2015

SWAT4LS 2015

On Monday, 7th December, I will fly to Cambridge to attend the Semantic Web Applications and Tools for Life Sciences (SWAT4LS) conference and also visit colleagues at the new AstraZeneca site. The conference programme looks interesting, and the venue, Clare College, looks fantastic ("Harry-Potter-land" was my husband's comment when he saw the pictures :-).



I am very glad to be the chair of the Industry session on Wednesday morning, and there are quite a few items on the programme I find extra interesting from my clinical and RWE data perspective.

It will be great fun to meet friends and colleagues in the Semantic Web community.

Check out my Storify: SWAT4LS2015

Thursday, June 25, 2015

Jupyter Notebooks


Last week I followed the feed from the Spark Summit 2015 event, and several tweets talked about using Notebooks; two tweets especially caught my attention.

So I got curious about Jupyter, the notebooks used in the edX/Databricks MOOC I'm following (Introduction to Big Data with Apache Spark). And yes, I do agree with Paco Nathan (@pacoid) and Edd Dumbill (@edd): Notebooks do look like a real game changer:
  • VisiCalc and Lotus 1-2-3 in the early '80s.
  • Mosaic and Netscape in the mid '90s.
  • I get a similar feeling now, in the mid 2010s, when I see Jupyter Notebooks.
    (Yes, I know it's old news for all Mathematica users :)
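To give a concrete feel for it, here is a minimal sketch of the kind of cell you can run in a Jupyter notebook against Spark. It is plain PySpark with a local SparkContext, not taken from the course material:

# A minimal notebook cell: word count over an in-memory list.
# Assumes a local Spark installation; in many notebook setups a
# SparkContext named `sc` is already created for you.
from pyspark import SparkContext

sc = SparkContext("local[*]", "notebook-sketch")

lines = sc.parallelize([
    "notebooks make exploratory analysis shareable",
    "spark makes the analysis scale",
])

counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

print(counts.collect())
sc.stop()

The word count itself is beside the point; what matters is that code, output and commentary live together in one document you can rerun and share.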

In the first 20 minutes of this great video, Min Ragan-Kelley (@minrk), one of the core contributors to IPython and now to Jupyter, gives a nice intro; in the following 30 minutes he describes several cool examples of Notebooks, e.g. the CodeNeuro notebooks using Thunder, which is based on Spark.



An excellent podcast interview with two other key contributors to IPython/Jupyter: Brian Granger (@ellisonbg) and Fernando Perez (@fperez_org).


Hmm, I need to think more about the combination of Notebooks (reproducible research) and Linked Data (machine-processable data) ...
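One tiny step in that direction could be a notebook cell that pulls Linked Data straight into the analysis via SPARQL. This is only a sketch: the endpoint and query are illustrative, and it assumes the SPARQLWrapper library is installed:

# Sketch: query a public SPARQL endpoint from a notebook cell
# and print a small result table. Endpoint and query are examples.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?drug ?label WHERE {
      ?drug a dbo:Drug ;
            rdfs:label ?label .
      FILTER (lang(?label) = "en")
    } LIMIT 5
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for row in results["results"]["bindings"]:
    print(row["drug"]["value"], "-", row["label"]["value"])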


Wednesday, April 22, 2015

CSVW for Tabular Clinical Trial Data and Metadata


W3C has developed a set of working drafts for tabular data and metadata called CSV on the Web (CSVW) and is now seeking comments and implementations.

The drafts describe:
  • Metadata vocabulary for tabular data
    A JSON-based format for expressing metadata about tabular data to inform validation, conversion, display and data entry for tabular data
  • Model for tabular data and metadata
    An abstract model for tabular data, and how to locate metadata that enables users to better understand what the data holds; this specification also contains non-normative guidance on how to parse CSV files.
  • Procedures and rules to be applied when converting tabular data into JSON and RDF 
These are based on a series of use cases and requirements, including for example Publication of National Statistics and Analyzing Scientific Spreadsheets. I can see some interesting opportunities in this for tabular clinical trial datasets.

A small example

Check out Ed Summers' (@edsu) very nice, small CSVW example, mentioning one of the authors of the drafts, Dan Brickley (@danbri, Developer Advocate at Google). Below are the CSV example, the related metadata and the annotated, linked data.

CSV
isbn,title,author
0470402377,"Bricklin on Technology","Dan Bricklin"

Metadata
{
  "@context": {
    "@vocab": "http://www.w3.org/ns/csvw#", 
    "dc": "http://purl.org/dc/terms/"
  }, 
  "@type": "Table", 
  "url": "example.csv",
  "dc:creator": "Dan Bricklin", 
  "dc:title": "My Spreadsheet", 
  "dc:modified": "2014-05-09T15:44:58Z", 
  "dc:publisher": "My Books", 
  "tableSchema": {
    "aboutUrl": "http://librarything.com/isbn/{isbn}",
    "primaryKey": "isbn",
    "columns": [
      {
        "name": "isbn",
        "titles": "ISBN-10",
        "datatype": "string",
        "unique": true,
        "propertyUrl": "http://purl.org/dc/terms/identifier"
      },
      {
        "name": "title", 
        "titles": "Book Title",
        "datatype": "string", 
        "propertyUrl": "http://purl.org/dc/terms/title"
      },
      {
        "name": "author",
        "titles": "Book Author",
        "datatype": "string",
        "propertyUrl": "http://purl.org/dc/terms/creator"
      }
    ]
  }
}


Annotated, linked data
(RDF model serialized as JSON-LD)
{
  "@context": {
    "csvw": "http://www.w3.org/ns/csvw#",
    "dc": "http://purl.org/dc/terms/",
    "prov": "http://www.w3.org/ns/prov#",
    "xsd": "http://www.w3.org/2001/XMLSchema#"
  },
  "@graph": [
    {
      "@id": "_:g69960879269460",
      "@type": "prov:Usage",
      "prov:entity": {
        "@id": "example.csv-metadata.json"
      },
      "prov:hadRole": {
        "@id": "csvw:tabularMetadata"
      }
    },
    {
      "@id": "_:g69960879270660",
      "@type": "prov:Usage",
      "prov:entity": {
        "@id": "example.csv"
      },
      "prov:hadRole": {
        "@id": "csvw:csvEncodedTabularData"
      }
    },
    {
      "@id": "_:g69960879273280",
      "@type": "prov:Activity",
      "prov:endedAtTime": {
        "@value": "2015-04-22T20:21:11Z",
        "@type": "xsd:dateTime"
      },
      "prov:qualifiedUsage": [
        {
          "@id": "_:g69960879270660"
        },
        {
          "@id": "_:g69960879269460"
        }
      ],
      "prov:startedAtTime": {
        "@value": "2015-04-22T20:21:10Z",
        "@type": "xsd:dateTime"
      },
      "prov:wasAssociatedWith": {
        "@id": "http://rubygems.org/gems/rdf-tabular"
      }
    },
    {
      "@id": "_:g69960879277480",
      "@type": "csvw:Row",
      "csvw:describes": {
        "@id": "http://librarything.com/isbn/0470402377"
      },
      "csvw:rownum": {
        "@value": "1",
        "@type": "xsd:integer"
      },
      "csvw:url": {
        "@id": "#row=2"
      }
    },
    {
      "@id": "_:g69960879413940",
      "@type": "csvw:Table",
      "csvw:row": {
        "@id": "_:g69960879277480"
      },
      "csvw:url": {
        "@id": "example.csv"
      },
      "dc:creator": "Dan Bricklin",
      "dc:modified": "2014-05-09T15:44:58Z",
      "dc:publisher": "My Books",
      "dc:title": "My Spreadsheet"
    },
    {
      "@id": "_:g69960879425260",
      "@type": "csvw:TableGroup",
      "csvw:table": {
        "@id": "_:g69960879413940"
      },
      "prov:wasGeneratedBy": {
        "@id": "_:g69960879273280"
      }
    },
    {
      "@id": "http://librarything.com/isbn/0470402377",
      "dc:creator": "Dan Bricklin",
      "dc:identifier": "0470402377",
      "dc:title": "Bricklin on Technology"
    }
  ]
}


A clinical trial data example?

Tabular data has been the traditional way to organize how clinical trial data is captured, stored and submitted. So I think it would be very interesting to explore binding the data to its metadata in a similar way, that is, to make things like variable labels, date/time formats etc. explicit.
  • What could the metadata for a small example of, say, demographic data look like? (A sketch follows below.)
  • What would the annotated, linked data for such a small example look like?
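To make the first question concrete, here is a minimal sketch of what such metadata could look like for a hypothetical demographics file, dm.csv, with the columns USUBJID, AGE, SEX and ARM. The column names, labels and URIs are illustrative assumptions only, loosely inspired by the SDTM DM domain, not any agreed mapping:

{
  "@context": {
    "@vocab": "http://www.w3.org/ns/csvw#",
    "dc": "http://purl.org/dc/terms/"
  },
  "@type": "Table",
  "url": "dm.csv",
  "dc:title": "Demographics (hypothetical example)",
  "tableSchema": {
    "aboutUrl": "http://example.org/study/ABC123/subject/{USUBJID}",
    "primaryKey": "USUBJID",
    "columns": [
      {
        "name": "USUBJID",
        "titles": "Unique Subject Identifier",
        "datatype": "string",
        "required": true,
        "propertyUrl": "http://purl.org/dc/terms/identifier"
      },
      {
        "name": "AGE",
        "titles": "Age at Baseline",
        "datatype": "integer",
        "propertyUrl": "http://example.org/sdtm#AGE"
      },
      {
        "name": "SEX",
        "titles": "Sex",
        "datatype": "string",
        "propertyUrl": "http://example.org/sdtm#SEX"
      },
      {
        "name": "ARM",
        "titles": "Planned Treatment Arm",
        "datatype": "string",
        "propertyUrl": "http://example.org/sdtm#ARM"
      }
    ]
  }
}

The aboutUrl template would mint one URI per subject, and each propertyUrl makes the meaning of a column explicit instead of leaving it implied by the variable name.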
I would love to see some early ideas on how this could be implemented in the two main languages/environments we use today for clinical data: SAS and R, similar to the early Ruby implementation of CSVW described in a nice blog post by Gregg Kellogg (@gkellogg, one of the authors of the drafts).
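Not SAS or R, but just to illustrate the direction in a language-neutral way: here is a small Python sketch (not a conforming CSVW processor) that reads the hypothetical dm.csv together with the metadata sketched above and emits RDF triples with rdflib. File names, URIs and the simplistic URI templating are all assumptions:

# Sketch: convert dm.csv plus the hypothetical CSVW metadata above
# into RDF triples with rdflib. Real CSVW processors use RFC 6570
# URI templates and many more rules; this only shows the idea.
import csv
import json

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import XSD

with open("dm.csv-metadata.json") as f:
    meta = json.load(f)

schema = meta["tableSchema"]
about_tmpl = schema["aboutUrl"]          # e.g. ".../subject/{USUBJID}"
datatypes = {"integer": XSD.integer, "string": XSD.string}

g = Graph()
with open("dm.csv") as f:
    for row in csv.DictReader(f):
        subject = URIRef(about_tmpl.format(**row))   # naive templating
        for col in schema["columns"]:
            value = row[col["name"]]
            dt = datatypes.get(col.get("datatype", "string"), XSD.string)
            g.add((subject, URIRef(col["propertyUrl"]), Literal(value, datatype=dt)))

print(g.serialize(format="turtle"))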

I think such a first example would trigger interesting ideas for best practices and potential extensions to the metadata vocabulary and model, and also to the procedures and rules for creating annotated JSON and RDF representations, such as:
  • Templates for the URIs to be assigned to each captured and derived data point?
  • Representing implied formats in varchar fields such as dates and precision.
  • Making explicit the implied metadata from the actual data such as encoded labtest codes and units.
  • How to leverage the RDF schemas representing CDISC standards?
  • How to best use W3C's Provenance ontology to capture the life cycle of a data point in a clinical trial?
I think questions like these are important to address, especially in the context of transparency and reuse of clinical trial data; see also an earlier blog post: Clinical Trial Data Transparency and Linked Data.

So, I hope this blog post will spark some interesting responses from the SAS and R communities, and discussions in groups like CDISC and the PhUSE Semantic Technology project.

Monday, February 16, 2015

Clinical Trial Data Transparency and Linked Data

I have been following the discussions about clinical trial transparency and sharing of clinical trial data with great interest for the last three years. More precisely, my first tweet about this was from early 2012.


There have been a lot of debates over these years about how much of the results of clinical trials actually get published: is it 50% or much more? Journal article publications vs. trial registries? There are also many issues around summary-level vs. patient-level data, and around de-identification of data, redaction of documents, etc.

All interesting topics, but my interest in all of this is the opportunity to make data in, about and related to clinical trials useful, using semantic web standards and linked data principles. In the spring of 2013 I wrote a blog post about this, Talking to Machines, after listening to Ben Goldacre, one of the key people behind the AllTrials initiative, where he also acknowledged this.




Here are a couple of recent events, early 2015, related to Clinical Trial Data Transparency and Linked Data:
  • AAAS Panel on Innovations in Clinical Trial Registry
  • Public consultation EMA Clinical trial database
  • IoM report: Sharing Clinical Trial Data: Maximizing Benefits, Minimizing Risk

AAAS Panel on Innovations in Clinical Trial Registry

So, I really liked what I saw in the programme for a session yesterday evening (15 February 2015) at the American Association for the Advancement of Science annual meeting in San Jose (#AAASmtg): a panel on Innovations in Clinical Trial Registers. From the session description:
Documents relating to trials -- protocols, regulatory summaries of results, clinical study reports, consent forms, and patient information sheets -- are scattered in different places. It is difficult to track the information that is available, in order to audit for gaps in information and for doctors and regulators to be sure they have all the information they need to make decisions about medicines. There is an unprecedented opportunity to refine how clinical trial data are shared and linked.

Public consultation EMA Clinical trial database

This is similar to what I wrote last week when I tried to "act courageously" and responded to "the public consultation on how the transparency rules of the European Clinical Trial Regulation will be applied in the new clinical trial database is launched by the European Medicines Agency (EMA)."
Make use of modern data standards and access methods to make access to the clinical trial database developer-friendly, the data machine-processable, and the trials and their components linkable. Leverage initiatives and principles such as CDISC Standards in RDF (under review), which uses the W3C stack of semantic web standards; openFDA, which offers developer-friendly REST APIs returning JSON (openFDA API reference); and the linked data principles.
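As a small illustration of the developer-friendly, machine-processable side of this, here is a sketch of calling openFDA's public drug adverse event endpoint from Python. The search expression is only an example, and field names should be checked against the openFDA API reference:

# Sketch: query openFDA's drug adverse event endpoint and print
# how many reports match a simple search expression.
import requests

resp = requests.get(
    "https://api.fda.gov/drug/event.json",
    params={
        "search": 'patient.drug.openfda.generic_name:"metformin"',
        "limit": 1,
    },
)
resp.raise_for_status()
data = resp.json()

print("Total matching reports:", data["meta"]["results"]["total"])
print("First report id:", data["results"][0]["safetyreportid"])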

IoM report: Sharing Clinical Trial Data: Maximizing Benefits, Minimizing Risk

A couple of weeks ago the Institute of Medicine (IOM) released an excellent report: Sharing Clinical Trial Data: Maximizing Benefits, Minimizing Risk.

A short summary, as I interpret the core message of the report: instead of just designing and planning a study, scientists need to plan and document how they are going to share the data from that study, so that it is usable by others who may want to re-analyze it.

The report has a well-written section on “legacy trials” and an interesting list of challenges:

  • Infrastructure challenges—Currently there are insufficient platforms to store and manage clinical trial data under a variety of access models.
  • Technological challenges—Current data sharing platforms are not consistently discoverable, searchable, and interoperable. Special attention is needed to the development and adoption of common protocol data models and common data elements to ensure meaningful computation across disparate trials and databases. A federated query system of “bringing the data to the question” may offer effective ways of achieving the benefits of sharing clinical trial data while mitigating its risks.
  • Workforce challenges—A sufficient workforce with the skills and knowledge to manage the operational and technical aspects of data sharing needs to be developed.
  • Sustainability challenges—Currently the costs of data sharing are borne by a small subset of sponsors, funders, and clinical trialists; for data sharing to be sustainable, costs will need to be distributed equitably across both data generators and users.

And for a “clinical trial data and metadata nerd” like me, this is music :-)

Just because data are accessible does not mean they are usable. Data are usable only if an investigator can search and retrieve them, can make sense of them, and can analyze them within a single trial or combine them across multiple trials. Given the large volume of data anticipated from the sharing of clinical trial data, the data must be in a computable form amenable to automated methods of search, analysis, and visualization.


To ensure such computability, data cannot be shared only as document files (e.g., PDF, Word). Rather, data must be in electronic databases that clearly specify the meaning of the data so that the database can respond correctly to queries. If data are spread over more than one database, the meaning of the data must be compatible across databases; otherwise, queries cannot be executed at all, or are executable but elicit incorrect answers. In general, such compatibility requires the adoption of common data models that all results databases would either use or be compatible with.