I have been asked to provide a position paper for next week's Future of Interoperability Standards meeting hosted by CETIS. This blog post is one I have been meaning to write for ages so I'm offering it as a position paper of sorts.
UKOLN has been charged by JISC with the task of supporting the development of Dublin Core Application Profiles (DCAPs) in a number of areas. While I have not (so far) had much direct involvement in this work I have developed, over the last year or so, a real interest in the process of developing these.
The development of DCAPs is governed through the application of the Singapore Framework for Dublin Core Application Profiles. In this document, the concept of the application profile is explained thus:
The term profile is widely used to refer to a document that describes how standards or specifications are deployed to support the requirements of a particular application, function, community, or context. In the metadata community, the term application profile has been applied to describe the tailoring of standards for specific applications.
The requirements for an application profile to be legitimately termed a Dublin Core Application Profile are defined within the Singapore Framework. In brief, a DCAP is a "packet of documentation" which includes the following elements:
- Functional Requirements (mandatory)
- Domain Model (mandatory)
- Description Set Profile (mandatory)
- Usage Guidelines (optional)
- Encoding Syntax Guidelines (optional)
This seems mostly sensible although I have not been party to much of the discussion around the Singapore Framework and so I have never entirely appreciated the purpose of or need for the Description Set Profile (DSP). In passing I will note that it seems to me that the DSP could be optional rather than mandatory, and that the Usage Guidelines should be mandatory rather than optional.
According to the Singapore Framework web page, "there are no stable, published examples of full-blown application profiles that conform to these guidelines". With one exception, the Scholarly Works Application Profile (SWAP), it is difficult to find any examples of DCAPs which are close to being realised. SWAP was developed for the most part at UKOLN, so I have an interest in seeing it adopted; to date, however, we have seen no actual usage of this DCAP.
I come from a background of software and service development, rather than standards development. For this reason, the development of application profiles is more appealing to me than is standards development per se, as I expect to be able to apply my experience and skills more readily to work which is aimed at supporting "specific applications". It is natural for me to measure success in terms of usage. This means that I take usability seriously, and tend to focus on users and their responses.
Early in 2009 I began to notice a few things about how DCAPs such as SWAP were expected to be developed. It seemed to me that usability was not a stated priority and, I think as a consequence, little attention was given to testing the usability of DCAPs in a context involving real users and applications. DCAPs are expected to be tested for conformance to the standard, for internal cohesion and logic in terms of the underlying information model, and even for theoretical satisfaction of functional requirements; but if a DCAP has not been tested for usability before it reaches this point, it is at high risk of failure. It was also apparent to me that users, even experts in the domain for which the DCAP was intended, might struggle to appreciate, test or criticise a DCAP documented according to the Singapore Framework unless they had relatively rare information-management knowledge and understanding.
At UKOLN, I got together with some colleagues and proposed that we consider a more Agile approach to the development process. I use the term Agile in the sense in which it has been applied to software development in recent years. A key feature of Agile development is that it allows the development of not only the solution, but also the requirements, in a highly iterative process. Agile development tends to favour working solutions over future capabilities and encourages near-continuous engagement with users during the development process, responding to changes in functional requirements as both the developer and the user increase their understanding of the problem space. I wondered if we couldn't devise some tools and techniques which would allow the early stages of DCAP development to be done iteratively, with close engagement from prospective end-users. The following is a description of what we have developed so far.
An Agile approach
In order to focus on usability in the development of DCAPs, we realised that we would need to introduce a methodology which would allow us to frequently test what had been developed so far against user-requirements and understanding. Borrowing again from software development, we decided to adopt a rapid prototyping method, where we would give prospective end-users the means to quickly assemble information models which made sense to them in the context of their requirements. Some of our early experiments were in the domain of scholarly works because we have a particular interest there. Our method therefore relies on being able to assemble small groups of prospective users to participate actively in the development process.
We have observed an issue with users' engagement with application profiles. Application profiles are, essentially, intangible - users cannot interact with them directly. For many users, this presents a very real barrier to engagement. Even if formal documentation such as Description Set Profiles is developed during the development iterations, it takes a certain kind of user with a particular interest to engage with these. Many users need to see the sort of system interface which they will ultimately be using in order to contribute feedback on the development of an AP. We have developed two approaches to making DCAPs tangible: paper-prototyping and a flexible user interface tool for information modelling.
In early stages of requirements gathering, a paper-prototyping approach has shown real promise as an accessible method for eliciting requirements from groups of users. This has the advantage of being potentially very free-form, such that the developer’s unconscious influence on users’ contributions is reduced. Users are encouraged, collectively, to develop their own understanding and to model it. You can read about this in more detail in Emma Tonkin's paper: Multilayered Paper Prototyping for User Concept Modelling: Supporting the Development of Application Profiles.
One limitation of paper prototyping stems from this very free-form quality: it is difficult to correlate the outcomes of one free-form modelling exercise with the outcomes of other, similar exercises. For this reason, we have developed a second-stage development tool which uses software to structure and, crucially, record users' engagement with the developing application profile.
Our software for allowing users to experiment with modelling their domain is MrVobi. Below you can view a short video of it being demonstrated on an interactive whiteboard:
Users are encouraged to use this tool to create and restructure entities and attributes through an intuitive interface. The user interface is connected to a web service which records every decision, and which can hold and serve up pre-recorded models so that users can start from an advanced position in a given session.
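To illustrate the idea, here is a minimal sketch of such a decision-recording service. MrVobi's actual API is not described here, so all of the names below (`Decision`, `ModelSession`, the action strings) are hypothetical; the point is simply that each user action is stored as an immutable event, and that a session can be seeded from a pre-recorded model rather than a blank slate.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    """One recorded user action on the information model (illustrative)."""
    user: str
    action: str       # e.g. "create_entity", "move_attribute"
    entity: str
    attribute: str = ""

@dataclass
class ModelSession:
    decisions: List[Decision] = field(default_factory=list)

    def record(self, decision: Decision) -> None:
        # Every decision is appended, never overwritten, so the full
        # history of the session can be replayed and analysed later.
        self.decisions.append(decision)

    def seed(self, preset: List[Decision]) -> None:
        # Load a pre-recorded model so that users can start a session
        # "from an advanced position" rather than from scratch.
        self.decisions = list(preset)

session = ModelSession()
session.record(Decision("user1", "create_entity", "Work"))
session.record(Decision("user1", "move_attribute", "Work", "title"))
print(len(session.decisions))  # 2
```

Because the log is just data, it can be aggregated across many sessions, which is what makes the confidence measures discussed below possible.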
As we move users from the free-formed to the more structured interfaces, we can start to gain an important benefit. By recording the decisions that individuals make about the information model, we can aggregate these so that, theoretically, we can start to assign a level of confidence to the decisions which are eventually made about the application profile. For example, we can say something like "this attribute belongs with this other in this entity, and 71% of our test users from this domain agree with this".
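The aggregation itself can be very simple. The sketch below (with illustrative data and a hypothetical `agreement` helper, not our actual analysis code) shows how recorded placements of an attribute across test users yield the kind of percentage quoted above:

```python
from collections import Counter

def agreement(placements, chosen_entity):
    """Fraction of test users whose placement matches the chosen entity."""
    counts = Counter(placements)
    return counts[chosen_entity] / len(placements)

# Seven hypothetical test users decide where the "abstract" attribute belongs:
placements = ["Expression"] * 5 + ["Manifestation"] * 2

print(round(agreement(placements, "Expression") * 100))  # 71
```

Any statement of the form "71% of our test users from this domain agree with this" is then a straightforward read-off from the recorded decision logs.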
As an application profile becomes more developed, it can be presented to users for testing through this same interface. This means, importantly, that an application profile can be treated as something more dynamic. As a domain changes over time, with shifting aspirations, challenges and issues, so the application profile can be re-assessed in terms of its usability in a changing context.
A concern which we identified early in the development of these processes and tools was the fact that the tools influence the testing process: when a user gives feedback, they are to some extent commenting on the artificial interface as well as the application profile. The paper-prototyping partly mitigates this, as does the simple fact that we don't rely on a single interface. Within the very real constraints of users' patience and available time, the general approach is to introduce as many types of interface as the user can bear so that biases based on the idiosyncrasies of specific tools are gradually cancelled out.
To bring this back to the Singapore Framework: we believe that we are evolving an effective process to develop several parts of the 'package' - the functional requirements, the domain model, and the usage guidelines. We believe that if these are developed with frequent recourse to user-testing, then the resulting DCAP will be more robust, and more likely to be adopted. We think that we can build into the process an aspect of evidence gathering to allow us to make assertions about the resulting DCAP which are based on a certain degree of confidence.
This is very much a work in progress. We have experimented with the paper-prototyping approach with a number of different groups, and in more than one domain, with some very interesting results. We ran an interactive workshop at last year's International Conference on Dublin Core and Metadata Applications using the MrVobi software which was very well received (this was informed by a presentation which is also a useful overview). We have received strong encouragement from the Dublin Core Metadata Initiative to continue to develop this approach and are now considering how we might take this work forward in 2010. Any comments are welcome.
This work has been the result of collaboration within UKOLN. Special mention should be made of Emma Tonkin's efforts, which have been crucial in a number of aspects of this work. Others at UKOLN who have contributed are Andy Hewson, Talat Chaudhri, Mark Dewey, Stephanie Taylor, Julian Cheal and Tom Richards.