XML – Conal Tuohy’s blog (http://conaltuohy.com) – The blog of a digital humanities software developer

Analysis & Policy Online (http://conaltuohy.com/blog/analysis-policy-online/, 27 June 2017)

Notes for my Open Repositories 2017 conference presentation. I will edit this post later to flesh it out into a proper blog post.
Follow along at: conaltuohy.com/blog/analysis-policy-online/

background

  • Early discussion with Amanda Lawrence of APO (which at that time stood for “Australian Policy Online”) about text mining, at the 2015 LODLAM Summit in Sydney.
  • They needed automation to help with the cataloguing work, to improve discovery.
  • They needed to understand their own corpus better.
  • I suggested a particular technical approach based on previous work.
  • In 2016, APO contracted me to advise and help them build a system that would “mine” metadata from their corpus, and use Linked Data to model and explore it.

constraints

  • Openness
  • Integrate metadata from multiple text-mining processes, plus manually created metadata
  • Minimal dependency on their current platform (Drupal 7, now Drupal 8)
  • Lightweight; easy to make quick changes

technical approach

  • Use an entirely external metadata store (a SPARQL Graph Store)
  • Use a pipeline! Extract, Transform, Load
  • Use standard protocol to extract data (first OAI-PMH, later sitemaps)
  • In fact, use web services for everything; the pipeline is then just a simple script that passes data between web services
  • Sure, XSLT and SPARQL Query, but what the hell is XProc?!

progress

  • Configured Apache Tika as a web service, using Stanford Named Entity Recognition toolkit
  • Built XProc pipeline to harvest from Drupal’s OAI-PMH module, download digital objects, process them with Stanford NER via Tika, and store the resulting graphs in Fuseki graph store
  • Harvested, and produced a graph of part of the corpus, but …
  • Turned out the Drupal OAI-PMH module was broken! So we used sitemaps instead
  • “Related” list added to APO dev site (NB I’ve seen this isn’t working in all browsers and obviously needs more work; perhaps using an iframe is not the best idea. Try Chrome if you don’t see the list of related pages on the right)

next steps

  • Visualize the graph
  • Integrate more of the manually created metadata into the RDF graph
  • Add topic modelling (using MALLET) alongside the NER

Let’s see the code

Questions?

(if there’s any time remaining)

A tool for Web API harvesting (http://conaltuohy.com/blog/web-api-harvesting/, 31 December 2016)

A medieval man harvesting metadata from a medieval Web API

As 2016 stumbles to an end, I’ve put in a few days’ work on my new project Oceania, which is to be a Linked Data service for cultural heritage in this part of the world. Part of this project involves harvesting data from cultural institutions which make their collections available via so-called “Web APIs”. There are some very standard ways to publish data, such as OAI-PMH, OpenSearch, SRU, RSS, etc, but many cultural heritage institutions instead offer custom-built APIs that work in their own peculiar way, which means that you need to put in a certain amount of effort in learning each API and dealing with its specific requirements. So I’ve turned to the problem of how to deal with these APIs in the most generic way possible, and written a program that can handle a lot of what is common in most Web APIs, and can be easily configured to understand the specifics of particular APIs.

This program, which I’ve called API Harvester, can be configured by giving it a few simple instructions: where to download the data from, how to split up the downloaded data into individual records, where to save the record files, how to name those files, and where to get the next batch of data from (i.e. how to resume the harvest). The API Harvester does have one hard requirement: it is only able to harvest data in XML format, but most of the APIs I’ve seen offered by cultural heritage institutions do provide XML, so I think it’s not a big limitation.

The API Harvester software is open source, and free to use; I hope that other people find it useful, and I’m happy to accept feedback or improvements, or examples of how to use it with specific APIs. I’ve created a wiki page to record example commands for harvesting from a variety of APIs, including OAI-PMH, the Trove API, and an RSS feed from this blog. This wiki page is currently open for editing, so if you use the API Harvester, I encourage you to record the command you use, so other people can benefit from your work. If you have trouble with it, or need a hand, feel free to raise an issue on the GitHub repository, leave a comment here, or contact me on Twitter.

Finally, a brief word on how to use the software: to tell the harvester how to pull a response apart into individual records, and where to download the next page of records from (and the next, and the next…), you give it instructions in the form of “XPath expressions”. XPath is a micro-language for querying XML documents; it allows you to refer to elements and attributes and pieces of text within an XML document, and to perform basic arithmetic and manipulate strings of text. XPath is simple yet enormously powerful; if you are planning on doing anything with XML it’s an essential thing to learn, even if only to a very basic level. I’m not going to give a tutorial on XPath here (there are plenty on the web), but I’ll give an example of querying the Trove API, and briefly explain the XPath expressions used in that example:

Here’s the command I would use to harvest metadata about maps, relevant to the word “oceania”, from the Trove API, and save the results in a new folder called “oceania-maps” in my Downloads folder:

java -jar apiharvester.jar
directory="/home/ctuohy/Downloads/oceania-maps"
retries=5
url="http://api.trove.nla.gov.au/result?q=oceania&zone=map&reclevel=full"
url-suffix="&key=XXXXXXX"
records-xpath="/response/zone/records/*"
id-xpath="@url"
resumption-xpath="/response/zone/records/@next"

For legibility, I’ve split the command onto multiple lines, but this is a single command and should be entered on a single line.

Going through the parts of the command in order:

  • The command java launches a Java Virtual Machine to run the harvester application (which is written in the Java language).
  • The next item, -jar, tells Java to run a program that’s been packaged as a “Java Archive” (jar) file.
  • The next item, apiharvester.jar, is the harvester program itself, packaged as a jar file.

The remainder of the command consists of parameters that are passed to the API harvester and control its behaviour.

  • The first parameter, directory="/home/ctuohy/Downloads/oceania-maps", tells the harvester where to save the XML files; it will create this folder if it doesn’t already exist.
  • With the second parameter, retries=5, I’m telling the harvester to retry a download up to 5 times if it fails; Trove’s server can sometimes be a bit flaky at busy times; retrying a few times can save the day.
  • The third parameter, url="http://api.trove.nla.gov.au/result?q=oceania&zone=map&reclevel=full", tells the harvester where to download the first batch of data from. To generate a URL like this, I recommend using Tim Sherratt’s excellent online tool, the Trove API Console.
  • The next parameter url-suffix="&key=XXXXXXX" specifies a suffix that the harvester will append to the end of all the URLs which it requests. Here, I’ve used url-suffix to specify Trove’s “API Key”; a password which each registered Trove API user is given. To get one of these, see the Trove Help Centre. NB XXXXXXX is not my actual API Key.

The remaining parameters are all XPath expressions. To understand them, it will be helpful to look at the XML content which the Trove API returns in response to that query, and which these XPath expressions apply to.
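Since the response itself isn’t reproduced here, here is a much-simplified sketch of its shape, based on the XPath expressions described below; the identifiers, titles and attribute values are invented, and the real response contains many more elements and attributes.

<response>
  <zone name="map">
    <!-- the records element carries a "next" attribute pointing at the next page of results -->
    <records s="0" n="2" total="999" next="/result?q=oceania&amp;zone=map&amp;reclevel=full&amp;s=next-page-token">
      <!-- each work element (and its contents) becomes one harvested record -->
      <work id="11111111" url="https://api.trove.nla.gov.au/work/11111111">
        <title>A map of Oceania</title>
      </work>
      <work id="22222222" url="https://api.trove.nla.gov.au/work/22222222">
        <title>Another map of Oceania</title>
      </work>
    </records>
  </zone>
</response>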

  • The first XPath parameter, records-xpath="/response/zone/records/*", identifies the elements in the XML which constitute the individual records. The XPath /response/zone/records/* describes a path down the hierarchical structure of the XML: the initial / refers to the start of the document, the response refers to an element with that name at the “root” of the document, then /zone refers to any element called zone within that response element, then /records refers to any records elements within any of those zone elements, and the final /* refers to any elements (with any name) within any of those records elements. In practice, this XPath expression identifies all the work elements in the API’s response, and means that each of these work elements (and its contents) ends up saved in its own file.
  • The next parameter, id-xpath="@url" tells the harvester where to find a unique identifier for the record, to generate a unique file name. This XPath is evaluated relative to the elements identified by the records-xpath; i.e. it gets evaluated once for each record, starting from the record’s work element. The expression @url means “the value of the attribute named url”; the result is that the harvested records are saved in files whose names are derived from these URLs. If you look at the XML, you’ll see I could equally have used the expression @id instead of @url.
  • The final parameter, resumption-xpath="/response/zone/records/@next", tells the harvester where to find a URL (or URLs) from which it can resume the harvest, after saving the records from the first response. You’ll see in the Trove API response that the records element has an attribute called next which contains a URL for this purpose. When the harvester evaluates this XPath expression, it gathers up the next URLs and repeats the whole download process again for each one. Eventually, the API will respond with a records element which doesn’t have a next attribute (meaning that there are no more records). At that point, the XPath expression will evaluate to nothing, and the harvester will run out of URLs to harvest, and grind to a halt.

Happy New Year to all my readers! I hope this tool is of use to some of you, and I wish you a productive year of metadata harvesting in 2017!

Zotero, Web APIs, and data formats (http://conaltuohy.com/blog/zotero-web-api-data-format/, 30 August 2015)

I’ve been doing some work recently (for a couple of different clients) with Zotero, the popular reference management software. I’ve always been a big fan of the product. It has a number of great features, including the fact that it integrates with users’ browsers, and can read metadata out of web pages, PDF files, linked data, and a whole bunch of APIs.

One especially nice feature of Zotero is that you can use it to collaborate with a group of people on a shared library of data which is stored in the cloud and synchronized to the devices of the group members.

Getting data out of Zotero’s web API

If you then want to get the data out of Zotero to do other things with it, you have a number of options. Zotero supports many standard export formats, but the problem I found was that none of those export formats exposed the full richness of your data. Some formats don’t include the “Tags” that you can apply to items in your library; some don’t reflect the hierarchical structure of ‘Collections’ in your library; and so on. It seems the only way to get the full story is to use Zotero’s web API.

Like any web API, this API is a great thing; it makes it possible to use Zotero as a platform for building all kinds of other web applications and systems. The nice thing about a web API is that it’s open to being accessed by any other kind of software. You don’t need to write your software in Javascript or in PHP (the Zotero data server is written in PHP). To access a web API you only need to be able to make HTTP requests, so you’re not tied to any particular platform.

Zotero’s web API is pretty good as web APIs go, though it does have a weakness which is common to many “web APIs”. The weakness is that it’s not obvious how to interpret the data which Zotero provides, and this is a practical barrier to the use of the API. It certainly was for me.

REST

Zotero’s API documentation makes mention of the buzzword “REST”, which is an acronym for “Representational State Transfer”. REST is the name for a style of network communications, defined by a set of design principles or guidelines. A network protocol or web API that conforms to those guidelines is said to be “RESTful”. However, in practice a great many “RESTful” web APIs fail to conform to one or more of the principles, commonly the principle of the “Uniform Interface”, one corollary of which is that the packets of information sent back and forth must be “self-descriptive”.

Self-descriptive messages

To get it right, a RESTful web API needs to provide self-descriptive information; the information it sends you must describe itself sufficiently that you could work out what to do with it. Often the publishers of APIs rely on providing documentation of the different data formats their API provides, and they expect you to have found and read that documentation before you use their API, and to already know what kind of response you will get from the various different parts of their API. But if an API relies on you already knowing what kind of information it provides, then it’s not RESTful. This unfortunately is the case with Zotero.

So how should an API publish “self-descriptive” data?

The HTTP Content-Type header

The main mechanism a web server uses to publish self-descriptive data is to include along with the data a Content-Type header which explicitly declares the format of that information using a code called an “Internet Media Type”. There are a zillion of these Internet Media Types, including image/jpeg and image/png for images, text/html for web pages, application/xml for generic XML documents, or application/json for generic JSON data objects. Of these examples, the last two stand out as different because they are not very specific. What does an XML document mean? What does a JSON object mean? They could mean anything at all, because XML and JSON are generic data formats which can be used to transmit all different kinds of information. It’s possible to be more specific about what kind of XML or JSON you are producing, by saying for instance application/rdf+xml (RDF data encoded in XML) or application/ld+json (Linked Data encoded in JSON). But if you only give a more generic Content-Type, then a client will need to look inside the data package itself to determine what it means.
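To make the difference concrete, compare the response headers a client might see from two hypothetical servers; only the second declares, up front, that the body is RDF rather than just “some XML”:

HTTP/1.1 200 OK
Content-Type: application/xml

HTTP/1.1 200 OK
Content-Type: application/rdf+xml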

If Zotero were to publish its data as application/zotero+json, that would be an improvement. It would mean that Zotero data in this format could be exchanged around in other systems, and still be understandable. As it stands, Zotero’s application/json data can only reliably be understood if you have just read it from Zotero.

Here’s an example of the JSON data you can read from Zotero’s API: https://api.zotero.org/groups/300568/items?v=3&format=json

Namespaces

One of the nice features of XML is the concept of “namespaces”. These are distinct vocabularies with globally unique names, which allow you to unambiguously identify what kind of XML data you are looking at. If a piece of software can recognise the namespace or namespaces that a document uses, then it’s in a position to understand what it means, and to process it usefully. Otherwise a human is going to have to look at the XML and try to make some sense of it. JSON doesn’t have an equivalent to XML Namespaces (although JSON-LD does), so that means that information served up as application/json can’t be considered very self-descriptive.

Another interesting point about XML Namespaces is that each of these vocabularies is uniquely identified by a URI; that is, the URI is the name of the vocabulary. This has the nice feature that you can open an XML file, find the namespace URI, plug that namespace URI into your browser, and magically be presented with some useful information about that vocabulary. In other words, any data in this format will always contain a hyperlink to its own documentation (called a “Namespace Document”).

If Zotero were to publish its data in XML, and use a “Zotero” namespace to label all the terms in its vocabulary, then that would be another improvement. Any XML documents of that type could be downloaded from Zotero and stored in any other kind of system, and because they would contain that identifier, they would still be identifiably Zotero-flavoured, even after they had long been detached from Zotero itself.
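Purely as an illustration of what that might look like, here is a sketch of such a document. The namespace URI, element names and values are all invented for this example; Zotero doesn’t currently publish anything in this form (which is the point of the suggestion):

<item xmlns="http://example.org/ns/zotero-data" itemType="journalArticle">
  <!-- the xmlns declaration labels every element here as belonging to the (invented) "Zotero" vocabulary -->
  <title>An example article</title>
  <publicationTitle>An example journal</publicationTitle>
  <dateAdded>2015-08-30T05:49:44Z</dateAdded>
</item>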

Formalised data formats

Although it is a problem that Zotero’s JSON data format doesn’t have its own formal name by which it can identify itself, the more critical issue for me in attempting to understand the Zotero data was that the data format exposed by the API is barely documented at all.

If you read the JSON data, you will see names such as publicationTitle, itemType, dateAdded, etc, and you can have a guess at what they mean, but it shouldn’t be necessary to guess what they mean, or to understand the relationships between them. I had to spend hours analysing the dataset I had extracted from the web API, before I could seriously attempt to convert it to some other form. There is some documentation scattered about here and there, but no authoritative description of the data format. Compare this to the situation with the more formalised formats which Zotero can export: TEI, RIS, MODS, etc, which have formal specifications defining all the terms in their vocabulary.
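For instance, a single item in that JSON looks roughly like the fragment below (the values are invented and the structure is simplified from memory, so treat it as illustrative rather than authoritative):

{
  "key": "ABCD2345",
  "version": 101,
  "data": {
    "itemType": "journalArticle",
    "title": "An example article",
    "publicationTitle": "An example journal",
    "tags": [ { "tag": "example" } ],
    "dateAdded": "2015-08-30T05:49:44Z"
  }
}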

Is this something that Zotero could do? It’s hard to say; it would require some technical changes to the Zotero data server code, but probably more significantly it would involve a change in collective mindset by the developers involved: to see Zotero’s data model as an abstraction independent of Zotero’s data server application; a genuine public language for communicating between arbitrary bibliographic systems, not merely a kind of window into the internal workings of a particular software system.

This is a common situation in web applications which offer an API: the application developers are focused intellectually on the application itself, its own internal workings and the functionality it provides, and they naturally tend to see the API as merely an aspect of that system. The idea that the data format of the API might have a life independent of their software, or that it might even outlive their software altogether, is a stretch. But if the data which their system works with is important, then it is surely important enough to accord some formal status: to give it a name (an Internet Media Type, or even a namespace URI), to constrain it with a schema, and to explain it with formal documentation.

Next steps

As usual, the code I’m writing is published on GitHub; in the first instance this XProc pipeline to convert a Zotero library to EAD. But this was really a first stab at the problem; where I’d like to go is to try to formalise and specify the Zotero data format itself; to give it an XML encoding with a formal definition, and then to build other systems, such as Linked Data systems, on top of that formalised format.

Beta release of XProc-Z web server framework (http://conaltuohy.com/blog/beta-release-of-xproc-z-web-server-framework/, 14 May 2015)

I have at last released a “final” version of my web server framework, XProc-Z, for testing. The last features I had wanted to include were:

  • The ability for the XProc code in the web server to read information from its environment, so that a generic XProc pipeline can be customized by setting configuration properties.
  • Full support for sending and receiving binary files (i.e. non-text files). XProc is really a language for processing XML, but I think it will be handy to be able to deal with binary files as well from time to time.
  • A few sample XProc pipelines, to demonstrate the capability of the platform.


Now the software is out there for people to try, and already I have a friend — a medievalist — who has installed it and started to use it to develop a web application. It’s exciting to have an “installed base” (one person, but it’s a start!) for software which previously only I used.

Also, now that the XProc-Z platform is more or less complete, I will be using it myself to build an application for Library and Archives people to convert their collection metadata into Linked Open Data form.

I hope that the platform will turn out to be useful generally in the Digital Humanities and Library fields; there’s a lot of processing of XML going on, and XProc is an ideal programming language for that. Since it’s designed to run XProc pipelines, on the web, with minimal extras, XProc-Z is also highly appropriate for web-based XML processing applications, making it one of the most concise and simple ways to write applications of that nature. If you are a DH developer and you already know XSLT, XQuery, or XPath, you will find XProc a pretty amenable language – I totally recommend it!

If you’re interested to see it in action, you can view it on this server, running the sample pipelines which I’ve included. You can also view the Java source code or the sample XProc pipelines on the github site. The “main” pipeline is xproc-z.xpl.

If you’re interested to give it a try, and you know — or don’t mind learning — a bit of XProc, feel free to download the software and fire it up on a machine of your own. I am happy to answer questions about it and generally help to get people going. You can comment here on the blog, email me, or post an “issue” on the github site.

XProc-Z (http://conaltuohy.com/blog/xproc-z/, 9 December 2014)

Last weekend I finally released my latest work of art; a software application called XProc-Z. It’s a fairly small thing, but it’s the result of a lot of thought, and I’m very pleased with it. I hope to make a lot of use of it myself, and I hope I can interest other people in using it too.

A lot of the work I do involves crunching up metadata records, XML-encoded text, web pages, and the like. In the old days I used to use Apache Cocoon for this kind of work, but in recent years the development community has moved Cocoon (especially since version 3) in a different direction. Now it’s more of a Java web server framework with many XML-related features. To actually build an application with Cocoon now, you have to put your Java hat on and write some Java and compile it, with Maven and all that Java stuff. That’s all very well, if you like that sort of thing, but it is not very lightweight. I would prefer to be able to just write a script, and not have to write and compile Java. And there are better languages around for that purpose. In particular, there’s a relatively new language for scripting XML processing, called XProc.

XProc; a language for data plumbing

XProc is a language for writing XML pipelines; it uses the idea of data “flowing” from various sources, step by step through a network of pipes and filters, to reach its destination. It’s a language for data plumbing.

So for the last few years I have tended to use the XProc programming language for XML processing tasks. For tasks such as these XProc is an ideal language because its features are designed for precisely these purposes. For instance it takes only a dozen or so lines of code to read a bunch of XML files from a website, transform them with an XSLT, validate them with a schema, and finally save the valid files in one folder and the invalid files in another.
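To give a flavour of that, here is a minimal sketch of such a pipeline for a single document; it leaves out the sorting of valid from invalid files (which needs a p:try/p:catch around the validation step), and the URLs and file names are made up:

<p:declare-step xmlns:p="http://www.w3.org/ns/xproc" version="1.0" name="example">
  <p:output port="result"/>
  <!-- read an XML document from the web -->
  <p:load href="http://example.org/records/record-1.xml"/>
  <!-- transform it with an XSLT stylesheet -->
  <p:xslt>
    <p:input port="stylesheet">
      <p:document href="normalize.xsl"/>
    </p:input>
  </p:xslt>
  <!-- check it against a RELAX NG schema; this step fails if the document is invalid -->
  <p:validate-with-relax-ng>
    <p:input port="schema">
      <p:document href="record.rng"/>
    </p:input>
  </p:validate-with-relax-ng>
  <!-- save the validated document; p:store reports what it stored on the pipeline's output -->
  <p:store href="valid/record-1.xml"/>
</p:declare-step>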

Running XProc programs on the Web

Unlike Cocoon, XProc is not intended primarily for writing web servers, and for a while at least, there was no convenient way to run XProc pipelines as a web server at all. I’ve tended to run my XProc pipelines from the command line (using the XProc interpreter Calabash, by Norm Walsh) where they can read from the Web, and write to the Web, but they aren’t themselves actually part of the Web. It’s always struck me, though, that it would be a great language for writing web applications, and so I did some research to try to find a good way to run my XProc code on the Web.

I had a look at about 5 different ways, but none of them offered quite what I wanted. The problem lay in the details of the mechanisms by which an HTTP request is passed to your pipeline, and in which your pipeline outputs its response. For instance, if a browser makes an HTTP “POST” request for a resource with a particular URI, and passes it a bunch of parameters encoded in the “application/x-www-form-urlencoded” format, somehow that request has to invoke a particular pipeline, and pass those parameters to it. As far as I could tell, each of the different frameworks had their own custom mechanism for this. Some of them were quite restrictive; a URI directly identified a pipeline, and any URI parameters were passed to the pipeline as pipeline parameters. Others were more flexible; you could tweak it so that various properties of a request, taken together, identified which pipeline to run; you could pass not just form parameters, but other things, such as HTTP request headers, to the pipeline, and so on. Generally, to customize the way the HTTP request was handled you had to write some Java code, or write some custom XML configuration file. That was all a bit discouraging to me, because it didn’t fit with two requirements that I had in mind:

  • Firstly, I wanted to be able to write an application entirely in the XProc language, without any Java coding or compilation. This is to keep it simple. I myself am a fluent Java programmer, but there are a lot of potential XProc programmers who don’t know Java, and who would find it a barrier to have to set up a Java development environment. Why should they have to? I know that in the library world, and in the Digital Humanities community, there are a lot of people who know XML, and know XSLT, and for whom XProc could be a really easy next step, but having to learn Java (even at a basic level) would be an effective barrier.
  • Secondly, and this is more of an issue for me personally; I want to be able to write XProc applications that handle any kind of HTTP request. I don’t just want to be able to do GET and POST, but also PUT, HEAD, and so on. I want my applications to have access not only to URI parameters, but also to cookies, HTTP request headers, multipart content uploads – everything.

Proxies

It might seem odd to want to be able to handle any arbitrary HTTP request; surely if I’m writing a web application I can ensure my front end makes only the sort of requests that my back end can handle? That’s true — if you’re writing a specific application, or applications of a specific kind. But I want to be able to also write really generic applications; namely proxies.

A web proxy is both a client and a server. It’s an intermediary which sits in between a client and one or more servers. It receives a request from a client, which it passes on to a server (perhaps modifying the request first), and then retrieves the response from the server, which it returns to the client (perhaps modifying the response first).

This allows a proxy to transform an existing web application into something different. For instance a proxy could make a blog look like a Linked Data store, or make a repository of TEI files look like a website, or a map, or an RSS feed. A proxy can turn a website into a web API, or vice versa. It can mash up two or more web APIs and make them look like another web API.

As well as transforming one kind of web server into something quite different, it can also just add some extra feature to an existing web server. For instance, it can enhance the web pages provided by one server by adding links or related information retrieved from another server.

In general I think that the proxy design pattern is seriously under-valued and under-used. It’s a powerful technique for assembling large systems out of smaller parts. The World Wide Web has been designed specifically to facilitate the use of proxies, and yet many web developers are not really even aware of the technique. Part of my goal with XProc-Z is to facilitate and encourage the use of this pattern.

XProc-Z

I figured that to make a really proxy-friendly XProc server, I would have to construct it myself. Earlier this year I had written a program called Retailer, which is a platform for hosting web apps written in the XSLT language, so I started with that and replaced the XSLT bits with XProc bits, using the Calabash XProc interpreter, and I was done.

Reusing XProc’s request and response documents

For maximum flexibility, I decided to pass the entire HTTP request to a single XProc pipeline, and leave it up to the pipeline itself to decide how to handle any headers, parameters, and so on. An XProc pipeline that didn’t need to know about the HTTP Accept header, for instance, could just ignore that header, but the header would always be passed to it anyway, just in case.

To pass the request to the pipeline, and to retrieve the response, I re-used a mechanism already present in the XProc language, which just had to be turned inside out. XProc has a step called http-request, and associated request and response XML document types. In XProc, an HTTP request is made by creating a request document containing the details of the request, and piping the document into an http-request step, which actually makes the HTTP request, and in turn outputs a response document. By following this pattern, I could make use of the existing definitions of request and response, and not have to add any extraneous or “foreign” mechanisms. In XProc-Z’s binding mechanism, an HTTP request received from a web user agent is converted into a request object and passed into the XProc-Z pipeline. The output of the pipeline is expected to be a response document, which XProc-Z converts into an actual HTTP response to the web user agent. In other words, an XProc-Z pipeline has the same signature as the standard XProc http-request step, which makes a lot of sense if you think about it.

This means that an XProc-Z server can make do with a single pipeline which handles any request. The pipeline can parse the request in an arbitrary way; using cookies, parsing URI parameters and HTTP headers, accepting PUT and POST requests, and returning arbitrary HTTP response codes and headers. The business of “routing” request URIs and parsing parameters is all left up to the XProc pipeline itself. So the binding mechanism is very simple and leaves maximum flexibility to the pipeline, which can then be used to implement any kind of HTTP based protocol.
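As a rough illustration of that signature, a trivial pipeline of this shape might look like the sketch below: it ignores the incoming request entirely and always returns the same response. This is just my sketch of the pattern described above, not code taken from XProc-Z itself, so the port names and other conventions it actually expects may differ; check the sample pipelines in the repository.

<p:declare-step xmlns:p="http://www.w3.org/ns/xproc"
                xmlns:c="http://www.w3.org/ns/xproc-step"
                version="1.0" name="hello">
  <!-- the HTTP request arrives as a c:request document on the input port -->
  <p:input port="source"/>
  <!-- the pipeline must emit a c:response document, which becomes the HTTP response -->
  <p:output port="result"/>
  <!-- ignore the request and always return the same plain-text response -->
  <p:identity>
    <p:input port="source">
      <p:inline>
        <c:response status="200">
          <c:body content-type="text/plain">Hello from an XProc-Z pipeline</c:body>
        </c:response>
      </p:inline>
    </p:input>
  </p:identity>
</p:declare-step>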

My first XProc-Z app

Finally, as a demo, I wrote a small pipeline for my friend Dot. The pipeline shows you a list of XML-encoded manuscripts, and lets you pick a selection.

Making a selection

Then, when you have made a selection, and clicked the button, the pipeline is invoked again. It runs a stylesheet (which Dot had already written) over each of the selected XML files, aggregates the results into a single web page, and returns the page to your browser.

Visualize the placement of illustrations in the manuscripts you selected

Next steps?

I am planning to use XProc-Z as a framework for building some Linked Open Data software, for publishing Linked Data from various legacy systems. I have some code lying around which mostly just needs some repackaging to turn it into a form that will run in XProc-Z.

I’m open to suggestions, though, and I’d be delighted to see other people using it. Let me know in the comments below if you have any bright ideas, or if you’d like to use it and need a hand getting started.
