March 22-25, 2006
Albuquerque, New Mexico

An Introduction to the Semantic Web for Museums

Mike Lowndes, Natural History Museum, United Kingdom

Abstract

The future Web is an unknown country. Whatever we propose today, the reality will very likely be different. Technology progresses and conceptual thought keeps playing catch-up. Today’s Web is a very messy place, and for the most part it is still a ‘pull’ technology – you have to go and find it. Hyperlinks, a key part of the Web, are ‘dumb’ – they don’t necessarily tell you anything about themselves, or check that they are still valid. Yet the currently most popular way of exploring the Web – Google – relies on links, mostly human-made ones. There has to be a better way. Many years ago it was realised that ‘data about data’, or metadata, could help by providing short descriptions of content that both machines and humans can work with. In the context of the Web, this idea developed into the Semantic Web, first proposed by Tim Berners-Lee in 1998 (Berners-Lee 1998). The mini-workshop will form an introduction to the Semantic Web for those who desire to learn more but have been too afraid to ask. It is supported by this review of the main thrusts of current work, without getting too technical. Museums have some of the best, most valuable content on the Web, and have the expertise to make it self-describing. We should take the lead in the development of the Semantic Web and strive toward a greater signal-to-noise ratio for the future global audience.

Keywords: semantic Web, review, RDF, Web 2.0, ontology, W3C

Introduction

“The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation.”
Tim Berners-Lee, inventor of the World Wide Web (Berners-Lee et al. 2001).

The Semantic Web (SW) turns the current ‘global file system’ into a ‘global database’ and allows us to strive toward a greater signal-to-noise ratio for Web content. Furthermore, by letting machines do more of the work of understanding Web content, the personal experience of the Web could become more intelligent, directed and accurate. Achieving the SW is a huge task, and there are several barriers – not least that it is simply hard for many people to understand. However, the goal of the SW is also very similar to core goals of museums and libraries: to codify our content as much as possible and make it accessible in a form relevant to our audiences. To enable this on a truly global scale, content on the Web must be associated with structured information that describes it. This ‘metadata’ must be able to interoperate globally, so standards are at the heart of the SW. Some standards are now well defined, while others still require much work.

However, museums and libraries have been at the forefront of standards in some of these areas for internal information management purposes, and are therefore well placed to contribute significantly to the SW. We deal with standards in collections management; these help us structure data about our objects and information for organisational use. Part of this can also be used to make the objects accessible to the SW. Metadata standards are well established and already used by museums to link collections. Some museums are using cross-searching and data harvesting techniques such as Z39.50 and, more recently, metadata harvesting to integrate collections data within and between institutions. (For instance, the Natural History Museum in London has made over 1 million electronic records of collections and research data available via the use of summary metadata in XML. This same work can underpin our involvement with the SW.)

The SW is not developing in isolation. Internet2 (increased bandwidth, http://www.internet2.org/) and Grid computing (shared processing and software, http://www.gridcomputing.org/) will lead to an increase in available computing ‘power’ when we are on-line. Computing power and software tools are likely to become a utility like electricity. This will edge towards instant processing of everyday tasks (in the human timeframe) as long as we are connected. Convergence is also increasing: from digital radio, interactive TV, the Web and mobile devices to embedded identification and/or processing in everything from cars and fridges to clothes labels. The Web itself can already be seen as ‘old school’, part of a bigger picture, as more and more young people prefer a mobile Internet platform focused on messaging. We can no longer limit our thinking to the ‘desktop browser’.

Building Blocks

The World Wide Web Consortium (W3C), headed by Tim Berners-Lee, who I hope needs no introduction to my readers, created a roadmap for the SW (Berners-Lee, 1998; Berners-Lee, 2000) outlining the steps and technologies needed to get there. Beginning with the unique address behind the hyperlink – the Uniform Resource Locator (URL) – this roadmap is also a list of building blocks, taking in acronyms like XML, RDF and OWL, plus future, as-yet-undefined systems or ‘agents’ for logic, proof and trust determination, to arrive at the end goal: the Semantic Web.

URLs

Discrete informational content – a ‘digital object’ on the Web – should be available at its unique address. This address needs to be ‘persistent’ if the information is to persist.

XML and RDF

The first step towards the SW is to make content self-describing: to attach, communicate and process structured metadata about anything that could be regarded as a digital object or unit of information, be it the digital form of a book, a concept, a painting, an event, a plane ticket booking or a song. XML is the markup framework for this metadata. The Resource Description Framework (RDF, http://www.w3.org/RDF/) is an XML language designed specifically for metadata encoding and transmission. It is now a W3C standard – i.e. the ‘official’ encoding format for Semantic Web data. It can contain digital data and metadata and encode certain relationships, and it is the basic format for making Web resources self-describing. It is also used more and more as the native messaging format between applications, and even within distributed applications.
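To make this concrete, here is a minimal sketch of attaching Dublin Core metadata to a digital object and serialising it as RDF/XML. It assumes the Python rdflib library; the museum URL and object details are invented for illustration.

```python
# A minimal sketch: self-describing metadata in RDF using rdflib.
# The object URL and descriptive values are hypothetical.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DC

g = Graph()
obj = URIRef("http://www.example-museum.org/collection/object/1234")

# Attach simple Dublin Core metadata to the digital object.
g.add((obj, DC.title, Literal("Portrait of an Unknown Naturalist")))
g.add((obj, DC.creator, Literal("Anonymous, 18th century")))
g.add((obj, DC.format, Literal("oil on canvas")))

# Serialise as RDF/XML -- the wire format other applications can read.
print(g.serialize(format="xml"))
```

The same triples could equally be published alongside the object’s Web page, which is all that ‘making a resource self-describing’ really requires.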

Ontologies: RDF Schema and OWL

The second step towards the SW is the development of ontologies. Ontologies are structures that map metadata together into meaningful conceptual groups. They can include and map between classification systems (taxonomies), for instance.

What are they and what are they for?

Tim Berners-Lee (et al. 2001): “An ontology may express the rule If a city code is associated with a state code, and an address uses that city code, then that address has the associated state code.”

The DigiCULT (DigiCULT, 2003) Thematic Issue on the SW: “The most typical kind of ontology for the Web has a taxonomy and a set of ‘inference rules’.”

Finally, one of the leading researchers in the SW, James Hendler (Hendler, 2001), defines ontologies as “a set of knowledge terms, including the vocabulary, the interconnections in meaning, and some simple rules of inference and logic for some particular topic” and as ‘standards for describing and showing relationships between data’; i.e. ontologies have the functionality of a database (query) and a thesaurus (meaning by context). Now, do all this again in multiple languages…

The W3C standards here are an extension of RDF – RDF Schema and the Web Ontology Language, OWL. Research groups are working on the practical use of these, and vocabularies already exist, such as SKOS (http://www.w3.org/2004/02/skos/) and, within the heritage sector, the CIDOC CRM (Gill, 2002). Ontologies can be mapped together, and they can overlap. All of this should be hidden from the Web user – it is infrastructure that should make what the user sees more accurate and simple, not more complex and difficult. A common problem with early attempts to ‘do’ the SW has been to expose too much of this infrastructure to the user. Ontologies are tricky to understand if one is not an information scientist, and over-formalized, over-controlled use of them could slow down the development and spread of the SW, as they require us to model the ‘real world’ – not an easy task! We will return to this issue later, but essentially, someone needs to build these relationships. Who is capable of it? I’d suggest museums are, when appropriate tools are in place.
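As a small illustration of what an ontology encodes, the sketch below declares a tiny class hierarchy in RDF Schema and walks it to answer the kind of question an RDFS reasoner answers automatically. The vocabulary and object URIs are hypothetical; a real project would map to a published ontology such as the CIDOC CRM.

```python
# A minimal sketch: a tiny RDF Schema class hierarchy in rdflib.
# All URIs here are hypothetical.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

MUS = Namespace("http://www.example-museum.org/ontology#")
g = Graph()

# A Watercolour is a kind of Painting, which is a kind of ManMadeObject.
g.add((MUS.Painting, RDFS.subClassOf, MUS.ManMadeObject))
g.add((MUS.Watercolour, RDFS.subClassOf, MUS.Painting))

# An instance typed against the hierarchy.
item = URIRef("http://www.example-museum.org/collection/object/1234")
g.add((item, RDF.type, MUS.Watercolour))

# Walk the subclass chain: "is this item a ManMadeObject?" -- the sort
# of inference an RDFS reasoner performs without being asked.
def is_a(graph, instance, target_class):
    for cls in graph.objects(instance, RDF.type):
        while cls is not None:
            if cls == target_class:
                return True
            cls = graph.value(cls, RDFS.subClassOf)
    return False

print(is_a(g, item, MUS.ManMadeObject))  # True
```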

Rules, Security and Trust

The third step is developing reasoning tools (which can query RDF and handle the associated rules, security and trust issues) to make meaningful sense of all this data. This is still at quite a primitive stage and may require advanced artificial intelligence technology to achieve. Some protocols are in development: SPARQL (http://www.w3.org/TR/rdf-sparql-query/) and RuleML (http://www.ruleml.org/). These are not required for some steps towards the SW to be taken, such as an RDF-metadata-aware search engine, but will be needed to utilise OWL and the layers above.
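SPARQL can already be tried against small RDF stores today. A minimal sketch with rdflib, reusing the hypothetical museum metadata from earlier, might look like this:

```python
# A minimal sketch: querying RDF metadata with SPARQL via rdflib.
# The object URL and title are the hypothetical examples used above.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DC

g = Graph()
g.add((URIRef("http://www.example-museum.org/collection/object/1234"),
       DC.title, Literal("Portrait of an Unknown Naturalist")))

# Find every resource that carries a Dublin Core title.
results = g.query("""
    PREFIX dc: <http://purl.org/dc/elements/1.1/>
    SELECT ?item ?title
    WHERE { ?item dc:title ?title }
""")
for row in results:
    print(row.item, row.title)
```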

Such engines still need to know who they are reasoning for – i.e. have a user profile. The final piece of the SW jigsaw is that the SW will need to know who you are (or who you want to be for the purposes of your on-line interaction). Again, this is at an early stage, but projects like the Friend Of A Friend (FOAF) XML format (http://www.foaf-project.org/), and more recently the social Web networking/bookmarking phenomenon of del.icio.us (http://del.icio.us/), begin to point the way.
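FOAF profiles are plain RDF, so they can be built with the same tools. A minimal sketch follows, using rdflib’s bundled FOAF vocabulary; the people named are invented.

```python
# A minimal sketch: a machine-readable "who am I" profile in FOAF.
# Names and the mailbox address are invented for illustration.
from rdflib import Graph, BNode, Literal, URIRef
from rdflib.namespace import RDF, FOAF

g = Graph()
me = BNode()
g.add((me, RDF.type, FOAF.Person))
g.add((me, FOAF.name, Literal("A. Curator")))
g.add((me, FOAF.mbox, URIRef("mailto:curator@example-museum.org")))

# FOAF's social dimension: declaring who you know.
colleague = BNode()
g.add((colleague, RDF.type, FOAF.Person))
g.add((colleague, FOAF.name, Literal("B. Registrar")))
g.add((me, FOAF.knows, colleague))

print(g.serialize(format="xml"))
```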

Agents

Tim Berners-Lee identified software ‘agents’ as the way people will interact with the fully formed SW. “Agents are the final ‘product’ of the semantic Web – automatic, even artificially intelligent software that does all your searching for you (the process of narrowing down) and much more”. This can already be achieved in the closed environment of a single Web application, of course, but as a global property of the Web this is a very long-term goal.

Examples of Agents:

  • The agent attached to your diary automatically organises travel etc., and can change your travel tickets when you alter your diary.
  • The agent attached to your house automatically organises food purchasing, bill payment, lighting, heating, alarms etc.

Fig 1: A model of semantic Web components and how they fit together. See the text for an explanation and definitions of the acronyms.

These are the major building blocks, but how will it work? Figure 1 shows how the parts fit together. A digital object (defined by a URL) has associated metadata in RDF (e.g. using Dublin Core fields). That metadata is mapped to ontologies that define its meaning. These ontologies may overlap or have defined boundaries, and are mapped to ‘higher’ ontologies, providing semantic relationships across knowledge areas (domains). On the ‘user’ side, each person has a profile that the SW can read and use to ‘filter’ its results. The user queries the SW explicitly, or implicitly via some simple interaction with SW-enabled software. The agent-enabled browser pulls back ‘objects’ (knowledge, requirements, answers, transactions etc.) specific to the user’s requirements.

Why Has It Not Already Been Done?

Quite simply, it’s hard to do (it's hard enough to explain!). Back in the late 1990s there was a movement towards marking up our content with a metadata standard, the Dublin Core – a set of 15 common types of information about on-line content, intended to help search for material. In some places this was taken further, with DC encoded into RDF and linked to pages. Some of us expected search engines to look for this extended metadata ‘real soon now’. It didn’t happen. What did happen was the upsurge of a radically different approach – the Google approach, using simple keyword extraction and Web linking activity to order links – which ignored metadata partly because of the issues surrounding its abuse in earlier search engines. Google has dominated the search engine scene since then, and outside specialist portals has, in my opinion, held back the development of more structured information searches. Thus, as well as the SW being hard to do, there was for some time no driving need to improve on Google. There is no doubt Google has been a great tool, but by ignoring, or not grappling with, the trust issues of metadata, it provides a poorer service than it could.

What Is Happening Now?

Semantic Web Tools

Most development in the SW has been in academic research rather than commercial development. RDF and other metadata can be visualized using open source tools such as FenFire (http://fenfire.org/), which may allow information professionals to develop metadata, but this work is still at an early stage. A few commercial tools, such as Semagix (http://www.semagix.com/), which provides ontology development tools and a search engine, are being adopted by those who can afford them, but most tools are still in the research phase. Two of the most developed tools are Haystack (http://haystack.lcs.mit.edu/) and Magpie (http://kmi.open.ac.uk/projects/climateprediction/semantic.html).

Haystack, developed at MIT, was originally proposed as a semantic Web browser. It has since developed into a very advanced aggregation tool, a personal desktop and information management system. It has developed too far, too soon – trying to do too much – to be a good demonstrator for everyday use. It is still very difficult for an average user to install, and thus, for most people, remains unproven.

Magpie, developed at the UK’s Open University, is a browser plugin that highlights words in a Web page and provides a context for the page and advanced services behind each link. It does show the potential of an ‘automated Web’ for a technical or research audience. However, it requires the user to choose an explicit ontology to use with the page – not something the average user wants or should need to do.

Related Current Developments

I will briefly cover three developments which can be related to the SW and head the Web in the right direction: RSS Web feeds, tag clouds and AJAX. The use of these and similar techniques by a new generation of Web sites has recently been labelled ‘Web 2.0’ (O’Reilly, 2005) – the introduction of more sophisticated, user-focused on-line applications that blur the lines between traditional desktop software programmes and Web pages.

Really Simple Syndication (RSS)

Syndication of content is now huge – most Web sites that carry news or other changing information now support subscription to Web feeds, usually in an RSS format that is actually a lightweight use of the SW building block RDF. RSS is a simple but structured way to share a link and description of both existing content and new information, so that users and other Web sites get the information they want delivered to them as it is generated – a return of the ‘push’ Web, but controlled by the user. Its use of truly semantic ideas is at an early stage, but being based on RDF it is extensible. Microsoft (http://msdn.microsoft.com/windowsvista/building/rss/simplefeedextensions/) and others are working on extensions to provide better semantic metadata about RSS content, to use it for such things as listings, categorisations and event descriptions. Web feed readers are currently quite techie-oriented, but will improve as the sheer number of newsfeeds and users requires new layers of interpretation and simpler tools. Any museum that wishes to gain a subscriber community or engage the ‘Web 2.0 generation’ should be seriously considering implementing Web feeds such as ‘news’, ‘events’ and ‘new acquisitions’. As well as delivering direct to people, these feeds can be syndicated to content aggregators – museum portals and general portals such as Yahoo, Google etc.
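Consuming such a feed is deliberately trivial, which is much of RSS’s appeal. A minimal sketch using the Python feedparser library follows; the feed URL is hypothetical.

```python
# A minimal sketch: reading a museum's news feed with feedparser.
# The feed URL is hypothetical.
import feedparser

feed = feedparser.parse("http://www.example-museum.org/feeds/news.rss")

print(feed.feed.title)          # the channel's own title
for entry in feed.entries[:5]:  # the five most recent items
    print(entry.title, "->", entry.link)
```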

Tagging

The developers of ‘Web 2.0’ software realized that, in general, a highly structured category approach to content navigation is very difficult (e.g. Yahoo Directory, Google Directory), and is clearly not used as a primary method of navigation by most people. The answer has been simple keyword tagging by users. It is far less difficult to do than formal categorization (the original Yahoo approach), and is easier for people to understand. The ‘tagging’ now used on Flickr (http://www.flickr.com), technorati.com (http://www.technorati.com) and other user-driven Web applications to generate ‘tag clouds’ or ‘folksonomies’ (http://en.wikipedia.org/wiki/Folksonomy) is a way of freely keywording content that uses associations between metadata to infer relevance (see Figure 2), and provides a simple way to group content. At present it is not structured, and since there is no attempt to rationalize tags, it only works for groups who understand each other’s tags. It depends on a social space (Flickr, del.icio.us etc.) and large numbers of people to work. Work on the usefulness of such ‘emergent semantics’ is at an early stage. In the cultural sector, some initiatives have begun – e.g. the steve project, community tagging of art museum content (http://www.steve.museum/).
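The mechanics behind a tag cloud are simple enough to sketch in a few lines. The example below (plain Python; the tagged items are invented) counts tag frequency, which drives the font size in a cloud, and tag co-occurrence, which is the crude ‘association between metadata’ from which relevance is inferred.

```python
# A minimal sketch of the mechanics behind a tag cloud / folksonomy.
# The tagged items are invented for illustration.
from collections import Counter
from itertools import combinations

tagged_items = [
    {"butterfly", "lepidoptera", "specimen"},
    {"butterfly", "painting", "victorian"},
    {"lepidoptera", "specimen", "pinned"},
]

# Tag frequency: a bigger count means a bigger font in the cloud.
weights = Counter(tag for item in tagged_items for tag in item)
print(weights.most_common(3))

# Co-occurrence: tags used together are inferred to be related.
pairs = Counter()
for item in tagged_items:
    pairs.update(combinations(sorted(item), 2))
print(pairs.most_common(2))
```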

This part of Web 2.0 could be seen as formalization under attack. It avoids the difficulties of ‘categorising the world’ by not adhering to any standard. While far from perfect, it is seen as pretty good for most of the people (in a given cultural milieu), most of the time. Because it does not deal with issues of context, synonymy or authority, it won’t solve any of the hard SW problems, but it might in future get ‘joined up’: the informal Web of tag clouds’ emergent semantics will overlap at some point with the SW world of thesauri and overlapping ontologies. This will likely add value to the Web and may actually speed realization of the SW.

Fig 2: A Web2.0 Tag cloud, by Markus Angermeier, 2005


AJAX

AJAX (Asynchronous JavaScript and XML, http://en.wikipedia.org/wiki/AJAX) is simply the use of JavaScript in the Web browser and XML over the Web to update parts of pages – thus getting around one fundamental ‘issue’ with making the Web dynamic: content being static and only updateable by reloading the entire page. This evolution in Web programming is bringing true application-like interfaces to the Web – already begun with Google Maps and other Web 2.0 applications like Flickr. However, if we aim for a global database of semantically marked-up information, we need stable URLs at the base. We've already seen many mistakes made by putting informational content into Web applications that do not provide a stable URL for unique content (for instance, if they demand a unique user session), or into Flash/Java apps that are not addressable/accessible at the level of ‘content items’: it’s the issue of ‘state’. As Web 2.0 develops, there's a real danger that this will become much worse – as Web applications become as ‘stateful’ as desktop applications, we may lose ‘granularity’ of addressable content. So far, most of the popular new Web 2.0 sites take this into account, and developments in our sector need to be aware of this issue.

What Else Will We See?

Among other things, Web 2.0 is the turning of the Web from a document-publishing platform into an application platform. It has thus gone beyond the original vision. However, a core role of the Web will remain its place as the on-line world’s primary information resource. Before we get to the SW ideal of agents and the automated global database, there are several potential developments that could improve our use of Web sites. One such is in the way browsers handle links. Links no longer need to be dumb, or even pre-defined. The SW browser could follow a predefined link, validate it on the fly and extract metadata from the target. It could also provide context for the link, and other specialized services, if the local domain of knowledge is defined in an ontology (cf. Magpie above). Also, if a page is semantically understood, any word on the page could be activated and linked to ‘most likely results’ by the browser, on demand, providing the user with a level of trust for these links and a rating of their relevance.

What About The Cultural Sector?

Externally, we are regarded as the holders of knowledge and authority on that knowledge. We are trusted more than many sources, perhaps. The role of the cultural sector is therefore potentially huge, primarily because of the quality and range of our content, and because we are already good at structuring it. In many cases, we have by default been preparing for the SW for some time, as a natural progression from good practice in library and collections management. The use of any sensible categorization scheme can enhance the implementation of simple SW-ready metadata standards like the Dublin Core in RDF (http://dublincore.org/), and this is in common use already, though the tools to search it are lacking outside of certain limited-scope, specific portals.

As touched on previously, metadata-driven search is an important part of the emerging SW, and parts of the cultural sector are already involved in harvesting this metadata, using data extraction and transfer standards such as the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) (http://www.openarchives.org/OAI/openarchivesprotocol.html). For instance, many natural history museums use this to provide collections information to the Global Biodiversity Information Facility portal (http://www.gbif.org/). (A sketch of a basic OAI-PMH harvest is given below.)

Thesauri owned and created by museums could become ontologies and act as part of the SW backbone. However, museums are currently behind on implementing SW components, and will remain behind while no strong driver is apparent. Other sectors already see competitive advantage in SW ideals: business, commerce, even research. DigiCULT Thematic Issue 3 concluded that “museums need to take a lead”. We need to do a big project together, standardise thesauri, develop ontologies and, now of course, integrate with and/or participate in the emerging folksonomies. Given the complexity of the issues, it is not surprising that this has still not happened. Some steps have been taken. Regarding ontologies, a major step in this direction is the CIDOC object-oriented Conceptual Reference Model (CRM) (Gill, 2002). This provides an ontology of 81 classes and 130 properties; it describes in a formal language the concepts and relations relevant to the documentation of cultural heritage. It is a ‘common language’ and extensible semantic framework to which any cultural heritage information can be mapped – the ‘interoperability glue’, if you like, providing the ‘words’ and ‘relationships’ we can use to map our content together. This is exposed in RDF already, but as yet its use is very limited.
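As an illustration of how lightweight a basic OAI-PMH harvest is, the following sketch uses only the Python standard library. The repository URL is hypothetical; the verb and metadataPrefix are standard OAI-PMH request parameters.

```python
# A minimal sketch: harvesting Dublin Core records over OAI-PMH.
# The repository base URL is hypothetical.
from urllib.request import urlopen
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

base = "http://www.example-museum.org/oai"
params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})

with urlopen(base + "?" + params) as response:
    tree = ET.parse(response)

# Dublin Core titles live inside the oai_dc payload of each record.
for title in tree.iter("{http://purl.org/dc/elements/1.1/}title"):
    print(title.text)
```

A real harvester would also handle the resumptionToken that OAI-PMH uses to page through large result sets.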

Example Projects

A number of projects in the sector have investigated the use of SW ideas to enhance on-line projects, with mixed results:

  • MesMuses (http://www.cultivate-int.org/issue9/mesmuses/) – developed a ‘scientific knowledge cartography’ and tools to support museums in investigating the SW.
  • Sculpteur (http://www.sculpteurweb.org/) – several collections brought together into one portal, with visual display of semantic relationships, a published ontology in RDF, and concept-based searching.
  • VICODI (http://www.vicodi.org, http://www.eurohistory.net) – involved semi-automatic creation of contextual semantic metadata for digital historical resources, by users, with ‘visualization of richly structured, contextualised content’; the visual interface used historical maps and colour-coded links plus a navigational browse structure.
  • SWED (http://www.swed.org.uk) – a university project by the W3C-supported Semantic Web Advanced Development in Europe initiative, cataloguing environmental groups and creating a multi-faceted semantic search and browse.
  • Finnish Museums on the SW (http://museosuomi.cs.helsinki.fi/) – the most ambitious and fully realised attempt to generate a complete SW portal. It uses RDF-encoded Dublin Core metadata and brings together 15+ museum collections. Visually it appears as a fairly basic, text-based search and browse interface – a bit like an automated Yahoo directory. It has had good critical reports from users, and the project is now developing a practical semantic Web HTML generator.
  • Dingley and Shabajee (2002) generated an authoring tool for the semantic markup of learning data for the ARKive Web site cataloguing endangered wildlife.

The above projects have shown that the sector can use SW approaches to do innovative things with collections, and to join them up. However, most of these projects have (unsurprisingly, given the early stage of work on the SW) exposed too much of the infrastructure of the SW to their users – often making the interfaces and information views more complex than traditional Web sites, not less so. This may be fine for a closed academic community of experts, who relish domain-specific rich interfaces, but it is not so good for the general public, at least until users have found topics they wish to research at such detailed levels. The next generation of tools and sites must address this issue.

Naysayers, Or Has W3C Got It Wrong?

In this paper you may have already noticed the tension between the structured and defined goal of the SW, in terms of metadata and ontologies, and the real world of Google and simple tagging: the rather chaotic, never perfect Web. This does raise serious philosophical questions. Tim Berners-Lee sees the Semantic Web as based upon a whole set of ontologies mapped together: ‘Instead of asking machines to understand people's language, it involves asking people to make the extra effort’ (Berners-Lee 1998). It is acknowledged that this is a vast and difficult thing to do, and that although many standards are defined, the tools for users (both professional information scientists and the public) are not yet there.

In 2001, around the same time as the SW was being promoted by the W3C, Joshua Allen wrote, “Until anyone can create metadata about any page and share it with everyone, there will not be a semantic web” (Allen, 2001). He has not been proved wrong. Thus far, research into the SW has not provided such tools: indeed, the realization of this ideal is closer to the free tagging emerging now.

Others have expressed more fundamental issues with the approach. Janneke Van Kersen of the Dutch Digital Heritage Association, interviewed for the DigiCULT Thematic Issue, said:

I do not believe in developing a fundamental ontology to give meaning to information on the Net. It looks to me like the 18th-century endeavour to write an encyclopaedia that contains all the knowledge in the world. I am afraid it does not work that way. A lot of knowledge, even scientific knowledge, cannot be described in a logical way. Especially in the arts a lot of “knowledge” is the result of heuristics and associative thinking. (DigiCULT 2003)

Similarly, Patel-Schneider and Siméon of Bell Labs Research:

…there is a semantic discontinuity at the very bottom of the Semantic Web, interfering with the stated goal of the Semantic Web: If semantic languages do not respect World-Wide Web data, then how can the semantic Web be an extension of the World-Wide Web at all? (Patel-Schneider and Siméon 2002)

Should we therefore ever expect all knowledge to be codified? No. The SW will never be the whole of the Web – as we have seen, it does not ideally suit the casual Web of personal pages and many blogs, and despite some initiatives in ‘semantic blogging’ (http://www.semanticblogging.org/semblog/blog/default/) there are still few workable tools. To be a real part of the future Web, the SW needs to get out of the ivory towers and be flexible in the levels of ‘structure’ it deals with – to mesh with Web 2.0 but enhance it. Since cultural institutions are already active in these areas, we should be champions of this approach.

Conclusions

The formal SW will be hard to realise, not easy like the current Web. So why try? To use an old radio engineering metaphor: to increase the signal-to-noise ratio. Sturgeon’s Revelation (http://en.wikipedia.org/wiki/Sturgeon's_law) that 90% of everything is crud holds very true for the Web, but the better formalized the good stuff can be, the more easily the 90% can be avoided.

From DigiCULT again: “The Semantic Web is a direction, it is like North. You go north but you never arrive and say ‘here it is’.” (DigiCULT, 2003)

  • It’s utopian – but the main goals are achievable.
  • It will be a part of the future Web, but never all of it.
  • Any movement towards it increases the ‘signal-to-noise ratio’.
  • It should and will be done where it can be.

Initially it will be used much more by the ‘formal Web’ where interoperability is key, in

  • commerce and business to business
  • academia (focused research areas)
  • formal education
  • cultural institutions

…but the informal Web (most blogs/wikis, personal pages, link sets etc) will benefit from the work.

Conclusions for Museums?

Our users trust us; this gives us a huge advantage over less ‘official’ sources of information. But they mainly deal with us at the level of the individual institution. In the future SW, people will experience culture and science in ways that accurately reflect their needs, and not always just the wishes of the individual institution that owns the content. Politically, this is still a hot potato, but we need to realign around this reality.

  • It’s going to be a large scale, collaborative, community thing.
  • It requires leadership and opportunity from Governments.
  • We can and should make more starts now.
  • There are many valuable steps on the way.
  • It will make what you have to say far more accessible to those people who want to know.

References

Allen, J. (2001). Making a Semantic Web. http://www.netcrucible.com/semantic.html Accessed January 28, 2006.

Berners-Lee, T. (1998). What the Semantic Web can represent. http://www.w3.org/DesignIssues/RDFnot.html Accessed February 23, 2006.

Berners-Lee, T., J. Hendler, and O. Lassila (2001). “The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities”. In Scientific American, 17 May 2001. http://www.sciam.com/article.cfm?articleID=00048144-10D2-1C70-84A9809EC588EF21 Accessed January 28, 2006.

Berners-Lee, T. (2000). Semantic Web Architecture. http://www.w3.org/2000/Talks/1206-xml2k-tbl/slide10-0.html Accessed January 28, 2006.

Berners-Lee, T. (1998). A roadmap to the Semantic Web. http://www.w3.org/DesignIssues/Semantic.html Accessed January 28, 2006.

DigiCULT (2003). Thematic Issue 3: Towards a Semantic Web for Heritage Resources, May 2003. http://www.digicult.info/pages/themiss.php Accessed January 28, 2006.

Dingley, A., and P. Shabajee (2002). “Today's Authoring Tools for Tomorrow’s Semantic Web”. In Bearman D. and J. Trant, Eds., Museums and the Web 2002: Proceedings. http://www.archimuse.com/mw2002/papers/dingley/dingley.html Accessed January 28, 2006.

Gill, T. (2002). Making sense of cultural infodiversity: The CIDOC-CRM. http://www.rlg.org/en/downloads/2002metadata/gill/gill.PPT Accessed January 28, 2006.

Hendler, J. (2001) Agents and the Semantic Web, http://www.cs.umd.edu/users/hendler/AgentWeb.html Accessed January 28, 2006.

O’Reilly, T. (2005) “What Is Web 2.0?” http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html?page=1 Accessed January 28, 2006.

Patel-Schneider, P., and J. Siméon (2002). “Building the Semantic Web on XML”. In I. Horrocks and J. Hendler (eds.), The Semantic Web – ISWC 2002: First International Semantic Web Conference, Sardinia, Italy, June 9-12, 2002, Proceedings. Lecture Notes in Computer Science, volume 2342. Berlin: Springer Verlag, 147-161.

Cite as:

Lowndes M., An introduction to the Semantic Web for Museums, in J. Trant and D. Bearman (eds.). Museums and the Web 2006: Proceedings, Toronto: Archives & Museum Informatics, published March 1, 2006 at http://www.archimuse.com/mw2006/papers/lowndes/lowndes.html