Web 2.0, Web 3.0 and public libraries.
Introduction.
The technologies that
underpin Web 2.0 and Web 3.0 are not new, but their potential has only recently
been exploited due to the decreasing cost of devices and technologies and the ease
of accessing the internet. As is typical for any technology, Web 2.0 and Web
3.0 are both empowered and limited by their users. Public libraries have a long
process ahead of them in terms of efficiently exploiting these proven
technologies by successfully and economically fitting them into the existing library
structure in order to enhance services and performance.
Practical application of Web
2.0 technologies in public libraries.
Web 2.0 with its inherent flexibility
and numerous applications has enabled public libraries to widen the range and
nature of services that they provide to their customers. By adopting Web 2.0
tools, libraries endeavour to customise and personalise their services to
render them user-friendly (Evans, 2009, p.14). However, public libraries fall
behind their academic counterparts in adopting and utilising the powerful Web
2.0 technologies. While academic libraries, driven by their innate aspiration
to explore and experiment, widely exploit Web 2.0 platforms to facilitate the
learning and communication process, public libraries are less willing to launch
themselves into the virtual social environment. The benefits of being present in
resources like Wikipedia or Facebook are unquestionable: there is an obvious
promotional value for libraries since these are users’ social spaces of choice.
As state-funded institutions, public libraries tend to be more cautious about
allowing users to contribute to their content or about exposing copyrighted material,
either of which could leave them vulnerable to legal liability. The Web 2.0
phenomenon is perceived by some commentators as transitional and unimportant,
not worth the staff time and effort required to engage with it. But as Kelly shrewdly
noted “there may be risks in doing nothing or in continuing to use current
technologies. This might include losing one’s competitive edge, lost
opportunity costs, loss of one’s user community to services that are willing to
take risks” (Kelly, 2010, p.109). Hence the necessity emerges for public
libraries to find a fine balance between being overcautious and taking a reasonable,
calculated risk in order to remain a viable part of the social realm.
While some Web 2.0 tools are
widely accepted and used by library professionals (RFID technology, for
example, has truly modernised public libraries’ service systems through the introduction
of self-service machines), others, such as QR code leaflets used to promote
libraries’ services, could be criticised as a waste of money (very few library
users know what they are or how to use them) or as an attempt to mimic marketing
strategies from the retail industry. The main impetus for adopting Web 2.0
technologies is to complement and keep existing library technologies up to
date, rather than supplant them. Libraries face the necessity of describing and
using the growing electronic content with which the existing MARC standard was
not really designed to cope (Brophy, 2007, p.188). Web services might also be utilised
to complement the libraries’ specialised applications. The ability of XML to
describe any type of information in order to satisfy users’ fast-changing needs
makes its use in the library environment essential. By using XML applications effectively
libraries may enhance the functionality and performance of their Integrated
Library Systems without investing additional funds into upgrading their systems
(Tennant, 2002, p.2). Using XML to generate and manage structured data in
human-readable text files does not require in-depth technical knowledge
(Banerjee, 2002, p.32). This may help libraries tackle an issue associated with
the nature of modern information technologies: they become outdated or obsolete
in a matter of months.
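As a minimal sketch of the kind of structured, human-readable record that XML makes possible, consider the following (the element names are purely illustrative, not any real library schema or MARC mapping):

```python
import xml.etree.ElementTree as ET

# A hypothetical, human-readable record for an electronic resource.
# The element names are illustrative only, not a real library standard.
record_xml = """
<resource>
  <title>Library Mashups</title>
  <creator>Engard, N.C.</creator>
  <format>e-book</format>
  <year>2009</year>
</resource>
"""

root = ET.fromstring(record_xml)

# Any application that understands XML can extract the fields,
# regardless of the Integrated Library System that produced them.
title = root.findtext("title")
year = int(root.findtext("year"))
print(title, year)
```

Because the format is plain text, such records can be read, exchanged and corrected by staff without specialised tools, which is precisely the low barrier to entry the passage above describes.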
One of the applications that has
enabled public libraries to freely use web sources to generate up-to-date
dynamic content is the mashup, defined by Engard as “a web application
that uses content from more than one source to create a single new service
displayed in a single graphical interface” (Engard, 2009, p.xv). This could be
anything from simple mapping of libraries’ locations to a more sophisticated mashup
which would assist users with the fast retrieval of required information by
filtering the content of remote resources based on specific parameters (Fichter,
2009, p.11). Library websites have dramatically changed with the implementation
of this technology; their content, which previously used highly specialised terminology,
was rendered more intuitive and media versatile, making it easier to understand
and navigate. However, it is to be borne in mind that, given the nature of free
web services their content may change or disappear overnight without notice,
leaving it to librarians to monitor and control the quality and appropriateness
of libraries’ websites (Herzog, 2009, p.70). Libraries could go one step
further in adopting this highly productive technology by making their content
mashable, but there are several issues associated with it; some issues are
objective, such as the proprietary status of some materials, while others relate
to librarians’ fear of becoming redundant in a new information age.
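In the spirit of Engard’s definition, a mashup can be sketched as nothing more than merging two sources into one view. In this toy example both sources are stubbed local data and the field names are hypothetical; a real mashup would fetch them over HTTP from remote web services:

```python
# Source 1: branch locations, as a mapping service might return them.
branches = [
    {"name": "Central Library", "lat": 51.5074, "lon": -0.1278},
    {"name": "Eastside Branch", "lat": 51.5155, "lon": -0.0922},
]

# Source 2: today's opening hours, from a separate (hypothetical) service.
hours = {"Central Library": "9:00-20:00", "Eastside Branch": "10:00-17:00"}

# The mashup: one merged structure for a single graphical interface,
# e.g. map pins annotated with opening hours.
merged = [
    {**b, "open_today": hours.get(b["name"], "unknown")}
    for b in branches
]
print(merged[0])
```

Note the `hours.get(..., "unknown")` fallback: because free web sources can change or vanish without notice, a defensive default is exactly the kind of monitoring concern Herzog raises.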
Web 3.0: unrealistic future
or future reality?
The Semantic Web is an
attempt to better organise the exponentially growing web content with which
current technologies are unable to cope efficiently. The idea of the Semantic
Web as “not a separate web but an extension of the current one” (Berners-Lee,
Hendler and Lassila, 2001) lies in the ambition to bestow meaning on all the
data on the web by describing it using Resource Description Framework (RDF), which
is a highly structured, pre-defined format that enables machines to read and
understand information about the content of data and, with RDF Schema, to create
relations between resources. This standardised description of electronic resources enables
the next stage of web development, whereby the use of ontologies – conceptualisations
of meaning – would enable software applications to infer new knowledge based on
that description of the content. The sheer scale of the work required to render
web agents “intelligent”, together with the unanimous agreement required from participants
to accept and comply with the standards, raises questions about the project’s feasibility
and potential success. Since the web, despite the altruistic effort of Tim Berners-Lee
and his supporters, is still largely perceived by its main driving forces as a
profit-generating tool with millions of potential customers worldwide, no
argument for the commercial value of the Semantic Web has yet proved compelling
enough to persuade them entirely. Many commentators also express doubt about its
usability, pointing out that the field of practical application of the Semantic
Web, even without explicit commercial interest, is still limited and requires
further research.
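The mechanics described above can be illustrated with a toy sketch of RDF-style triples and a single RDFS-style inference rule. Plain tuples stand in for a real RDF library (such as rdflib), and the URIs and vocabulary are hypothetical:

```python
# Statements as (subject, predicate, object) triples.
triples = {
    ("ex:LibraryMashups", "rdf:type", "ex:EditedVolume"),
    ("ex:EditedVolume", "rdfs:subClassOf", "ex:Book"),
    ("ex:Book", "rdfs:subClassOf", "ex:Resource"),
}

# RDFS subclass entailment: if x has type C and C is a subclass of D,
# then x also has type D. Repeat until no new triples are inferred.
def infer(triples):
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        new = {
            (s, "rdf:type", d)
            for (s, p, c) in triples if p == "rdf:type"
            for (c2, p2, d) in triples if p2 == "rdfs:subClassOf" and c2 == c
        }
        if not new <= triples:
            triples |= new
            changed = True
    return triples

inferred = infer(triples)
print(("ex:LibraryMashups", "rdf:type", "ex:Resource") in inferred)
```

The software never had to be told that an edited volume is a resource; it inferred it from the ontology, which is the “new knowledge” the passage refers to.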
Despite many sceptical views,
there are some areas where the Semantic Web may have a great impact on solving
current issues and allow for future developments to take place. Libraries may
benefit from the phenomenon of the Semantic Web both by actively using it and
employing their expertise to contribute to its content. For example, the
Semantic Web could provide an opportunity for public libraries to affirm and
enhance their value at present. This could be done at several levels: firstly,
the undeniable importance of public libraries as institutions makes their
social impact on public life worth studying. Libraries’ hitherto scattered
performance data, both quantitative and qualitative, which at the moment is available
in different formats, could be rendered meaningful and coherent by describing
it in RDF triples and creating ontologies, thus making it accessible to broader
social, medical, educational and other applications through the Semantic Web. Secondly,
librarians face the challenge of coping with ever-increasing amounts of
information and they are still expected to find answers to customers’ queries
promptly and effectively. Many libraries use federated or so-called platform
searches in which multiple information sources are simultaneously searched in
order to retrieve and compile the required information. However, federated
search would be more efficient and less ambiguous if
electronic source providers presented their data in RDF format and applied a
standardised vocabulary (Byrne and Goddard, 2010). Thirdly,
the attempt to make libraries’ bibliographic data a part of the Semantic Web,
though not entirely successful at the moment, is still appealing and could be
explored further (Yee, 2009), since the main goal would be improved technology
performance rather than superseding the intellectual work of library
professionals. If adequately adapted and developed, RDF format could solve the
problem of the excessive complexity of bibliographic data which may hinder its
interoperability.
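Why a shared vocabulary matters for federated search (as Byrne and Goddard argue) can be shown with a minimal sketch. Both providers here are stubbed and hypothetical, and the Dublin-Core-style field names are assumptions; real sources would be queried over the network:

```python
# Two providers describe items with the SAME (hypothetical) predicate
# names, so their results can be merged without field mapping.
provider_a = [{"dc:title": "Building Library 3.0", "dc:creator": "Evans, W."}]
provider_b = [{"dc:title": "Library Mashups", "dc:creator": "Engard, N.C."}]

def federated_search(term, sources):
    """Search every source and compile one unambiguous result list."""
    results = []
    for source in sources:
        results += [r for r in source if term.lower() in r["dc:title"].lower()]
    return results

hits = federated_search("library", [provider_a, provider_b])
print(len(hits))
```

Had the providers used different field names for the title, the merge step would need a hand-maintained crosswalk for each source, which is exactly the ambiguity a standardised vocabulary removes.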
The open movement, which
comprises open access publishing, open source software and open data, has the
potential to revolutionise the existing perception of the web. The web, as it
stands, was designed by people with a pre-web frame of mind and was expected
to function according to the rules of the real world. However, as an evolving
new reality, it generated its own set of rules and proved intractable to the “real
world rules” that its creators tried to impose upon it. Many commentators claim
that the web is a world which should be allowed to live by its own rules, where
the proprietary rights on published materials or on software would be considered
to contradict the essence of what some believe to be the philosophy of the web:
a free global information space. Hence the open movement aims to boost the
value of the web by eliminating the restrictions posed by proprietary interests.
It is noteworthy that the term “free” in the virtual context is often
associated with something unreliable, volatile and hard to control, therefore
public libraries may be wary of making greater use of open source software as
it is now.
One of the initiatives of the
open movement is open access publishing, defined by Esposito (2004) as
“accessible with only a browser and free to the user”. At the
moment it concerns mainly research papers, which, it is believed, could make
a valuable impact if they were available to the wider public free of charge. Even
though public libraries represent a smaller share of the subscriptions to
electronic publications than academic libraries, which are obliged to subscribe
to them in order to support their institutions’ research, public libraries are traditionally
expected to bear all costs associated with access to the materials, providing
them free of charge to their users without obvious self-interest. If open
access publishing is taken one step further to encompass other types of
non-academic works, it could help public libraries to ease the strain on their
constantly diminishing budgets. Until recently public libraries even bore all
costs for the interlibrary loan service, charging users only a minimal fee for
processing reservation requests. If such hard-to-obtain, and therefore costly,
items were provided in electronic form with free access, libraries would
still fulfil their mission but at a lesser cost to themselves. However, such a
scenario would necessitate a paradigm shift in our conception of intellectual
property and the copyright legislation that protects it, and it would also
deliver the coup de grâce to the
publishing industry which could have a far-reaching negative impact on society
insofar as the publishing industry implicitly advocates literature and
therefore literacy.
Conclusion.
Public libraries have to
fight many battles at present. They still need to realise that technology is a
culture, instead of burying their heads in the sand or reluctantly bearing with
it. Their cautious approach to adopting and integrating technologies, although
often criticised for its slowness, can only be justified by their qualms about
investing substantial amounts of money in technologies which could become
obsolete by the time they are fully up and running. Public libraries are also understandably
anxious about safeguarding their future existence if they embrace technologies
which could ultimately usurp their traditional functions. The current challenge
for libraries is to work out how to transform a crisis into an opportunity by
carving out a way in which they can fruitfully co-exist with technology, rather
than lying down, playing dead and waiting for it to pass.
References
1)
Banerjee, K. (2002) ‘Improving Interlibrary Loan
with XML’ in Tennant, R. (ed.) XML in
Libraries. New York: Neal-Schuman Publishers.
2)
Berners-Lee, T., Hendler, J. and Lassila, O.
(2001) The Semantic Web. Available at http://kill.devc.at/system/files/scientific-american.pdf.
(Accessed: 23 November 2011).
3)
Brophy, P. (ed.) (2007) The library in the
twenty-first century. 2nd edn. London: Facet.
4)
Byrne, G. and Goddard, L. (2010) ‘The Strongest
Link: Libraries and Linked Data’, D-Lib
Magazine, 16 (11/12). Available at: http://www.dlib.org/dlib/november10/byrne/11byrne.html.
(Accessed: 24 November 2011).
5)
Engard, N.C. (ed.) (2009) Library Mashups. London: Facet.
6)
Esposito, J.J. (2004) ‘The Devil you don't know: the unexpected future of Open Access
publishing’, First Monday, 9(8). Available at: http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1163. (Accessed: 3 January 2011).
7)
Evans, W. (2009) Building Library 3.0. Issues in creating a culture of participation.
Oxford: Chandos Publishing.
8)
Fichter, D. (2009) ‘What Is a Mashup?’ in Engard,
N.C. (ed.) Library Mashups. London:
Facet.
9)
Herzog, B. (2009) ‘Information in context’ in
Engard, N.C. (ed.) Library Mashups.
London: Facet.
10)
Kelly, B. (2010) ‘A deployment strategy for
maximising the impact of institutional use of Web 2.0’ in Parkes, D. and
Walton, G. (eds.) Web 2.0 and Libraries:
Impacts, technologies and trends. Oxford: Chandos Publishing.
11)
Tennant, R. (ed.) (2002) XML in Libraries. New York: Neal-Schuman Publishers.
12)
Yee, M.M. (2009) ‘Can Bibliographic Data be Put
Directly onto the Semantic Web?’, Information
Technology & Libraries, 28 (2), pp. 55-80, Library, Information Science & Technology
Abstracts with Full Text, EBSCOhost.
(Accessed: 14 December 2011).