The opening and closing ceremonies of the 63rd annual Canadian Library Association conference had an international flair. The outgoing president, Alvin Schrader, described his activities of the past year, which included a visit to Durban, South Africa, host city for the World Library and Information Congress. He concluded with the point that “professional activism is its own reward”.
Ian E. Wilson, Librarian and Archivist of Canada, referred to his recent election as president of the International Council on Archives for the 2008-2010 term.
At the closing ceremonies, Claudia Lux, president of the International Federation of Library Associations and Institutions (IFLA), invited everyone to attend the 74th IFLA General Conference in Québec City this August.
Keynote speaker: Rowland Lorimer
The opening keynote speaker was Rowland Lorimer, a professor at Simon Fraser University who has written many books and articles on book publishing. His talk was titled “Books Evolving”, and he began by describing his personal experiences with changing technology in book publishing. He wrote his first thesis in 1966 on a manual typewriter, but his second, in 1968, on an IBM Selectric, using a Xerox copier instead of the carbon copies prescribed for thesis submission. In 1983 he wrote with a computer with dual floppy drives and a daisy-wheel printer, and found that his fear of making errors was gone. The next major change was the arrival of WYSIWYG interfaces from Apple and Microsoft– what you see is what you get, except that the computer only sees a graphic image. The computer at this point could not surmise, intuit, or infer page elements. A markup language like SGML (standard generalized markup language) was needed for documents with machine-identifiable elements. The web was developed with a scaled-down version of SGML called HTML (hypertext markup language), which handled the presentation of document structures. The next development was XML (extensible markup language), which separates content, structure, and presentation. XML means that computers can work with page elements in depth, leading to what has been called the “semantic web”.
Rowland Lorimer said that XML now means WYTIWYTUn: What You Tag Is What You can Transform and Use to the power of n. A revolution in publishing is taking place. XML plus CSS (cascading style sheets) combine to produce an HTML display that web browsers can read, but the data elements that form the XML component are retrieved from a database. For maximum data malleability and digital printing on demand, publishers need to move to database technology. Rowland Lorimer described this as a monumental technological transformation. New representations of information are not just bells and whistles– they lead to new knowledge. These changes influence the development of ideas, and we underestimate the significance of these transformations. Book publishers like O’Reilly Media are good showcases for the use of these new tools, and libraries can be partners in the provision of these new publishing activities.
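The database model Lorimer described, in which tagged XML content is transformed into the HTML that browsers display while presentation is left to a separate CSS file, can be sketched in a few lines. This is a hypothetical illustration only; the element names and the transform are invented for this report and not drawn from any publisher’s actual workflow:

```python
# A minimal, hypothetical sketch of database publishing: content lives
# as tagged XML, and a transform emits HTML for display. Element names
# ("chapter", "title", "para") are invented for illustration.
import xml.etree.ElementTree as ET

chapter_xml = """
<chapter>
  <title>Books Evolving</title>
  <para>Content, structure, and presentation are kept separate.</para>
</chapter>
"""

def to_html(xml_text):
    root = ET.fromstring(xml_text)
    title = root.findtext("title")
    paras = "".join(f"<p>{p.text}</p>" for p in root.findall("para"))
    # Presentation (fonts, layout) would come from a separate CSS file.
    return f"<article><h1>{title}</h1>{paras}</article>"

print(to_html(chapter_xml))
```

The point of the separation is that the same tagged content could just as easily be transformed for print-on-demand, a database query, or a future format, which is what “What You Tag Is What You can Transform and Use” gestures at.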
But there is an impediment to converging and showing content with this model, and that is copyright. Copyright is the evil twin of open access. There has always been tension between owner rights and user rights. The formation of libraries, from Alexandria to the mechanics’ institutes of the nineteenth century, happened by wresting ideas from the privileged and making them accessible to the general public. Libraries have historically been about the public good and open access.
The media world today is one of expansion and multiplication. Tools like Facebook, YouTube, and Skype are part of an expanded telephony, all part of new ways to facilitate and augment conversation. Online information sharing has skyrocketed with sites like Wikipedia, the Mayo Clinic, globeandmail.com, and the Drudge Report. And the success of online business means that open access is here to stay.
Rowland Lorimer then got into the problems that creators face. Information that cries to be free will ruin paid authors and publishers. Rowland Lorimer said there should be “fair trade for creators”– just like coffee producers, authors should be able to make a living. As much as open access on the Internet is a problem, creators also want and value the Internet’s dissemination power. Lorimer suggested the copyright laws should be changed such that open access is assumed unless copyright is explicitly stated on a web page.
The changes in publishing have introduced new possibilities for information representation and open access. A publisher like O’Reilly has successfully maximized access without destroying information production. O’Reilly has fully embraced the database model of publishing, which combines human readability with machine decodability. At Simon Fraser University, the “fair trade for creators” principle is at work in the Public Knowledge Project, an effort aimed at reducing the cost of scholarly publication. Libraries are key players in the provision of open access under fair usage laws. The creative industries are society’s lifeblood, and libraries need to continue their championing of open access to information.
Building reader communities in the 21st century
On Thursday afternoon, Vickery Bowles and Catherine Auyeung, both of the Toronto Public Library, presented a session on their efforts to build reader communities using new digital tools. Toronto Public Library recently launched its One Book, One City program, centred on Consolation, by Michael Redhill.
The Keep Toronto Reading website is at http://www.torontopubliclibrary.ca/ktr/onebook/index.html.
Several activities helped launch this program, such as online chats with the author and an interview featuring Mayor David Miller and Michael Redhill, who was in France but connected live with the LongPen technology. Other events included book signings, book clubs, and a dinner using a menu from the 1880s Toronto setting of the book. The first-chapter excerpt of the book on the TPL web site was augmented with links to historical photographs and maps of old Toronto. The online virtual chapter from the book uses “page turning” technology that mimics the look and feel of the actual book. A podcast for a history walk of old Toronto narrated by Michael Redhill could be downloaded. Media partners included CBC, the Toronto Star, Toronto Life, and spacing.ca.
A blog and a Facebook group were moderately successful, but the “One Book Widget” that people could add to their websites and blogs was less successful due to technical problems. Toronto Public Library recognized that there is an online community that needs to be reached. While direct participation numbers were low, indirect “lurker” activity was high. The overall goal of the One Book, One City program was to cross-promote the library’s collection and services. What the library has found challenging is integrating and maintaining both online and traditional services. The challenge is to find an “online voice” and to develop in-house expertise for web design and content creation.
The session continued with a presentation on the Toronto Public Library’s virtual book club, the Book Buzz. The book clubs of old were typically 95% female and 65% retired. To reach a younger demographic, the book club moved online. The New York Times Book Forum and Oprah Book Club are successful models. The TPL Book Buzz is at: http://bookbuzz.torontopubliclibrary.ca/
For a successful online book club, a dedicated facilitator is essential. The facilitator responds to questions, stimulates discussions, provides continuity, builds relationships, and functions as a reader’s advisor. The Book Buzz facilitator is a full-time employee who spends about 18 hours a week on the Book Buzz. A new book is selected every month. The Book Buzz consists of interviews, blogs, and discussion forums. Online chats typically draw about 6 to 10 active participants, which is manageable. The Book Buzz has a member page on LibraryThing, which means it taps into that community. The Book Buzz draws participants from around the world. Surveys are conducted using online forms from SurveyMonkey.com. Only 1% of book club members actually post online, but this is typical: for every 100 posts there are 10,000 page views.
Electronic Government Information: It was Here a Minute Ago! Issues and New Strategies in its Collection, Preservation and Use
Three librarians gave a presentation on the preservation and accessibility of electronic government information. The typical electronic publication lasts about 44 days to 2 years, and so preservation is a problem that libraries need to solve. Legislation on this issue would be very helpful– if access to government information is mandated by law, then preservation is required.
For all the benefits such as immediacy and keyword searching, electronic documents have a surprising number of drawbacks, especially when it comes to long term access. Future governments may be cash-strapped and so free access is not guaranteed. The removal of public information without public process is a real ongoing danger. The fact that electronic information is so malleable means that older information such as that found in databases, directories, and statutes can disappear forever, hampering the efforts of historians. Format changes and data migrations are technical issues that can be solved relatively easily, although there are instances of inexcusable downtime for access to information due to faulty upgrades and migrations. The bigger problem is that there are no mandated, guaranteed and co-ordinated measures for authenticity and quality control.
Most countries have no protocol at all for how to archive electronic documents. Libraries in Canada have varied collection development policies for acquiring and cataloguing electronic government documents. Many libraries simply add URLs to the electronic copy at major depository sites or Library and Archives Canada, but, as I learned in this session, those two institutions are far less certain to continue to provide that service than I believed earlier. Government ministry web sites are far less reliable than even these depository sites.
Libraries have to deal with several problems related to the printing of electronic documents: some patrons demand print; printing is costly; printed documents raise archival-quality issues; and HTML versions of documents are hard to print and do not offer the consistent pagination used in bibliographic references. Electronic serials have their own issues, such as the lack of a link to an index page listing all available issues, incomplete runs, dead links, and “moving wall” holdings where older issues are arbitrarily removed.
Harvesting and crawling for online documents has potential, but there are numerous pitfalls such as depth of crawling, permissions required for some harvesting, scope and boundaries issues, scheduling issues, lack of quality reviews and appraisals of harvested data, dead links, political turf (the U.S. executive branch does its own harvesting), and hindrances to webcrawlers such as dynamic databases, robots.txt files, and inaccessible links (which can be overcome by Google’s Sitemap Protocol).
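Among the hindrances mentioned above, robots.txt rules are ones a harvester can at least evaluate mechanically. As a hypothetical illustration (the rules and URLs below are invented, not taken from any government site), Python’s standard library can test whether a crawler is permitted to fetch a given page:

```python
# Evaluate robots.txt exclusion rules offline, without any network
# access, using the standard library's parser. The rules and the
# example.gc.ca URLs are made up for illustration.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("*", "https://example.gc.ca/reports/2008.pdf"))   # allowed
print(parser.can_fetch("*", "https://example.gc.ca/private/draft.pdf"))  # blocked
```

A rule like `Disallow: /private/` is enough to put a whole directory of documents beyond the reach of a well-behaved harvester, which is why mechanisms such as Google’s Sitemap Protocol matter for surfacing content that crawlers would otherwise miss.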
Key players in the electronic government document field in Canada are Library and Archives Canada, Early Canadiana Online, Industry Canada, and the Depository Services Program (DSP). I later spoke with Gay Lepkey, the manager of DSP, who described the many challenges facing the program. Outside of these agencies there are stopgap measures such as fugitive document initiatives and application of the LOCKSS principle (Lots of Copies Keep Stuff Safe).
Sharna Polard of the Alberta Legislative Library discussed her library’s program to “e-archive” Alberta government documents. Numerous decisions had to be made on format and content restrictions. Only PDF documents will be accepted. The library will archive committee and task force reports, discussion papers and topical reports, and statistical documents. It will not archive annual reports, business plans, strategic plans, electoral information, or legislative documents. A lot of effort goes into tracking down electronic government documents that can be captured. Sharna Polard uses WebSite-Watcher software. She also monitors government RSS feeds and document management systems where they are in place. To store the files, the Alberta Legislative Library uses a file server with filenames derived in part from CODOC call numbers and the catalogue BIB number. Ongoing issues are server space, server security and file security, serial records (again, there should be a link to an index of all issues), and data migration.
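The exact filename scheme was not described in the session, but a derivation from a CODOC call number and a BIB number might look something like the following sketch. Everything here, including the call number, the separator convention, and the BIB number, is invented for illustration:

```python
# Hypothetical sketch: derive a safe, sortable archive filename from a
# CODOC call number plus a catalogue BIB number. The scheme and the
# sample values are made up; the Alberta Legislative Library's actual
# convention was not given in the presentation.
import re

def archive_filename(codoc, bib_number):
    # Replace characters that are unsafe in filenames with underscores.
    safe = re.sub(r"[^A-Za-z0-9]+", "_", codoc).strip("_")
    return f"{safe}__{bib_number}.pdf"

print(archive_filename("CA2 AL E57 2008R45", "1234567"))
```

Whatever the real scheme, encoding both identifiers in the filename gives a stable link back to the catalogue record even if the file is moved between servers, which matters for the data-migration issue noted above.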
Krista Godfrey of McMaster University gave a presentation on “Gov Pubs 2.0”– the use of social networking sites and tools to promote the typically underused collection of government documents. The usual suspects of social networking tools include RSS/blogs, wikis (not widely used for gov pubs), social bookmarking sites such as delicious (Natural Resources Canada uses this), media sharing sites and services (YouTube, Flickr, podcasts), microblogging utilities such as Twitter, and virtual environments such as Second Life. Zotero, a tool that can be added to Firefox, takes snapshots of web pages to build a reference management system. These social networking sites and tools can be used for subject guides and the promotion of important new documents.
One important message coming out of these presentations on electronic government documents is that true cost accounting is not done for “free” publications on the web– certainly not if one takes into account the underlying infrastructure and procedures that are needed to preserve documents for centuries.
RDA and Technical Services
My main interest in going to CLA 2008 was to learn more about the development of the new cataloguing code, RDA. The well-attended and well-received sessions and meetings I attended seemed to be indicative of a slowly building momentum as the development of the new code hits the final stretch. The momentum is important because RDA could have far-reaching effects on the organization of bibliographic information– especially given all the problems discussed in the other sessions on accessibility and the need to promote library collections and services.
RDA is an interesting balancing act. On the one hand, its length and complexity reflect the actual bibliographic universe with its myriad formats and exceptions. On the other hand, the user– specifically, making the catalogue comprehensible to the user– is written in as the first consideration in every section of RDA. RDA also has to encompass the evolving structure of the catalogue. The catalogue is approaching its third incarnation. The first was the card catalogue. The second is the OPAC of linked bibliographic and authority records. RDA addresses, and is primarily structured for, future relational database designs for catalogues, where the record structures can be populated with data from other metadata communities such as publishers, archives, and museums. The challenge for RDA will be to balance all of these needs and bibliographic and historic realities as the code is rolled out and training begins. The integration of RDA with library systems and OPACs holds enormous potential, but realizing it looks likely to require a lot of co-ordination among stakeholders– national libraries, bibliographic utilities, and individual libraries.
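The relational design the catalogue is moving toward can be illustrated with a small sketch: instead of one flat record per item, a work record links to separate records for each physical or electronic carrier. The field names below are invented for illustration, loosely echoing the work/manifestation vocabulary that underlies RDA; the publication details are likewise illustrative rather than taken from a catalogue:

```python
# Hypothetical sketch of a relational catalogue structure: one record
# for the work, linked to one record per carrier (print, online, etc.).
# Field names loosely follow work/manifestation terminology.
from dataclasses import dataclass, field

@dataclass
class Manifestation:
    carrier: str      # e.g. "volume" or "online resource"
    year: int

@dataclass
class Work:
    title: str
    creator: str
    manifestations: list = field(default_factory=list)

consolation = Work("Consolation", "Michael Redhill")
consolation.manifestations.append(Manifestation("volume", 2006))
consolation.manifestations.append(Manifestation("online resource", 2006))

# One work record now links to two carrier-specific records.
print(len(consolation.manifestations))
```

In a structure like this, a question such as “how book-like is the electronic version?” becomes a property of the carrier record rather than something buried in a flat description, which is part of what makes data exchange with publishers, archives, and museums plausible.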
RDA is different from the other recent, much-discussed technology-driven advance of Web 2.0/Library 2.0. RDA is about the what that gets talked about on social networking sites. Where there is a need for precision in defining the objects of our attention, something like RDA is needed. But the bibliographic universe is a messy one, and there is subjectivity in how we talk about those things we call books. How book-like are electronic versions? Can we talk about them the same way when the content is identical but the experience of reading them and the associated long-term problems of accessibility are so radically different?
Electronic information has tremendous benefits in terms of currency and immediacy. RDA has been designed to be an evolving tool, with the first step being the modernization of cataloguing traditional publications. Whether RDA can be adapted for all situations remains to be seen.
As the standard bearers for open access and the preservation of information, libraries have great challenges ahead of them (I read just before writing this that Microsoft is exiting the book scanning and book search business– leaving the field to libraries). From this CLA conference, I got a greater sense that as new solutions for information storage and dissemination have appeared, new problems have arisen. In all of this, the end-user (still the reader in many cases) and the development of his or her intellect, and the associated development of the community and the transmission of its culture to successive generations, are what matter most.
Article on Microsoft exiting book scanning and book search business (leaving the field to libraries):