Mass Digitisation by Libraries: Issues concerning Organisation, Quality and Efficiency
Ever since the world-wide web made it possible to display graphics on the Internet, libraries have been scanning their older documents and pictures to provide access to them. From the mid-1990s, thousands of libraries of all sizes began scanning parts of their collections, providing them with metadata and making them available on the web. The emphasis in these first, rather small, digitisation projects was on experimenting with different techniques, both for scanning and for building web interfaces. Along the way, methods for quality assurance, project management and business models became more professional. In line with the progress made in the field of digitisation, a profound knowledge of best practices has been developed. However, this knowledge is not available to all cultural heritage institutions that want to digitise their collections. Most of the smaller institutions lack experience and, moreover, the means to digitise efficiently. At the same time, the larger libraries are moving towards large-scale digitisation of historical texts, while Google has already digitised millions of books from several libraries around the world. Although many libraries welcome the unprecedented access to all this information, Google has also been criticised for the inferior quality of its images, the emphasis on the English language, the violation of copyright laws and the lack of attention to preservation issues. The question therefore arises: can libraries do better than Google?
This work is licensed under a Creative Commons Attribution 3.0 License.