Abstract:Although we see an increasing number of Linked Data sets becoming available, there are still many vocal ‘dissenters’ who don’t believe that Linked Data technology will gain sufficient traction and become widely adopted. Their dissent typically revolves around the argument that Linked Data is complex, and that existing lightweight API approaches are easier to implement and more likely to be used by software developers.
We propose to compare these two alternative ways of making data available on the Web. Are they conflicting or are they complementary? Is there room for both? Do they achieve different things?
We will look at two UK open data projects; ‘Linking Lives’ and the ‘World War One Discovery Project’, in order to compare these approaches. Linking Lives is developing a names-based interface to the Archives Hub Linked Data (data.archiveshub.ac.uk). This data was created using the five star RDF Linked Data approach. We will discuss the challenges we faced, highlighting issues around data quality, linking to other Linked Data sources, and exploring barriers to achieving the kind of flow of data that is the Linked Data vision.
The World War One Discovery project is building an aggregation layer, drawing together WW1 digital content. Core to the creation of this aggregation layer is the vision and approaches outlined by JISC’s ‘Discovery’ programme, which aims to make resources more discoverable both by people and machines.
The aggregation layer aims to provide a ‘real-world’ exemplar of what can be achieved using the Discovery principles, and ensure a positive contribution towards the WW1 centenary programme, which is being coordinated across the UK public-sector. A range of interfaces will be developed on top of the aggregation layer to demonstrate ways in which content can be presented to maximise opportunities for educational and research innovation.
The aggregation layer will be built on data made available via APIs from libraries, archives and museums, and, unlike Linking Lives, will be agnostic as to the technological approach used to make the data available. Although the specific approach to providing the aggregation layer has yet to be decided, the intention is to produce a lightweight API. This will be based on a technology such as Apache Solr, providing data in a developer-friendly form, such as JSON. We will be able to provide more information on the specific approach at EMTACL in October.
We will compare and contrast the outcomes and experiences of these two projects, highlighting the pros and cons of each approach. We will give an indication of their relative merits and provide some personal reflections on which approach we prefer and why. Our aim will be to give delegates a good sense of how these approaches compare when used in a practical, real-life situation. Will there be a winner and loser? Or will it turn out to be a case of horses for courses?
The Multimedia Centre at the Norwegian University of Science and Technology (NTNU) offers free video production of and for courses held at the University. The users of these services vary both in which study programme they belong to and in how they want to use video in their courses: some lecturers want their campus lecturing recorded “as is” in the auditorium, while others want to record shorter videos in a studio setting, intended for online use only. The recordings can be produced either as rich media presentations with a two-screen view, using a system called Mediasite, or as a traditional one-screen solution. They can be published internally via NTNU’s LMS, or externally on channels like iTunes U and NTNU’s own video portal. Several studies have shown that having NTNU course content available on video for “any time, anywhere” learning is a service that the students really appreciate.
This session will focus on both the possibilities and challenges that arise from producing and distributing videos at NTNU today: recording technologies, production flow, open/closed content, metadata, administration of the videos, and the University’s strategy for this kind of education are all important and interlinked topics in producing content for blended learning.
5-star Linked Data implies a collection of best practice methods and guidelines that support sharing of data in a way that realizes an interlinked semantic web. In most cases, such data are heterogeneous in structure and organization, even within relatively homogeneous domains. In this project we aim to examine the extent to which the best practices of Linked Data can contribute to semantic interoperability between two sets of metadata that describe overlapping entities but have divergent structural origins. One data set is a corpus drawn from the Norwegian national discography produced by the National Library of Norway and stored in the BIBSYS database. The second data set is a similar corpus drawn from a digital music archive (DMA) at the Norwegian Broadcasting Company. Both sets describe a limited area within the domain of popular music: the earliest recordings of Norwegian black metal. The corpus has been selected because of the complex relationships between recordings, people, their aliases and performing groups that characterize the genre. The project will partly (re)model the data in line with best practice guidelines, and partly use existing conversion tools based on the same principles. This involves utilizing standard frameworks such as RDF(S), establishing identifiers, making use of existing and relevant ontologies and vocabularies, and connecting the data to other data sets in the Linking Open Data cloud. When we have generated two data sets that conform to the 5 stars of Linked Data, we will attempt to examine the degree of semantic interoperability between them by applying graph matching techniques in the Linked Data context. In practice this entails using the RDF query language SPARQL to perform experimental retrieval tasks developed to identify similar entities across the data sets based on matching (RDF) graph structures. The success of the experiments will principally be quantified through statistical examination.
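The matching idea can be illustrated with a toy example. The sketch below is not project code, and the triples and identifiers are invented for illustration; it only demonstrates the principle behind the SPARQL-based experiments: two subjects from different data sets become candidate matches when their (predicate, object) structures overlap sufficiently.

```python
# Toy triples (subject, predicate, object); all names are invented.
bibsys = [
    ("bibsys:rec1", "dct:title", "Deathcrush"),
    ("bibsys:rec1", "dct:creator", "Mayhem"),
    ("bibsys:rec1", "dct:date", "1987"),
]
dma = [
    ("dma:item42", "dct:title", "Deathcrush"),
    ("dma:item42", "dct:creator", "Mayhem"),
    ("dma:item42", "dct:format", "vinyl"),
]

def po_sets(triples):
    """Group each subject's (predicate, object) pairs into a set."""
    out = {}
    for s, p, o in triples:
        out.setdefault(s, set()).add((p, o))
    return out

def match_entities(a, b, threshold=2):
    """Pair subjects whose structures share at least `threshold` pairs."""
    sa, sb = po_sets(a), po_sets(b)
    return [(x, y) for x in sa for y in sb
            if len(sa[x] & sb[y]) >= threshold]

print(match_entities(bibsys, dma))
```

In the real project this comparison would be expressed as SPARQL basic graph patterns against the two converted data sets, rather than in application code.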
Throughout the project, we hope in particular to gain knowledge about the effects Linked Data and semantic technologies have on interoperability between library data and similar descriptive metadata that are currently administered and maintained in heterogeneous information systems. More generally, we hope to gain knowledge about the challenges inherent in developing semantic web applications, and to examine how these challenges can be met.
Too many institutional repositories resemble elephants’ graveyards. RePub, the EUR’s institutional repository, tries to show the dynamics of research at the EUR.
Built on top of open source tools such as Mercurial (distributed version control), Solr (indexing) and Virtuoso (RDF triple store), the RePub software is written in Python.
The repository software enables the implementation of all sorts of services: easy addition of informal research groups within the more formal university structure; generation of statistics based on the analysis of full-text files; and author graphs that show cooperation between authors across organisational units.
All of these features can be demonstrated, since they are implemented and live: http://repub.eur.nl
EconBiz (www.econbiz.de/en/) is a subject specific portal for business and economics. It helps locate articles, books, full-texts etc. The portal has been around for more than ten years changing shape in accordance with technical developments and user needs and expectations.
Since 2011, apps – first for iPad and iPhone, and since 2012 also for Android smartphones – have been one way to use EconBiz (http://www.econbiz.de/en/econbiz-mobil/).
Through the apps we offer our users new ways to access our content pool of more than 8 million records. The apps offer a reduced feature set compared to the web portal, but also include additional features, such as a map of libraries that can be used to find a library close to the user’s location.
There are a number of future challenges we face, especially with the apps. We serve users in a specific discipline, but not in a specific library. Thus, we have to find solutions for presenting availability options independent of a specific home base. However, we still need to be able to refer users back to their home libraries for licensed material or printed books. This works reasonably well for German libraries, since we are able to include external availability checkers, but these services do not work well in an international context. A focus on “open access only” material, as provided in EconBiz Open (open.econbiz.de), may thus be an option for the international version of the apps.
In the future, we have to decide whether it is better to improve the native apps or to offer a mobile-friendly version of the web portal. Both versions have their individual advantages and disadvantages. In the long run, we might go for a web portal optimized for mobile users, but right now the stripped-down apps offer much faster access to the huge content pool than the web portal, and make better use of the small displays and of hardware-specific features.
In spring 2012, usability experts will perform a heuristic evaluation and then the usability of the apps will be tested in a student project.
I will present our experiences with the apps, the results of the usability tests and our thoughts on and options for future developments.
EconBiz was developed by the ZBW – German National Library of Economics – Leibniz Information Centre for Economics, and the USB – University and City Library of Cologne.
The apps were developed by the Know-Center in Graz, Austria.
The project (MUBIL) is an international, interdisciplinary collaboration that will investigate ways to use 3D technology to disseminate the content of old books from the special collections of the Gunnerus Library. The project aims to communicate with new groups of library users, create social engagement and spread knowledge. In addition, the project will gather data on how young people interact with such software tools and virtual objects in a library environment.
The method we use to reach our young audience is storytelling with augmented books and gaming. A series of augmented books will be available as a virtual library on hand upon entering the laboratory. School children will be invited to “perform” certain tasks in a virtual reconstruction of an old laboratory on a 3D power wall, as in a game environment. They will have to “fish” for the instruments and information they need from a series of manuscripts, old books and artifacts, using 3D haptic instruments. These are the augmented books, which are enriched with add-ons and thus amplified by information related not only to the text and the book itself, but also to the period they were created in and the specific subjects the writer relates to. Visitors will be invited to browse through the information and contribute their own narrative on the use of the book at a different level of the board, called the “story-board”. This level of visitor interaction will remain open for the next visitor to contribute a new narrative as well. School classes will then experiment with the application through specific tasks, in groups and in the frame of subject-specific workshops together with their teachers, in an effort to investigate the outcome of such interactive gaming in a library-museum environment. The project is financed mainly by NTNU and the National Library of Norway.
This particular research project will concentrate on the performance and outcome of the interaction between the visitor and the “augmented board”. Visitors’ experience and behavior as they interact with the augmented books or the 3D experiments will be the observation field of the research. That is, the project will try to observe how visitors function in a hybrid setting, where physical and virtual reality connect to their actions in the quest for knowledge as a conscious cognitive process.
QR codes, 2D barcodes that look like a bunch of black and white squares, allow creators to link to more complex information than traditional barcodes, such as URLs, phone numbers and text. Despite their creation in 1994 by Denso Wave, QR codes have only recently begun to appear as a mainstream offering, largely due to the ever-increasing rise in mobile devices. As these devices become more ubiquitous, libraries must shift focus and consider the best means to offer information in a mobile world. QR codes provide one possible way of linking the physical world to the virtual world.
This paper will identify the wide range of activities in which libraries are presently incorporating QR codes, from marketing and outreach ventures to information literacy applications, and examine the benefits and issues associated with implementing this technology. Examples of QR code use will be highlighted, and best practices for getting the most out of QR codes will be shared.
Since 2009, in collaboration with academics, students and technologists, the JRUL has been working to enhance the student learning experience through the innovative use of mobile technologies. The Library has played a key role in the instigation, implementation and support of University initiatives to advance learning through the use of emerging technologies: from the Library’s e-reader pilot projects, initiated to improve access to core undergraduate texts, through the provision of iPads to students within Manchester Business School and Manchester Medical School (MMS), to SCARLET, a collaborative project between academics, librarians, archivists and technologists seeking to unlock Special Collections material through the use of Augmented Reality. In a background survey carried out by MMS, 95% of students surveyed cited ‘access to reference materials’ as a key expectation of the iPads project. This highlights the increasing importance of the Librarian’s role as consultant and intermediary between publishers and Faculty. Furthermore, the Library’s involvement in the MMS iPads project has led to a collaboration between MIMAS (the national data centre based at the University) and MMS, with a view to enhancing iPad provision with the use of Augmented Reality; a team is now in the early stages of a project investigating the enhancement of drug administration teaching using this technology.
Set within the context of a restructure at the JRUL – which seeks to fundamentally change the way in which the Library contributes to Research, Teaching and Learning at the University – this paper will examine the emergent role of the Librarian as facilitator and collaborator, drawing together relevant expertise from across the Library and beyond. The new structure will see the Library make a bold departure from the traditional subject team approach to a functional model. This will allow the Library to become more user-centred, more responsive to the needs of its user community, and better placed to keep pace with the speed at which technology is developing. With ever-rising student expectations (particularly pertinent in UK institutions with increased student fees), it is paramount that the Library pioneers the exploitation of new technologies to provide an outstanding learning experience if it is to prove its continued relevance within the academic institution.
How can academic libraries support research in the digital humanities? What existing technologies in academic libraries could be repurposed for the digital humanities? What are the emerging technologies that academic libraries could harness to support the digital humanities in the future?
The Digital Humanities are the ‘application of information technologies to the study of the humanities’ [Kamada, 2010, p484]. However, ‘what kinds of positions should academic librarians and libraries stake out on the digital humanities landscape’ [Little, 2011, p4] is yet to be clearly defined. It seems that the ‘skills and knowledge in collecting and organizing data, in which librarians have unique training and background’ [Kamada, 2010, p485] could prove essential for the digital humanities. It has also been noted that, until now, ‘research libraries’ engagement with RIs (research infrastructures) has been low’ [Lossau, 2012, p314].
This paper will explore:
*Unique skills and competencies that librarians can offer to a digital humanities team
*Existing library technologies, such as metadata creation, digitisation and digital collection management, that could be repurposed for the digital humanities
*Emerging technologies such as text and data mining, data visualization and linked open data, that academic libraries could harness to support the digital humanities in the future
The role of libraries in digital research infrastructures such as DARIAH (the digital research infrastructure for the arts and humanities) will also be explored, as will the ways in which existing digital libraries such as The European Library and Europeana could be used to support digital humanities research. This paper aims to demonstrate that academic libraries have a key role to play in supporting the digital humanities, and encourages librarians to rise to the challenge.
Kamada, Hitoshi. “Digital Humanities: Roles for Libraries?” College & Research Libraries News, Vol. 71, Iss. 9, October 2010, pp. 484–485.
Little, Geoffrey. “We Are All Digital Humanists Now.” The Journal of Academic Librarianship, Vol. 37, Iss. 4, July 2011, pp. 352–354.
Lossau, Norbert. “An Overview of Research Infrastructures in Europe — and Recommendations to LIBER.” LIBER Quarterly, Vol. 21, Iss. 3/4, April 2012, pp. 313–329.
As the provision, organisation, and dissemination of information becomes largely digital, it is essential to develop both digital and information literacy skills in the population as a whole. However, there is a lack of strong theoretical models of how this relationship plays out in real teaching and learning situations, both in formal education and in the informal learning processes throughout society. Information literacy is often equated merely with searching and retrieval skills; digital literacy, with ‘checkbox’ approaches as epitomised by the ECDL (European Computer Driving Licence). All tend to focus on generic and measurable achievements in the learner, which have some value as a benchmark, but struggle to appreciate the differences between individuals (in terms of their subjective preferences and their distinctive learning contexts) and also the critical aspects of the field, that is, the social impact of information and technology use. Finally, there is a separation between information literacy and digital literacy, both in theory and in practice.
This presentation offers a new theoretical model for a better, because more holistic, view of the information/digital literacy relationship. It has its roots in the ‘Six Frames of Information Literacy’ model developed by Bruce, Lupton and Edwards in 2006, but groups their six frames into three domains – objective, subjective and intersubjective – based on the different ways information is valued by users, and on studies from social science of how people interact with and shape technologies. The model has direct application in libraries and in IL and digital literacy education, because it can be used to show how the ‘generic’ approaches mentioned above are inevitably limited and must be complemented by more subjective (or context-specific) and intersubjective (or critical) approaches to literacy education. Ultimately, then, the triadic model of literacy is a tool for the analysis of actual IL and digital literacy programmes: it shows that if any of the three domains is neglected, a programme will not be able to meet all the needs of users when it comes to applying information and digital literacy in real-life situations.
Information technologies are blurring disciplinary boundaries by supporting new ways of conducting research and by generating opportunities for educators and learners to engage with and develop new literacies. This presentation will explore the History Engine (http://historyengine.richmonde.edu) as a case study of a project in which librarians are collaborating with faculty members and the IT department to apply pedagogically-driven emerging technologies in the library. By using digital technologies to foster new approaches to research and collaboration among students within and between different universities, and by publishing student research that is of interest and value to a broad public, the History Engine assists in the transformation of typical approaches to teaching and learning, shifting students from the passive absorption of information to the stimulating work of knowledge creation.
Launched in 2005 by the Digital Scholarship Lab at the University of Richmond, the History Engine captures and organizes “episodes” – concise narratives about small, often local events of the past written by undergraduate students. When collected together, the episodes illustrate the scholarly potential of social media applications, and the site paints a portrait of life in a specific geographic area throughout its history that is both wide-ranging and deep, one that is fully accessible to scholars, teachers, and the general public. Because each episode has associated metadata, the History Engine database is growing into a rich vein for data mining and visualization. The site is also a valuable pedagogical tool that significantly enhances undergraduate experiential education by providing a suite of digital resources that fosters applied research and writing skills and exposes students to the methods and practices of both historical and digital scholarship.
Librarians and faculty at the University of Toronto Scarborough have recently partnered with the creators in an international federated partnership to collaboratively redevelop portions of the History Engine by adding new features and functionality. This presentation will look at that redevelopment as an opportunity for undergraduate students to create tangible intellectual products, and to experience the process of producing, publishing, and sharing their work with colleagues and the public, demonstrating how an emerging technology can create a more meaningful educational experience.
This presentation discusses a unique collaboration between two academic librarians, one in Pennsylvania, USA and the other in Shandong Province in the PRC to create a Chinese history and culture subject guide. The presentation discusses how international collaborations may occur through the use of social media, the difficulties faced by such collaborations and the necessity and benefits of working with international colleagues. The presentation addresses the challenges in forming international collaborations, taking into consideration physical distance, technological variations, culture, language and the role that social media can play in overcoming these obstacles.
1. Physical Distance
Physical distance can be a major deterrent to collaboration. Finding a workable meeting time is essential for all international efforts, as time zones and time changes can affect workflow and communication. Social media tools help overcome these challenges by providing both synchronous and asynchronous communication.
2. Technological Variations
Collaboration that depends on Internet connectivity can be very limiting, and differences in technology pose difficulties. Access to a high-speed connection is an essential component; poor connections contribute to confusion that disrupts work.
3. Cultural Differences
Culture and language offer additional challenges that need to be addressed in international efforts. English was a common language for the participants in this collaboration; however, this may not always be the case.
University campus culture, as expressed in library collections, contributes additional challenges.
4. The Promise and Benefits of International Collaboration
Participants in this collaboration used free and inexpensive social media tools to communicate and work together. This practice creates a model for using inexpensive social media tools to forge new partnerships among academic libraries globally. Academic libraries can now tap expertise in other cultures to improve and extend their services without the huge financial cost once attributed to international collaboration.
Can small projects like this be maintained over long periods of time? And what is the possible future of further international collaboration through social media?
15:30-16:00 Location A
Promoting scientific output has a huge impact on the library, the organisation and their relationship
There are more ways of doing things than the corporate way!
Everybody wants or needs to make publication overviews: using blogs and easy aggregation techniques is within every library’s reach!
With the launch of a separate website focused on the complete scientific output of the University Medical Center Groningen (UMCG), the visibility of these publications was improved considerably, as was awareness of them within the corporate community. A positive side-effect was also the rise in awareness of the library’s resources, skills and services.
Our large academic hospital publishes over 2,000 scientific publications every year, and the number is rising.
The library has made massive resources available to all staff and students, but finding out who published what within the hospital is not that easy, especially if you want to be as complete as possible.
With the worldwide “battle of the rankings” between institutions in higher education, it is essential to make every effort to visualize the output of your organization. From the corporate point of view, having that data easily available is good for PR and marketing activities, and it is an answer to the growing call for more “open science”.
From the library’s point of view, it is very important to be seen as a partner to contribute to this task.
With a minimum of cost and effort, using freely available web technology and (social media) plugins, the library created “atUMCG”: http://atUMCG.cmb.med.rug.nl
Its simplicity in design and content clearly appealed to many inside the hospital, and its launch had a great spin-off for the library.
The presentation will describe this “controlled aggregation” website and its use in more detail, but will also cover the spin-off effects, i.e. the great need for, and benefits of, simple, complete publication overviews made available by the library.
15:30-16:00 Location B
Primo at the University of Amsterdam – Technology vs real life
The technology of harvesting and indexing in library discovery tools promises to search and find all information resources that a library has access to through one user interface. In real life this remains just a promise. The fulfillment of this promise depends heavily on the situation in three areas:
– content and content providers
– the methods of indexing used
– the configuration of the user interface
The issues in these three areas will be illustrated by looking at the implementation of Ex Libris’ Primo at the University of Amsterdam. However, all of these problems apply equally to all discovery tools available – commercial, open source and homegrown alike.
10:00-10:30 Location A
“Tagging” is an important technology for organizing information resources, and its importance has increased on the Web. Users freely create and manage metadata called “tags” that enable clustering and improved searching of information resources. These tags are “free terms”, a simple method that is easy for Internet users to use; but there is also a disadvantage in that the information retrieval recall ratio can be reduced. The recall ratio can be improved by using “controlled terms”. Librarians, on the other hand, have long created controlled terms to support the clustering of information resources; these tools are called thesauri or subject headings.
Covo.js (version 0.1) supports ten thesauri and subject headings, such as the INIS Thesaurus, Web NDL Authorities, LCSH and MeSH. We adopted a command line interface (CLI) as the interface of covo.js, because a CLI is the most simple and universal input interface for information search or tag input. The features of covo.js are controlled vocabularies in drop-down lists that are selectable by inputting a trigger word, incremental search over controlled terms, and some customizable function configurations.
We are now developing the next version of covo.js. Some controlled vocabularies are designed with a hierarchical structure: a term is connected with other related terms and forms a network. The next covo.js will support handling this hierarchical structure, including narrower terms, broader terms and related terms.
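As a rough illustration of these two features (sketched here in Python rather than JavaScript, with an invented three-term vocabulary, so it is not covo.js code), incremental search plus hierarchy lookup might look like this:

```python
# A toy controlled vocabulary with broader/narrower links.
# Terms and structure are invented for illustration only.
VOCAB = {
    "Metals": {"narrower": ["Copper", "Zinc"], "broader": []},
    "Copper": {"narrower": [], "broader": ["Metals"]},
    "Zinc": {"narrower": [], "broader": ["Metals"]},
}

def suggest(prefix):
    """Incremental search: terms starting with the typed prefix."""
    p = prefix.lower()
    return sorted(t for t in VOCAB if t.lower().startswith(p))

def expand(term):
    """Collect a term together with its broader and narrower terms."""
    entry = VOCAB.get(term, {"narrower": [], "broader": []})
    return {"term": term,
            "broader": entry["broader"],
            "narrower": entry["narrower"]}

print(suggest("co"))     # candidates shown while the user types
print(expand("Metals"))  # the hierarchy around a selected term
```

In covo.js itself the suggestions would feed the drop-down list triggered by a trigger word, and the hierarchy would let a user broaden or narrow a chosen tag.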
10:00-10:30 Location Møllenberg
With the emergence of new tools and technologies, information practices have changed radically. Libraries, which previously served an information warehousing, dissemination, and search role now serve patrons whose information habits are radically different. Increasingly, the work of scholars focuses more on just-in-time retrieval, quick bootstrapping to learn new disciplines, online collaboration throughout the research process, and knowledge construction and dissemination activities unlike traditional publishing.
Although there are many tools to support research, there are profound disconnects between library-oriented resources and tools and the other tools scholars use to do their work. Scholars increasingly turn to online tutorials, Internet search engines, and collaborative sharing and writing environments. Libraries are becoming increasingly marginalized in the workflow of scholars, where ‘going to the library’, whether virtually or physically, can be seen as disruptive within the larger context of scholarship and the activities of learning, collaborating, and writing. While online services and tools have attempted to decrease barriers to using library resources, the fragmentation between the tools for library work and the tools for the rest of research instead serves as a disincentive.
From the perspective of research libraries, how should we support scholars within the entire lifecycle of the research process? What types of tools do researchers need to conduct work and how can libraries integrate library tools within the research process? We turn to the sociotechnical systems perspective and human-computer interaction design techniques to begin to answer these questions.
Information systems design often relies heavily on user research, in particular workflow analysis and development of use cases, to help identify needed information supports. Use cases are simply descriptions of sequences of events that, taken together, lead to a system doing something useful (Bittner & Spence, 2003). Use cases can be empirical records of actual user behavior, hypothetical activities of prospective users, or an abstraction or generalization of typical behaviors. In this paper, we share generalized use cases and workflow diagrams constructed through discussions with university library patrons, and identify areas where library tools could be better integrated to support library resource use throughout the lifecycle of research.
10:30-11:00 Location A
Oslo and Akershus University College of Applied Sciences offers a publishing platform based on the open source software Open Journal Systems (OJS): http://journals.hioa.no.
Currently we have four Open Access journals publishing regular issues on this platform.
All our journals use PDF as their standard publishing format. These articles are hard to read on tablets such as the iPad, and close to impossible on e-book readers with E-ink displays (like the Amazon Kindle and Sony devices). EPUB is the open e-book format developed by the International Digital Publishing Forum (IDPF), and is preferred by most vendors. The format is reflowable and specially designed for e-book readers. The latest EPUB version can support rich media content and text-to-speech technology as well.
The aim of this project is to provide our journals with the tools and editorial workflow they need to start publishing in EPUB.
So far the project group has tested different kinds of easy-to-use conversion tools: Calibre, Sigil, writer2epub and InDesign. These tools can convert commonly used formats such as HTML and OpenDocument into EPUB. Such tools could work for word processor documents where authors have used appropriate styles and templates, but will not work well enough for streamlined EPUB journal publishing.
The project is therefore currently looking into the more technical workflows used by the Open Access journal PLoS ONE and by PubMed Central. Both tag submitted manuscripts with the Journal Publishing Tag Set, initially developed by the U.S. National Library of Medicine. The resulting XML file works as a source document that can be transformed into XHTML, EPUB and PDF. This kind of workflow ensures a properly formatted EPUB file, and standard tools are freely available for these transformations.
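As a toy illustration of this source-document principle (production workflows use the XSLT stylesheets distributed with the tag set, not hand-written code like this), a few Journal Publishing Tag Set elements can be mapped onto XHTML. The sample article below is invented:

```python
import xml.etree.ElementTree as ET

# An invented fragment using Journal Publishing Tag Set element names.
JATS_SAMPLE = """<article>
  <front>
    <article-meta>
      <title-group><article-title>A sample article</article-title></title-group>
    </article-meta>
  </front>
  <body>
    <sec>
      <title>Introduction</title>
      <p>First paragraph.</p>
      <p>Second paragraph.</p>
    </sec>
  </body>
</article>"""

def jats_to_xhtml(jats: str) -> str:
    """Map a handful of tag set elements onto their XHTML counterparts."""
    root = ET.fromstring(jats)
    title = root.findtext(".//article-title", default="Untitled")
    parts = ["<html xmlns='http://www.w3.org/1999/xhtml'><head>",
             f"<title>{title}</title></head><body>",
             f"<h1>{title}</h1>"]
    for sec in root.iter("sec"):
        sec_title = sec.findtext("title")
        if sec_title:
            parts.append(f"<h2>{sec_title}</h2>")
        for p in sec.iter("p"):
            parts.append(f"<p>{p.text}</p>")
    parts.append("</body></html>")
    return "".join(parts)
```

The same source XML can equally be fed to an XSL-FO or LaTeX pipeline to produce the PDF edition, which is what makes a single tagged source document attractive.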
The next step is to get one of our journals to commit to publishing in the EPUB format as well as in PDF. The project group will help the selected journal design a suitable workflow, based on their particular needs, from submitted manuscript to published article.
We want to share our experiences from the project. What are the estimated cost and workload for changing a traditional paper-based journal workflow into a fully digital workflow? What are the pros and cons of different tools and converters?
Hopefully our work will inspire and guide other academic libraries responsible for institutional repositories and journal publishing.
10:30-11:00 Location Møllenberg
How can the library use the extensive and well-structured data contained in the institutional repository to attract the researchers’ attention and to promote the research outcome of the university?
Publication data is still usually presented in a traditional bibliographic style, in the form of publication lists and the like. But could this data also be used to present research activities and publications in a way that attracts even more attention from users?
At Chalmers University of Technology, the library has developed a feature called Publication Profiles. This could be described as the Institutional Repository ‘Labs’, where we can create visualisations and aggregations of data in many different forms, and present information such as:
Co-authorships. What collaborations are there?
The geography of Chalmers. How international is Chalmers, and how can "the geography of Chalmers" be visualized? Using GPS Visualizer and Google Maps, co-author affiliation addresses have been projected onto a world map.
Publication types. Using visualisations, we can present comprehensive overviews of the publishing habits with regard to articles, conferences, monographs etc.
Publication frequency. Graphical visualisations of the number of publications per department or individual give the user a quick overview of productivity.
Subjects. Tag clouds of subject categories give an appealing and comprehensible overview of the areas in which research is being conducted.
Open Access. We aggregate and highlight the number of publications available for free. This is especially important for Chalmers, which has had an open access mandate since 2010.
Social media. In what ways do users want to be able to share information from the publication database in social media? By integrating social network services, such as LinkedIn, we aim to provide good ways to promote the research of our university.
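As an illustration of the kind of aggregation behind profiles like these (the publication records and coordinates below are invented; real data would come from the institutional repository), counting co-author pairs and exporting affiliation coordinates as a latitude/longitude CSV that GPS Visualizer accepts might look like:

```python
import csv
import io
from collections import Counter
from itertools import combinations

# Hypothetical publication records standing in for repository data.
PUBLICATIONS = [
    {"authors": ["Andersson", "Berg", "Chen"],
     "affiliation_coords": [(57.69, 11.97), (57.69, 11.97), (31.23, 121.47)]},
    {"authors": ["Andersson", "Chen"],
     "affiliation_coords": [(57.69, 11.97), (31.23, 121.47)]},
]

def coauthor_pairs(publications):
    """Count how often each pair of authors publishes together."""
    pairs = Counter()
    for pub in publications:
        for a, b in combinations(sorted(pub["authors"]), 2):
            pairs[(a, b)] += 1
    return pairs

def coords_csv(publications):
    """Emit a latitude/longitude CSV suitable for map plotting tools."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["latitude", "longitude"])
    for pub in publications:
        # Deduplicate coordinates within one publication.
        for lat, lon in set(pub["affiliation_coords"]):
            writer.writerow([lat, lon])
    return buf.getvalue()
```

The pair counts feed a co-authorship network view, while the coordinate file can be uploaded to a mapping service to produce the kind of world map described above.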
The addition of external services, such as link resolving for full-text location, citation counts and geospatial data, further enhances the value of the profiles.
The Publication Profiles have been running in a beta phase since early 2011, and the 1.0 version was released in November the same year.