Thursday, September 22, 2016

Library Exploration through Serendipitous Discovery

Jessie Ransom, Solutions Architect, Ex Libris

One of my favorite things about libraries is how you can walk in looking for one thing and leave with so much more than you knew you wanted or needed. I have experienced this over and over, both as a patron and as a librarian. As a student, I spent a lot of time not asking for help in my university library, and instead relied on a bit of luck to find what I needed—stumbling upon the perfect book on the shelf, finding an excellent resource by looking up the references in a book or article, and a few times even asking a librarian for recommendations (crazy, I know).

This concept of exploration is something that I now consider to be an essential part of libraries. It is a fundamental value that libraries offer over the Google research experience: Google may give you what you are looking for, but it won’t give you all the resources you didn’t think to look for. As more and more patrons interact with the library online, and sometimes only online, it is important that we not lose this core added value. Because of the potential to expose relationships between resources, there are great opportunities for serendipitous discovery online, and both the Summon and Primo services incorporate features that put it front and center for your patrons.

Virtual Browse: I undertook my undergraduate studies at a very large school with an equally large library. When I needed to do research I would stumble my way through the OPAC until I found something “sort of related” to the topic I was researching, then wander around that section of the library until I found enough information. The Virtual Browse feature in Primo takes this element of exploration and brings it online. Patrons can now navigate your stacks using the cover browse slider in Primo, which mimics the physical experience of exploring the shelves but also exposes items from different locations, items that are currently out on loan, and even ebooks if they have a call number. This is a great opportunity to explore additional resources on any topic.

Topic Explorer: When patrons perform a keyword search in Summon, the Topic Explorer pane opens in the right panel of the results screen, showing reference content, related topics, a recommended subject specialist to contact, and up to three research guides for the patron to explore. The Topic Explorer has over 50,000 topics associated with keywords found in actual Summon search logs, and it creates a great jumping-off point for patrons to explore a new topic. As an undergraduate I had no idea there were subject specialist librarians, and I love that Summon points patrons directly to them as a resource.

bX Article Recommender: When patrons find an article that is interesting or useful to their research, the bX recommender will display recommended articles that may be helpful, exposing them to a set of cross-discipline resources and the potential to follow a chain of recommendations as far as they desire. bX uses anonymized data about researcher behavior to draw correlations between resources: researchers who read the first article also read the additional recommended resources. Because this is not simply a metadata match, these recommendations can be especially helpful for patrons who may not know all the right keywords to try when searching for a concept. Suggesting those alternatives is something librarians are great at, and it can be lost in self-guided search.

Citation Trail: The citation trail is a new feature in Primo. Now when patrons find an article that they’re interested in, they can explore the reference trails for that resource. With one click, patrons can link to articles cited by the resource, and articles that cite that resource. I love this feature; when I was in graduate school I spent a lot of time looking up articles from reference lists as a way of exploring a topic. Primo now makes it incredibly easy to find the related resources and expose patrons to material they might not have found otherwise.

Database Recommender: Sometimes the best result is simply the right database. The database recommender in Summon will match a search query to tags created by the library and the Summon community and suggest databases that the patron may want to explore. This is a great way for librarians to recommend specific resources for patrons who will never seek face-to-face help, and a great starting point for patrons who might be looking for a more targeted set of results.


Ex Libris understands that exploration is a fundamental part of learning and of the research process. All of these features will soon exist in both Primo and Summon, meaning your patrons will soon be able to take advantage of all these opportunities for serendipitous discovery, regardless of which product you currently use.

Thursday, September 15, 2016

Evaluating Content Neutrality: Bias and Research Integrity



Eddie Neuwirth, Director of Product Management - Discovery Services, Ex Libris


Any bias within a system is a serious issue when people rely upon that system for information.  Historically, conversations about bias have focused on the media as a powerful tool to influence people. On university and college campuses worldwide the topic of bias, usually in a social or political context, continues to be an important and ongoing challenge for academia.  Most recently, conversations about various internet services and the technology companies behind them have even suggested that technologies and vendors such as Facebook could influence the outcome of elections because of their proprietary search algorithms.

In a New York Times op-ed and other research, Zeynep Tufekci, associate professor at the School of Information and Library Science at the University of North Carolina at Chapel Hill, argues that Facebook’s search algorithm is certainly biased. Tufekci suggests that, with its propensity to favor upbeat information over more challenging subjects, Facebook can limit the flow of information on important topics. This, in turn, can affect the information that users are exposed to and ultimately consume.

If bias finds its way into a library discovery service, particularly in an academic setting, the stakes are high – research integrity could be negatively impacted. Matthew Reidsma, Web Services Librarian at Grand Valley State University and Editor-in-Chief of Weave: Journal of Library User Experience, recently provided a compelling overview of algorithmic bias in library discovery services. Reidsma’s research points out how algorithmically driven features can unwittingly introduce or reinforce social biases (such as gender discrimination) without human intervention to correct course.

It is highly unlikely that any library service vendor would knowingly put forth a discovery service that intentionally biases results toward particular information topics or intentionally pushes a social agenda. There is no incentive to do so. However, the idea that a discovery service could be designed in a way that unintentionally skews results to favor content from one content provider over another is not so far-fetched. This is because content neutrality goes beyond the possibility of algorithmic manipulation through relevancy ranking.

Many focus on relevance algorithms as the most likely source of content bias in discovery services, but how the discovery service is constructed, along with elements of its user experience, can also influence the content that users see and interact with. Factors leading to bias could include methods for de-duplicating content, the presentation and quality of links to content, interface design and branding, and more. Ultimately, the system design of a library discovery service is complex, and ensuring content neutrality by removing bias can be difficult.

Libraries should call for discovery providers to commit to content-neutral practices in discovery systems to minimize any potential bias. Library discovery services that aim to be content neutral and unbiased should consider the following six principles:

  • Make content equally discoverable. Fair and equitable treatment of the metadata that the discovery tool provider uses is fundamental to content neutrality.
  • Ensure that technical considerations are balanced. To prevent biased results, it is important that the relevancy-rankings approach does not introduce a bias by favoring one provider over another.
  • Keep platforms separate, in delivery and in visual presentation. When a content provider also furnishes a discovery system, each should run on a separate platform.
  • Make content neutrality the default. Libraries shouldn’t be burdened with making extensive changes to the system configuration just to achieve content neutrality.
  • Make delivery equitable. Access to full text should be treated the same regardless of source.
  • Provide a neutral user interface. The user interface should be impartial and not influence the selection of resources.
We invite you to read our Guide to Evaluating Content Neutrality in Discovery Systems to better understand content neutrality, the principles for evaluating discovery systems, and questions librarians can ask of vendors about their own discovery services.

Stay tuned: our next blog post in this 3-part series on content neutrality will focus on questions librarians can ask to ensure unbiased discovery. To view the previous post in this series click here.

Click here to access our content neutrality guide

Monday, September 12, 2016

A Librarian’s Guide to Becoming Data Driven



Beth McGough, Communications and Creative Services Manager, ProQuest

An interview with Michael Levine-Clark, Dean and Director of the University of Denver Libraries

Data alone cannot tell an actionable story – it must be taken in context with an understanding of the bigger picture. Then you can begin to understand what it all means.
Michael Levine-Clark, Dean and Director of the University of Denver Libraries, shared this advice during our conversation on how libraries can use data to make decisions.
At the University of Denver Libraries, deep data analysis has changed how the library builds collections, saved money, and resulted in a broader, deeper collection. Data analysis of the library’s print collection led to the decision to move a large portion of the collection offsite, freeing up space for students – sparking increased usage of the library and making library services easier to access.

Working together: Data across the library

I asked Michael why a library should put time and resources into deep data analysis rather than other activities. He remarked that data can inform a library about which activities it should focus on and how to target spending.
Michael recognized that time is a constant challenge, and that large and small libraries would likely approach data analysis differently. Large libraries may have librarians with expertise in statistical analysis, while a smaller library may pool resources and work as a team. Regardless of a library’s size, he underlined the importance of working broadly across the library to bring in expertise on different types of resources and services.
Michael elaborated that working as a group leads to a better understanding of data and, ultimately, better decisions. By sharing expertise, a library can identify usage expectations for different types of resources and users. He said context is crucial, and experts from across the library can provide it.
Michael provided guidance for libraries that decide to put their focus on usage data. He warned against making quick decisions based on one look at the data: examine it from as many angles as possible to understand patterns by subject, discipline, and resource type, among other categories.
His advice to libraries is to be careful when benchmarking against other institutions, because each library has unique strengths. A library should also benchmark against itself over multiple years; libraries can’t understand metrics, such as an acceptable level of journal usage, without a couple of years of data.
As libraries start this process, Michael suggests focusing on one type of data, such as subject or resource type. When a library has a full understanding of that category, the analysis can expand.

Privacy, data, and unintended consequences

As the conversation shifted to privacy, Michael stated that aggregated data, such as a report from a vendor or publisher, does not present privacy issues.
Privacy issues arise when that information is connected with individual students. It is valuable for libraries to understand the usage patterns students exhibit and to track how the library contributes to student success, but this is where libraries can run into privacy issues. Michael emphasized that a library would not want to track students so closely that all the books they read could be identified. Anonymized data helps libraries prove their value and identify what works.
Michael commented on students’ willingness to share information with the library. He suspects that, if asked, students would share data, but he emphasized the danger of unintended consequences, even when data is used with permission.

Data presentation and tools

Next, we discussed tools and presenting data. The top tool Michael recommends starting with is Excel. If a library has expertise with statistical tools, they can be useful but are not necessary. He underscored that analysis doesn’t need to be perfect, pointed out the importance of visualizing data to tell a story, and suggested Tableau for visualization.
When the time comes to present data to university administrators, Michael stressed the importance of keeping the report simple while telling as complex a story as possible. Data presentations, he emphasized, should not be in librarian-ese: speak in context for someone outside the library and translate the jargon. He pointed out that high-level administrators receive similar reports from across the university, so the library’s report should be brief and visual. With experience you will learn what administrators respond to; some like to see the underlying data, while others prefer the visualizations.
Ultimately, Michael said, it is about learning how to communicate.
This post was originally published on the ProQuest Blog

Monday, September 5, 2016

Evaluating Content Neutrality in Discovery Systems – Part 1



Eddie Neuwirth, Director of Product Management - Discovery Services, Ex Libris

Just a few weeks ago the International Federation of Library Associations and Institutions (IFLA) released a new statement about what net neutrality means for librarians and library workers.
The heart of its statement is twofold:

1. The mission of libraries is to give access to knowledge equitably.
2. It is concerning to have that fundamental right controlled or made harder to achieve.

Ultimately, IFLA states, without an open Internet, information monopolies threaten to destroy the diversity of information and points of view that presently exist. IFLA’s primary premise in support of net neutrality is that it is a principle of universal and non-discriminatory access to information:

“It is compromised when service providers seek to give preference, unfairly, to one source or type of traffic over another, effectively restricting choice and determining which parts of the internet people will find easiest to use. Inevitably, the most powerful will be better placed to optimize the performance of their content.”

IFLA’s statement on net neutrality, along with its recommendations for libraries, closely parallels the many viewpoints on content neutrality in discovery services that emerged a few years ago and are still discussed today.

Content neutrality was widely discussed at major library conferences and in several published blog posts throughout 2013 and 2014, resulting in the creation of the NISO Open Discovery Initiative and the publication of its recommendations.
The recent IFLA statement identifies the significant issues for libraries regarding content neutrality, and classifies them into two categories:

a) The freedom of access to information through avoiding information monopolies;
b) The freedom of expression to ensure information diversity.

Some central tenets of the IFLA statement are:
  • The right to seek, impart and receive information and ideas and to obtain equitable access to all content is a universal right.
  • Without neutrality, the ability of libraries to act as information providers is compromised.
  • Breaches of neutrality compromise library users’ ability to access information in a balanced fashion more broadly.
  • Access to information is a prerequisite for a diversity of opinions and the growth of knowledge in general.
  • Technology can distort patterns of content and service consumption.
In a 2013 blog post, appropriately titled “Content Neutrality,” Wally Grotophorst of George Mason University positioned content neutrality in comparison to the concept of net neutrality in the following way:

“Content neutrality” is a similar idea. Our “access provider” in this instance is the discovery platform vendor. The analogs to traffic shaping or billing distortions occur instead around the metadata that’s being searched to “discover” relevant content. As with ISPs and net neutrality, there are some companies that just provide a discovery platform and others that are also in the content business. As before, vertical integration and perceptions of competitive advantage are problem incubators.

Like net neutrality, as highlighted by the IFLA statement, content neutrality in discovery services is not a concept that has disappeared from the library landscape. However, the topic does not have the visibility it once did, even though libraries are spending ever larger sums of money and effort curating vast collections to meet the diverse needs of researchers and end users. Perhaps, as many (or most) libraries have already adopted a discovery service to maximize the discoverability of their collections – as a means to attract researchers to the library, expose their vast collections, and promote the value of the library across campus – their focus has turned elsewhere and they are too ready to accept that these tools are doing their job as promised.

Ex Libris encourages libraries to remain focused on content neutrality in discovery solutions. For libraries, the stakes of content neutrality in their discovery service are high. If a library’s chosen discovery system is not content neutral – that is, if it does not offer the ability to democratically discover the entirety of the library’s collection without bias toward some providers over others – then libraries should naturally question these extra investments.

We invite you to read our Guide to Evaluating Content Neutrality in Discovery Systems to better understand content neutrality, principles for evaluating discovery systems, and questions librarians can ask of vendors about their own discovery services.

Stay tuned: our next blog post in this 3-part series on content neutrality will focus on the six core principles for ensuring unbiased discovery.


Thursday, September 1, 2016

Linked Data Collaboration Program Update



Shlomo Sanders, CTO, Resource Management, Ex Libris

It has been six months since Ex Libris started the 2016 Linked Data Collaboration Program, and now is a good time to review our accomplishments and our directions for the future.

The goal of the Linked Data Collaboration Program is to facilitate interaction between Ex Libris and Alma and Primo customers who are interested in adding linked data features to those products. The program has 41 institutions at varying levels of participation: 24 in North America, 12 in Europe, and 5 in Asia Pacific. The 2016 program is focused on the following primary tracks: Alma Technical Services, Linked Data Publishing, and Discovery. All the tracks have led to active development, with deliverables in production in 2016.

Looking back on 2016, our estimation of what could be done was in some ways too ambitious and in other ways not ambitious enough. Our strategy has evolved over the last year, with the understanding that we need to bring linked data to the vast majority of libraries who may have heard the Linked Data buzz but do not have the resources needed to take the deep dive.

The addition of linked data features to over 700 Alma institutions and over 1000 Primo institutions will serve multiple purposes:

  • Gradual development of the infrastructure to support linked data.
  • Education of libraries by exposing librarians to linked data features on a day-to-day basis.
  • "Feeding the fire" of linked data initiatives.
  • Perhaps most importantly, visibly extending patron services through linked data, demonstrating that linked data is not only a theoretical future improvement for librarians.

Our understanding of what can and should be done at this time has become more focused. For the sake of simplicity, I will describe Alma as a combined track. Alma is adding URIs to MARC records and making those URIs visible to all librarians. Examples include URIs for authors and subjects from leading authorities (LC, MeSH, GND), as well as VIAF, language, and WikiData URIs. See below for two examples.

Making the linked data visible in various places in Alma helps support the multiple objectives described above. Every librarian that uses Alma will have access to the URIs, thus further educating the market.

The image below shows a typical link to access URIs in Alma.

Below are two displays of URIs retrieved through linked data in Alma.
 

At the same time, the URIs are published downstream to discovery and will be accessible in Primo. These URIs are also exposed in all APIs that return BIB records. For example, they are exposed in the API returning JSON-LD and the new URI support for BIB records returning RDA/RDF.
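To give a concrete sense of what consuming such an API might look like, here is a minimal sketch in Python. The record shape and field names below are illustrative assumptions, not the actual Alma JSON-LD schema; only the general pattern of walking a JSON-LD structure to collect authority URIs is shown.

```python
# Sketch: extracting authority URIs from a JSON-LD bibliographic record.
# The record below is a simplified, hypothetical example -- the real Alma
# JSON-LD output has its own schema; only the general idea is shown here.

record = {
    "@context": {"dcterms": "http://purl.org/dc/terms/"},
    "@id": "http://example.edu/alma/bibs/991234567",
    "dcterms:title": "Linked Data for Libraries",
    "dcterms:creator": {
        "@id": "http://viaf.org/viaf/123456",      # hypothetical VIAF URI
        "label": "Doe, Jane",
    },
    "dcterms:subject": [
        {"@id": "http://id.loc.gov/authorities/subjects/sh-demo-1",  # illustrative
         "label": "Linked data"},
        {"label": "Library science"},              # no authority URI attached
    ],
}

def collect_uris(node):
    """Recursively gather every @id found in a JSON-LD-style structure."""
    uris = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "@id":
                uris.append(value)
            else:
                uris.extend(collect_uris(value))
    elif isinstance(node, list):
        for item in node:
            uris.extend(collect_uris(item))
    return uris

# Separate the record's own identifier from the authority URIs it links to.
all_uris = collect_uris(record)
authority_uris = [u for u in all_uris if u != record["@id"]]
print(authority_uris)
```

A client that harvested these URIs could then dereference each one against the relevant authority service to enrich the display, which is exactly the kind of downstream use the discovery examples below illustrate.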

This support is enabled with the click of a button, as seen in the image at left.

Early on, we discovered that our BIBFRAME track was premature. On the bright side, all we are doing for RDA/RDF will give us a good head start for BIBFRAME when it matures enough to be used in a production environment.

Primo, with Summon to follow, will make the URIs available for library use in a new linked data section of its PNX. More importantly, the new Primo discovery interface will have a sample Angular-JS “directive” that displays information based on URIs. This is the real power of linked data: bringing relevant information to the end user that was difficult or even impossible before. The next step will be to include “out-of-the-box” linked data-powered user interface functionality that any library can use in their discovery whether they are experts in linked data or not.
Following are some simple examples of discovery linked data use that will work end to end in Alma and Primo without linked data expertise.

In the example below you can see how additional subjects are retrieved and displayed from the Library of Congress using Subject Authority URIs as managed by Alma. 
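A minimal sketch of that enrichment step is shown below. In a real discovery layer the data would come from dereferencing the subject authority URI (for example, against id.loc.gov); here a canned, simplified SKOS-style graph with a made-up URI stands in for that response, so the structure is an assumption rather than the actual LC format.

```python
# Sketch: enriching a record's subject display with related headings.
# AUTHORITY_GRAPH is a canned, simplified stand-in for data that would
# really be fetched by dereferencing the subject authority URI.

AUTHORITY_GRAPH = {
    "http://id.loc.gov/authorities/subjects/sh-demo-1": {   # illustrative URI
        "prefLabel": "Linked data",
        "broader": ["Semantic Web"],
        "related": ["Metadata", "RDF (Document markup language)"],
    },
}

def related_headings(subject_uri, graph=AUTHORITY_GRAPH):
    """Return extra headings to display alongside the record's own subject."""
    entry = graph.get(subject_uri)
    if entry is None:        # unknown URI: degrade gracefully, show nothing extra
        return []
    return entry.get("broader", []) + entry.get("related", [])

extra = related_headings("http://id.loc.gov/authorities/subjects/sh-demo-1")
print(extra)   # broader and related headings for the display
```

The graceful fallback for an unknown URI matters in practice: a record whose subject lacks an authority URI simply displays as it always has, with no enrichment.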

Other useful examples might be additional collaborating authors from VIAF, additional works by the same author from VIAF or WikiData or general information from WikiData. The possibilities are endless.

Soon we will begin thinking about how to further enhance the linked data deliverables in 2017. Let us know what you think –  your input is valuable to us.
And you can read more in our linked data white paper, "Putting Linked Data at the Service of Libraries."



Tuesday, August 30, 2016

The Library’s Buzz



Dani Guzman, Product Marketing Director, Ex Libris

As the summer is drawing to a close, the Library Buzz looks at the just-concluded Summer Olympics (no, not the one in Rio). We also note the start of the school year with articles on the role of library media specialists and makerspaces in education today. Then we peer just a wee bit into the future, with news of a tool being developed for preserving our digital history, a call for smart copyright risk management, and a robot making librarians’ lives easier.

The University of Dayton held its first-ever Library Olympics in early August, as reported by the Smithsonian website. The events included both physical and mental challenges, such as a tricky speed sorting event and a campus treasure hunt based on LOC call numbers. Champions were also chosen in such competitions as balancing bound journals on one’s head, running a book cart through a twisty course, and tossing journals toward a target. Go for the Gold here >>>

Makerspaces – collaborative spaces in which to gather and share skills, tools and information for creative activity – have come to American community colleges. As the website of EdSurge notes, nearly half of undergraduate students in the United States are at community colleges, where these makerspaces are taking off. The teachers are volunteers, including professors and students on equal footing, and the spaces they use are any available library rooms. The “maker movement”, as it is coming to be known, is “about community, creativity, and experiential learning.” Enter the ‘makerspace’ here >>>

Jenna Grodzicki writes that “library media specialists foster some of the most authentic learning in schools today,” in a brief, but very persuasive, article in Knowledge Quest, the Journal of the American Association of School Librarians. She explains how library media specialists help schoolchildren today (also mentioning the value of makerspaces – see above) and challenges the “naysayers who don’t appreciate how crucial we are to our schools.” Find out here >>>

The St. Louis Post-Dispatch carried an article in its Education section describing a new research tool under development at Washington University, in collaboration with the Maryland Institute for Technology in the Humanities and the University of California-Riverside. The project is called DocNow and it is designed to collect and curate those digital records of historically significant events that may be lost with time, especially real-time comments, images and interactions on social media. Read more here >>>

On the website of CILIP (the Chartered Institute of Library and Information Professionals), a recent lecture by a member of the Libraries and Archives Copyright Alliance (LACA) is reviewed in depth. In an “excellent talk”, as the reviewer described it, Naomi Korn focused on the current balance between copyright-related risk and risk management. Although “fair use” remains a somewhat unclear term, Korn does not want libraries shying away altogether from copyrighted material. Our own Leganto Course Resource List solution, for example, incorporates tools for mitigating the risk of copyright infringement. Korn also stresses the need for copyright policies and procedures to manage the risk. Read more here >>>

Finally, we may have come full circle back to a focus on the physical and mental demands on librarians. In this case, however, we look at how one of their more menial tasks can be taken over by a newly designed robot. Singapore’s National Library Board, the Library Journal reports, is already using AuRoSS (autonomous robotic shelf scanning system), a robot that systematically scans library shelves for misplaced books and issues a report to the librarian. Meet AuRoSS here >>>


Thursday, August 25, 2016

6 Principles Librarians Can Apply to Write Better Social Media Posts

 

Beth McGough, Communications and Creative Services Manager

Easy reading is damn hard writing. But if it's right, it's easy. It's the other way round, too. If it's slovenly written, then it's hard to read. It doesn't give the reader what the careful writer can give the reader.
Maya Angelou
Social media writers, perhaps, need to be the most careful writers. Constrained by character counts and short attention spans, tweets and Facebook posts must be concise. 
If a librarian has focused on developing strong academic writing skills, it can be difficult to take the opposite approach to writing.
I won’t call academics long winded but...
...academia certainly values detailed writing. 
In social media, writing needs to get to the essence of a message in 140 characters. 
If you find yourself writing tweets for the library - or even for your personal accounts - the principles below will help you shift your mindset from academic writing to social writing.
1. Be hyper-focused on the audience. Whether writing for students or other librarians put them first as you craft social posts. 
2. Intent. Consider intent from your point of view and the reader’s point of view. What is the purpose of the social message? Does it align with the purpose for which people use social media? 
3. All posts should be useful, educational and/or entertaining. Adding to the conversation and providing content readers will benefit from are central to social media posts.
4. Casual but grammatically correct. Stay away from a formal academic voice. Social media is an opportunity to bring out the humanity in your messages.
5. Concise – no fluff. Use only the words necessary to get your point across. Not only is this a good writing practice but shorter social posts are easier to share.
6. Use images to extend your message. The importance of images in social posts cannot be ignored. Images will catch the reader’s attention and can be used to extend your message. Images should be carefully chosen to reflect the message. You can also take advantage of this extra space to add more text.
Like all writing, writing for social media is hard, but these principles will get you off to a good start.