Selecting a System

Last updated September 27, 2014.

FLO did not begin this project intending to adopt an open source system. Our goal was to find a robust and flexible ERM system that could be implemented in a consortial environment and work for individual member libraries with a variety of needs. Our open source journey was not about attempting to pinch pennies, though money will always be an issue for libraries, nor was it about a noble desire to better the world of software, although there was a certain intrigue about the open source movement. It was about bringing order to the chaos and finding the option that was best for all of us and for each of us. To understand our choices, we investigated the current literature, assessed needs, trialed systems, and re-evaluated our priorities and processes. From this selection process, we found a system that, although not perfect, was the best fit for us, and it just happened to be open source.

Reviewing the Literature and Assessing Needs

During the summer of 2011, the FLO office conducted a literature review. Some of us were not fully aware of electronic resource management or ERM systems, or what our own needs were in this area. The literature review was helpful in bringing us all to a shared understanding of our needs and the possibilities. It uncovered research on the functions and priorities of ERM systems, the environment of tools and services that could interact with an ERM system, and the published standards and guidelines relating to ERM systems. In short, the literature review gave us common ground from which to move forward.

The literature review also compiled information about the ERM systems that were available. This part of the review included a basic appraisal of these systems, their functionality, integration with existing products, release dates, market performance, costs, and hosting options – all of which would help us determine if a product was worth further examination.

In addition to the literature review, a very informal survey was conducted among the FLO libraries to assess our current practices and desired functionality. It confirmed an array of responsibilities and an assortment of ERM tools and communication methods – all of which led to misinformation, duplicate data entry, and other inefficiencies. It was, in short, chaos. To get a sense of what we thought we wanted from an ERM system, survey respondents identified specific ERM functionalities that were not necessarily common to all ERM systems but were considered important to us.

After the literature review and survey results were shared with the member libraries and we felt there was a common baseline understanding, we distributed a more elaborate survey in the fall of 2011. It was intended to drill more deeply into our ERM needs, to expand discussions on the topic, and to include more staff at member libraries. The survey explored more detailed local ERM practices, satisfaction with current systems and methods, and plans for managing future resources. The responses did not change much from the earlier survey, but the process led to deeper discussions with a greater number of people, confirmed that we should move forward with this project, and helped FLO identify what our community ultimately wanted in an ERM system. We agreed that the system should:

  • Handle multiple types of e-resources, through multiple platforms and from multiple vendors
  • Centralize and collect all e-resource data for the consortium and for individual libraries
  • Eliminate duplicate or triplicate data entry
  • Allow for a standardized workflow for each individual library
  • Have an easy-to-use interface

One very significant question was also asked in this survey: Would you be interested in working collaboratively within FLO to create a shared ERM system? All the individual libraries said yes. This ERM journey would not have happened without the resounding interest and agreement from the members. As you would expect in a consortial environment with different staffing levels, different specific needs, and different processes, members also agreed that the system must be customizable for each member library.

The literature review and assessment process benefitted the member libraries in several ways. Staff from different libraries came to the topic from their individual perspectives, carrying the unique make-up and internal practices of their libraries. Looking at the issue on a larger scale, getting a bird’s-eye view of the different needs, and figuring out how everyone could work together helped ground and direct the search for an ERM solution that was right for us.

Concurrent Trials

Now we were ready to select and trial systems to understand how they actually worked and how we could work with them. A trial would also give us a better idea of where expectations and actual practices did or did not meet. For the trial we chose EBSCO’s ERM Essentials and CORAL.

EBSCO’s ERM Essentials appeared to satisfy many of our requirements. It contained many of the data fields we needed, along with options for customizable data fields. The system’s integration with current EBSCO products would help reduce the dreaded duplicate data entry. We had experience using EBSCO’s research databases and felt that EBSCO’s interfaces could be user-friendly and appealing. We were also familiar with and satisfied with their product support.

CORAL, developed by the University of Notre Dame, was the other choice. Immediately we saw an easy-to-use and pleasing interface. From our literature review, it appeared capable of handling a variety of e-resource types. CORAL is a cloud-based, web-accessible system, built using a ubiquitous open source database and scripting language on an open source server application – MySQL and PHP on Apache. Furthermore, we thought that the system might be able to accommodate some sort of consortial setup.
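
To give a sense of what this stack involves, the short connectivity check below illustrates the kind of page a MySQL/PHP/Apache application depends on. It is a sketch only, not CORAL’s actual code, and the hostname, database name, and credentials are placeholders rather than FLO’s settings:

    <?php
    // Minimal connectivity check for a MySQL/PHP/Apache stack like the one CORAL runs on.
    // The hostname, database name, and credentials are illustrative placeholders only.
    $db = new mysqli('localhost', 'coral_user', 'secret', 'coral_sandbox');

    if ($db->connect_error) {
        // A failure here usually points at the database server or the credentials,
        // not at the application code Apache is serving.
        die('Connection failed: ' . $db->connect_error);
    }

    echo 'PHP ' . PHP_VERSION . ' connected to MySQL ' . $db->server_info;
    $db->close();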

Kelly Drake, Systems Librarian from the FLO office, and representatives from four member libraries formed the Electronic Resource Management Task Force to oversee the trials. The members of this group were:

  • Catherine Tuohy, Assistant Director for Technology and Technical Services from Emmanuel College
  • Ann Glannon, Associate Director and Collection Management Librarian from Wheelock College
  • Allyson Harper-Nixon, Library Services Specialist from Wheelock College
  • Louisa Choy, Digital Services Librarian from Wheelock College
  • Kathleen Berry, Systems/E-Services Librarian from Wentworth Institute of Technology
  • Marilyn Geller, Collection Management Librarian from Lesley University

Emmanuel College and Wentworth Institute of Technology volunteered to trial CORAL. Wheelock College volunteered to trial EBSCO ERM Essentials. To provide a neutral perspective, Marilyn Geller served as the observer.

To prepare for the trial, the Wheelock members received from EBSCO a comprehensive ERM Essentials field dictionary of definitions and uses for all the fields, worked with EBSCO to set up their instance (parts of which were pre-populated with data from their EBSCO purchases), and attended several training webinars.

To prepare for the CORAL trial, the FLO office staff created a sandbox and individual instances for Wentworth Institute and Emmanuel College. We downloaded documentation from the CORAL website. No formal training sessions were available, but existing documentation and the ability to explore the system together sufficed.

To evaluate the two systems side by side, users entered a wide range of data. There was no coordination in selecting resources to enter into each system, but we tried to identify common database setups as well as challenging ones. We wanted to see if the systems could handle multiple types of e-resources through multiple platforms from multiple vendors in the context of multiple scenarios. We entered data from both institution-specific resources and shared consortial subscriptions. After populating the systems, we were ready to explore different versions of the same functionalities and see their advantages and disadvantages.

Rubric Development and Evaluation

The task force developed a rubric (see Product Evaluation Matrix in the Appendix) to measure the capabilities of the two systems as well as the importance of the individual traits for the users’ workflows. We came up with a list of more than 30 traits that described our ideal system’s ERM functions as well as the system’s support, cost, maintenance, and future capacity. Capabilities were rated on a 1-4 scale: (1) unsatisfactory, (2) basic, (3) good, and (4) exemplary. Each rating for each trait had its own definition so that the differences among the ratings would be clear.

To measure how important each trait was within the workflow, we used a 1-4 scale, with 1 being not important and 4 being essential. The weights from each person were averaged to figure out which traits were high priority.

After identifying the most important capabilities, we then focused on how well each system managed those tasks.
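
In other words, the evaluation ran in two passes: average each trait’s importance across reviewers to surface the high-priority traits, then line up the two systems’ ratings on just those traits. The short PHP sketch below shows that arithmetic; the trait names, weights, ratings, and the 3.0 cutoff are invented for illustration and are not figures from our Product Evaluation Matrix:

    <?php
    // Illustrative only: the trait names, weights, and ratings below are invented,
    // not data from the actual Product Evaluation Matrix.

    // Importance weights (1 = not important ... 4 = essential) from three reviewers.
    $importance = [
        'handles multiple e-resource types' => [4, 4, 3],
        'ease of use'                       => [4, 3, 4],
        'vendor-provided training'          => [2, 1, 2],
    ];

    // Capability ratings (1 = unsatisfactory ... 4 = exemplary) for each system.
    $ratings = [
        'handles multiple e-resource types' => ['System A' => 4, 'System B' => 2],
        'ease of use'                       => ['System A' => 4, 'System B' => 2],
        'vendor-provided training'          => ['System A' => 1, 'System B' => 4],
    ];

    $highPriorityCutoff = 3.0; // traits averaging at or above this count as high priority

    foreach ($importance as $trait => $weights) {
        // Average each trait's importance across reviewers.
        $avgWeight = array_sum($weights) / count($weights);
        if ($avgWeight < $highPriorityCutoff) {
            continue; // low-priority traits drop out of the side-by-side comparison
        }
        // Compare how well each system handles the traits that matter most.
        printf("%s (avg. importance %.1f):\n", $trait, $avgWeight);
        foreach ($ratings[$trait] as $system => $score) {
            printf("  %s rated %d\n", $system, $score);
        }
    }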

However, at this point we realized that comparison of some of our most important capabilities was not parallel due to the differences between commercial and open source systems. Obviously, we could not evaluate vendor training, support, and professionally produced documentation for an open source product. We noted that the ability to work directly with the system code and database was an open source advantage that was more limited in a vendor-supported system. Another open source difference in the evaluation was the nature of the community using the software and the ability of our organizations both to participate in that community and to host the software.

We looked at CORAL’s infrastructure, development process, and support systems. The software had been out for about one and a half years at that point. There were about 40 sites – a mix of small and large institutions – that were using CORAL. The size of the community was somewhat small, but large enough to find a few people who were willing to share and contribute developments to the software. The community’s small size seemed like an advantage for us, since we wanted to consider playing a role in CORAL development; a larger community might not be interested in the contributions of our small consortium. We assumed that, since CORAL was presided over by a governance group of four larger institutions, the group had the resources to continue improving the software. The governance group appeared to be responsible for planning, decision-making, and development. Enhancement requests and bug fixes were also managed by this group, who would vote on which issues should be given priority. In addition to a website that offered documentation, a demo system, and a message board, the governance group maintained and participated in a listserv for discussions and product updates. Reading the listserv messages suggested that improvements to CORAL were being put forward and shared.

The ERM Task Force also spent time considering the FLO office staff’s capacity to host and support an open source system. The FLO office had Kelly Drake, who was both part of the ERM Task Force and familiar with the code and the scripting languages. Installing and upgrading software was relatively straightforward; space requirements were minimal.  The ERM Task Force felt confident they could provide training and documentation.

The task force came back together after two months of trialing, and each group demoed its respective system, pointing out strengths and weaknesses and showing how the system measured up against our highest priorities using the Product Evaluation Matrix. Our evaluation of the CORAL open source community and of FLO’s resources was both informal and unsophisticated, but in effect we had begun building a separate metric for evaluating open source-specific factors and had a growing list of characteristics and issues that affected our decision.

We agreed that despite the impressive power and granularity of EBSCO ERM Essentials, and its ability to provide separate instances for each library, the system was not easy to learn and was too limited in its customizability for our purposes. Its dependency on EBSCO’s link resolver in the FLO environment, where we use Ex Libris’ SFX, would have meant maintaining a duplicate second knowledgebase, and being limited to only what EBSCO included in their knowledgebase did not allow for multiple types of e-resources. Ultimately, the system did not accommodate our needs at the time.

CORAL’s interface was straightforward and intuitive to navigate. It had all the basic functionality and could also accommodate multiple types of e-resources, through multiple platforms and from multiple vendors. The system was built modularly with interconnected parts, so it could be used in part or in whole, and it reduced the need for duplicate data entry. Within the system’s administrative functions, there was room for customization. In addition, when we looked at the CORAL community, we found potential advantages. Having been developed by the libraries at the University of Notre Dame, the system seemed likely to continue reflecting the needs, priorities, and limitations of academic libraries. The open source code made it possible to consider designing advanced customizations without getting vendor approval or waiting for someone else. The FLO office staff felt capable of providing the necessary hosting and support services. CORAL appealed to us as a good fit for our present and potential needs.

After the trials, the review of the CORAL community, and some internal soul searching, the Electronic Resource Management Task Force presented our findings from the entire ERM project to the larger FLO community. The findings were well received, and we agreed to begin the next stage: implementation.