When the Open Source System is the Best System

Last updated October 17, 2014.

Fenway Libraries Online Open Source Case Study

By Kelly Drake, Systems Librarian, Fenway Libraries Online Office
Marilyn Geller, Collection Management Librarian, Lesley University
Erin Wentz, Electronic Resources Librarian, Massachusetts College of Pharmacy and Health Sciences University
Louisa Choy, Digital Services Librarian, Wheelock College

In the spring of 2011, something that had been festering for many years finally broke out into full chaos. Slowly, imperceptibly at first, insidiously as time marched on, paper files, Excel spreadsheets, random emails, and home-grown databases had been growing, massing, and spreading information about library electronic resource holdings throughout the offices of the 10 academic and special collections libraries that make up Fenway Libraries Online (FLO: www.flo.org/members). Discovering who had paid for what, what permissions applied to a resource, when a database subscription expired, and other information associated with an electronic resource had become extremely time-consuming and confusing, both in each library and in the consortium office. It was clear that something had to be done to manage these resources more efficiently.

What followed was a three-year odyssey into the world of electronic resource management (ERM) and open source systems (OSS). Led by the FLO office, the Electronic Resource Management Task Force, and eventually the FLO CORAL Development Committee (FCDC), members of the FLO community participated in system trials, product evaluation, implementation and enhancements, all in the pursuit of bringing order to the chaos that is electronic resource management. Along the way, we also learned a great deal about evaluating, implementing and supporting open source software and about participating in open source communities.

Selecting a System

FLO did not begin this project intending to adopt an open source system. Our goal was to find a robust and flexible ERM system that could be implemented in a consortial environment and work for individual member libraries with a variety of needs. Our open source journey was not about attempting to pinch pennies, though money will always be an issue for libraries, and neither was it about a noble desire to better the world of software, although there was a certain intrigue about the open source movement. It was about bringing order to the chaos and finding the option that was best for all of us and for each of us. In order to understand our choices, we investigated the current literature, assessed needs, trialed systems, and re-evaluated our priorities and processes. From this selection process, we found a system that was the best fit for us although not perfect, and it just happened to be open source.

Reviewing the Literature and Assessing Needs

During the summer of 2011, the FLO office conducted a literature review. Some of us were not fully aware of electronic resource management or ERM systems, or what our own needs were in this area. The literature review was helpful in bringing us all to a shared understanding of our needs and the possibilities. It uncovered research on the functions and priorities of ERM systems, the environment of tools and services that could interact with an ERM system, and the published standards and guidelines relating to ERM systems. In short, the literature review gave us common ground from which to move forward.

The literature review also compiled information about the ERM systems that were available. This part of the review included a basic appraisal of these systems, their functionality, integration with existing products, release dates, market performance, costs, and hosting options – all of which would help us determine if a product was worth further examination.

In addition to the literature review, a very informal survey was conducted among the FLO libraries to assess our current practices and desired functionality. It confirmed an array of responsibilities and an assortment of ERM tools and communication methods – all of which led to misinformation, duplicate data entry, and other inefficiencies. It was, in short, chaos. To get a sense of what we thought we wanted from an ERM system, survey respondents also identified specific ERM functionalities that were not necessarily common to all ERM systems but were considered important to us.

After the literature review and survey results were shared with the member libraries and we felt there was a common baseline understanding, we distributed a more elaborate survey in the fall of 2011. It was intended to drill more deeply into our ERM needs, to expand discussions on the topic, and to include more staff at member libraries. The survey explored more detailed local ERM practices, satisfaction with current systems and methods, and plans for managing future resources. The responses did not differ much from the earlier survey, but they led to deeper discussions with a greater number of people, confirmed that we should move forward with this project, and helped FLO identify what our community ultimately wanted in an ERM system. We agreed that the system should:

  • Handle multiple types of e-resources, through multiple platforms and from multiple vendors
  • Centralize and collect all e-resource data for the consortium and for individual libraries
  • Eliminate duplicate or triplicate data entry
  • Allow for a standardized workflow for each individual library
  • Have an easy-to-use interface

One very significant question was also asked in this survey: Would you be interested in working collaboratively within FLO to create a shared ERM system? All the individual libraries said yes. This ERM journey would not have happened without the resounding interest and agreement from the members.  As you would expect in a consortial environment with different staffing levels, different specific needs and different processes, members also agreed that the system must be customizable for each member library.

The literature review and assessment process benefitted the member libraries in several ways. Staff from different libraries came to the topic from their individual perspectives, carrying the unique make-up and internal practices of their libraries. Looking at the issue on a larger scale, getting a bird’s eye view of the different needs, and figuring out how everyone could work together helped ground and direct the search for an ERM solution that was right for us.

Concurrent Trials

Now we were ready to select and trial systems to understand how they actually worked and how we could work with them. A trial would also give us a better idea of where expectations and actual practice did or did not align. For the trial we chose EBSCO’s ERM Essentials and CORAL.

EBSCO’s ERM Essentials appeared to satisfy many of our requirements. It contained many of the data fields that were needed with options for customizable data fields. The system’s integration with current EBSCO products would help reduce the dreaded data entry. We had experience using EBSCO’s research databases and felt that EBSCO’s interfaces could be user-friendly and appealing. We were also familiar with and satisfied with their product support. 

CORAL, developed by the University of Notre Dame, was the other choice. Immediately we saw an easy-to-use and pleasing interface. From our literature review, it appeared capable of handling a variety of e-resource types. CORAL is a cloud-based and web-accessible system. It is built using a ubiquitous open source database and scripting language on an open source server application – MySQL and PHP on Apache. Furthermore, we thought that the system might be able to accommodate some sort of consortial setup.

Kelly Drake, Systems Librarian from the FLO office, and representatives from four member libraries formed the Electronic Resource Management Task Force to oversee the trials. The members of this group were:

  • Catherine Tuohy, Assistant Director for Technology and Technical Services from Emmanuel College
  • Ann Glannon, Associate Director and Collection Management Librarian from Wheelock College
  • Allyson Harper-Nixon, Library Services Specialist from Wheelock College
  • Louisa Choy, Digital Services Librarian from Wheelock College
  • Kathleen Berry, Systems/E-Services Librarian from Wentworth Institute of Technology
  • Marilyn Geller, Collection Management Librarian from Lesley University

Emmanuel College and Wentworth Institute of Technology volunteered to trial CORAL. Wheelock College volunteered to trial EBSCO ERM Essentials. To provide a neutral perspective, Marilyn Geller served as the observer.

To prepare for the trial, the Wheelock members received from EBSCO a comprehensive ERM Essentials field dictionary of definitions and uses for all the fields, worked with EBSCO to set up their instance (parts of which were pre-populated with data from their EBSCO purchases), and attended several training webinars.

To prepare for the CORAL trial, the FLO office staff created a sandbox and individual instances for Wentworth Institute and Emmanuel College. We downloaded documentation from the CORAL website. No formal training sessions were available, but existing documentation and the ability to explore the system together sufficed.

To evaluate the two systems side by side, users entered a wide range of data. There was no coordination in selecting resources to enter into each system, but we tried to identify common database setups as well as challenging ones. We wanted to see if the systems could handle multiple types of e-resources through multiple platforms from multiple vendors in the context of multiple scenarios. We used data from both institution-specific resources and shared, consortial subscriptions, all of which were entered into the systems.  After populating the systems, we were ready to explore different versions of the same functionalities and see their advantages and disadvantages.

Rubric Development and Evaluation

The task force developed a rubric (see Product Evaluation Matrix in the Appendix) to measure the capabilities of the two systems as well as the importance of the individual traits in users’ workflows. We came up with a list of more than 30 traits that described our ideal system’s ERM functions as well as the system’s support, cost, maintenance, and future capacity. Capabilities were rated on a 1-4 scale: (1) unsatisfactory, (2) basic, (3) good, and (4) exemplary. Each rating for each trait had its own definition so that the differences among the ratings would be clear.

To measure how important each trait was within the workflow, we used a 1-4 scale, with 1 being not important and 4 being essential. The weights from each person were averaged to identify the high-priority traits.
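
To make the weighting concrete, the following is a minimal sketch of how such a tally could be computed. The trait names, votes, and capability ratings are invented for illustration and are not FLO's actual data.

```python
# A minimal sketch of tallying a rubric like ours.
# Trait names, votes, ratings, and the priority threshold are all hypothetical.
from statistics import mean

# Importance weights (1 = not important ... 4 = essential) from each evaluator.
importance_votes = {
    "handles multiple e-resource types": [4, 4, 3, 4],
    "customizable data fields":          [3, 4, 4, 3],
    "easy-to-use interface":             [4, 3, 4, 4],
    "vendor/community support":          [2, 3, 2, 3],
}

# Capability ratings (1 = unsatisfactory ... 4 = exemplary) for each trialed system.
capability = {
    "System A": {"handles multiple e-resource types": 4, "customizable data fields": 3,
                 "easy-to-use interface": 4, "vendor/community support": 2},
    "System B": {"handles multiple e-resource types": 2, "customizable data fields": 3,
                 "easy-to-use interface": 3, "vendor/community support": 4},
}

# Average the importance votes, then focus the comparison on high-priority traits.
avg_importance = {trait: mean(votes) for trait, votes in importance_votes.items()}
high_priority = [t for t, w in avg_importance.items() if w >= 3.5]

for system, ratings in capability.items():
    score = sum(ratings[t] for t in high_priority)
    print(f"{system}: {score} across {len(high_priority)} high-priority traits")
```

In practice, the rubric's trait definitions did the real work; the arithmetic simply surfaced which traits deserved the closest comparison.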

After identifying the most important capabilities, we then focused on how well each system managed those tasks.

However, at this point, we realized that comparison of some of our most important capabilities was not parallel due to the differences between commercial and open source systems. Obviously, we could not evaluate vendor training, support, or professionally produced documentation for an open source product. We noted that the ability to work directly with the system code and database was an open source advantage that was more limited in a vendor-supported system. Another difference in evaluating open source was the nature of the community using the software and the ability of our organizations both to participate in that community and to host the software.

We looked at CORAL’s infrastructure, development process, and support systems. The software had been out for about one and a half years at that point. There were about 40 sites – a mix of small and large institutions – that were using CORAL.  The size of the community was somewhat small, but large enough to find a few people who were willing to share and contribute developments to the software. The community’s small size seemed like an advantage for us since we wanted to consider playing a role in CORAL development. A larger community might not be interested in the contributions of our small consortium. We assumed that since CORAL was presided over by a governance group of four larger institutions they had the resources to continue improving the software. The governance group appeared to be responsible for planning, decision-making, and development. Enhancement requests and bug fixes were also managed by this group, who would vote on which issues should be given priority. In addition to a website that offered documentation, a demo system, and a message board, the governance group maintained and participated in a listserv for discussions and product updates.  Reading the listserv messages suggested that improvements to CORAL were being put forward and shared.

The ERM Task Force also spent time considering the FLO office staff’s capacity to host and support an open source system. The FLO office had Kelly Drake, who was both part of the ERM Task Force and familiar with the code and the scripting languages. Installing and upgrading software was relatively straightforward; space requirements were minimal.  The ERM Task Force felt confident they could provide training and documentation.

The task force came back together after two months of trialing, and each group demoed its respective system, pointing out weaknesses and strengths and showing how the systems measured up against our highest priorities using the Product Evaluation Matrix. Our evaluation of the CORAL Open Source Community and of FLO’s resources was both informal and unsophisticated, but effectively we had begun building a separate metric for evaluating open source-specific factors and had a growing list of characteristics and issues that affected our decision.

We agreed that despite the impressive power and granularity of EBSCO ERM Essentials, and its ability to provide separate instances for each library, the system was not easy to learn and was too limited in its customizability for our purposes. Its dependency on EBSCO’s link resolver, in a FLO environment where we use Exlibris’ SFX, would have meant maintaining a duplicate knowledgebase, and being limited to only what EBSCO included in their knowledgebase did not allow for multiple types of e-resources. Ultimately, the system did not accommodate our needs at the time.

CORAL‘s interface was straightforward and intuitive to navigate. It had all the basic functionality and could also accommodate multiple types of e-resources, through multiple platforms and from multiple vendors. The system was built modularly with interconnected parts, so it could be used in part or completely and reduced the need for duplicate data entry. Within the system’s administrative functions, there was room for customization. And in addition, when we looked at the CORAL community, we found potential advantages. Having been developed by the libraries at the University of Notre Dame, the system would continue to meet the needs, priorities, and limitations of academic libraries. The open source code made it possible to consider designing advanced customizations without getting vendor approval or waiting for someone else. The FLO Office staff felt capable of providing the necessary hosting and support services. CORAL appealed to us as a good fit for our present and potential needs.

After the trials, the review of the CORAL community and some internal soul searching, the Electronic Resource Management Task Force presented our findings from the entire ERM project to the larger FLO community. It was well received, and we agreed to begin the next stage: implementation.

Implementing the System

After discussing implementation options, the four libraries that had participated in the initial evaluations agreed to get started right away, working with the FLO staff to begin the implementation process since we already had some familiarity with the system. The idea was to learn more about the system and train the other libraries once we had a thorough understanding.

Phase One: From Trial to Early Adopters

The initial group knew something about what features existed and had seen the system in action, whether as trial participants or evaluators. The group attempted to share one consortium-wide instance, but that idea quickly proved problematic due to the sensitive nature of information such as logins, costs, and variant workflows. At the same time, we also shared some resources that the FLO Office staff administered, and we wanted to eliminate the need for duplicate data entry. As illustrated below, the current strategy uses a hybrid method for recording consortium-wide and library-specific information: the FLO staff maintains one consortial instance of CORAL that feeds shared information to the library-specific CORAL instances. As a group, we continued to use the sandbox to test different options for problematic situations. We could easily compare several methods for accomplishing a single task.

While continuing to populate individual instances, we also created shared, customized documentation, unlike vendor-supplied manuals. Librarians at each site entered local data into the institution’s unique instance and met regularly to review those experiences. We used our community documentation site to share confusing experiences and document areas that benefited from official policy. This online documentation acted both as a means of communication between meetings and as an agenda for those meetings. When several people found different solutions to accomplish the same goal or when one of us came across unique circumstances that challenged us, we had long discussions online, on the phone, or in person to work out preferred solutions. For example, group members agreed upon a method to distinguish between consortially acquired eBooks from one company and eBooks that an individual library independently acquired from the same company. FLO staff members recorded these decisions for easy reference later. The initial trial period also included the development of a common Field Dictionary, and we were able to expand this during subsequent use. Most new elements represented additions to the system, such as the elaboration of the roles an organization could play in the e-resource chain. Other elements clarified terminology. In some cases, individuals were using different terms for the same idea, while in other cases individuals were using the same word to mean different things.   All of this clarity of communication led to common understandings, and best practices.

Phase One implementation was completed in January 2013, even as we continue to build shared documentation, best practices and additional functionality.

Phase Two: Mentoring the Next Libraries

By the end of Phase One, we had several individual instances of CORAL well on the way to being fully populated. We had a consortial instance that included information about organizations and resources shared by all, and we had communal documentation based on common understandings. We also had a group of experienced users ready to become mentors.

Phase Two began in earnest in February 2013 for the remainder of the FLO libraries including Emerson College, Massachusetts College of Art and Design, Massachusetts College of Pharmacy and Health Sciences, Museum of Fine Arts, New England College of Optometry, and New England Conservatory. In the absence of vendor support, this first group trained and served as consultants to the second group of FLO Libraries. The early adopters and the FLO staff members created a series of in-depth training sessions, complete with assignments, to help the second phase participants think through each of the implementation steps. In the initial meeting, early adopters covered reasons CORAL might be useful to the remaining libraries and gave a broad overview of the system to bring everyone to a shared understanding. In subsequent meetings, the first group and FLO staff members created specific topical training sessions on each piece of the system, and provided sandboxes for each institution. Each training session included summaries of the best practices some of us had worked out as well as thoughtful discussions about how those practices might be modified in the future. The early adopters shared recommendations about overall implementation strategies and detailed information about how to use certain features. These helpful hints provided a clear path for each new library.

The early adopters also warned the new participants about system idiosyncrasies, and introduced them to the bug list on the CORAL GitHub website. For example, early adopters explained how adding an item in the licensing module before creating a corresponding entry in the organizations module would create ghost entries and showed them where this was found in the GitHub list. Paid company trainers might not have been so frank about these types of quirks, and vendors often do not supply an easily accessible bug list.
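
As a concrete illustration of the kind of cleanup this quirk invites, the following is a rough sketch of a consistency check for licensing entries with no matching organization record. It is only a sketch: the connection details and the table and column names (Document, Organization, organizationID) are assumptions for illustration, not CORAL's documented schema.

```python
# A rough sketch of a check for "ghost" licensing entries whose organization ID
# does not resolve to an entry in the organizations module.
# Database, table, and column names are hypothetical.
import pymysql

conn = pymysql.connect(host="localhost", user="coral",
                       password="secret", database="coral_licensing")
try:
    with conn.cursor() as cur:
        cur.execute("""
            SELECT d.documentID, d.shortName
            FROM Document d
            LEFT JOIN coral_organizations.Organization o
                   ON o.organizationID = d.organizationID
            WHERE o.organizationID IS NULL
        """)
        for document_id, short_name in cur.fetchall():
            print(f"Possible ghost entry: document {document_id} ({short_name})")
finally:
    conn.close()
```

Running something like this in a sandbox would flag ghost entries before they confuse day-to-day work.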

Throughout the training sessions, we also discussed mapping each of our existing workflows to the CORAL system. Follow-up exercises reinforced the in-person training. When new users had questions, the early adopters guided the questioner through potential solutions or helped review the documentation. As hard as this is to imagine, some of these questions were about issues the early adopters had not yet encountered. In those cases, both groups devised new strategies together and incorporated the decisions into the documentation, and our Phase Two adopters became contributors to the growing documentation collection. The FLO office and the early adopters combined practicality with philosophical considerations while developing locally specific training sessions.

At the end of the training, each Phase Two library was given its own live instance, complete with consortium-wide data, and could begin entering actual data. In addition, FLO staff members eased the implementation process for a few Phase Two participants by transferring content from older SQL-based systems into the appropriate CORAL instances. Phase Two librarians were ready to run with the system.
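
For a sense of what such a transfer involves, here is a simplified sketch of a one-time migration script. Every table and column name, on both the legacy and CORAL sides, is hypothetical, and a real migration would need to map many more fields.

```python
# A simplified, hypothetical sketch of moving records from a legacy SQL
# database into a CORAL instance. All schema details are invented.
import pymysql

legacy = pymysql.connect(host="localhost", user="flo", password="secret",
                         database="legacy_erm")
coral = pymysql.connect(host="localhost", user="flo", password="secret",
                        database="coral_resources")
try:
    with legacy.cursor() as src, coral.cursor() as dst:
        src.execute("SELECT title, url, provider, start_date FROM eresources")
        for title, url, provider, start_date in src.fetchall():
            # Insert each legacy record as a new CORAL resource row.
            dst.execute(
                "INSERT INTO Resource (titleText, descriptionText, createDate) "
                "VALUES (%s, %s, %s)",
                (title, f"{provider} - {url}", start_date),
            )
    coral.commit()
finally:
    legacy.close()
    coral.close()
```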

Member libraries have progressed since then at different rates.   Some libraries are still transitioning from previous electronic resource management strategies to sole reliance on CORAL. Many are using CORAL in conjunction with other tools, and a small number have made it a priority to implement CORAL more fully in the near future.   Although Phase Two of the implementation has officially ended, for many of us, CORAL adoption is a work in progress. As a benefit of participating in Phase One, the early adopters also had the advantage of being able to retrieve e-resource information more quickly since it was already stored in individual CORAL instances. A few libraries are now exclusively using CORAL and have left previous management and tracking methods behind.

Phase Three: Sharing the Wealth

Those of us who were early adopters were truly local experts. We knew the CORAL system as implementers and mentors. A proprietary vendor’s support staff, on the other hand, would have simply applied general knowledge about that system’s overall functionality; their knowledge would not have reflected local choices. We also knew the Phase Two participants as colleagues with whom we already shared systems and resources. Through previous experience on a number of consortial committees, librarians in both groups had already established relationships with each other. The early adopters had background knowledge that enabled them to anticipate specific individuals’ concerns and interests. This familiarity made it easy to tailor training sessions for, and respond to questions from, the Phase Two participants. The early adopters acted as dedicated mentors who could often respond more quickly than vendor support staff, who tend to represent a large number of customers on an ad-hoc basis. This internal support was a tremendous advantage to those of us who were Phase Two participants. The mentorship role did not create an unwelcome burden for the early adopters; the additional time it required had the benefit of strengthening existing relationships.

Because CORAL is open source software with no restriction on the number of instances we can run, we had the freedom to use sandboxes in a variety of ways. Sandboxes were easy to build and did not require significant time from the FLO staff. The sandbox provided a stress-free environment for participants to explore the system, experiment, and apply new knowledge.

Using an open source system has been beneficial in allowing consortium members to implement the system on their own schedules, but detrimental in that there is no external vendor pressure to complete training and implementation. Getting an electronic resource management system up and running is a large project, whether the ERM system (ERMS) is proprietary or open source. If FLO had chosen a proprietary ERM system, librarians at each member institution would have faced a similar amount of work. We would still have needed to decide how to make use of such a system, and we would have needed to expend the same amount of time and effort to enter the institutions’ data. However, we might have had to do all this on a vendor’s schedule instead of our own. The downside of setting our own pace is that the lack of urgency may prolong the process.

The relationships among participating librarians that developed during this project remain strong. We now know even more about each other’s responsibilities and workflows.   These relationships facilitate – and are facilitated by – the practical aspect of maintaining CORAL. We have regularly scheduled meetings where consortium-wide concerns are discussed. We supplement those in-person meetings with messages on the internal listserv. Both the in-person meetings and the virtual conversations also allow participants to share new issues and new ways to extend the system. The CORAL implementation project strengthened bonds among FLO members and gave all of us a strong foundation for understanding the larger ERM environment.

We tend to rely on each other first, rather than on the wider CORAL Community for most questions. This inclination comes in part from the strength of the relationships that have organically grown as we worked out shared and local practices. Rather than send a question to the larger CORAL group, we send questions to each other because we know FLO’s customizations. In effect, FLO has developed our own internal community that we use before going to the larger body of CORAL users.

We also realize that we need to participate more in the larger CORAL community. However, while a few individuals in that community actively respond to questions and comments, the number of CORAL listserv subscribers tells us that many others do not. CORAL discussions are dispersed across GitHub forums, the listserv, and a few other locations. It takes effort to track all of the conversations, as no single platform is definitive. This lack of a centralized place for interaction also contributes to our habit of keeping conversations internal to FLO.

At the same time, FLO members have extended some support to others outside of the consortium as we continue to learn how best to participate in open source communities.  FLO collaborated with other institutions to transform the locally developed field dictionary into a CORAL glossary. A few librarians volunteered to update the public version on GitHub as necessary. FLO members responded, and continue to respond, to messages on the wider CORAL listserv to offer advice to others who are considering CORAL or who have questions.  

Development: From Users to Contributors

The FLO CORAL Development Committee

As happy as FLO was with CORAL, there was still room for improvement in the software. There was also an opportunity to participate in open source system development and to learn about the resources required to truly contribute to open source software. FLO had already been using open source software for a number of years, including Drupal for the content management system, SubjectsPlus for the library guides, IR+ as an institutional repository system, and now CORAL, but all of our activities with these software packages were limited to downloading, installing, and implementing. FLO had yet to significantly engage with and contribute to any of these open source communities.

It was with this idea of exploring the process of contributing in mind that a group of FLO’s dedicated CORAL users met in August 2013. The FLO CORAL Development Committee (FCDC) consisted of seven members from five of the FLO libraries and two members from the FLO office. In addition to the original seven members of the Electronic Resource Management Task Force, we were joined by Erin Wentz, Assistant Professor and Electronic Resources Librarian, Massachusetts College of Pharmacy and Health Sciences University (MCPHS) and Adam Shire, Member Services Librarian, FLO. The purpose of the committee was to create software specifications – sets of design and technical requirements for our proposed enhancements – and to contribute code to the software base; our goal was to be contributing citizens of an open source community. Our plan was to:

  • Create a process within FLO for enhancement candidate nomination and selection;
  • Select and contribute small enhancements: those that required only minor changes to the software code;
  • Select and write a specification for a large enhancement, with the idea of funding code and database development that would provide a significant improvement or extension of the system’s functionality.

Enhancement Candidate Nomination and Selection

The first task on FCDC’s agenda was the nomination and selection of enhancement projects for development. There was an existing list of ideas for improvements to software that we had generated throughout the implementation process. In addition, we had noted a number of enhancement requests on the CORAL listserv. Some of these had not necessarily been posted as requests but instead as functional questions, such as “how are users tracking the cost history of a resource?” Using these questions, our existing list, as well as some new contributions, the team created a spreadsheet with the nominations, brief descriptions of the proposed functions and the affected modules. 

Once we had listed all of the possible candidates for enhancement, each team member independently rated the importance of the proposed enhancement on a scale of 1 to 5, with 5 being most important. We averaged the ratings so that each enhancement nomination received one score for importance. Based on the FLO system librarian’s knowledge of the code and database, the perceived difficulty of the enhancement was also rated. For instance, enhancements that only affected the display code were rated as easy or a “1”, while those requiring code changes to several pages as well as database modification were rated as hard or a “5”.

Next, we split the enhancement nominations into two groups: the small enhancements and the large enhancements. Small enhancements, those rated less than a 3 on perceived difficulty, were those that might require only a few lines of code in one module, such as making windows larger. Larger enhancements were defined as those that would take more coding and possibly involve more than one module, and received a rating of 4 or 5. The top five small enhancements were: enable wildcard searching in the “Funds” field; fix the Terms Tools bug so that linking works when related SFX public target names contain a space; change “Name” label; hyperlink the Login URL field; and make edit windows large. The most important large enhancement was the Cost History and Cost Reporting functionality.
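
The following is a small, illustrative sketch of the triage described above: average the importance votes, attach a difficulty estimate, and split the nominations into small and large enhancements. The votes and difficulty values shown are invented, not FCDC's actual figures.

```python
# Illustrative triage of enhancement nominations. All values are made up.
from statistics import mean

nominations = {
    # name: (importance votes on a 1-5 scale, estimated difficulty 1-5)
    "wildcard searching in the Funds field": ([5, 4, 5, 4], 2),
    "hyperlink the Login URL field":         ([4, 4, 3, 5], 1),
    "make edit windows larger":              ([4, 3, 4, 4], 1),
    "cost history and cost reporting":       ([5, 5, 5, 5], 5),
}

small, large = [], []
for name, (votes, difficulty) in nominations.items():
    importance = mean(votes)
    # Difficulty below 3 counts as a small enhancement; the rest are large.
    (small if difficulty < 3 else large).append((importance, name))

# Highest-importance items first within each group.
for label, group in (("Small", small), ("Large", large)):
    for importance, name in sorted(group, reverse=True):
        print(f"{label}: {name} (avg importance {importance:.1f})")
```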

Writing the Large Enhancement Specification

In November 2013, FCDC was ready to start building a specification document for the Cost History and Cost Reporting functions. Not only had FCDC rated this enhancement as its most important, it was also a recurring request on the CORAL listserv. In the production system, some cost history functionality was possible by co-opting another field and manipulating the input data, but that solution still did not provide a method for fully recording the cost history of a resource, nor did it provide a means for reporting on the history that was collected.

As a first step in the cost history enhancement project, Kelly Drake notified the CORAL Governance Committee via Benjamin Heet, Electronic Resources Librarian, North Carolina State University, of our plan to develop this aspect of the ERMS and inquired about the process for getting code included in the software. As mentioned previously, our intent was not only to improve functionality for our own use, but also to write code that could be contributed back to the CORAL community. The inquiry was well received, and in his reply, Ben confirmed that the cost history functionality was very much in demand. He also forwarded a copy of a Cost History Specification that had been written by Consortium Luxembourg, a group of libraries in Luxembourg who were actively using CORAL for their ERMS.

With encouragement from the Governance Committee, the requests from the listserv and the Consortium Luxembourg specification in mind, FCDC began the process of exploring and developing the specification. FLO reviewed Consortium Luxembourg’s draft and deemed most of it suitable for our needs. FLO simplified the pricing information in the data entry section and added fields to indicate how that price was generated. We wanted to track, for example, that the pricing of a resource in 2011 was based on the number of full-time equivalent chemistry students, but that in 2012, it was based on the number of full-time equivalent users in all departments. We also expanded the number of reports that could be generated using that data to improve collection development and budgetary decisions. In the four months between then and February 2014, we went through four iterative cycles of specification development and negotiations. Kelly Drake outlined various aspects of the functional requirements through emails, in-person discussion, or screen mockups, and the committee provided feedback, clarification, and alternative directions.
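
To make the shape of the enhancement concrete, below is a small sketch of the kind of record and report the Cost History and Cost Reporting functionality aims at: one priced entry per period, plus a note on how the price was generated. The field names and figures are invented for illustration and are not taken from the specification itself.

```python
# A sketch of cost-history data: one priced entry per subscription period,
# with a note on how that price was generated. Names and figures are invented.
from dataclasses import dataclass

@dataclass
class CostEntry:
    year: int
    price: float
    currency: str
    pricing_basis: str  # how the quote was calculated that year

history = [
    CostEntry(2011, 5200.00, "USD", "FTE chemistry students"),
    CostEntry(2012, 6100.00, "USD", "FTE users, all departments"),
]

# One of the reports the enhancement is meant to support: year-over-year change.
for prev, curr in zip(history, history[1:]):
    pct_change = (curr.price - prev.price) / prev.price * 100
    print(f"{prev.year} -> {curr.year}: {pct_change:+.1f}% "
          f"({prev.pricing_basis} -> {curr.pricing_basis})")
```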

Finalizing the Specification with the Community and Governance Committee

By February 2014, we felt we had completed the Cost History and Cost Reporting Specification and were ready to show the CORAL Community and the Governance Committee. Because the proposed enhancement was very much in line with requests outlined on the listserv and by the Consortium Luxembourg specification, FCDC felt that the Specification would be well received. The document itself was 90% complete, leaving room for additional changes and minor revisions prior to finalization. FCDC submitted the Specification to both the Governance Committee and the CORAL community via the listserv on February 19, 2014.

Listserv members responded with almost immediate and positive feedback and acceptance.

The Governance Committee had a number of questions and suggestions. In the CORAL environment, it is the Governance Committee that is responsible for changes to the code base. As such, they are tasked with understanding how the system is used in order to ensure proposed code changes will be consistent with the current code and in the best interest of the community. The Governance Committee is also more aware of the interrelation of the different modules or functions of the software and can provide input on best practices.

Some of the members were concerned that certain proposed changes would conflict with aspects of the functionality that we weren’t aware of. In response to our proposal, one Committee member polled the known users to determine actual usage. Based upon that feedback, the FCDC suggestion was accepted. The Governance Committee also suggested that the proposed Reporting function could be combined with the Statistics module, a suggestion that FCDC readily agreed to. After reviewing the functionality and negotiating alternatives for three months, from February to May, both FCDC and the Governance Committee felt the specification was complete.

Releasing the Request for Proposals

At this point in the FCDC process, we had completed our proposed goal of creating an enhancement specification. Once finalized, the software specification was posted to the larger CORAL community and other interested parties as a Request for Proposals. We hope to receive and review proposals, award a contract and begin the programming work shortly. When the programming is completed, we will share it with the CORAL community and begin testing and modifying as needed. We feel that we are well on our way to being proud contributing members of an open source community.

Lessons Learned

By the spring of 2014, FLO was three years into our ERMS Open Source project. Six of the ten libraries were happily and actively relying on CORAL for storage and retrieval of our ERM data. Three more libraries were in the initial input stages. A follow-up ERM survey confirmed that those libraries that had implemented CORAL were pleased with the software. Of those that hadn’t yet adopted it, most had plans to begin data input within the coming months. In addition, a software specification that would significantly improve the reporting functions of the system was completed and ready to go out for bid. The days of searching emails, calling random people and browsing multiple spreadsheets in hopes of discovering the password to a database’s administration page were clearly numbered. In the process, we learned many lessons regarding open source selection, implementation and software development.

Open Source Benefits

As we noted when discussing the trialing and implementation phases, FLO learned that open source software provides several advantages to its users. These include the ability to bring libraries onto the system using a phased implementation process, lack of upfront monetary costs associated with a vendor-supplied system, and absence of contract discussions and restrictions. We also noted an increased sense of job satisfaction and community building within our consortium.

An Evaluation Process

A major benefit of the process was the development of the Matrix for Selecting and Implementing Open Source Systems. The Matrix contains three metrics, each addressing one of three major areas of concern: the software or product, the open source community, and the implementing organization (see Appendix). We also learned that, while evaluation is necessary, the process is not a decision tree. There is no single right answer; users must continuously identify risks and work to minimize them.

The Product Evaluation Metric included a list of desired attributes (e.g., the product should do this or have that), an associated range of statuses from not developed to highly developed, and a weighting system to determine the importance of attributes.

As we began to participate more in the CORAL community, we noticed characteristics of the group that, while not specifically related to the system itself, clearly affected its performance and viability. This observation led us to develop the Open Source Community Evaluation Metric, which lists attributes that differ from those in the product metric in that they focus on who is doing development and support rather than on what work is being done. They describe the status, culture, and resources of the community. Unlike the product metric, the community evaluation did not lend itself to rating so much as to a recording of status. While the elements document traits on a “low” to “high” scale, there is no value judgment associated with them. No classification is either “good” or “bad”; each is judged based on how we perceive the various functions within the applicable community.

The development and rating of the open source community metric subsequently led us to begin a list of attributes that we could assess in our own individual libraries and in our consortial organization: the Organization Evaluation Metric. This metric can help identify the level of resources that organizations can bring to the open source project, such as the level of staff and administrative buy-in and the types of related expertise that reside within the organization, along with other important traits. The important takeaway from this metric is a baseline understanding of what the library, or the library and the consortium, can bring to the project.

Having developed the three lists of traits for product, community, and organization, we began seeing the evaluation as a three-step, iterative process. A good starting point is evaluating the product itself using the Product Evaluation Metric. If an organization determines that the product has potential value, its strengths and problem areas should be noted. There are currently 19 elements in this metric, and they may have varying levels of importance or relevance for each organization. Establishing at the outset which elements are most important, which are necessary but not crucial, and which are peripheral will help prioritize the results of this survey. Finding that a product scores poorly for an element that has been defined as peripheral has less impact than a poor rating for an important element. The process of ranking and weighting elements of this metric is not about deriving an overall score for the product; rather, it is about understanding which elements are important to improve and whether there is a general sense that the product will be good enough to make the improvement process worthwhile. The outcome of the Product Evaluation Metric should be an understanding of the product’s strengths and weaknesses and of which ones are most important to focus on first.

Next, the library, or the library and consortium together, should be evaluated using the Organization Evaluation Metric to determine what strengths can be brought to this project. This metric can help identify the level of resources available for the open source project. Used in conjunction with the Product Evaluation Metric, the picture of what needs work and who is or is not available to do that work becomes clearer. Organizational resources can change based on levels of staff and administrative buy-in; if, for example, there is enough administrative buy-in and enough staff interest, a low level of expertise can be overcome. The important takeaway from this evaluation is a baseline understanding of what the library, or the library and the consortium, can bring to the project.

In the final step, the open source community should be evaluated using the Open Source Community Evaluation Metric to determine how well the organization’s strengths match the community’s needs and how well the community’s strengths match the organization’s needs. Where there are mismatches, both the Product Evaluation Metric and the Organization Evaluation Metric should be re-evaluated to determine flexibility and willingness to accept the consequences of areas that don’t match. Variables that can work to neutralize a weakness should be noted. For example, where an organization decides that it can commit staff time but has little or no expertise, a community that offers enough support will be essential. Conversely, organizations that feel confident in their level of expertise may find that community support is not a relevant issue. For each variable on any one of the charts, there should be some response found in the other charts.
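
As a purely illustrative sketch of that cross-check, the snippet below pairs a hypothetical product weakness with the entries in the organization and community metrics that might offset it. The element names and statuses are invented; the real Matrix records statuses without scoring them.

```python
# An invented example of cross-checking the three metrics: each low-rated
# product element should be offset somewhere in the organization or community
# metric, otherwise it remains an open risk. All names and statuses are made up.
product = {"documentation": "low", "customizability": "high", "reporting": "low"}
organization = {"in-house expertise": "high", "staff time": "medium"}
community = {"listserv support": "medium", "active development": "high"}

# Hypothetical pairings between a product weakness and possible offsets.
offsets = {
    "documentation": [("organization", "in-house expertise"),
                      ("community", "listserv support")],
    "reporting":     [("organization", "in-house expertise"),
                      ("community", "active development")],
}

metrics = {"organization": organization, "community": community}
for element, status in product.items():
    if status != "low":
        continue
    mitigations = [f"{metric}: {name} ({metrics[metric][name]})"
                   for metric, name in offsets.get(element, [])
                   if metrics[metric][name] in ("medium", "high")]
    verdict = "offset by " + ", ".join(mitigations) if mitigations else "unresolved risk"
    print(f"Product weakness '{element}' -> {verdict}")
```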

Developing and using the Matrix for Selecting and Implementing Open Source Systems substantially aided our understanding of the issues we encountered, and the amount of resources we needed to commit to the CORAL project. Having gone through the FCDC experience, we came to understand how we can benefit from this information. We also learned the process of evaluation never ends. As long as your institution is using an open source product, continuous evaluation of its community and your organization’s capacity is necessary.

Open Source Costs

Of course, all that continuous evaluation uses staff resources – resources that, as we also learned in our trial, are already more heavily used in an open source environment. For all the value open source affords, we learned that open source projects also have a number of costs, primarily stemming from the time involved in the process.

During the trial and implementation phases, we invested a great deal of time learning the system. Training and creating documentation, processes that would not have been necessary in a vendor environment, also required significant staff resources. In addition, we also made, and continue to make, a conscious effort to communicate with the larger CORAL community, which although a small time investment, is extremely important. Unfortunately, we failed to quantify the additional hours for this phase of the project.

We did, however, track the amount of time spent developing the Cost History Specification. By our calculations, eight staff members invested as much as six hours per month for 10 months, for a total of 480 hours. During the specification building, we learned that illustrations and wireframes are more effective than written descriptions at communicating proposed changes, but they require significant time to create. Although we had approved textual descriptions of proposed changes to the resource history entry screen, it was only when committee members saw the actual layout that we were able to offer more helpful comments. The same was true during the discussion process with the Governance Committee. Seeing the proposed changes is a more effective means of conveying them than either a written or verbal description, but building wireframes and screenshot illustrations is a time-intensive process.

Once the specification change is illustrated or clearly explained, project staff need additional time to fully understand its ramifications. The developer and his or her advisors must fully understand the proposed changes and have a full grasp of the implications for the existing, and proposed, functionality in order to conceptually alter a complex piece of software. The lack of vendor training and support that seemed like a disadvantage early on paid dividends at this point, as FCDC members had the in-depth knowledge to comment on the specification. Successful specification development relies on the ability of the specification team to immerse itself in the proposed functional issues and to commit significant staff resources to the process. The time needed to build consensus and communicate clearly was significant throughout the process. Additionally, specification development is an iterative negotiation with the larger community. The extent of time required at this stage was something we had not anticipated.

Looking back, we realize that we mistook universal support for the overall project for total acceptance of the details of our solution. We assumed we would create the enhancement and everyone would be satisfied. Community members had different ideas and valuable contributions to offer. The inclusion of these suggestions required additional time to understand and negotiate. 

Conclusion

As a consortium of 10 libraries, FLO has always focused on consensus building and leveraging the resources of each member for the benefit of all. When librarians at one member library commented on the mess that electronic resource tracking had become, the consortium worked as a group to find a solution to this common problem. With this history, working in the open source world turned out to be a natural fit. The experience of selecting and implementing an open source system taught us valuable lessons about how to manage staff resources and become self-reliant in the absence of vendor support. The experience of working to extend the software for the good of the entire open source community taught us more about skill building, innovative thinking, code development, and the art of being a good citizen of an open source community. Evaluation of the CORAL community and our organization continues for FCDC. The larger FLO organization is also applying the lessons learned from this project to other open source evaluations, especially in regard to applying the Matrix for Selecting and Implementing Open Source Systems.

For any library, or other enterprise, that has IT capacity and that needs responsive systems, open source software is always going to be one of the available options. Learning to efficiently analyze that software and its fit with your organization is an important skill.

Appendices

FLO Members

Library Members

  • Emerson College
  • Emmanuel College
  • Lesley University
    • Lesley University College of Art and Design
  • Massachusetts College of Art and Design
  • Massachusetts College of Pharmacy and Health Sciences University
  • Museum of Fine Arts
    • School of the Museum of Fine Arts
  • New England College of Optometry
  • New England Conservatory
  • Wentworth Institute of Technology
  • Wheelock College

ERM Trial Members

  • Emmanuel College
    • Catherine Tuohy, Assistant Director for Technology and Technical Services
  • Fenway Libraries Online Office
    • Kelly Drake, Systems Librarian
  • Lesley University
    • Marilyn Geller, Collection Management Librarian
  • Wentworth Institute of Technology
    • Kathleen Berry, Systems/E-Services Librarian
    • Marianne Thibodeau, Assistant Director
  • Wheelock College
    • Ann Glannon, Associate Director and Collection Management Librarian
    • Allyson Harper-Nixon, Library Services Specialist
    • Louisa Choy, Digital Services Librarian

FCDC (FLO CORAL Development Committee)

  • Emmanuel College
    • Catherine Tuohy, Assistant Director for Technology and Technical Services
  • Fenway Libraries Online Office
    • Adam Shire, Member Services Librarian
    • Kelly Drake, Systems Librarian
  • Lesley University
    • Marilyn Geller, Collection Management Librarian
  • Massachusetts College of Pharmacy and Health Sciences University
    • Erin Wentz, Electronic Resources Librarian        
  • Wentworth Institute of Technology
    • Kathleen Berry, Systems/E-Services Librarian
  • Wheelock College
    • Ann Glannon, Associate Director and Collection Management Librarian
    • Allyson Harper-Nixon, Library Services Specialist
    • Louisa Choy, Digital Services Librarian

Case Study Authors

  • Fenway Libraries Online Office
    • Kelly Drake, Systems Librarian
  • Lesley University
    • Marilyn Geller, Collection Management Librarian
  • Massachusetts College of Pharmacy and Health Sciences University
    • Erin Wentz, Electronic Resources Librarian        
  • Wheelock College
    • Louisa Choy, Digital Services Librarian