Lessons Learned

Last updated September 27, 2014.

By the spring of 2014, FLO was three years into our ERMS Open Source project. Six of the ten libraries were happily and actively relying on CORAL for storage and retrieval of our ERM data. Three more libraries were in the initial input stages. A follow-up ERM survey confirmed that those libraries that had implemented CORAL were pleased with the software. Of those that hadn’t yet adopted it, most had plans to begin data input within the coming months. In addition, a software specification that would significantly improve the reporting functions of the system was completed and ready to go out for bid. The days of searching emails, calling random people and browsing multiple spreadsheets in hopes of discovering the password to a database’s administration page were clearly numbered. In the process, we learned many lessons regarding open source selection, implementation and software development.

Open Source Benefits

As we noted when discussing the trialing and implementation phases, FLO learned that open source software provides several advantages to its users. These include the ability to bring libraries onto the system through a phased implementation, the absence of the upfront monetary costs associated with a vendor-supplied system, and freedom from contract negotiations and restrictions. We also noted an increased sense of job satisfaction and community building within our consortium.

An Evaluation Process

A major benefit of the process was the development of the Matrix for Selecting and Implementing Open Source Systems. The Matrix contains three metrics, each addressing one of three major areas of concern: the software or product, the open source community, and the implementing organization (see Appendix). We also learned that, while evaluation is necessary, the process is not a decision tree. There is no single right answer; users must continuously identify risks and work to minimize them.

The Product Evaluation Metric includes a list of desired attributes (e.g., the product should do this or have that), an associated range of statuses from "not developed" to "highly developed," and a weighting system to determine the importance of each attribute.

As we began to participate more in the CORAL community, we noticed characteristics of the group that, while not specifically related to the system itself, clearly affected its performance and viability. This observation led us to develop the Open Source Community Evaluation Metric. Its attributes differ from those in the product metric in that they focus on who is doing development and support rather than on what work is being done; they describe the status, culture, and resources of the community. Unlike the product metric, the community evaluation does not lend itself to rating so much as to a recording of status. While the elements document traits on a "low" to "high" scale, there is no value judgment attached to them: no classification is inherently "good" or "bad"; each is interpreted in light of how we perceive the various functions within the community.

The development and rating of the open source community subsequently led us to begin a list of attributes that we could assess in our own individual libraries and in our consortial organization: the Organization Evaluation Metric. This metric identifies the resources an organization can bring to an open source project, such as the level of staff and administrative buy-in and the types of related expertise that reside within the organization, along with other important traits.

Having developed the three lists of traits for product, community, and organization, we began to see the evaluation as a three-step, iterative process. A good starting point is evaluating the product itself using the Product Evaluation Metric. If an organization determines that the product has potential value, it should note the strengths and problem areas. There are currently 19 elements in this metric, and they may have varying levels of importance or relevance for each organization. Establishing at the outset which elements are most important, which are necessary but not crucial, and which are peripheral will help prioritize the results. A poor score on an element defined as peripheral has less impact than a poor rating on an important one. Ranking and weighting the elements is not about deriving an overall score for the product; it is about understanding which elements most need improvement and whether there is a general sense that the product is good enough to make that improvement worthwhile. The outcome of the product evaluation should be an understanding of the product's strengths and weaknesses and of which ones to focus on first.
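
To make the ranking-and-weighting idea concrete, the sketch below models a handful of elements, rates each on a simple status scale, and surfaces the weak, high-priority elements rather than computing a single overall score. The element names, the 0-3 scale, and the priority weights are illustrative assumptions only; the actual metric (see Appendix) defines 19 elements.

```python
# A minimal sketch of the weighting idea behind the Product Evaluation Metric.
# Element names, the status scale, and the priority weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    status: int    # 0 = not developed ... 3 = highly developed
    priority: str  # "crucial", "important", or "peripheral"

PRIORITY_WEIGHT = {"crucial": 3, "important": 2, "peripheral": 1}

def improvement_targets(elements):
    """Return weak elements ordered by urgency, not a single product score."""
    weak = [e for e in elements if e.status <= 1]
    return sorted(weak, key=lambda e: (-PRIORITY_WEIGHT[e.priority], e.status))

# Hypothetical ratings for illustration only.
ratings = [
    Element("Cost history reporting", status=0, priority="crucial"),
    Element("License record storage", status=3, priority="crucial"),
    Element("Usage statistics import", status=1, priority="important"),
    Element("Interface theming", status=1, priority="peripheral"),
]

for e in improvement_targets(ratings):
    print(f"{e.name}: status {e.status}, priority {e.priority}")
```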

Next, the library, or the library and its consortium, should be evaluated using the Organization Evaluation Metric to determine what strengths can be brought to the project. Used in conjunction with the Product Evaluation Metric, this makes the picture of what needs work, and who is or is not available to do that work, clearer. Organizational resources can change with the level of staff and administrative buy-in: if, for example, there is enough administrative buy-in and enough staff interest, a low level of expertise can be overcome. The important takeaway from this evaluation is a baseline understanding of what the library, or the library and the consortium, can bring to the project.

In the next step, the open source community should be evaluated using the Open Source Community Evaluation Metric to determine how well the organization's strengths match the community's needs and how well the community's strengths match the organization's needs. Where there are mismatches, both the Product Evaluation Metric and the Organization Evaluation Metric should be revisited to determine how much flexibility exists and whether the organization is willing to accept the consequences of the areas that don't match. Variables that can work to neutralize a weakness should be noted. For example, an organization that can commit staff time but has little or no expertise will find a community that offers strong support essential; conversely, an organization confident in its own expertise may find that the level of community support is not a relevant issue. For each variable on any one of the metrics, there should be some answering strength in the others.
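
As a rough illustration of this cross-checking, the sketch below records hypothetical "low"/"medium"/"high" levels for a few organizational and community traits and flags any organizational weakness that no community strength offsets. The trait names and pairings are assumptions for illustration, not elements of the actual Matrix.

```python
# Sketch of the cross-check: for each weakness recorded in one metric,
# look for a compensating strength in another. Traits are hypothetical.
LEVELS = {"low": 0, "medium": 1, "high": 2}

organization = {
    "technical expertise": "low",
    "staff time": "high",
    "administrative buy-in": "high",
}
community = {
    "support responsiveness": "high",
    "release activity": "medium",
}

# Hypothetical pairings: an organizational weakness and the community trait
# that could offset it.
COMPENSATES = {"technical expertise": "support responsiveness"}

def unresolved_mismatches(org, comm):
    """Return organizational weaknesses with no compensating community strength."""
    gaps = []
    for trait, level in org.items():
        if LEVELS[level] > 0:
            continue  # not a weakness
        offset = COMPENSATES.get(trait)
        if offset is None or LEVELS[comm.get(offset, "low")] < 2:
            gaps.append(trait)
    return gaps

# Empty list here: strong community support offsets low in-house expertise.
print(unresolved_mismatches(organization, community))
```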

Developing and using the Matrix for Selecting and Implementing Open Source Systems substantially aided our understanding of the issues we encountered and of the resources we needed to commit to the CORAL project. Having gone through the FCDC experience, we came to understand how to benefit from this information. We also learned that the process of evaluation never ends: as long as your institution is using an open source product, continuous evaluation of its community and of your organization's capacity is necessary.

Open Source Costs

Of course, all of that continuous evaluation consumes staff resources, resources that, as we also learned in our trial, are already more heavily taxed in an open source environment. For all the value open source affords, we learned that open source projects also carry a number of costs, stemming primarily from the time involved in the process.

During the trial and implementation phases, we invested a great deal of time learning the system. Training and creating documentation, processes that would not have been necessary in a vendor environment, also required significant staff resources. We also made, and continue to make, a conscious effort to communicate with the larger CORAL community, which, although a small time investment, is extremely important. Unfortunately, we failed to quantify the additional hours for this phase of the project.

We did, however, track the amount of time spent developing the Cost History Specification. By our calculations, eight staff members invested as much as six hours per month for ten months, a total of up to 480 hours (8 staff × 6 hours × 10 months). During the specification building, we learned that illustrations and wireframes are more effective than written descriptions in communicating proposed changes, but they require significant time to create. Although we had approved textual descriptions of the proposed changes to the resource history entry screen, it was only when committee members saw the actual layout that we were able to offer more helpful comments. The same was true during the discussion process with the Governance Committee. Seeing the proposed changes conveys them more effectively than either written or verbal discussion, but building wireframes and screenshot illustrations is a time-intensive process.

Once a specification change is illustrated or clearly explained, project staff need additional time to fully engage with its ramifications. To conceptually alter a complex piece of software, the developer and his or her advisors must understand the proposed changes and grasp their implications for both existing and proposed functionality. The lack of vendor training and support that seemed like a disadvantage early on paid dividends at this point, because FCDC members had the in-depth knowledge needed to comment on the specification. Successful specification development relies on the ability of the Specification Team to immerse itself in the proposed functional issues and to commit significant staff resources to the process. The time needed to build consensus and communicate clearly was significant throughout. Additionally, specification development is an iterative negotiation with the larger community. The extent of time required at this stage was something we had not anticipated.

Looking back, we realize that we mistook universal support for the overall project for total acceptance of the details of our solution. We assumed we would create the enhancement and everyone would be satisfied. Community members, however, had different ideas and valuable contributions to offer, and incorporating their suggestions required additional time to understand and negotiate.