How Thinking About Corporate Libraries and Basic Information Professional Skills Weathered the Long Winter

March 11, 2014

I don’t know about you, but it has been an unexpectedly long and challenging winter. Luckily, I have had several projects that kept me focused and busy, and I can greet the spring with a sense of accomplishment and promise.

In February I completed the manuscript for my latest book, Corporate Libraries: Basic Principles in a Changing Landscape. This was a great project for me because I worked with some top-notch contributors: Ulla de Stricker, Heather Carine, Marydee Ojala, and Jacqueline Bartek. In addition, James Matarazzo and Toby Pearlstein offered some words of wisdom.

I also picked up on some work Matarazzo did back in 1990. His Corporate Library Excellence profiled several well-respected corporate libraries, including Abbott Laboratories. Because a colleague of mine had recently been hired at Abbott, I knew this corporate library was still an example of excellence, so I reached out to both Abbott Laboratories Library Services and the AbbVie Library.

The contributions by both Brenda Stenger and Wendy Hamilton added two very rich case studies that demonstrate the importance of basic information professional skills in providing excellent library services.

Writing the book offered me the opportunity to look at how information professionals are applying their skills in new arenas such as big data. Technology is the tool that must be mastered and handled with craftsmanship in the land of corporate and special libraries. Evaluating and offering my own thoughts about why skills in collecting, organizing, and providing access to information remain crucial in today’s highly technical environment was a great way to begin 2014.

As we move into the season of renewal, I’m taking on a new project that will take me deep into the realm of library service valuation. Stay tuned for more.

Constance Ard, March 11, 2014

Scared by Search

September 26, 2013

I recently read a Beyond Search article, “Shodan and the Scary Side of Search,” which summarizes a Forbes piece. The original article, “The Terrifying Search Engine That Finds Internet-Connected Cameras, Traffic Lights, Medical Devices, Baby Monitors and Power Plants,” is certainly worth the full read. However, the Beyond Search abstract offers a good look at Shodan.

Invasions of privacy, new methods of terror, voyeurism, and other unseemly actions can be facilitated with this search engine. On a larger scale, the interconnectedness of today’s digital world makes rich fodder for the likes of Shodan.

As the article states:

The article notes that many modern buildings that house everything from apartments to businesses to government facilities have security, lighting, and HVAC systems connected to the Internet, where they could be hijacked. Even entire power grids could be usurped. The unnerving possibilities seem endless.

However, for every negative there is a positive, and Shodan can also be used to strengthen security. In the end, the Beyond Search conclusion that we as consumers bear the responsibility for tightening our personal security is not a statement to be ignored.

Constance Ard, September 26, 2013

Science Fiction or Advanced Research Development and the Future of Google

August 6, 2013

I read with interest a post, “Google and Synthetic Biology: The Next Big Thing?” from the ever-challenging Stephen E. Arnold on Beyond Search. He and his team of crack researchers have stumbled upon some intriguing revelations about current work happening at Google.

It seems that synthetic biology is the direction of effort at Google[x] Labs. Never mind the search issue created by the name of Google’s R&D facility; what is more intriguing is that the hype around Google Glass seems to be distracting from the work that is happening. In my lifetime I’ve seen computing get smaller and smaller, but not even in my wildest imagination did it occur to me that computing could be inserted into my eye.

Nanotechnology has been quite the buzzword in computing and manufacturing circles for a few years, but its application within the concept of Google Glass makes for near-science-fiction reading.

The usage possibilities of this next-generation technology are mind-boggling, including medical applications.

Arnold suggests a few questions worth serious consideration:

As Google’s founders age, perhaps the nanotechnology will rescue a key engineer from a debilitating illness? Perhaps the nanodevices will allow Google to make each person a walking talking smartphone? No emasculating gizmo required. No silly eyeglasses necessary. The computer is not on the eyeball as a contact lens is. The computer is in the eyeball. Sound like science fiction? I no longer think that generic manipulation and fabrication is reserved for university or US government laboratories. Google has the staff, the money, and the business motivation to push costly, multi-disciplinary inventions from an experimental stage into the product channel.

From a business perspective the future seems bright if Google can move quickly. Google is no longer concentrating its efforts on search or even ad revenue; it has seemingly moved on to true innovation. It’s like 1998 all over again, with a revolution in computing technology that will shift the universe if the ArnoldIT team is right on the money. Speaking from experience, this is a crack team of researchers who actually go beyond the first page of web results, no matter which search engine they are using.

Arnold even suggests that those interested in knowing more can contact him for a for-fee briefing. My dollar is on whoever opts for that briefing getting their money’s worth, and perhaps even having their reality rocked. If you are seeking insight into future developments in computing, innovative businesses, or what Google is really doing, the very least you can do is follow the writing of Stephen Arnold. He has a longer write-up about this topic at Citizentekk.

Constance Ard, August 6, 2013

Digital Information: Gaps in Knowledge Understanding and Access

January 30, 2013

Stephen Arnold took a conversation he and a few of our colleagues had and wrote more about it in his Beyond Search blog. “Thoughts about Commercial Databases: 2013” is worth a review and I’ve added a few of my own thoughts here for your consideration.

The conclusion of our discussion is summed up nicely by Arnold: the digital future of information companies is gloomy. His post outlines a few familiar names in the world of libraries.

  • Ebsco Electronic Publishing (everything but the kitchen sink coverage)
  • Elsevier (scientific and technical with Fast Search in its background)
  • ProQuest (everything but the kitchen sink coverage plus Dialog)
  • Thomson Reuters (multiple disciplines, including financial real time info)
  • Wolters Kluwer (mostly legal and medical and a truckload of individual brands)

During our discussion the question was posed: how can database companies grow? The short answer is that there are no obvious growth paths beyond acquiring other information publishers. That point caused one member of the group to say that eventually the beasts will begin eating themselves out of hunger when there is no fresh meat. Amusing and yet frightening.

Arnold quotes “Why Acquisitions Fail: The Five Main Factors” by Pearson Education to explain the key factors in why acquisitions result in problems rather than soaring success.

The fact that library budgets continue to shrink, open access continues to grow, and large database companies fail to adjust their business models to these realities causes deep concern for the researcher in me. As Arnold states:

The business model for these firms has been built on selling “must have” information to markets who need the information to do their job. The reason for the stress on this group of companies is that the traditional customers are strapped for cash or have lower cost alternatives.

Other concerns abound as well. As libraries continue to limit access to physical collections, thanks to the value of library real estate, a strain is placed on the serendipity of the browsing researcher. Digital research presents its own challenges. It often leaves one feeling as though a good match has been retrieved, but is it really the best, and is it complete? When you add in that many of today’s students, even those training to be librarians, do not successfully distinguish between source and provider in the electronic age, the concerns for access, understanding, and knowledge abound.

While Arnold concentrates on the outlook of commercial databases and even suggests that an acquisition by Google to monetize the content with ads could be a shift in the future of information publishing, there are other concerns to ponder.  Curated content has a future, but what that future holds in terms of commercial versus open access is yet to be thought out in light of what Arnold suggests as the trend for 2013 commercial databases.

Those who think that public search companies are keeping the archive of all digital information are in for a rude awakening. Librarians and information professionals need to get beyond teaching people how to search. As professionals, we have a duty to understand the business pressures of our information suppliers, free or fee, and what those pressures do to the availability of yesterday’s information in today’s reality of right now access.

Information professionals must think about and prepare for the inevitability of lost information. The Wayback Machine may be expanding its database, but it is not archiving the complete history of companies that are no longer in business. Think about the number of start-ups that are no longer around: who were the corporate officers, and what was their credit history? The gaps in corporate information mean there will be gaps in ongoing competitive intelligence.

This is a simple issue on the surface, with unfolding complexities that warrant thought, planning, and action. Just as the burning of the Timbuktu library means a loss of valuable information, so too do the cost pressures on, and lack of access and exposure to, digital data.

Innovation on the commercial side seems nearly impossible. Curation and access on the public search side are limited by providers’ ability to drive profit within their own business models. Open access is being challenged to the point where advocates such as Aaron Swartz end their own lives. The Library of Congress is archiving Twitter when it might better serve the longevity of knowledge and information by archiving the “free” information on the World Wide Web.

Of course, the practical part of me, which understands that daily life grinds on no matter what, recognizes that this is a good intellectual argument. In the long run, will this have a significant impact on daily life? Probably not. But it is something that gets my mind whirling when I think about the history of knowledge and culture. Businesses will do what businesses do, libraries will do what libraries do, and maybe, just maybe, the digital gaps won’t cause an overwhelming repetition of mistakes.

Either way, it is fun to think and share and get input from intelligent colleagues.

Constance Ard, January 30, 2013

Big Data Meets Predictive Coding: Economic Impact To Be Determined

December 10, 2012

Litigation, especially for high-profile companies, means big money. In the age of big data, litigation and eDiscovery mean big money too. So this Corporate Counsel article, “Change Is Coming: The Evolution of E-Discovery Economics,” caught my attention.

If you have ever done a Google keyword search, even a custom one, you realize that keyword search is not efficient at returning narrow, accurate, relevant results. In a deluge of data, when time and cost matter, keyword searching is probably not the most efficient method of reviewing and producing documents in response to a discovery request. However, the article does make the valid point that until recently keyword search was the best we had.

Now we enter the age of predictive coding. This technology opens an entirely new view of the massive landscape of structured and unstructured data.

Predictive coding is software that is trained by a user to predict which documents in a document set will be responsive and which will be non-responsive. Predictive coding goes by many names, including computer-assisted review and technology-assisted review.

Predictive coding aims to reduce the number of documents reviewed by ranking the documents according to a calculated level of responsiveness. Instead of looking at every email written by a custodian over a three-year time period, predictive coding uses a number of factors including keywords, writing style, subject matter of the writing, and even punctuation style to determine the chain of documents that are most relevant to the matter. These underlying programmable algorithms vary between software brands.
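The ranking idea described above can be sketched in a few lines of code. The snippet below is my own toy illustration, not any vendor’s actual algorithm: it builds word counts from a handful of reviewer-labeled documents, scores unreviewed documents with a smoothed log-likelihood ratio, and sorts them so the documents most likely to be responsive surface first. All function names and the scoring method are assumptions for demonstration only; commercial predictive coding systems use far richer features and proprietary models.

```python
from collections import Counter
from math import log

def tokenize(text):
    return text.lower().split()

def train(labeled_docs):
    """Build word counts from (text, is_responsive) training pairs."""
    responsive, non_responsive = Counter(), Counter()
    for text, is_responsive in labeled_docs:
        (responsive if is_responsive else non_responsive).update(tokenize(text))
    return responsive, non_responsive

def responsiveness_score(text, responsive, non_responsive):
    """Sum of per-word log-likelihood ratios, with add-one smoothing."""
    r_total = sum(responsive.values()) + len(responsive) + 1
    n_total = sum(non_responsive.values()) + len(non_responsive) + 1
    return sum(
        log((responsive[w] + 1) / r_total) - log((non_responsive[w] + 1) / n_total)
        for w in tokenize(text)
    )

def rank_documents(docs, labeled_docs):
    """Rank unreviewed docs from most to least likely responsive."""
    responsive, non_responsive = train(labeled_docs)
    return sorted(
        docs,
        key=lambda d: responsiveness_score(d, responsive, non_responsive),
        reverse=True,
    )
```

In use, a reviewer would label a small seed set, call `rank_documents` over the collection, and review from the top of the ranking down, stopping when the hit rate falls off. This is the cost-saving logic the article describes: reviewers see the likely-responsive documents first instead of reading everything.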

The discussion in the Corporate Counsel article is lengthy and worth reading carefully. Predictive coding is hot in the technology-solutions landscape, and Recommind was one of the first in the market. However, as we all know, the world of law is a bit slow to embrace the newest technology. Someone else needs to test the water and find out whether it is too hot, too cold, or, like Baby Bear’s porridge, the just-right option for eDiscovery.

I think the massive amount of information being created and retained, and therefore open to discovery, needs more than predictive technology. Active information management policies and procedures will be key to effective cost control in the evolving landscape of litigation.

Constance Ard, December 10, 2012

Lawyers Hold Ultimate Responsibility for eDiscovery Processes: Are You Prepared?

December 7, 2012

Sometimes risk management related to eDiscovery takes on new levels of quality control. As I read a recent Metropolitan Corporate Counsel article, “Risk Management and Quality Control of E-Discovery Vendors,” the complexity of eDiscovery was driven home.

Understanding the capabilities, quality, and costs of eDiscovery vendors is an important responsibility, because the attorney bears the ultimate responsibility.

This article offers some very good reminders about what should be asked, inspected, and known about your vendors. It is about more than processing capability. Is the data truly secure? Is your vendor using other providers?

As the article points out:

The consequences are steep for failing to be fully engaged with e-discovery vendors. In J-M Mfg. Co., Inc. v. McDermott Will & Emery, No. BC 462 832 (Cal. App. Dep’t Super. Ct. L.A. Cnty. filed June 2, 2011), the defendant faced a legal malpractice suit when it allegedly did not carefully review the work of contract attorneys at an e-discovery vendor, resulting in the production of almost 4,000 privileged documents to the federal government in a whistleblower suit.

The need to be involved and to understand the capabilities of your vendors is important, and understanding in-house processes is just as important. Where information is stored, how it is accessed, and how it is managed are critical components of knowing the what, when, how, and who of eDiscovery processes.

Constance Ard, December 7, 2012
