EBM and Clinical Support Librarians@UCHC

A blog for medical students, faculty and librarians about their use of evidence based medicine, clinical literature, Web 2.0, sources and search strategies

Category Archives: Scholarly Publications

Academic Libraries, News, Current Reports: Storehouses, Tool Sheds and Peripheral Brains

Returning from a blogging-break, I found several links in my email account to a variety of newly-published library-related reports or scholarly articles. Two that I enjoyed reading are referenced below.

“We do not yet understand the scholarly significance of large swaths of the digital universe.”

So wrote Dan Hazen, associate librarian of collection management at Harvard College, in his article “Rethinking Research Library Collections: A Policy Framework for Straitened Times and Beyond“, published in 2010 in Library Resources & Technical Services and available open access to anyone (citation link below).

I especially liked his heading “Libraries as Storehouses, Libraries as Tool Sheds”.  As a mid-career librarian, I can say that the life work of my generation has been to serve as the planners, implementers and fixers for the migration of library collections from the Real (hold it in your hand, be in the library to read it) to the Virtual (it’s online 24×7 from anywhere you are). Another central issue for research libraries is the historical archiving and reliable preservation of generations of printed and digital collections.  The excerpt below from Dr. Hazen’s article summarizes such concerns much more eloquently than I can:

” The mass of information resources now available on the Web, many of them free, is fundamentally changing the library community’s thinking about collections….  Links to freely available digital content, meta-search capabilities that cut across products and platforms, and local aggregations of electronic resources, will all play a growing role in libraries’ collections and content strategies. This in turn will also reduce the physicality of library holdings and alter the functionalities of their spaces.  But we need to go further.  Three aspects of Web-based content require close attention. First, the search engines that today allow users to find materials on the Web are neither transparent nor fully revealing of useful content in predictable ways. Google Scholar, for example, relies upon opaque search algorithms and relevance rankings that appear not to fully exploit the wealth of standards-based metadata that libraries routinely provide. But most libraries do little better, investing their cataloged resources with robust metadata that our discovery tools rarely handle well. Second, sources on the Web—whether websites themselves or the  data, images, objects, and documents embedded within them—are notoriously unstable. Content is added, changed, and removed; links shift around and disappear. Scholarship relies on enduring access to constant content, a goal which remains elusive in the digital domain. Capture, curation, and digital preservation are all implicated in this conundrum.  Third, dispersed and disparate Web content requires tools that can work across amalgamated sets of sources in predictable and repeatable ways. Some of the uses are well-understood, while others reflect a new realm of inquiry that includes text mining, pattern recognition, visualization, and simulation. The needs are perhaps most pressing around massive accumulations of raw data. 
Libraries, working together and also with academics and information technologists, have an evolving role in creating and supporting the tools that will enable students and scholars to take full advantage of the digital world. It is not yet clear whether lead roles can or should be pre-ordained: arrangements that embody flexibility and contingency seem most likely to succeed. “

Source: Library Resources & Technical Services – Vol.  54(2), pages 115-121. (2010)


Staff at OCLC collaborated on a report published in 2010, entitled Perceptions of Libraries 2010: Context and Community .  The 59-page report is an update of a 2005 edition, and is available free online from this link. Below is an excerpt from the Conclusion:

Love for librarians remains. Like the library brand, it grew stronger. It seems that self-sufficient information consumers still appreciate expertise and a passion for learning—but they like it best on their time, with their tools. It’s cool to ask an expert—online. It was not cool to ask a librarian for help in 1950 (Public Library Inquiry, 1950); it’s still not cool. Many more perceptions and attitudes have remained the same for the information consumer in the last five years. She still wants to self-serve and self-navigate the info sphere. She discovered the benefits of surfing the Internet by 2003 and, by 2010, was using more powerful tools. She is creating her own apps. She still knows good information when she sees it. She takes her information habits, and perceptions, with her as she ages. While she may be a bit less impressed with online information resources as they have become commonplace, nothing has yet replaced the value and speed of a search engine. And, her personal device connects her to a network where she can share the knowledge gained. She shares her info sphere with older information consumers but does not welcome information gates or gatekeepers. Her advice for libraries: more hours, more content, more computers and of course—more books.

And any librarian in 2011 would not be surprised by this chart (also from the OCLC report):

Source: Both excerpts from OCLC – http://www.oclc.org/reports/2010perceptions/2010perceptions_all.pdf – All rights reserved – Copyright 2011


* I prefer to believe that our users still have a need for reference librarians who are seasoned … a term much preferable to “old” (!).  However, “canny”, “shrewd” and “wise” will also win a smile from those behind that desk.

Just like Dr. Rose, librarians have large peripheral brains too!

News, Health Science Literature: Elsevier introduces SciVerse

Wow! For those of us who use information resources produced by Elsevier on a daily basis, it’s been a bit of a shock to tune into Scopus®, SciTopics® or ScienceDirect® this week to see how different they now look. (Or as a corny analogy, that figurative, proverbial 800-pound gorilla sitting in the corner of the room has decided to move house in September 2010.)

On Monday, Aug 30 2010, Elsevier announced their plans to combine and morph these sites into one platform named SciVerse Hub®. Read their press release here. First, I wanted to provide some definitions from the company as to which resources will be combined by this single search engine:

Image Source: http://www.scitopics.com/faq.jsp – All rights reserved – Copyright 2010

In plain terms, SciVerse Hub is an entry point for library users to simultaneously search the contents of:

  • Scopus (a subscription database indexing 18,000 titles from more than 5,000 international publishers including coverage of 16,500 peer-reviewed journals in the scientific, technical, medical and social sciences literature)
  • Science Direct (a subscription access point for searching 10 million articles from over 2,500 journals and 6,000+ e-books, reference works, book series and handbooks issued by Elsevier)
  • SciTopics.org (a free online expert-generated knowledge sharing service for the global research community)

Scirus.org® is a scientific search engine created and maintained by Elsevier.  Scirus currently indexes 38 million web pages, mostly from open-access educational, scientific or government sites, incorporating what librarians refer to as grey literature. Scirus will search these sources separately and bring back a sorted list of retrievals (with duplicate citations removed) to the SciVerse Hub site.

(Note: When I teach a Google Scholar class, considerable time is spent comparing, together with the class participants, why retrievals using Scirus.org to search for scientific information tend to produce “better” results than G–gle Scholar. Time well-spent, IMHO.)

Following are two screenshots from the SciVerse site:

Image Source: http://www.info.sciverse.com/what-sciverse – All rights reserved – Copyright 2010

And this page:

Image Source: http://www.scopus.com/home.url – All rights reserved – Copyright 2010

Note that the capability of searching each individual resource separately has been retained.


An informational video on SciVerse is well worth watching (and short at 3.5 minutes in length)… link to it here.  Another helpful reference resource: an 8-page training handout for using the new site which can be downloaded here.

In promotional materials, Elsevier refers to SciVerse as a “new knowledge ecosystem“. Their information products are integral to the daily work of clinical, health science and scientific research library users worldwide. Here’s hoping this migration runs seamlessly (as in: invisibly and glitch-free).

Image Source: http://www.info.sciverse.com/what-sciverse – All rights reserved – Copyright 2010

News, Academic Libraries, Medical Literature: For Wiley Journals, A New Look

Image Source:  http://info.onlinelibrary.wiley.com/view/0/index.html – All rights reserved – Copyright 2010.

In the coming week, those who rely on subscription journal or database content from John Wiley & Sons – one of the largest sci-tech-medicine international publishers – will notice a major redesign of their website.

The familiar Wiley Interscience page is going to be replaced by their new portal, Wiley Online Library. While plans are put into place for the switch-over on Saturday, Aug 7, 2010, users may notice some downtime on the Interscience site.  (Both sites cannot run simultaneously.)

Following is a screenshot of the company announcement made in April 2010:

Image Source:  http://info.onlinelibrary.wiley.com/view/0/index.html – All rights reserved – Copyright 2010

Note:  The implementation of the new portal includes our subscription to the Cochrane Library.

Any major migration in technology can present some access or usability challenges.  (Forewarned is forearmed, the old saying goes.)  If you have bookmarked specific links to Wiley journal articles or e-textbooks, you will need to update the URLs to the new site…  as the old links will stop working within a few weeks.

Here’s hoping for a smooth transition for Wiley!

Teaching & Learning in Medicine, Research Methodology, Biostatistics: Show Me the Evidence (Part 3)

This post is the third in a series entitled Show Me the Evidence. It is about the evidence gained from bibliometric data and journal impact factor analysis.

Let’s start with an excerpt from a 2008 article:

The assumption that Impact Factor (IF) is a number absolutely proportional to science quality has led to misuses beyond the index’s original scope, even in the opinion of its devisor*. When the IF is inappropriately attributed to all articles within a single journal, it leads to false applications regarding the  evaluation of individual scientists or research groups. This is, unfortunately, a common practice, especially among governmental funding boards and academic institutions entitled to judge scientists for positioning and grant allocation. The IF has thus accumulated huge strength and importance, mainly implied by its, at least to a degree, undue application as an index of overall scientific quality“.

Excerpt on page 1 from “The Top-Ten in Journal Impact Factor Manipulation” by ME Falagas and VG Alexiou, published in Arch Immunol Ther Exp (Warsz). 2008 Jul-Aug;56(4):223-6 – All rights reserved – Copyright 2010
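For context, the classic two-year Impact Factor that gets misused in this way is just a ratio: citations received in year Y to a journal’s articles from years Y−1 and Y−2, divided by the number of citable items the journal published in those two years. Here is a minimal sketch in Python (the figures are invented for illustration):

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year journal impact factor: citations in year Y to articles
    published in Y-1 and Y-2, divided by the number of citable items
    the journal published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 1,200 citations in 2009 to its 2007-2008 articles,
# of which there were 400 citable items.
print(impact_factor(1200, 400))  # → 3.0
```

Note what the quoted authors object to: this is a journal-level average, so it says nothing reliable about any individual article or author published in that journal.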


This post was sparked by a recent reference question from a retired professor who needed some assistance on how to find and search the Journal Citation Reports® database*, which is described by its producer, Thomson Reuters, in this way:

Journal Citation Reports® (JCR) offers a systematic, objective means to critically evaluate the world’s leading journals, with quantifiable, statistical information based on citation data. By compiling articles’ cited references, JCR® helps to measure research influence and impact at the journal and category levels, and shows the relationship between citing and cited journals. “

Text source: Thomson-Reuters –http://wokinfo.com/products_tools/analytical/jcr/ – All rights reserved – Copyright 2010

So let’s take a look at ways to find current evidence about publication patterns in the biomedical literature.

There were two things about JCR® that needed explanation for the professor.  First, the latest annual edition of JCR® was released in June 2010 and indexes journal citation data for the 2009 calendar year only (not 2010).  Second, only those journal titles indexed in the Web of Science database* are searchable in JCR®.

As one example: Let’s say that you’re a scientist working on stem cell research and you subscribe to ten international journals that are critical to your continuing professional knowledge, lab work and research.  It would be a good idea to check the list of 6,600+ journals included in the Web of Science database to determine whether your “best” journals are searchable in Journal Citation Reports®.  If those titles are not covered in JCR®, you’ll be missing essential facts for comparing bibliometric data.

Here is a screenshot of a search done on the 2009 JCR® database for journals indexed under Cell and Tissue Engineering:

Image source: Thomson-Reuters – All rights reserved – Copyright 2010


Some folks assume that “every journal in the world” is included in Journal Citation Reports, but that’s not the case.  9,100 journal titles were indexed in the 2009 edition.

Another thing to know is that there are six subsets of JCR® available for annual subscription, and UConn Libraries subscribes to only these two: Science Citation Index Expanded (indexing 7,100 major journals across 150 disciplines) and Social Sciences Citation Index (2,474 journals across 50 social science disciplines).

Below is a screenshot from an online tutorial about ways to search Journal Citation Reports® (with my added comment in the upper left-hand corner):

Image source: Thomson-Reuters – http://thomsonreuters.com/products_services/science/science_products/a-z/journal_citation_reports – All rights reserved – Copyright 2010


Next, the professor asked: “I have a manuscript to submit for publication.  Is this the only place I should use to look at statistics about specific journal titles?”

While JCR® is an important reference resource, it’s neither free nor the only one available worldwide for researchers to search.  Below are sites which provide evidence that there are other ways to do citation analysis in the year 2010 (some are free, some are via subscription).


SCImago Journal & Country Rank Indicator (SJR). I like this site – it is easy to learn to use.  There are many ways to search their datasets (ranked by country, by journal title, by countries grouped by continent, etc.).  I also found their Map generators intriguing, which show comparative relationships between discipline or subject-specific citations.

Below is a screenshot of the SCImago Journal & Country Rank page showing a search done on Year “2008”,  “Medicine” as a general category, “Emergency Medicine” as a specialty and “USA” for the country, with a limit to display journals that had a minimum of 12 citable documents over 8 years:

Image source: http://www.scimagojr.com/journalrank.php – All rights reserved – Copyright 2010


Link here to a 2007 paper written by the creators of SCImago which describes the process by which journals are ranked on their site.


Those who have access to the Scopus database* through their library may have already discovered the Scopus Journal Analyzer, which allows one to select a discipline (shown below as “Biochemistry, Genetics and Molecular Biology”) and a journal title (“Cell” was searched below) and then choose a method of analysis to determine journal impact.  Elsevier is the producer of the Scopus database; 15,000 journal titles are indexed for inclusion in Scopus analytics.

A screenshot below shows results of a search performed in Scopus Journal Analyzer recently for the broad topic of Biochemistry, Genetics and Molecular Biology.  The journal Cell holds the most-cited place in the list (no surprise there):


The journal analyzer results can be sorted by two criteria: SJR and SNIP.  I found out that four years of data are necessary for sorting results using these filters.  Below, see a different screenshot:  rankings by SJR and SNIP for the same subject area:


Explanations for SJR and SNIP were easily found in the Scopus Help section (screenshots shown below):   

Credit for all Scopus Images shown above: Elsevier B.V. – All rights reserved – Copyright 2010

Want a different way to search Scopus analytics for evidence? Use the search feature in Journal Analyzer to select and compare up to ten Scopus sources on number of citations, documents, and percentage not cited.

A 12-page PDF white paper (from 2006) is available to download from Scopus, entitled “Using Scopus for Bibliometric Analysis: A Users’ Guide“.  Following is an excerpt from that document:

“ Introduced in January 2006, the Scopus Citation Tracker enables users to easily evaluate research by using citation data. This tool offers at-a-glance intelligence about the influence of a set of articles, an author or group of authors over time, so users can quickly spot trends using a visual table of citations broken down by article and chronology. “

Text Source:  Courtesy of Elsevier B.V. – All rights reserved – Copyright 2010

Two other Scopus pages which I found useful were the Scopus Top Cited page and Scopus Journal Metrics Factsheet.


No use trying to make this post pithy.  It would be an error not to mention the following means of assessing the scientific literature:  Eigenfactor, h-index and JANE.

University of Washington biology professor Carl Bergstrom and colleagues created the  Eigenfactor Project™.   The main webpage is  http://www.eigenfactor.org.

Give the interactive map a try: click here. Here is an example for Molecular & Cell Biology Map:

Image Credit:  http://www.eigenfactor.org/map/ –  All rights reserved – Copyright 2010

Following is an excerpt from a May 2007 article that Dr. Bergstrom wrote for the Association of College and Research Libraries publication, College & Research Library News:

“ We can view the Eigenfactor score of a journal as a rough estimate of how often a journal will be used by scholars. The Eigenfactor algorithm corresponds to a simple model of research in which readers follow citations as they move from journal to journal. The algorithm effectively calculates the trajectory of a hypothetical “random researcher” who behaves as follows: Our random researcher begins by going to the library and selecting a journal article at random. After reading the article, she selects at random one of the citations from the article. She then proceeds to the cited work and reads a random article there. She selects a new citation from this article, and follows that citation to her next journal volume. The researcher does this ad infinitum.

” Since we lack the time to carry out this experiment in practice, Eigenfactor uses mathematics to simulate this process.

” Because our random researcher moves among journals according to the citation network that connects them, the frequency with which she visits each journal gives us a measure of that journal’s importance within network of academic citations. Moreover, if real researchers find a sizable fraction of the articles that they read by following citation chains, the amount of time that our random researcher spends with each journal may give us a reasonable estimate of the amount of time that real researchers spend with each journal.

Text source: College & Research Library News – Vol 68:5 (May 2007) – All rights reserved – Copyright 2010
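For the mathematically curious, Dr. Bergstrom’s “random researcher” is a random walk on the citation network, and the Eigenfactor score comes from its stationary distribution (the same family of ideas as Google’s PageRank). Below is a simplified sketch in Python using power iteration; the citation counts are invented, and the real algorithm differs in its details (for instance, it teleports in proportion to article counts, while this sketch teleports uniformly):

```python
import numpy as np

# Toy citation matrix among three journals: C[i, j] = citations from
# journal j to journal i (invented numbers; self-citations excluded,
# as the Eigenfactor calculation also excludes them).
C = np.array([[0., 4., 2.],
              [3., 0., 1.],
              [1., 2., 0.]])

# Normalize each column so it describes the random researcher's choice
# of which journal to visit next.
H = C / C.sum(axis=0)

# Power iteration with a small teleportation term so the walk cannot
# get trapped (uniform teleportation here, for simplicity).
alpha, n = 0.85, H.shape[0]
v = np.ones(n) / n
for _ in range(200):
    v = alpha * H @ v + (1 - alpha) / n

# Stationary visit frequencies = each journal's "importance" in the network.
print(v)
```

With these numbers, journal 0 receives the heaviest citation traffic and ends up with the highest visit frequency, which is exactly the intuition in the excerpt: importance is measured by how often the wandering researcher lands on you.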

A slideshow presentation created by Dr. Bergstrom and presented at a conference hosted by Microsoft in 2009 can be viewed here.

Professor Alan Fersht wrote an article in 2009 published in PNAS Vol. 106(17):6883-4 (Apr 28 2009) entitled “The Most Influential Journals: Impact Factor and Eigenfactor” which is available free online on the PubMedCentral website.


Physics professor Jorge E. Hirsch wrote a paper published in 2005 in PNAS entitled “An Index to Quantify an Individual’s Scientific Research Output“, in which he outlined the algorithm known as the Hirsch Index (or h-Index).

And – LOL – according to Scopus, that PNAS paper by Dr. Hirsch has been cited 575 times!
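Hirsch’s definition is compact: an author has index h if h of his or her papers have been cited at least h times each. It fits in a few lines of Python (the citation counts below are invented for illustration):

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# An author whose papers are cited 25, 8, 5, 3 and 3 times has h = 3:
# three papers have at least 3 citations, but not four with at least 4.
print(h_index([25, 8, 5, 3, 3]))  # → 3
```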


JANE (or Journal Author/Name Estimator) is a software tool created in 2007 by members of the Biosemantics Group, a collaborative group at the Medical Informatics department of the Erasmus MC University Medical Center of Rotterdam and the Center for Human and Clinical Genetics of the Leiden University Medical Center.  Following is  how the creators of JANE describe the purpose of the tool:

“ Have you recently written a paper, but you’re not sure to which journal you should submit it? Or are you an editor, and do you need to find reviewers for a particular paper? JANE can help!  Just enter the title and/or abstract of the paper in the box, and click on ‘Find journals’ or ‘Find authors’.  JANE will then compare your document to millions of documents in Medline [over 10 years] to find the best matching journals or authors. ”

Source: http://www.biosemantics.org/index.php?page=jane – All rights reserved – Copyright 2010
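Under the hood, a matcher like JANE compares the word profile of your title/abstract against the word profiles of documents it has indexed. The toy Python sketch below illustrates that idea with plain bag-of-words cosine similarity; JANE’s actual ranking model is more sophisticated, and the journal names and abstract texts here are invented:

```python
from collections import Counter
from math import sqrt

def cosine_sim(a, b):
    """Cosine similarity between two texts as bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

abstract = "stem cell differentiation in cardiac tissue repair"
corpus = {
    "Journal of Cell Biology": "cell differentiation signaling pathways in stem cell biology",
    "Circulation Research": "cardiac tissue repair after myocardial infarction",
    "Journal of Glaciology": "ice sheet dynamics and glacier flow modeling",
}

# Rank candidate journals by how closely their profile matches the abstract.
ranked = sorted(corpus, key=lambda j: cosine_sim(abstract, corpus[j]), reverse=True)
print(ranked[0])  # → Journal of Cell Biology
```

The glaciology journal shares no vocabulary with the abstract and drops to the bottom of the ranking, which is the behavior you would want from a journal-suggestion tool.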

M. J. Schuemie and J.A. Kors – two of the creators of JANE – published a paper about the software in the journal Bioinformatics – Vol 24:5 (Mar 1 2008).


* Dr. Eugene Garfield was a co-founder of the Institute for Scientific Information, the producer of Science Citation Index.  A professor at the University of Pennsylvania and a prolific author, Dr. Garfield is now 85 years old.  Here is a link to his website.

In 1955, he wrote a paper titled “Citation Indexes to Science: A New Dimension in Documentation through Association of Ideas“, published in the journal Science (Vol. 122:108-111).  The online version is available to be read at this link.

From looking around on his Library website (url above), I think he has a sense of humor and the soul of an archivist. A great deal of his professional life has been taken up thinking about information management, and the ways in which scientists use their literature. I – and other librarians everywhere – should thank him for being an early adopter!

For example, in a commentary he wrote in 1963 published in the journal Science (Vol. 141:3579 – Aug 2 1963), titled “Citations in Popular and Interpretive Science Writing“, he admonishes mainstream periodical editors for not including basic volume and issue information.  Here is a direct quote: ” Librarians and scientists spend hundreds of hours tracking down precise literature citations which are missing in articles published in otherwise reputable publications like Scientific American, the New York Times, or The Sciences-a task that could be eliminated if brief but complete citations were given. This is certainly false economy and annoying “.  Garfield… You go!

The text of a presentation he gave at the International Congress on Peer Review And Biomedical Publication (2005) can be read online at  “The Agony and the Ecstasy: The History and Meaning of the Journal Impact Factor“.

I performed an author search on PubMed for his publications and created a small group of citations; those search results can be viewed here.


In addition to thanking Dr. Garfield for creating this field of citation analysis, there are many fellow health science or academic librarians whose work has helped me understand this complex subject, or who have made public their own instruction for others to benefit from. These folks deserve recognition (and applause!).

Thanks to UCHC collection management librarian, Arta Dobbs, for her suggestions and explanations of sources and methods of bibliometric analysis.

Thanks to Janice Flahiff and Jolene Miller, librarians at Mulford Health Science, University of Toledo (Ohio) who have written a great fact sheet on the uses and misuses of interpreting journal impact factors.

Props to Kathi Sarli, health science librarian at Bernard Becker Medical Library of Washington University in St. Louis, who wrote a very useful library guide called “Tools for Authors“… check the section-tab for “Preparing for Publication“.

I enjoyed watching an excellent tutorial on Journal Impact Factors produced by librarians at the Ebling Library for the Health Sciences, University of Wisconsin-Madison.

Finally, remember this is all about Publish or Perish.

* Subscription via UCHC Library.  If off-campus, use your library proxy number to connect.

Teaching & Learning in Medicine, Research Methodology, Biostatistics: Show Me the Evidence (Part 1)

Question everything… especially what you read.

A 2009 quote from Dr. P,  PBL facilitator

One of the many tasks for first-year graduate students in clinical or research areas is building a healthy skepticism about what one reads in the medical literature.

Ideally, as they progress through four years of medical education, students find that they must change their approach to searching as well as exploring what new resources will answer their questions of increasing complexity.  What answered their learning issues in their first year often doesn’t carry over to their third-year clerkship, when they are faced with finding solutions to the care of actual living patients.

This evolution (both practical and intellectual) asks that they grow a set of appraisal techniques for examining, embracing or rejecting what they find in the ever-increasing assortment of health science, pharmacology or social science databases available to them. (Note that I’m not referring to what can be found by simply plugging a few words into a search engine.)

And very likely, they jettison the use of a few previously well-used resources  as their clinical questions and experience become more complex.


Separate what is of statistical importance from what is clinically significant.

Another 2009 quote from Dr. P

As a facilitator for PBL, I have heard many first-year students state in class that they rely most on the library’s subscription to the Access Medicine* collection – including Harrison’s Online – as a “first place” to go to do research.

It is what librarians consider a sort of “package product”.  This subscription resource has developed in major ways over the years during which UCHC Library has provided it for our users.

As examples: there are now 60 core medical textbooks on the site, lists of DDx criteria, audio cases, calculators and clinical videos, podcasts, study guides for USMLE.  The library added subscriptions to Access Surgery and Access Emergency Medicine when they became available from the company.

Residents especially appreciate having 24×7 access to these resources.

And truly, we librarians were thrilled back in 1999 when the subscription to a digital version of Harrison’s Principles of Internal Medicine was rolled out.  LOL.  (Link here to an academic paper from 1999 reviewing the resource.)  Back then, the medical and dental students were excited about this digital 16th edition too, although most of them elected to purchase their own hardbound copy of the textbook.  These memories seem a little quaint from eleven years on.

In 2010, here’s a screenshot of the newly-redesigned Access Medicine front page:

Photo/Text source: http://www.accessmedicine.com/features.aspx – All rights reserved – Copyright 2010

What are other examples of what librarians consider garden-variety “packaged databases” that are frequently mentioned by first year students as essential to their research?

MD-Consult*, Up to Date* and for locating primary studies (or for “just shopping around” as one student said), PubMed.

As librarians (and instructors) a major teaching role for us is to encourage their exploration… and also to model the effective use of these information resources.  Feedback from students or faculty on the nature of their experiences as they  “consume” these products is very important.

And (dare I mention!) the librarians are there in the classrooms also to reinforce that using sources such as Google or Google Scholar to do credible clinical research represents two of the least satisfactory choices, though also the ones most easily and readily available.   (Sigh.)

There are many free information sites in the world… the librarians don’t use or teach (or endorse) many of them. Why? Not because we are closed-minded, too traditional, or old and cranky. This is a conversation thread that will be continued in Part 2 of this post.


Seeing a dozen patients with XYZ syndrome will significantly increase their practical assessment skills.  So will participating in the care of a patient that even the seasoned clinicians and experts haven’t yet figured out a diagnosis for. A common short-hand for diagnostic skills is Horses versus Zebras.

Learning to comb the literature for clinically-sound research studies – and weighing what has been found for validity or predictive value – are skills not easily learned.  Is four years sufficient time for practice in this pick-and-choose process?

Many students in their third and fourth year of study come back to meet with the reference librarians for a “refresher course” on how to search more efficiently, as they begin their required fourth year individual research project (called their “selective“).

I consider these reference training sessions with students as excellent indicators that they are growing quite sophisticated about what they consider to be “good” evidence.  Getting choosy is a wonderful thing.


* Please note: Resources mentioned are subscriptions and limited to UCHC students, staff and faculty only.  If off-site, use the Library’s proxy access to connect to them.

Scholarly Publishing, Research, Academic Libraries, Content Management: Seismic Changes, the We and the It (Part 1)

Read this (and then perhaps LOL):

Y2K in the Wider World… Come January 1, 2000 will we have power? Will we be able to get money out of our bank accounts? Are cities preparing for chaos and civil commotion, or are they assuming all will be well? We talk with representatives of the power, banking, and air travel industries, as well as San Francisco city government to find out how concerned they are and what preparations they are making for the roll-over to the next millennium. “

Found on the Internet Archive – Excerpt taken from a feature on Y2K including a 29-minute video, shot in March 1999, in which City of San Francisco administrators and officials from commerce discuss their preparations for Y2K.


As we approach the end of the first decade of the 21st century, one can only marvel at how fast these ten years have gone by.

For those who were working in 1999, do you remember it this way: as a brief and unique historical period when many otherwise rational and deliberate people working in professions governed by highly-computerized, networked environments (which includes anyone working in an academic library) were bugging out over Y2K?

Here is an extreme example of what some feared would happen on the first day of the New Millennium: link to a classic Nike Y2K commercial (filmed in San Francisco) that is still great to watch:

Video Credit: http://www.youtube.com – All rights reserved – Copyright 2009

Happily, the actual Y2K disruptions proved to be minor. And the power didn’t go out on January 1, 2000!

As year 2010 approaches, a potential action-list for academic libraries in the near-term is described in a recent report from folks at OCLC with input from members of the RLG Partnership Research Information Management Roadmap Working Group.

Following is an excerpt from this November 2009 report:

“ Researchers are drowning in a deluge of raw data and published information, and face a bewildering array of options for disseminating and sharing their work. The choices these researchers make have implications on intellectual ownership, potential audience, ways of measuring impact, potential re-use, and long-term preservation. ”

Source Credit:  “Support for the Research Process: An Academic Library Manifesto“, posted November 2009 online by OCLC – All rights reserved – Copyright 2009

This position paper is available full-text (at no cost) from this link.  Here is a Wordle cloud created today, using key words or phrases plucked from the report:

Image Credit: http://www.wordle.net/ – All rights reserved – Copyright 2009


As 2009 winds down, I offer no predictions as to the new directions or demands the academic library and library staff will meet, only that each year the physical footprint of the “real” library will shift to accommodate ever more virtual resources, and the money and staff time spent providing links to the “digital” library presence will increase.

But I’m an optimist.  Having been a librarian since 1991, I truly believe that our profession’s highest skill and value lies in our unique ability to serve our faculty, clinical staff and students in the “librarians-as-educational-midwives” mode.

Consider the “we” and the “it“. By this,  I mean “it” as a Reference Transaction conducted between a librarian and an individual (or a group) with the purpose of addressing a unique question or research requirement… it makes little difference whether the persons are standing together in the library, or talking on a phone or using email (or Meebo).

Any reference librarian can tell you that the “it” query is a conversation unique to those two individuals.  One of the fun parts of being a reference librarian is that the questions are always different, and often you learn as much as the library user when doing the research with them!

Who is the “We”?  A group of people with diverse functions and roles, all of which are necessary to make an academic library operate efficiently… humans who buy materials, teach users, circulate and shelve books physically or digitally, maintain the electronic infrastructure, put paper in the copy machines.

We (the librarians) consider what in the world is available to collect, and what value these resources will hold for our users.

We buy appropriate, comprehensive and current materials to be added to the library collection (real or virtual).

We sign contracts with information providers based on our needs as an institutional library, and in accordance with strict fiscal rules and conditions.  There is never enough money available for all that we need to collect, but we do the very best with what monies are available, and we evaluate the collections annually.

We provide the intellectual organization, maintain the physical environment, and build the systems architecture or framework by which to search for these materials.

We get that information out onto the virtual (or real/actual) shelf to be found by the user.  We index, evaluate, weed.  We keep the real machines (and lights) running and also the website or information portals.

We get the call at 2:00am when the machines wink out.  We get to contact the publisher(s) to report – and resolve – problems with digital access or missing subscriptions.

In the library of 2010, “we” represent the experienced staff who instruct, demonstrate and train those who ask which (of many) resources to search, and how to search them to retrieve relevant information.

We ask our users to consider the relevancy of their retrievals, and if needed, to begin the process again using other literature sources (that “we” suggest).

Once information that is deemed “good” by the user has been found (because we don’t decide that… the user does!), we select and provide software and instruction to effectively manage the accumulating output of these searches.

The “it” and the “we” are not services that will ever be provided by a quick search on Google (or Google Scholar).

The “We” can teach practically anyone to search effectively and competently on PubMed in under an hour and that is a point of pride among our profession, whether the particular we is a health science librarian in Seattle, Cleveland, Munich or Mexico City.

We provide the person-to-person answer for those questions.  While machines are essential to the work of the academic-research libraries, for now only humans can complete the educational role.

In spite of free search engines, open access journals, tons of virtual points of access to content or social-networking opportunities, the services and collections provided by a formal academic health science library (and staff) remain integral to the pursuit of scientific research.

This discussion is Part 1… other threads in the mix will be followed up in Part 2 (TBA later this week).

Please let me know what you think.


As to non-library predictions for the coming year, this article from the Dec 29 2009 Wall Street Journal (digital edition) examines a few controversial ideas from a Russian professor, Igor Panarin, about the future of the United States.  The content is available (at no cost) at: http://online.wsj.com/article/SB123051100709638419.html


* As a former Californian and resident of San Francisco, where The Big Quake hit on April 18, 1906, I’ll verify that it is unsettling to live in earthquake country.  Here is a link to an eyewitness account of that disaster written by Dr. George Blumer, among many other descriptions of events written by those who lived through it, posted on The Virtual Museum of the City of San Francisco site.  Bay Area public health officials continually prepare their rescue personnel (and citizens) for future earthquakes by practicing a staged mock disaster each year.  Also see public service/public health promotion and emergency preparedness tips at: http://72hours.org/.

News, Medical Research, UCHC Faculty: Immune Levels + Holiday Stress? Not Beneficial

“ If you ever thought the stress of seeing your extended family over the holidays was slowly killing you—the bad news: a new research report in the December 2009 print issue of Journal of Leukocyte Biology shows that you might be right. Here’s the good news: results from the same study might lead to entirely new treatments that help keep autoimmune diseases like lupus, arthritis, and eczema under control. ”

Text credit: Press Release from Eurekalert (Nov 30 2009)


Recently published research by scientists from UCHC examines the effects of psychological stress on the human immune system.  The paper was published this month in the Journal of Leukocyte Biology – Vol. 86: 1275 (Dec 2009); the abstract is available to read here: http://www.jleukbio.org/cgi/content/abstract/86/6/1275.

The citation found on PubMed can be viewed here.

News, Scientific Literature, Bioinformatics, Search Technologies: MedlineRanker

Anyone who works with geneticists and biomedical researchers already knows that the language of their science is daunting for a non-scientist.  This international community has developed dozens of highly specific databases, data-mining software packages and cooperative, collective digital libraries for its own use.  In an approximate sense, one could even imagine the mapping of the human genome as one vast wiki.  Clinical care follows the translational research of these investigators.

This month Nucleic Acids Research published Volume 37, Supplement 2 (July 1, 2009), the Web Server issue, described by Oxford University Press as:

“…the seventh in a series of annual special issues dedicated to web-based software resources for analysis and visualization of molecular biology data. The present issue reports on 112 web servers with a special emphasis on metagenomics, molecular network and pathway analysis, and biological text mining”.

Full-text of the NAR-Supplement 2 is available open-access for anyone in the world to read, on PubMedCentral.


An article in that special issue attracted my interest, entitled “MedlineRanker: flexible ranking of online literature” and written by a group of computational scientists affiliated with the Computational Biology & Data Mining Group of the Max Delbrück Center for Molecular Medicine (MDC) in Berlin.

The six authors describe their project in this way:

We have implemented the MedlineRanker webserver, which allows a flexible ranking of Medline for a topic of interest without expert knowledge. Given some abstracts related to a topic, the program deduces automatically the most discriminative words in comparison to a random selection. These words are used to score other abstracts, including those from not yet annotated recent publications, which can be then ranked by relevance. We show that our tool can be highly accurate and that it is able to process millions of abstracts in a practical amount of time.

Source: Link from Nucleic Acids Research – Vol. 37, Suppl. 2: W141-W146
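For the curious, the approach the authors describe — learn the most discriminative words from a set of topic abstracts versus a random background, then use those words to score and rank other abstracts — can be sketched in a few lines of Python. This is only a toy illustration under my own assumptions (whitespace tokenization and a simple log-odds weighting), not MedlineRanker’s actual implementation:

```python
from collections import Counter
import math

def tokenize(text):
    """Very crude tokenizer: lowercase, split on whitespace, keep alphanumeric words."""
    return [w for w in text.lower().split() if w.isalnum()]

def discriminative_weights(topic_abstracts, background_abstracts, min_count=2):
    """Weight each word by how much more often it appears in topic
    abstracts than in a random background sample (log-odds, smoothed)."""
    topic = Counter(w for a in topic_abstracts for w in tokenize(a))
    background = Counter(w for a in background_abstracts for w in tokenize(a))
    t_total = sum(topic.values())
    b_total = sum(background.values())
    weights = {}
    for word, t_count in topic.items():
        if t_count < min_count:          # ignore rare, unreliable words
            continue
        topic_rate = t_count / t_total
        bg_rate = (background[word] + 1) / (b_total + 1)  # add-one smoothing
        weights[word] = math.log(topic_rate / bg_rate)
    return weights

def score_abstract(abstract, weights):
    """Average discriminative weight of an abstract's words; unknown words score 0."""
    words = tokenize(abstract)
    return sum(weights.get(w, 0.0) for w in words) / max(len(words), 1)
```

Ranking “not yet annotated” abstracts is then just sorting them by `score_abstract`, highest first — words like a gene name that occur often in the topic set but rarely in the background end up with large positive weights.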

Please view the four Supplementary Data files (note: these open as either Word or Excel documents) that describe search terms used to search PubMed using the MedlineRanker server.

The illustrations in the article look like a cross between a tag cloud and a Wordle picture.

MedlineRanker is free for use and is available at http://cbdm.mdc-berlin.de/tools/medlineranker.

A list of current research projects from MDC can be viewed at this link.


In January 2009, Supplement 1 – the Database Issue – was published in Nucleic Acids Research, Vol. 37, and that is also available online in the PubMedCentral archive.


The 112 sites listed in the July 2009 NAR supplement will be added to the 1,200 already listed in the Bioinformatics Links Directory, which: “… now expands to almost 1400 unique web servers, databases and resources for computational research in the life sciences. All links are freely accessible to the public, and may be browsed by biological category and research task subcategory.”

For more information on text-mining programs written by scientists from around the world, go to the Bioinformatics Links Directory-Literature: Text Mining page.

News, Scientific Literature, Visualization: Cell Press and Elsevier introduce Article of the Future

STM publishers Cell Press and Elsevier ratcheted up the technological ante this month with their announcement on Monday, July 20, 2009, of a shared project called Article of the Future, which they are funding to provide:

“… an on-going collaboration with the scientific community to redefine how the scientific article is presented online. The project’s goal is to take full advantage of online capabilities, allowing readers individualized entry points and routes through the content, while using the latest advances in visualization techniques“.

Text source: http://beta.cell.com – All rights reserved – Copyright 2009


Currently available are two “prototype” articles which the companies have posted in order to solicit feedback about the page, and suggestions from the worldwide scientific community about usability and function.

Here’s one nice feature of the demo: “Integrated audio and video [will] let authors present the context of their article via an interview or video presentation and allow animations to be displayed more effectively”.

Below is a screenshot showing visualizations of tables from article Prototype #2, entitled “Identification of Positionally Distinct Astrocyte Subtypes whose Identities Are Specified by a Homeodomain Code” by Christian Hochstim, Benjamin Deneen, Agnes Lukaszewicz, Qiao Zhou and David J. Anderson.

This article was originally published in the journal Cell (Vol. 133, issue 2 – May 2, 2008, p. 510-522).


Image Source: http://beta.cell.com/hochstim/inc/hochstim_article.pdf – All rights reserved – Copyright 2009

Visitors to the Article of the Future page are encouraged to provide direct (anonymous) input about the site using a 10-item online survey.

Thanks to AD for telling me about this.

P.S. This news release was first read on Twitter – Cell Press News around 1:00pm today – http://twitter.com/CellPressNews – but the funny thing is, neither of the companies has posted a press release on its official website yet (as of 3:15pm EST – July 20, 2009).

Note: Chronicle of Higher Education also wrote about this venture – see entry dated July 20, 2009 at this link.