Managing digital assets is now a critical part of our society. I wanted to put together some notes for a presentation on personal digital archiving that I’ve proposed for the Covenant’s Midwinter gathering of pastors. If the proposal is accepted, this post will serve as a first, rough draft of the content I hope to cover there.
We are still talking a lot about data at North Park – in particular Chicago data. So I’m going to start getting my hands dirty working with this data to build capacity for future partnerships with faculty and students. Here is the first in what I hope will be many installments of the “Working with Chicago Data” series.
Mapping Chicago’s Grocery Stores
First step: download the data from the Chicago Data Portal (https://data.cityofchicago.org/). I’m using the Grocery Stores 2013 dataset for this example.
The data itself seems pretty clean and well formatted. I’m going to use Tableau for this example because that’s the tool I’m learning right now. I opened Tableau and imported the spreadsheet from the Chicago Data Portal. I ended up creating 4 different visualizations based on this data.
The first is a map of grocery store locations. It uses the latitude and longitude from the dataset to create points. Pretty standard and vanilla.
The next map is much more interesting. It takes into account the size of the store (measured in square footage) and encodes that as both size and color: larger stores have larger, darker circles.
The last two maps are variations on the second. One version filtered out “small stores” of less than 10,000 square feet. The other filtered out stores with the word “liquor” in the title. On a technical level, these filters were easy to apply. However, I’m fully aware of the cultural assumptions I’m bringing to bear here: when I (white, affluent, middle class) think about a grocery store, I think of a large store that doesn’t have the word “liquor” in the title.
That’s that! It was pretty easy to get this data and put it to use in the form of a map. I used Tableau here, but I could also have used Excel (with the Power Map add-in) or a more specialized tool like ArcGIS.
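For anyone who would rather script these filters than click through Tableau, here is a minimal sketch in Python. The field names and sample records below are my own inventions for illustration; the real CSV from the Chicago Data Portal has its own column headers, so check those before adapting this.

```python
# A minimal, library-free sketch of the two filters I applied in Tableau:
# drop "small stores" under 10,000 square feet and drop stores with
# "liquor" in the name. Field names here are assumptions, not the real headers.
def filter_stores(stores, min_sqft=10_000, exclude_word="liquor"):
    """Keep stores at or above min_sqft whose name lacks exclude_word."""
    return [
        s for s in stores
        if s["square_feet"] >= min_sqft
        and exclude_word not in s["store_name"].lower()
    ]

# Tiny made-up sample in place of the real dataset
stores = [
    {"store_name": "Big Grocer", "square_feet": 25_000},
    {"store_name": "Corner Liquor Mart", "square_feet": 12_000},
    {"store_name": "Tiny Deli", "square_feet": 900},
]

large_non_liquor = filter_stores(stores)
print([s["store_name"] for s in large_non_liquor])  # ['Big Grocer']
```

The same cultural caveat applies here as in Tableau: the 10,000-square-foot cutoff and the “liquor” keyword are judgment calls baked into two default arguments.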
In terms of next steps or extensions:
- It would be interesting to compare results using a different tool. Might be good to showcase the basic steps for using each tool.
- It would be very interesting to add neighborhood boundaries and/or other information such as demographic information and/or economic status. I’ll have to look at ways to incorporate this data.
- It would also be very interesting to combine this data with user feedback like Yelp reviews.
As a way to document and share my work, I wanted to post this short online tutorial I made about using Google Scholar and the Brandel Library. I manage the data feeds (from SFX and now from EBSCO) that make these library links possible but I also feel like I needed to do more to make these connections apparent to our users. There are a number of reasons for this:
- First, I love Google Scholar and find it very useful for known-item searching. Giving students and faculty another tool seems very helpful.
- Second, given the movement toward Open Access, I think “open” tools like Google Scholar do a better job searching the “gray” content that traditional databases struggle with.
- Lastly, because the connections between Google Scholar and the library are seamless and relatively transparent – which are good things! – some faculty believe that everything is “on Google Scholar” without realizing that the library is providing many of those links. So this is an opportunity to demonstrate value and market the library.
The tutorial making process at North Park is really quite nice – we have a dedicated terminal with a high quality microphone and specialized programs like Audacity and Camtasia that make it easy to create high quality tutorials. I’ve done several and am definitely getting better at using these tools – though I still don’t love the sound of my voice!
We are talking a lot about data, data literacy, and how North Park University can use Chicago data in the classroom. There are already a lot of courses using data in instruction and research, so part of my work is figuring out what is already happening.
First, I’ve determined that there is no one “roadmap” that will lead my library into digital publishing. So, instead of creating a map, I’m going to do the best I can to sketch out the terrain ahead and think about questions that can guide our path.
This section tries to address two main questions: What is happening in the world of scholarly publishing that is relevant to North Park? What is happening within the North Park setting that is relevant to a library publishing endeavor? Quick thoughts:
- Continued movement toward Open Access. There is still work to be done in our local context but that is the clear movement. The Covenant Quarterly and Journal of Hip Hop Studies indicate this trend is taking root on campus.
- Institutional branding. There is a renewed focus on institutional branding and online presence. There could be powerful connections to make here.
- Publishing and the North Park mission. My sense is that North Park values diverse contributions to the academic community more than creating a specialized repository.
- Chicago. There might be some opportunities to promote North Park within the regional context through research and student projects.
We need to define the scope of this project. There are many different efforts that fall under the broad category of “digital publishing”, including:
- Institutional Repositories
- Digital Humanities
- Data Repositories
- Open Educational Resources
- Campus multimedia (lectures, performances, etc.)
Of these options, I think the most appropriate level and scope would be an institutional repository that contains simple/static documents such as PDFs. A next step would be to curate multimedia from across campus.
Even within this scope, the library will need to make editorial and collection development decisions to make sure that (1) we have a critical mass of content and (2) that there is some editorial scope. I think we should prioritize the following content areas and focus on building relationships with relevant parties.
- Honors Projects and Papers
- Student Research
- Master’s Theses
- NPPress Student Research
- Covenant History Papers
- Partnerships with different courses/programs.
- Journal Articles
- Faculty/Staff Presentations and other “gray” literature
- Papers from campus symposiums
- Offer hosting/support for existing campus projects
Political Realities/Soft Skills
We would need some strong support from across campus to take on this project and lead the campus here. Given the proposed scope of this project, here are the people I think it would be important to connect with:
- The President
- Campus Deans
- The University Marketing and Communication Office
- Honors Program
- Seminary Faculty
- Faculty/Tenure Committee
- NPPRESS Leadership
- Student Research Committee
Some of these needed connections blend into the next set of questions, which seeks to define the scope of this project and effort. I think if we have five strong allies (willing to contribute the content they are responsible for), that would be a strong starting point.
Do we have the technical and social workflows to produce, distribute and preserve this content? There are many overlapping questions here, but here is an attempt to list the important ones:
- Do we have the rights/permissions to publish these materials? Who will work with each group to determine these permissions and who will maintain the paperwork?
- Do we have the staff expertise, staff time, and faculty/staff connections to successfully manage this project?
- What is the ongoing cost of this project in terms of hosting costs, incentives and open access fees, etc.?
- Where does this rank compared to other library/institutional priorities?
- What are peer institutions doing? What can we learn from them?
I just finished some “behind the scenes” updates to the Northfield Historical Society site and wanted to document that process here. It was definitely messy at times and quite labor intensive, but I think it was the best way to deal with the situation I faced.
Starting last year, I got about 260 images from the old Northfield Historical Society site (archived here: http://www.oldsite.northfieldhistoricalsociety.org/). These images varied greatly in quality; there were a few large, high quality TIFF images but most files were small JPEGs the size of thumbnails. In order to get intellectual control over these files, I renamed them and manually formatted metadata (using Dublin Core) to create an Omeka site. This was part of my “Web Design for Organizations” class I took through GSLIS.
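The metadata side of that work can be sketched in a few lines. This assumes Omeka’s CSV Import plugin as the ingest path; the item record, the field list, and the filename scheme (`nhs_0001.jpg`) are all invented here for illustration rather than copied from the actual site.

```python
# Sketch: write Dublin Core metadata rows to a CSV suitable for an
# Omeka CSV Import. The sample item and "nhs_" filename scheme are made up.
import csv
import io

FIELDS = ["Dublin Core:Title", "Dublin Core:Date",
          "Dublin Core:Description", "file"]

items = [
    {"Dublin Core:Title": "Main Street, looking north",
     "Dublin Core:Date": "1905",
     "Dublin Core:Description": "Postcard view of downtown Northfield.",
     "file": "nhs_0001.jpg"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()      # header row names the Dublin Core elements
writer.writerows(items)   # one row per item, keyed by the headers above
csv_text = buf.getvalue()
```

Formatting the metadata this way (rather than typing it into the Omeka admin screens) made it much easier to rename files and keep the records in sync.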
However, the low-quality images didn’t look great online. In fact, they looked pretty bad. So I inquired whether higher quality versions of the files existed somewhere else. After some searching, the Historical Society was able to get higher quality images from another source. Success!
This new batch of files was a treasure trove…but also had a few problems:
- Very different file naming conventions.
- Included many additional photos not found in the initial ingest.
- Did not include all the files from the initial ingest.
- Included both TIFF and JPEG images.
So I needed to match the new files with the older set of images (keeping the highest quality image in each case) and then bring the new files into the file naming convention I established earlier. Two tools were particularly helpful in this process: a batch-rename tool and an image-duplication tool.
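I used off-the-shelf tools rather than scripting this, but the core of both can be sketched in Python. One caveat: hashing file contents only catches byte-identical copies; matching the same photo at different resolutions needs a perceptual hash, which the duplication tool presumably used. The `nhs` prefix below is my invention.

```python
# Sketch of the two helpers: an exact-duplicate finder (MD5 of file contents)
# and a batch-rename planner producing sequential names like nhs_0001.tif.
import hashlib
import os
import tempfile
from pathlib import Path

def find_duplicates(paths):
    """Group files by the MD5 of their contents; return groups with >1 member."""
    by_hash = {}
    for p in paths:
        digest = hashlib.md5(Path(p).read_bytes()).hexdigest()
        by_hash.setdefault(digest, []).append(p)
    return [group for group in by_hash.values() if len(group) > 1]

def batch_rename_plan(paths, prefix="nhs"):
    """Plan (but do not perform) sequential renames, normalizing the suffix."""
    return {
        p: f"{prefix}_{i:04d}{Path(p).suffix.lower()}"
        for i, p in enumerate(sorted(paths), start=1)
    }

# Tiny demonstration with throwaway files
tmp = tempfile.mkdtemp()
a = os.path.join(tmp, "photo_A.TIF"); Path(a).write_bytes(b"same bytes")
b = os.path.join(tmp, "scan_B.tif"); Path(b).write_bytes(b"same bytes")
c = os.path.join(tmp, "other.jpg"); Path(c).write_bytes(b"different")

dupes = find_duplicates([a, b, c])   # one group: the two identical files
plan = batch_rename_plan([a, b, c])  # e.g. other.jpg -> nhs_0001.jpg
```

Separating the “plan” from the actual rename was the key safety step: I could eyeball the mapping before touching any files.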
Here are two screenshots of the same “item” with two very different image files.
The improvement is hard to miss!
My library just fielded a question from the Nursing department who, after reading this article from the Chronicle of Higher Education, wanted to know our policy for posting articles and chapters into our Learning Management System (LMS).
While I was drafting a response in private, I thought it would be good to summarize that article and then post my response here for future updating and public re-use.
The article is commentary on the Georgia State University lawsuit, in which three publishers – Cambridge University Press, Oxford University Press, and Sage Publications – challenged Georgia State University’s policy allowing faculty members to upload excerpts from books into their LMS. Thankfully, the court decided that the vast majority (70 of 75) of these uses were “fair use” and therefore legal.
But, as the article points out, the issue at stake is not just the Georgia State University uses; the case also seeks to clarify (perhaps define?) the legal limits of copyright and fair use as they relate to academic libraries. So the case is not limited to those three publishers and that one university; the results are much more far-reaching.
The publishers’ request for a very broad injunction is not really a surprise. The plaintiffs always intended for the GSU case to establish a precedent that publishers could use to persuade colleges to pay for digital licenses from a company they work with, the Copyright Clearance Center.
So, like the author of this commentary from the Chronicle of Higher Education and likely most academic librarians, I am rooting for GSU in this case and hope that it established precedent that ensures a broad definition of fair use and does not impose time consuming record keeping to track the fair use of copyrighted material.
So, given my thoughts, how should I respond to the faculty inquiry about our policy regarding fair use? I think it’s an opportunity to establish the broad playing field, underscore the ramifications of this decision, and invite further conversation.
Response to Faculty Inquiry
Thanks for reaching out with a question about copyright and fair use as it relates to articles and book chapters in an academic setting. This is clearly an important and heavily contested issue – one that really precludes a simple policy or rule – so I’m happy to provide some background and some safe best practices and then invite further conversation.
Best Practices for Licensed Content
In general, if you are using electronic resources licensed by the Brandel Library, we encourage you to post permalinks to the library’s subscription copy in Moodle. Two main reasons for this policy:
- This is almost always a permitted use within our license agreements. Some database license agreements allow articles to be uploaded directly into an LMS, but other licenses expressly forbid this. Linking avoids this confusion and creates a better experience for students and faculty.
- Linking back to the publisher provides the library with vital statistics. Linking this way ensures that we can make collection development decisions that reflect accurate usage – posting a PDF in Moodle prevents the library from tracking usage and impairs our ability to use data to make collection development decisions.
Fair Uses for Non-Licensed Content
This gets slightly more thorny with non-licensed content such as print book chapters, articles from print journals, or articles not available through the library’s online resources. Assuming that such materials are under copyright – which is a safe assumption unless it was published before 1923 or published with a Creative Commons license of some kind – the only legal option to consider is Fair Use.
The US legal code (Section 107 of the Copyright Act) defines four factors to consider with fair use:
- the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
- the nature of the copyrighted work;
- the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
- the effect of the use upon the potential market for or value of the copyrighted work.
Given that we are a university, the purpose and character of the use is educational, and the first factor will therefore almost always support fair use. However, not all educational uses are permitted – copying an entire book and distributing it to a class would not be a fair use – and therefore all four factors should be considered.
The Georgia State University ruling seems to indicate that the courts view using a single chapter from a book as fair use, but that using multiple chapters from a single book is problematic. However, posting a PDF of a scholarly article in Moodle would be problematic and would likely not qualify as a fair use of that material. We are working on building up our electronic reserve capabilities here in the library and should be able to provide more robust services in this area soon.
I will conclude by underscoring a few things:
First, one issue at stake in the GSU case is how extensive our institutional record keeping needs to be in this area. The publishers want to require extensive record keeping that GSU (and most schools) would view as very burdensome and a hindrance to fair use.
The proposed injunction would also require university personnel to confirm that every excerpt uploaded to course websites met the fair-use criteria and to keep track of information about the book, which parts were used, the number of total pages, the sources that were consulted to determine whether digital permissions were available, the date of the investigation, the number of students enrolled in the course, and the name of the professor. The university would have to maintain those records for three years.
North Park does not currently require any record keeping and entrusts faculty members to make informed decisions about fair use. The library will continue to follow this case and inform the campus if our record keeping policies need to change.
Second, one reason that fair use is so fuzzy and unclear is that there have not been many cases testing the limits of fair use as it relates to academic institutions. As an academic library, we want to rigorously defend the rights of authors and content creators by respecting fair use and honoring our licensing agreements with the publishers we work with. On the other hand, we also want to claim the full expression of fair use afforded to us in the law.
I’m taking the Foundations of Data Curation class at GSLIS and just finished a progress report for the MODIS Snow Frequency data set. Here is a link to the report.
I want to jot down a quick research question so that I can pick it up at a later date.
Essentially, I’d like to research how religious leaders gather and use information. By religious leaders, I mean pastors and clergy (though I’d be interested in other religious traditions as well) and I think I’m defining “information” in the most general sense – including biblical and theological commentary. Here are a few more basic questions:
- What is the role of technology (blogs, social media, etc.) in information gathering?
- How can denominations and Christian organizations package content so that it is most meaningful to religious leaders?
- What is the relationship between information gathering and “politics”? Does diversity (or lack of diversity) in information sources translate into a particular theological point of view?
- How do seminaries (and seminary libraries/librarians) prepare students to succeed in this information landscape?
I’m interested and energized by this topic for several reasons:
- It connects with my thesis work. I could extend the framing metaphor of that work – centripetal and centrifugal force – and look at how denominations (the Covenant in particular) could better use information resources to build and foster identity.
- It connects to my work with the Covenant Quarterly. My strong sense – supported by research and other people’s experience – is that moving toward an online, Open Access publication will allow the publication to have a greater reach and impact.
- It connects to my work with the Commission on Covenant History and could guide our work there.
I have been thinking a lot about web design and usability recently and thought it would be helpful to catalog websites that I think look really good and that I look at for ideas/inspiration as I tweak existing Omeka themes for the Northfield Historical Society.
I’m not really employing any set criteria, per se, about what sites are included or excluded – only that the site looks good and is very clear and functional. I noticed that many of these sites are information rich environments – I guess this makes sense because that’s the sort of environment I’m most interested in creating.
This site runs on Omeka and is the closest (i.e. most doable) example that I’m following. I like the boxy navigation and how the images are featured on all the pages (collection view, item view, etc.). I also like the simple color scheme of black and gray.
Another good looking special collection page that offers clear navigation and instant ability to navigate the site. Again, sort of “blocky” and geometric with a gray color scheme.