#1 Making Public Research Involvement Replicable
Aim: Stir thought about how acknowledging the work and storing the methods can aid replication.
Problem: The holiday suitcase: how much can we carry, and what do we really need?
Intervention: A DOI connecting the protocol and the paper, with a small paragraph in the methods section.
Question: At what cost?
#2 Should Patients Have Access to Data About Them?
Aim: Stir thought about legislation to free real-time access to personal data that is currently proprietary.
1) When patients lose health insurance but have implanted devices, they are at risk because the devices can no longer be monitored.
2) Patients can use this information to self-manage their care.
Question: At what cost?
What we can do: Lobby and research.
Will share case studies and self-management strategies.
Abstract: Research materials in the biological sciences are difficult to track down, a fact that we should not accept in science. The solution is simple: everyone should do a better job of tracking reagents in their labs, and everyone should then do a better job of reporting on those reagents. Of course, just as with eating right and exercising for weight loss, implementing a simple plan may not be as simple as one would hope.
SciScore is an automated tool that helps journals determine whether authors have described their reagents with sufficient information to find them, especially reagents that are part of the RRID initiative. These include key biological resources, such as antibodies, cell lines, and model organisms, that the National Institutes of Health has specifically called out as key contributors to irreproducible research. I would like to present our progress to date on this tool.
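SciScore's internal logic is not described here, but as a hypothetical illustration of the kind of check such a tool performs, the sketch below scans a methods section for identifiers in the standard RRID citation format (e.g., RRID:AB_… for antibodies, RRID:CVCL_… for cell lines). The example text and function name are invented for illustration.

```python
import re

# RRIDs are cited as "RRID:" followed by a resource-type prefix and an ID,
# e.g. RRID:AB_10013382 (antibody) or RRID:CVCL_0045 (cell line).
RRID_PATTERN = re.compile(r"RRID:\s*([A-Za-z]+[_:][A-Za-z0-9_:-]+)")

def find_rrids(text):
    """Return all RRID identifiers mentioned in a methods section."""
    return RRID_PATTERN.findall(text)

# Invented example methods text for demonstration
methods = ("Cells were stained with anti-GFAP antibody "
           "(Dako, Cat# Z0334, RRID:AB_10013382) and cultured "
           "HEK293 cells (RRID:CVCL_0045).")
print(find_rrids(methods))  # ['AB_10013382', 'CVCL_0045']
```

A real checker would go further, validating each extracted identifier against the RRID resolver rather than merely matching the pattern.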
Abstract: Humanities Commons is a platform where users can create profiles, share work on websites or in our open access repository, and form groups for discussion and collaboration. We've heard, at least anecdotally, that 1) people intend to update their profiles and share more work but struggle to find time, and 2) some people get to the platform but don't know how to make use of it. This summer, we are trying a Humanities Commons Summer Camp, in which members join a group that issues a challenge every two weeks. Each challenge prompts them to explore an aspect of the platform and report back to the group about what they find or what they've accomplished. The summer camp thus introduces participants to the various features of the Commons, puts them in conversation with colleagues, and gives them experience using a group. This lightning talk will present our goals for the summer camp (to encourage use of the Commons and to model group facilitation) and highlight at least preliminary lessons that might be applied to other communities.
Abstract: The hard won wisdom of a workshop road warrior in 4 simple lessons.
1. If you are teaching a standalone workshop, teach only that which can be best taught in a standalone workshop. Don't teach everything that everyone should do over the whole research workflow; people will be overwhelmed and won't know where to start. Don't teach methods that only apply to a specific moment in the workflow; they likely won't apply to the stage the researcher is currently at, and will be forgotten by the next project.
2. Let people use their own data and code for exercises. Adoption of reproducibility tools and methods hinges on whether a researcher can apply them to their own data and code. Using their own data and code within the workshop allows barriers to be addressed within the workshop itself, increasing the chances that the researcher will be ready to adopt. This makes for a much messier but more meaningful workshop.
3. Teaching the "why" of reproducibility is a waste of everyone's time. If a researcher has somehow defended 2 or more hours of their time to attend a reproducibility workshop, they are already motivated to make their research reproducible. They are there to learn how, not why.
4. Curate: Be ruthless in editing the curriculum and curating supporting resources. Anyone can Google for guidance. The best contribution an expert can make is to curate the piles of guidance and recommend the best first steps.
Abstract: Future Commons is a new research network that explores the cultural, economic, disciplinary, and structural impediments that have limited the adoption of Open Science among researchers and institutions.
The project takes as its starting point the premise that the adoption (or not) of Open Science and Scholarship is primarily a problem of desire, motivation, and knowledge rather than a lack of models, infrastructure, or support. That is to say, the essential building blocks of a worldwide Open Science community already exist; the problem has been a lack of builders willing to put them in place or to live in the resulting edifice.
Based on a comprehensive survey of existing initiatives, the project seeks to develop ways of building and marshalling support for Open Science on a collaborative, consensus-based Commons model.
This lightning talk will discuss the context, goals, and next steps of the Future Commons project, including an invitation to join the network in advance of its application in early 2019 to the SSHRC Partnership Programme.
Abstract: How do students and clinicians engage with concepts of scholarly communication? How can we leverage discipline-specific behaviors to better teach these threshold concepts? Librarian Eric Robinson is developing a project to examine these questions among graduate students in occupational therapy at the University of St. Augustine for Health Sciences. Attendees are encouraged to consider discipline-specialization and how scholarly communication instruction can benefit from examining these questions.
Abstract: To begin, this lightning talk will give an easy-to-understand definition of what a blockchain is and how it is (or may be) used by publishers to track their content. I will suggest that this could seriously impact subversive ways of sharing content in the scholarly community (such as requests for access to articles through informal communication with authors over Twitter). In contrast, I will also reference blockchain's endorsement by the Alliance of Independent Authors. I will conclude by asking audience members to consider where they stand on the introduction of blockchain in publishing with regard to their personal and professional ethics, and to consider how best to serve their library and support their researchers in the face of these upcoming changes.
Abstract: What if scholarly communications was truly collaborative, with researchers seeking out the best and most relevant resources available through their institution at all stages of scholarly communications? How close are we to meeting this goal?
SFU Library recently conducted usability testing with graduate student library staff on the newly revised Scholarly Publishing and Open Access webpages. We were surprised to learn that these students, who are open access and open source advocates, were completely unaware of many of the scholarly communications resources and tools offered by the library. When faced with a number of hypothetical scenarios related to graduate student research, they told us in no uncertain terms that they simply wouldn't come to the library website to look for information to answer questions about such things as applying for funding for open access, assessing the quality of journals, or measuring the impact of their scholarly work. Our initial goal — to test the usability of the new webpages — was suddenly unimportant when compared to the larger issue of the need to raise awareness of the services we offer to those who can benefit from them.
Our university libraries work hard to provide scholarly communications services to scholars, through web content, workshops, consultations, handouts, classroom instruction, digital publishing opportunities, and more. Many of these are developed in response to researchers' needs, while others serve to promote and advocate for sustainable alternatives to traditional forms of scholarly publishing. But all of these resources lack meaning if researchers don’t know they exist, or fail to seek them out because they underestimate their value.
It's apparent that university libraries have a long road ahead to become fully collaborative with researchers at our institutions. Developing new and more effective partnerships throughout the university is key to creating a collaborative process where researchers can tap into the most relevant tools and resources to support their scholarly work. At SFU Library, we plan to continue to build meaningful partnerships with relevant groups to better integrate library services into research and scholarly communications activities throughout the university. We aim to leverage our champions, researchers who use a wide range of scholarly communications services, to promote these services to their colleagues within and beyond their departments.
Abstract: It’s well known and widely accepted that FAIR data should be findable, accessible, interoperable, and reusable. As one of the largest producers of research data worldwide, China recently released new national-level rules, the "Measures for Managing Scientific Data," launched by the General Office of the State Council, to promote research data sharing across disciplines.
Here we analyze the new rules from a FAIR perspective. First, the rules require the compilation of scientific data resource catalogues to make data findable. Second, for accessibility, DOIs and URIs are two popular tools for uniquely identifying data resources, alongside local practices such as a new national-level standard titled "Science and technology resource identification" and domain-specific identifiers, such as those used in agricultural science. Third, for interoperability and reusability, standardized management and open sharing of research data have been called for at every level, from national to institutional, along with encouragement of individual participation. Subsequently, the construction of national data centers, together with a scientific data archive system, shall guarantee the reuse of data.
Above all, the national-level rules open a new door for research data stewardship and sharing in China. Years of effort will still be needed to achieve better open data through real practice, but we know for sure that this is a good start.
Abstract: In 2017, the authors won a competitive grant for the corresponding author to attend the Institute for Research Design in Librarianship (IRDL; http://irdlonline.org/), a weeklong summer seminar aimed at teaching qualitative and quantitative research skills to librarians. Drawing from knowledge gained at the Institute, the authors designed, distributed, and analyzed a survey of their faculty's perceptions and practices regarding open access scholarly publishing. The first such environmental scan at the University of Connecticut, this study yielded insightful and sometimes surprising results. The speaker will highlight some of these results using Tableau visualizations and will also highlight key challenges and successes of the process of developing their first research study, thanks to the support of the grant-funded Institute. Like FSCI, the IRDL also took place in southern California and also entailed sleeping in dorms for a week! This lightning talk will be a draw for two key populations: (1) librarians interested in building their research skills and (2) people interested in perceptions of OA at a Top 20 public university largely untouched by the OA movement.
Abstract: A lot of time, effort and money is expended by libraries at research institutions to develop and maintain institutional repositories (IRs) that can describe and store the research data produced by their institution’s researchers. But is it paying off? Data IRs are often out of sight and out of mind of the researchers they offer to support. They lack consistency, visibility and critical mass. They are losing to subject and commercial alternatives, are not supported by policy and are resource intensive to maintain. If nothing changes, could data IRs become an endangered species? Or worse, could they become zombie repositories that continue to limp along, blissfully unaware that their time in the scholarly communications universe has ended?
Abstract: All too often we hear that scholarly communication would be different if incentive structures such as tenure and promotion recognized a different way of doing things—openness, reproducibility, diversity, to take but a few examples—as indicators of quality work. The HuMetricsHSS initiative is an attempt to build a flexible, holistic, values-based evaluation framework that would enable the implementation of such structures in a range of higher ed institutions.
Abstract: There are several projects in the research community aimed at making the citation data extracted from research papers more reusable. This talk presents results from the CyrCitEc project to create a publicly available source of open citation data extracted from PDF papers available in a research information system. To reach this aim the project team has created four outputs: 1) open-source software to parse papers' metadata and full-text PDFs; 2) an open service that processes papers' PDFs to extract citation data; 3) a dataset of citation data, including citation contexts (currently mostly for papers in Cyrillic); and 4) a visualization tool that gives users insight into the citation data extraction process and some control over the quality of the citation data parsing.
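CyrCitEc's actual parser is not shown here; as a hypothetical sketch of what "citation contexts" means in practice, the example below locates simple author–year in-text citations and captures a window of surrounding text for each. The pattern, window size, and sample sentence are all invented for illustration.

```python
import re

# Match simple in-text citations like "(Ivanova et al., 2018)".
# Real parsers handle numeric styles, page numbers, multi-reference
# groups, and non-Latin scripts; this pattern is deliberately minimal.
CITATION = re.compile(r"\(([A-Z][^()]*?,\s*(?:19|20)\d{2}[a-z]?)\)")

def citation_contexts(text, window=60):
    """Yield (citation, surrounding-context) pairs from full text."""
    for m in CITATION.finditer(text):
        start = max(0, m.start() - window)
        end = min(len(text), m.end() + window)
        yield m.group(1), text[start:end].strip()

# Invented sample sentence for demonstration
sample = ("Earlier work showed that open citation data improves discovery "
          "(Ivanova et al., 2018), a result later replicated at scale.")
for cite, context in citation_contexts(sample):
    print(cite)  # Ivanova et al., 2018
```

The captured context is what makes such datasets useful beyond a bare citation graph: it records *how* a paper was cited, not just that it was.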
Abstract: The talk presents an integration of the citation data produced by the CyrCitEc project with tools of the research information system Socionet. Socionet provides collections of research papers' metadata. CyrCitEc parses the citation data from research papers' PDFs, including in-text citations and citation contexts. Socionet uses the citation data to build computer-generated annotations for the in-text citations in full-text PDFs. This integration makes in-text citations interactive elements and opens an opportunity to create a communication channel between citing and cited authors.