- E01 – Maximizing Visibility of Your Work in the Open Access Jungle
- E02 – Getting Attention and Bringing Others on Board: Applying Basics in Marketing and Communications to Advance Open Research
- E03 – Engaging with Open Infrastructure Globally
- E04 – Logic Models: Program Planning and Evaluation for the Global Impact of Open Scholarship
- E05 – Responding to Global Challenges: Developing and Evaluating Research & Data Instruction in Low and Middle Income Countries (LMICs)
- E06 – Analyzing Your Institution’s Publishing Output
- E07 – The Role of AI Ethics in Scientific Publications on Open Science Platforms: Examining the Use of AI in Global Publication Governance Models for Impactful Knowledge Sharing and Social Benefit
- E08 – Publishing from Collections Using Linked Open Data Sources and Computational Publishing Pipelines
- E09 – Understanding, Benchmarking, and Tracking Equity and Inclusion in Open Access and Open Science
- L10 – Evaluating Open Access Journals: Moving from Provocative to Practical in Characterizing Journal Practices
- L11 – Applying Strategic Doing, an Agile Strategy Discipline, to Build Collaborations Across Diverse Teams
- L12 – Catalyzing Team Science: How to Forge an Interdisciplinary Team to Attack a Complex Research Problem
- L13 – Using the ORCID, Sherpa Romeo, and Unpaywall APIs in R to Harvest Institutional Data
- L14 – The FAIR Principles in the Scholarly Communications Lifecycle
The full abstracts for each course appear below.
E01 – Maximizing Visibility of Your Work in the Open Access Jungle
Georgina Harris, Timothy Vollmer
Abstract: With a shift toward open access (OA) publishing and an increasing number of journals and publishing platforms, there are plenty of opportunities for researchers to share their scholarship. But how can authors manage and maximize their scholarly impact? This course will explore tactics that early career researchers can take in order to increase the visibility of their academic writing, with a focus on the OA publishing ecosystem. We will cover different types of copyright licenses, scholarly profiles, networking tools, platforms, and research metrics. We will also discuss opportunities to work with publishers, news outlets, and societies to increase the visibility of your articles, with a hands-on exercise. By the end of this course, you will understand why impact matters, what measuring it can (and can’t) do, and what short- and longer-term actions you can take to improve the potential of your work.
Audience: Researchers
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
7:00 – 8:00 AM
Wednesday, August 2
7:00 – 8:00 AM
Thursday, August 3
7:00 – 8:00 AM
E02 – Getting Attention and Bringing Others on Board: Applying Basics in Marketing and Communications to Advance Open Research
Jennifer Gibson, Rowena Walton
Abstract: Getting the attention of faculty, students, decision-makers, and others and convincing them to break out of long-established habits to try something new is a defining aspect of work in scholarly communications. The future of open research is dependent on our ability to change behaviors.
Putting compelling messages in front of the right audiences is a practiced art and science in marketing and communications. The world’s biggest brands are masters at convincing us that our shampoo is bad for our hair and that we need to buy more sugary soda.
Social marketing, which long predates social media, is the application of commercial marketing principles and practices to effect social and behavioral change. The same systems for understanding an individual’s needs and pains, for communicating with them in their world, on their terms, and for convincing them to attempt a change in behavior can be used to promote adoption of open research practices as well as purchases of bacon double cheeseburgers.
This course will explore the basics of marketing strategy and their application in the research environment – to advance open research or any other type of behavior change. Participants will learn how to:
- Communicate powerfully by separating audiences according to their different interests.
- Get the most out of an outreach program by prioritizing specific audiences.
- Build a compelling offering by aligning the service with the audience’s needs and available choices.
- Cut through the noise by creating messages in the audience’s voice.
- Develop a comprehensive, impactful outreach program that gets attention from the right people.
- Monitor the program and make regular improvements to try to increase impact.
Audience: Individuals with the responsibility to promote and advocate for open research practices in the academic community, targeting faculty, students, librarians, publishers, administrators, and disciplinary communities. These may include librarians, community managers, start-ups, publishing staffers, and others.
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
7:00 – 8:30 AM
Wednesday, August 2
7:00 – 8:30 AM
Thursday, August 3
7:00 – 8:30 AM
E03 – Engaging with Open Infrastructure Globally
Gabriela Mejias, Xiaoli Chen, Nabil Ksibi, Shayn Smulyan, Ana Patricia Cardoso, Susan Collins
Abstract: A robust and resilient research infrastructure — one that supports the tools, services, and systems that researchers rely on — is essential to ensuring that the research process is as efficient and effective as possible. When research infrastructure is open, it is typically supported financially by membership fees, (time-limited) grant funding, or a combination of the two, and has a community-led governance structure. Open infrastructures are usually available globally, but adoption is often uneven.
DataCite, ORCID, and Crossref are open infrastructure organizations focused on connecting research entities and making them findable, uniquely identified, citable, and interoperable. All three are internationally focused, nonprofit, community-governed, membership-based organizations that provide foundational open scholarly infrastructure. DataCite, ORCID, and Crossref all have the same core service: provision of unique, persistent identifiers and associated repositories of metadata and links accessible through open APIs and public datasets. Each organization’s services are centered on its focal research entities.
The three organizations are currently developing materials to clarify what a country’s research institutions, publishers, funders, and government bodies minimally need to have in place to work successfully with global open infrastructure. Based on these materials, we are presenting a course for research organizations and researchers. In the course we will discuss:
- Why is it important that infrastructure is open, and what does that openness look like?
- What options do you have to use open infrastructure as part of your daily activities?
- What is metadata and why is it important for establishing evidence and provenance?
- How can you create and use metadata to make connections?
- How can you increase visibility and trust through the use of persistent identifiers and metadata?
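One small, concrete illustration of how persistent identifiers and metadata connect: a DOI resolves not only to a landing page but, via HTTP content negotiation, to machine-readable metadata. The Python sketch below builds (but does not send) such a request using the real doi.org convention; the DOI shown is a placeholder, not a real identifier:

```python
import urllib.request

def doi_metadata_request(doi):
    """Build (but do not send) a content-negotiation request asking
    doi.org for machine-readable CSL JSON metadata instead of HTML."""
    return urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
    )

# Placeholder DOI for illustration only.
req = doi_metadata_request("10.1234/example")
print(req.full_url)              # https://doi.org/10.1234/example
print(req.get_header("Accept"))
```

Sending this request to a resolvable DOI returns the record's metadata (title, authors, publisher, dates) as JSON, which is one practical way the "connections" discussed in the course are made machine-actionable.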
Audience: Researchers, librarians, faculty/scholars, administrators
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
7:00 – 8:30 AM
Wednesday, August 2
7:00 – 8:30 AM
Thursday, August 3
7:00 – 8:30 AM
E04 – Logic Models: Program Planning and Evaluation for the Global Impact of Open Scholarship
Jennifer Miller
Abstract: Do you need to apply for a grant to fund your program? Do you need to evaluate a program to see if it achieved its intended results? Are the people evaluating your proposal or program especially concerned with impact? A logic model can help you to address all of these questions and more.
A logic model is a visual representation of a theory of change. That is, it shows how you think your program will get results. A logic model provides a structure for the case you are making that your program’s impact will increase wellbeing in society. This deceptively simple tool is the powerhouse of program planning and program evaluation. A logic model helps to identify all necessary resources and activities, measures of outputs and reach for accountable service delivery, and measures of outcomes and broader societal impact.
A logic model is often required when applying for or using grant funding. Even when they are not explicitly required, logic models are helpful to make the case for your program, develop a narrative, plan for evaluation, and describe the program’s success. Participants are especially encouraged to attend with a specific program in mind relevant to their own work. This workshop will be suitable for anyone submitting a grant proposal, evaluating proposed programs, or evaluating program results. Logic models are especially relevant in nonprofit settings and at all levels of government. In the private sector, they can be relevant to placing an emphasis on a triple bottom line of profit, people, and planet.
In this workshop, participants will learn the structure and vocabulary to develop and interpret logic models; practice role playing collaborative logic model development in an interactive whiteboard case study for a global open science program; and use a provided template to develop, present, and exchange feedback on a logic model relevant to their own work. The instructor will be available by email for follow-up questions after the workshop to help participants finalize logic models they start during the workshop.
Audience: Researchers, librarians, administrators
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
8:00 – 9:00 AM
Wednesday, August 2
8:00 – 9:00 AM
Thursday, August 3
8:00 – 9:00 AM
E05 – Responding to Global Challenges: Developing and Evaluating Research & Data Instruction in Low and Middle Income Countries (LMICs)
Armine Lulejian, Tamara Galoyan, Lynn Kysh, Ruzanna Movsisyan
Abstract: With the growing availability of internet connectivity in low and middle income countries (LMICs) and the recent challenges posed by the pandemic, it has become increasingly important to collaborate with colleagues globally. While opportunities for such collaboration are accessible, planning how to go about them is often rushed or overlooked. The purpose of this workshop is to develop an instructional intervention (course, workshop, or module) in response to a research or data challenge of an LMIC.
The workshop is divided into the three main components of such an undertaking.
- First, students will seek information or evidence to identify a research or data challenge in an LMIC, and develop appropriate and measurable learning objectives for an instructional response.
- Next, students will learn how to conduct a learner needs assessment; develop, design, and sketch modules, workshops, or classes; select instructors and collaborators; and select modalities of instruction.
- Finally, students will learn how to differentiate between three main types of mixed-methods design, and apply a mixed-methods design to create a sample evaluation plan for their own program.
At the end of the workshop, students will leave with a toolkit of templates and resources, as well as their own blueprint: a working document of instructional materials for creating an educational offering in LMICs.
Audience: Researchers, librarians, faculty/scholars, administrators, technical support
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
8:00 – 10:00 AM
Wednesday, August 2
8:00 – 10:00 AM
Thursday, August 3
8:00 – 10:00 AM
E06 – Analyzing Your Institution’s Publishing Output
Allison Langham-Putrow, Ana Enriquez
Abstract: Many people—publishers, researchers, university administrators, even others in your library—will try to tell you things about your university’s publishing output. They will state that researchers want their institution to pay a fee for open access publishing. Knowledge about your institution’s publishing output is power. This course will teach you how to analyze your institution’s output at scale. You will be able to find the repositories most used by your institution’s researchers, identify the funders supporting your researchers, and estimate which articles were covered by an APC or a read and publish agreement (still a small percentage at many universities). The instructors will highlight the framing of the Budapest Open Access Initiative 20th Anniversary Recommendations: “OA is not an end in itself, but a means to other ends, above all, to the equity, quality, usability, and sustainability of research.” The instructors will encourage participants to use data analysis to support these ends, not to support APCs and read and publish agreements, which simply move the paywall from the reader to the author.
After completing the course, participants will be able to
- Gain an understanding of their institution’s publishing output, such as number of publications per year, open access status of the publications, major funders of the research, estimates of how much funding has been spent toward article processing charges (APCs), and more.
- Think critically about institutional publishing data to make sustainable and values-driven scholarly communications decisions.
This course will build on open infrastructure, including Unpaywall and OpenRefine. We will provide examples of how to do analyses in both OpenRefine and Microsoft Excel.
The course will pair readings about equity in scholarly communications with data analysis. Participants will learn how to build a dataset. We will provide lessons about downloading data from different sources: Web of Science, Scopus, and The Lens. (Web of Science and Scopus are subscription databases; The Lens is freely available.) Participants may also have the opportunity to explore OpenAlex, a new data source.
Next, participants will learn data analysis methods that can help answer questions such as:
- Should you cancel or renew a subscription?
- Who is funding your institution’s researchers?
- Are your institution’s authors using an institutional repository?
- How can you push back against calls for your institution to accept APC-based open access publishing offers?
Libraries must be prepared to negotiate agreements that align with their values. By learning to do these analyses for themselves, participants will be better prepared to enter into negotiations with a publisher. The expertise developed through this course can make the uneven playing field of library-publisher negotiations slightly more even.
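The questions above mostly reduce to tallying a publication dataset along one or two dimensions. As a minimal sketch of that kind of analysis (the column names `year` and `oa_status` are hypothetical; adapt them to your actual export from Unpaywall or a citation database), using only the Python standard library:

```python
import csv
import io
from collections import Counter

def oa_status_by_year(csv_text):
    """Tally open access status per publication year.

    Expects columns named 'year' and 'oa_status' (hypothetical names;
    rename to match your real institutional export).
    """
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts[(row["year"], row["oa_status"])] += 1
    return counts

# Tiny illustrative dataset standing in for a real institutional export.
sample = """doi,year,oa_status
10.1234/a,2021,gold
10.1234/b,2021,closed
10.1234/c,2022,green
10.1234/d,2022,gold
"""

tallies = oa_status_by_year(sample)
print(tallies[("2021", "gold")])  # 1
```

The same grouping could be done with a pivot table in Excel or a facet in OpenRefine, the tools the course actually teaches; the point is only that the underlying operation is a simple group-and-count.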
Course materials are already openly available at z.umn.edu/AIPO. This will be a facilitated course taught by the authors.
Audience: Librarians in all disciplines
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
8:00 – 11:00 AM
Wednesday, August 2
8:00 – 11:00 AM
Thursday, August 3
8:00 – 11:00 AM
E07 – The Role of AI Ethics in Scientific Publications on Open Science Platforms: Examining the Use of AI in Global Publication Governance Models for Impactful Knowledge Sharing and Social Benefit
Course Chairs: Francis Crawley, Perihan Elif Ekmekci, and Claudia Bauzer Medeiros
Instructors: André Carlos Ponce de Leon Ferreira de Carvalho, Ana Persic, Alexander Bernier, Zhang Lili, Valery Sokolchik, Natalie Meyers, Rita S. Sitorus, and fellows
Abstract:
This online course of three 3-hour meetings over three days is designed to introduce a global audience to the role of artificial intelligence (AI) ethics in the publication of digital objects on open science platforms. The course has been created by a faculty of international experts in online publications, publication ethics, AI ethics, data policy, digital publication ontologies, and open science platforms. A number of organisations have contributed to the development of the topics, syllabus, and course materials, including the EOSC-Future/RDA Artificial Intelligence & Data Visitation Working Group (AIDV-WG), CODATA, and the International Science Council. A guiding document is UNESCO’s Recommendation on Open Science as well as its Recommendation on the Ethics of Artificial Intelligence.
The aim of the course is to demonstrate the value of AI ethics in contributing to the governance of reliable and trustworthy open science frameworks for knowledge creation and citizen benefit in our emerging digital societies. The course examines the ethical, regulatory, and policy implications arising from the development of AI in the publication of science in two areas:
- The publication of algorithms, machine learning (ML) software, and other AI-related tools used for the advancement and development of science; and
- The use of AI and ML in scientific publications as it informs research and writing while also contributing to and/or challenging the integrity, robustness, and accountability of scientific publications and communications.
Audience: Researchers, librarians, faculty/scholars, publishers, technical support, AI & data ethicists, policymakers, AI & data scientists, AI & data governance experts
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
8:00 – 11:00 AM
Wednesday, August 2
8:00 – 11:00 AM
Thursday, August 3
8:00 – 11:00 AM
E08 – Publishing from Collections Using Linked Open Data Sources and Computational Publishing Pipelines
Simon Worthington, Simon Bowie, Janneke Adema
Abstract: This is a hands-on class for participants with no prior experience of computational publishing (Jupyter Notebooks) or linked open data (Wikidata and Wikibase). The class has two demonstration use cases for the auto-creation of catalogue publications for exhibitions or publication listings, built from multiple linked open data (LOD) sources and published in multiple formats: web, PDF, ebook, etc.
- Catalogue information and media from the Corpus of Baroque Ceiling Paintings in Germany (CbDD).
- Catalogue information from the Thoth book publication metadata platform. Thoth is an open-source platform for creating and distributing single-source book metadata.
The example workflows have been put together by researchers from the two research consortia NFDI4Culture – National Research Data Infrastructure Germany, and COPIM (Community-led Open Publication Infrastructures for Monographs) in consultation with the publisher Open Book Publishers, Cambridge (UK).
Audience: Researchers, librarians, faculty/scholars, publishers in humanities and climate science
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
9:00 – 10:30 AM
Wednesday, August 2
9:00 – 10:30 AM
Thursday, August 3
9:00 – 10:30 AM
E09 – Understanding, Benchmarking, and Tracking Equity and Inclusion in Open Access and Open Science
Micah Altman
Abstract: Who participates in open-access publications and open-science research? This course – based on ongoing IMLS and Mellon Foundation supported research and education projects – is for researchers, practitioners and administrators wishing to understand, interpret, analyze, or measure participation in open scholarly activities.
Over three sessions, we will examine quantitative measures of open science and open access outputs; measures of international diversity; and measures of gender bias. Each session will include a discussion of core concepts and measures, key summary reports and databases, and quality and reliability measures.
Each session will be divided into three parts so that attendees can choose to engage the subject at the depth appropriate to their needs. The first part of each session – for all attendees – will cover core concepts and summary sources. This part is sufficient for those who wish to locate, understand and interpret existing summary reports and interactive websites to identify benchmarks and trends.
The second part of each session will focus on hands-on analysis exercises using interactive R notebooks to analyze participation data retrieved from open APIs. This part will be of interest to those who want to conduct their own data analyses. The third part of the course is intended for those planning to actively collect new data within their own institutions or projects, and will focus on specific data-collection scenarios based on a pre-course survey of enrolled participants.
Audience: Researchers, librarians, faculty/scholars, administrators
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
9:00 AM – 12:00 PM
Wednesday, August 2
9:00 AM – 12:00 PM
Thursday, August 3
9:00 AM – 12:00 PM
L10 – Evaluating Open Access Journals: Moving from Provocative to Practical in Characterizing Journal Practices
Karen Gutzman, Annie Wescott
Abstract: In today’s scholarly publishing ecosystem, researchers, librarians, academic institutions, funders, and even publishers have difficulty in identifying and tracking journals that engage in practices ranging from fraudulent and deceptive to questionable and unethical.
In this course, we will define these specious practices, avoiding the binary “predatory” and “legitimate” classification by exploring the nuances of journal practices and how these practices have emerged as a result of the current academic publishing model. We will investigate tools for evaluating journal quality and discuss relevant case studies that will provide helpful context. Finally, we will review recommendations for raising awareness and promoting good practices in scholarly communications.
This course aims to prepare librarians and other support personnel to offer training and support that helps researchers understand the norms of open access publishing and avoid deceptive or low-quality journals. We will cover useful tools for mitigating the likelihood of publishing in these journals and discuss steps to take to assist researchers who believe they may have published in such a journal.
This course will take place over three hours, with each hour containing a mixture of lecture and discussion based on a case study or investigation of a tool for evaluating journal quality. We encourage students to engage in discussions and share their own experiences.
Audience: Researchers, librarians, faculty/scholars, administrators
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
4:00 – 5:00 PM
Wednesday, August 2
4:00 – 5:00 PM
Thursday, August 3
4:00 – 5:00 PM
L11 – Applying Strategic Doing, an Agile Strategy Discipline, to Build Collaborations Across Diverse Teams
Jeffrey Agnoli, Meris Mandernach Longmeier
Abstract: Strategic Doing™ assists teams in answering four basic strategic questions using 10 simple rules. This method leverages a network approach to build collaboration, enhance trust, and produce measurable outcomes. Presenters will share how they apply these methods to build research and creative expression initiatives that enable strategic planning, ideation, and operations management.
Participants will learn how to answer these four basic questions to develop a compelling strategy:
- What could we do?
- What should we do?
- What will we do?
- What is our action plan?
These concepts map to “the science of team science” competencies, including but not limited to: how to promote psychological safety and transparency; democratic prioritizing; clarifying roles and responsibilities; and supporting more productive teams.
Audience: Researchers, librarians, faculty/scholars, administrators, technical support staff, interdisciplinary researchers, and those interested in supporting these types of teams.
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
4:00 – 5:30 PM
Wednesday, August 2
4:00 – 5:30 PM
Thursday, August 3
4:00 – 5:30 PM
L12 – Catalyzing Team Science: How to Forge an Interdisciplinary Team to Attack a Complex Research Problem
Ronald Margolis
Abstract: Today’s research problems, particularly in the life and natural sciences, are characterized by the need for increasingly complex and technically challenging approaches. Often it takes a multidisciplinary approach to first understand and then solve such problems. To do so, teams of investigators may find they need to come together to bring distinct and sometimes novel expertise to bear on the problem.
The challenge is to assemble teams with members from all levels of the academy, from trainees (undergraduate and graduate students) and junior faculty to the most senior members of a department or discipline. Melding these differing levels of experience, expertise, working knowledge, and creativity into a cohesive team is a task for organizers, who must manage disparate personalities and career paths. Understanding the needs of junior versus senior members of a team, how to allocate credit, and how to encourage openness to ideas from all contributors requires both skill and tact. The willingness to sometimes push the envelope in search of answers may bring both tension and reward. Balancing individual career needs against emerging changes in scholarly communication further increases the need to understand challenges and opportunities.
Ideally, participation in a collaborative team rewards both participants and the broader field, leading to outcomes where the whole is greater than the sum of the parts. Adopting a common goal is not easy, but the results can lead to expanded opportunities for team members as they uncover new ideas and advance the field.
This course will explore the concept of a collaborative team approach to solving a complex problem and help participants see how they might fit into and contribute to a team science approach to a complex, unsolved problem.
Activities will include defining the components needed to assemble a team, identifying the expertise and disciplines needed to address the problem as well as how to formulate rules for organizing and managing the team. The final steps will include how the team can report out its findings and reward participants.
Audience: Researchers, librarians, faculty/scholars, administrators, technical support
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
4:00 – 5:30 PM
Wednesday, August 2
4:00 – 5:30 PM
Thursday, August 3
4:00 – 5:30 PM
L13 – Using the ORCID, Sherpa Romeo, and Unpaywall APIs in R to Harvest Institutional Data
Clarke Iakovakis, Kay Bjornen, Brandon Katzir, Megan Macken
Abstract: The objectives of this course are to obtain a set of ORCID iDs for people affiliated with your institution, harvest a list of DOIs for publications associated with these iDs, and gather open access information for the articles using Sherpa Romeo and Unpaywall.
Students will work with a set of pre-written scripts in R, customizing them for their institutions to access the APIs for ORCID, Sherpa Romeo, and Unpaywall, and bringing it all together into a manageable data file.
While some experience using R will be helpful, it is not required. Although the basics of using R and understanding the code will be reviewed, the emphasis of the course will be on running the scripts and gathering and interpreting the data. In other words, this course is focused not on learning R, but rather on obtaining a dataset of publications based on institutional affiliation and open access information on those publications. It is inspired by a course taught previously at FSCI, available at https://osf.io/vpgbt/. The course will conclude with a discussion of using this data to develop outreach methods to inform authors of their right to deposit author manuscripts.
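The harvesting workflow the course's R scripts automate can be sketched in any language. The Python sketch below (not the course's own scripts) only constructs request URLs for the real ORCID public API search endpoint and Unpaywall REST API; the institution name and email address are placeholders you must replace, and no network call is made here:

```python
from urllib.parse import quote

def orcid_affiliation_search_url(org_name, rows=50):
    """ORCID public API v3.0 search for iDs by affiliation organization
    name. Pagination and authentication details are omitted for brevity."""
    query = quote(f'affiliation-org-name:"{org_name}"')
    return f"https://pub.orcid.org/v3.0/search/?q={query}&rows={rows}"

def unpaywall_url(doi, email):
    """Unpaywall REST API lookup for one DOI; the API requires an
    email query parameter identifying the caller."""
    return f"https://api.unpaywall.org/v2/{doi}?email={email}"

# Placeholder institution and email for illustration only.
print(orcid_affiliation_search_url("Example State University"))
print(unpaywall_url("10.1234/example", "you@example.edu"))
```

Fetching each URL and parsing the JSON responses (the step omitted above) yields the set of iDs, then the DOIs and their open access status, which the course combines into a single data file.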
Audience: Researchers, librarians, faculty/scholars, publishers, administrators, technical support
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
4:00 – 7:00 PM
Wednesday, August 2
4:00 – 7:00 PM
Thursday, August 3
4:00 – 7:00 PM
L14 – The FAIR Principles in the Scholarly Communications Lifecycle
Matthias Liffers, Kathryn Unsworth
Abstract: This course will focus on FAIR research data management and stewardship practices. It will provide an understanding of FAIR (findable, accessible, interoperable, and reusable) data and how it fits into scholarly communication workflows. Participants will learn about the FAIR Data Principles and how they can be implemented with regard for Indigenous data sovereignty under the CARE Principles.
Good data stewardship is the cornerstone of knowledge, discovery, and innovation in research. The FAIR Data Principles address data creators, stewards, software engineers, publishers, and others to promote maximum use of research data. In research libraries, the principles can be used as a framework for fostering and extending research data services.
This course will provide an overview of the FAIR Data Principles and the drivers behind their development by a broad community of international stakeholders. We will explore a range of topics related to implementing FAIR principles, including how and where data can be described, stored, and made discoverable (e.g., data repositories, metadata); methods for identifying and citing data; interoperability of (meta)data; and tips for enabling data reuse (e.g., data licensing) with best-practice examples. Along the way, we will get hands-on with data and tools through self-paced exercises. There will be opportunities for participants to learn from each other and to develop skills in data management and expertise in making data FAIR.
The course will conclude with a look at applying the FAIR principles beyond data, such as vocabularies, platforms, software, and training materials.
Audience: Researchers, librarians, faculty/scholars, publishers, administrators, technical support staff, and research infrastructure project teams.
LIVE ZOOM SESSION SCHEDULE (All times Pacific UTC-7)
Tuesday, August 1
5:00 – 6:00 PM
Wednesday, August 2
5:00 – 6:00 PM
Thursday, August 3
5:00 – 6:00 PM