The Future of Research Communications and e-Scholarship


FSCI 2024

Increasing Transparency, Integrity, and Trust in Research

FORCE11 Scholarly Communication Institute in partnership with the UCLA Library

FSCI 2024 Courses & Abstracts

Modified: Fri, 24 May 2024 17:04:49 +0000
Published: 24 May 2024

Each course number is prefixed with V (virtual) or C (campus).

V courses are virtual, held the week of July 22nd through 26th

  • Most courses are conducted on July 23rd, 24th, and 25th
  • Please select one early and one late course – most of them conflict
  • A few courses have a fourth session on July 26th, which will be specified below

C courses are on campus, held July 29th through the 31st

  • Please select only one early and one late course – all of them conflict

Virtual Courses

Course Number | Course Title | Designated Timeslot | Zoom Length | Instructors
V01 | Science Is a Social Process: Data Sharing Practices to Drive Scientific Discovery and Research Integrity | Early: 7AM Pacific (UTC-7) | 1.5 hours | Sarah Lippincott; Jess Herzog
V02 | Bibliometric Indicators and Open Science: A Hands-On Primer | Early: 7AM Pacific (UTC-7) | 2 hours | Alex Wade; Mike Taylor; Kathryn Weber-Boer
V03 | Research Information Management and Expertise Systems: Implementation, Administration, Staffing, and Utilization | Early: 8AM Pacific (UTC-7) | 2 hours | Clarke Iakovakis; Jeffrey Agnoli; Rebecca Bryant; others pending
V04 | Understanding, Benchmarking, and Tracking Equity and Inclusion in Open Access and Open Science | Early: 8AM Pacific (UTC-7) | 3 hours | Micah Altman
V05 | Analyzing Your Institution’s Publishing Output | Early: 8AM Pacific (UTC-7) Tuesday through Thursday; 10AM Friday so participants can attend the plenary | 3 hours | Allison Langham-Putrow; Ana Enriquez
V06 | An Introduction to Querying the APIs of Open Metadata Services | Early: 9AM Pacific (UTC-7) | 1 hour | Amanda French; Luis Montilla; Kelly Stathis
V07 | Correction of the Literature: Post-Publication Scrutiny for Ensuring Scientific Record Integrity | Early: 9AM Pacific (UTC-7), plus a Friday session (Tue 7/23 9–11:30AM; Wed 7/24 9–11AM; Thu 7/25 9–11AM; Fri 7/26 10–11:30AM) | 1.5–2.5 hours per session | Ximena Illarramendi; Carmen Penido; Mariana Ribeiro
V08 | Good Governance for AI in Scientific Publications: Developing Policy for Reliability, Ethics, and Integrity | Early: 9AM Pacific (UTC-7), unconfirmed and subject to change | 3 hours | Francis Crawley
V09 | Evaluating Open Access Journals: Moving from Provocative to Practical in Characterizing Journal Practices | Late: 4PM Pacific (UTC-7) | 1.5 hours | Karen Gutzman; Annie Wescott
V10 | Applying Strategic Doing, an Agile Strategy Discipline, to Build Collaborations Across Diverse Teams | Late: 4PM Pacific (UTC-7) | 1.5 hours | Jeffrey Agnoli; Meris Mandernach Longmeier
V11 | Using the ORCID, Sherpa Romeo, and Unpaywall APIs in R to Harvest Institutional Data | Late: 4PM Pacific (UTC-7) | 3 hours | Dani Kirsch; Brandon Katzir
V12 | Empowering Future Trainers in Research Integrity: A Course Integrating Theory and Hands-On Exercises in Tools, Stakeholder Engagement, and Science Outreach | Late: 4PM Pacific (UTC-7) | 3 hours | Eleonora Colangelo; Sidney Engelbrecht; Christopher Magor
V13 | No More Dream: Making Transparent Music Scholarly Communications a Reality | Late: 4PM Pacific (UTC-7) | 1.5 hours | Kathleen DeLaurenti; Matthew Vest
V14 | Preserving the Digital Future: Advanced Strategies for Web Archiving and Combating Reference Rot | Late: 5PM Pacific (UTC-7) | 2.5 hours | Rosario Rogel-Salazar; Alan Colin-Arce
V15 | Managing Data for Research Transparency and Reproducibility: A Collaborative Class for Basic and Advanced Practitioners | Late: 5PM Pacific (UTC-7) | 1.5 hours | John Borghi
V16 | Assessing Open Access Journals Using DOAJ Criteria: An Interactive Course For All Levels | Late: 5PM Pacific (UTC-7) | 2 hours | Ivonne Lujano; Muhammad Imtiaz Subhani

Campus Courses

Course Number | Course Title | Instructors | Course Time
C01 | The Science of Collaboration: Creating synergies to solve and report solutions to complex research problems | Ronald Margolis | Early
C02 | PREreview Open Reviewers Workshop: A Hands-On Ethical Peer Review Training Program For Researchers of All Career Levels | Daniela Saderi; Vanessa Fairhurst | Early
C03 | Making Research More Transparent with Quarto and RStudio | Bella Ratmelia; Dong Danping | Early
C04 | Unlocking Knowledge: Exploring Open Research Infrastructure and FAIR Principles | Gabriela Mejias; Xiaoli Chen | Early
C05 | Forensic Scientometrics: Safeguarding Scientific Integrity and Trust in Research through Forensic Investigations | Leslie McIntosh; Suze Kundu | Early
C06 | Analyzing Public Access Policy to Guide Scholarly Communications Outreach | Jonathan Grunert; Nina Exner | Early
C07 | Getting Attention and Bringing Others on Board: Applying Basics in Marketing and Communications to Advance Open Research | Jennifer Gibson | Late
C08 | Good Governance for AI in Scientific Publications: Developing Policy for Reliability, Ethics, and Integrity | Francis Crawley | Late
C09 | Understanding, Benchmarking, and Tracking Equity and Inclusion in Open Access and Open Science | Micah Altman | Late
C10 | Evaluating Open Access Journals: Moving from Provocative to Practical in Characterizing Journal Practices | Karen Gutzman | Late
C11 | The Butterfly Effect – Understanding the Big Picture Research Ecosystem to Help Open Practice | Danny Kingsley | Late
C12 | Learn to bring an “infinite game” mindset to your daily work to build trustful, generous, effective research collaborations | Bruce Caron | Late
C13 | Navigating the Open Access Book Landscape | Michael Ladisch; Jennifer Chan | Late

V01: Science Is a Social Process: Data Sharing Practices to Drive Scientific Discovery and Research Integrity

Early: 7AM Pacific (UTC-7), 1.5 hours

Instructors:

  • Sarah Lippincott 
  • Jess Herzog 

Abstract:

New policies and guidance from federal agencies and funders reflect a growing push for greater collaboration, transparency, and accountability in scientific research. Achieving the full benefits of open science (including enhanced reproducibility, equitable access to knowledge, and accelerated discovery) requires the open sharing of data underpinning published research. The value of openly shared data depends entirely on its reusability. Reusable open data is a living asset, a part of a scholarly conversation, that exists to be interrogated, validated, learned from, and built upon. It also remains a rare asset.

The pathway to increasing data reusability begins with moving a few people to do something different: individuals win over other individuals, growing a culture that recognizes and values a new behavior.

This session will provide an overview of the evolving policy landscape and its impact on publishing organizations and libraries and offer insight on how community stakeholders can support data sharing best practices and set the stage for productive data reuse. 

It will cover relevant policy developments, including their rationale and implications, as well as practical approaches to supporting data sharing that align with the vision of more inclusive, transparent, and collaborative research practices.

The course will address several interrelated questions: 

  • Why is open data sharing important to research funders? 
  • How does data sharing contribute to scientific discovery and research integrity? 
  • How do we incentivize data sharing among researchers? 
  • What specific practices contribute to a dataset’s reusability? 

Participants will work through collaborative exercises to answer these questions and will gain practical knowledge they can apply to their own work in libraries and publishing.

V02: Bibliometric Indicators and Open Science: A Hands-On Primer

Early: 7AM Pacific (UTC-7), 2 hours

Instructors:

  • Alex Wade 
  • Mike Taylor 
  • Kathryn Weber-Boer 

Abstract:

This course on Open Science and Bibliometric Indicators is designed to provide participants with a comprehensive understanding of the role of bibliometric infrastructure in creating evaluative bibliometrics. The workshop will cover the construction, uses, and consequences of bibliometric indicators, as well as the digital infrastructure for evaluative bibliometrics. Participants will learn how to combine openly available datasets, their own institutional data, and Dimensions and Altmetric data on Google BigQuery to explore bibliometric indicators and evaluate academic performance.

The course will be divided into three parts. In the first part, participants will learn about the landscape of bibliometric infrastructure and the various databases and devices that have emerged since the beginning of the 2000s. The second part will focus on the entanglement of technology, users, and legal/ethical issues (such as GDPR and professional impact) and how these shape how evaluative bibliometrics is understood and practiced. The third part will provide hands-on training on how to explore and combine multiple datasets on Google BigQuery to compute bibliometric indicators and evaluate academic performance.

The key objectives of the workshop are to provide participants with a comprehensive understanding of the role of bibliometric infrastructure in practicing evaluative bibliometrics, to teach participants how to use Digital Science data on Google BigQuery to explore bibliometric indicators and evaluate academic performance, and to provide participants with hands-on training on how to use bibliometric analytics suites, visualization programs, and current research information systems (CRIS).

V03: Research Information Management and Expertise Systems: Implementation, Administration, Staffing, and Utilization

Early:  8AM Pacific (UTC-7), 2 hours

Instructors:

  • Clarke Iakovakis
  • Jeffrey Agnoli 
  • Rebecca Bryant; others pending

Abstract:

Academic and research institutions across the world have increasingly adopted and implemented Expertise and Research Information Management (RIM) Systems to “support the transparent aggregation, curation, and utilization of data about institutional activities” (OCLC, “Research Information Management in the United States”). These systems may also be referred to as Faculty Information Systems, Current Research Information Systems, Faculty Activity Reporting Systems, and other terms. 

The above referenced OCLC report found that the RIM System landscape in the United States is characterized by decentralization and duplication (i.e. a single institution may have multiple systems performing RIM functions). Further, there is widespread confusion about what RIM Systems are and can do, and although RIM Systems often do support multiple use cases, they are often deployed serving only one use case. The absence of mandated national reporting requirements in the United States has meant that adoption is driven by other needs, such as showcasing expertise, faculty activity reporting, streamlining open access deposit process, supporting accreditation, and other uses. 

The purpose of this session is to provide an opportunity for a deep level of engagement with the many challenges and opportunities that RIM Systems present. It is intended for current RIM Systems practitioners, people exploring the adoption of a system, software developers working in this space, and all people who interface with these systems—including deans of research, librarians, scholarly communications and bibliometrics researchers, grants and sponsored programs staff, brand management, and more.

The instructors will provide overviews of the topics, as well as anecdotes about their own experiences as RIM System administrators. Discussion prompts will encourage attendees to share their own experiences if they are current administrators, or to ask questions if they are exploring. Depending on the number of attendees, this will take place either in breakout rooms or in a single room. Furthermore, guest speakers will be included on each day of the session in order to broaden the conversation and include people with a variety of uses and perspectives.

The session will be led by the current Chair and Chair Elect of the Expert Finder System (EFS) Executive Committee. The EFS International Forum has provided webinars and professional conferences for several years, and is in the process of developing into a more formal professional organization. 

V04: Understanding, Benchmarking, and Tracking Equity and Inclusion in Open Access and Open Science

Early: 8AM Pacific (UTC-7), 3 hours

Instructor:

Micah Altman

Abstract:

Who participates in open-access publications and open-science research? This course — based on ongoing IMLS and Mellon Foundation supported research and education projects — is for researchers, practitioners and administrators wishing to understand, interpret, analyze, or measure participation in open scholarly activities. 

Over three sessions, we will examine quantitative measures of open science and open access outputs;  measures of international diversity; and measures of gender bias. Each session will include a discussion of core concepts and measures, key summary reports and databases, and quality and reliability measures.

Each session will be divided into three parts — so that attendees can choose to engage the subject at the depth appropriate to their needs. The first part of each session — for all attendees — will cover core concepts and summary sources. This part is sufficient for those who wish to locate, understand, and interpret existing summary reports and interactive websites to identify benchmarks and trends. The second hour of each session will focus on hands-on analysis exercises using interactive R notebooks to analyze participation data retrieved from open APIs. This part will be of interest to those who want to conduct their own data analyses. The third part of the course is intended for those planning to actively collect new data within their own institutions or projects, and will focus on specific data-collection scenarios — based on a pre-course survey of enrolled participants.

V05: Analyzing Your Institution’s Publishing Output

Early: 8AM Pacific (UTC-7), 3 hours Tuesday through Friday, with time shifted on Friday so that people can attend the plenary.

Instructors:

  • Allison Langham-Putrow 
  • Ana Enriquez 

Abstract:

Understanding your institution’s publishing output is crucial to scholarly communications work. This class will equip participants to analyze article publishing by authors at an institution. Note: This course involves work in advance of the first session and homework during the course.

After completing the course, participants will be able to

  • Understand their institution’s publishing output: number of publications per year, open access status of the publications, major funders of the research, estimates of how much funding might be spent on article processing charges (APCs), and more.
  • Think critically about institutional publishing data to make sustainable and values-driven scholarly communications decisions.

This course will build on open infrastructure, including Unpaywall and OpenRefine. We will provide examples of how to do analyses in both OpenRefine and Microsoft Excel. 
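As a minimal illustration of the kind of summary the course works toward (not course material — the field names and records below are hypothetical, and the sketch is in Python rather than OpenRefine or Excel), tallying publications per year and by open access status might look like:

```python
from collections import Counter

# Hypothetical records, as might be harvested from Web of Science, Scopus,
# or The Lens and enriched with Unpaywall open access status.
publications = [
    {"doi": "10.1234/abc.1", "year": 2022, "oa_status": "gold"},
    {"doi": "10.1234/abc.2", "year": 2022, "oa_status": "closed"},
    {"doi": "10.1234/abc.3", "year": 2023, "oa_status": "green"},
]

def summarize(pubs):
    """Count publications per year and per open access status."""
    by_year = Counter(p["year"] for p in pubs)
    by_oa = Counter(p["oa_status"] for p in pubs)
    return by_year, by_oa

by_year, by_oa = summarize(publications)
print(dict(by_year))  # publications per year
print(dict(by_oa))    # open access breakdown
```

The same grouping-and-counting step is what OpenRefine facets or an Excel pivot table perform on a real institutional dataset.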

The course will consist of two parts. In the first, participants will learn how to build a dataset. We will provide lessons about downloading data from different sources: Web of Science, Scopus, and The Lens. (Web of Science and Scopus are subscription databases; The Lens is freely available.) 

In the second part of the course, participants will learn data analysis methods that can help answer questions such as:

  • Should you cancel or renew a subscription?
  • Who is funding your institution’s researchers?
  • Are your institution’s authors using an institutional repository?
  • Should you accept a publisher’s open access publishing offer?

Library agreements with publishers are at a crucial turning point, as they increasingly include OA publishing. By learning to do these analyses for themselves, participants will be better prepared to enter into negotiations with a publisher. The expertise developed through this course can make the uneven playing field of library-publisher negotiations slightly more even.

Course materials will be openly available. This will be a facilitated course taught by the authors.

V06: An Introduction to Querying the APIs of Open Metadata Services

Early: 9AM Pacific (UTC-7), 1 hour

Instructors:

  • Amanda French 
  • Luis Montilla 
  • Kelly Stathis 

Abstract:

Browser-based web applications such as the COKI Research Funding Dashboard, OA.Report, OpenAlex, CHORUS, The Lens, Dimensions, Web of Science, and Scopus all rely on API queries of open metadata services such as Crossref, DataCite, and ROR to collect information about connections between institutions, funders, and research outputs. This workshop is a behind-the-scenes look at such queries that will enable participants to understand “how the sausage is made” and to ask their own questions about the connections between research, research organizations, funders, and grants. Participants will learn about the Research Organization Registry (ROR), the Crossref Open Funder Registry, Crossref DOIs, DataCite DOIs, and other elements that make these queries possible, and will undertake several hands-on exercises to retrieve data according to their own parameters. The session will be suitable for beginners who are not familiar with API queries and want to learn more, especially scholarly communication librarians, funding organization staffers, and research administrators.
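For orientation, here is a sketch of what such queries look like against the public Crossref and ROR REST APIs (illustrative only: the endpoints are the real public ones, but the example DOI and sample response are placeholders, and the course's own exercises may differ):

```python
from urllib.parse import quote, urlencode

# Build request URLs for two open metadata services. Fetching these URLs
# (e.g. with urllib.request) returns JSON describing works and organizations.

def crossref_work_url(doi):
    """URL for a single work's metadata in the Crossref REST API."""
    return f"https://api.crossref.org/works/{quote(doi)}"

def ror_search_url(org_name):
    """URL to search the Research Organization Registry by organization name."""
    return "https://api.ror.org/organizations?" + urlencode({"query": org_name})

def count_results(ror_response):
    """Pull the result count out of a parsed ROR search response."""
    return ror_response["number_of_results"]

print(crossref_work_url("10.5555/12345678"))  # example DOI, not a real work
print(ror_search_url("University of California, Los Angeles"))

# A trimmed-down sample of what a ROR search response contains:
sample = {"number_of_results": 1, "items": [{"id": "https://ror.org/00example"}]}
print(count_results(sample))
```

Both APIs are open and require no authentication, which is exactly what makes them suitable for the hands-on exercises described above.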

V07: Correction of the Literature: Post-Publication Scrutiny for Ensuring Scientific Record Integrity

Early: 9AM Pacific (UTC-7); sessions run 1.5–2.5 hours, including one 1.5-hour session on Friday

Exact times:

  • Tues 7/23 9AM – 11:30 AM
  • Wed 7/24 9AM – 11AM
  • Thurs 7/25 9AM – 11AM
  • Fri 7/26 10AM – 11:30 AM 

Instructors:

  • Ximena Illarramendi 
  • Carmen Penido
  • Mariana Ribeiro 

Abstract:

Research integrity serves as one of the cornerstones of scientific progress, ensuring that knowledge derived from scholarly endeavours remains accurate, transparent, and ethically conducted. Over the past two decades, there has been a notable surge in retractions across various disciplines and due to different reasons, exacerbated by instances observed during the Coronavirus Disease 2019 (COVID-19) pandemic. This trend has raised concerns regarding the efficacy of the current publishing system and processes. Regrettably, retracted papers continue to be cited even post-retraction, and the repercussions of scientific misconduct may persist for years. The ethical implications of delayed retractions, coupled with the potential harm caused by the perpetuation of erroneous information, are matters of significant concern. 

The digital age facilitates the rapid dissemination of both sound and flawed or misleading publications, an issue not always addressed in a timely fashion by the scientific community. In this context, the revision and correction of a research paper after publication remains a challenge for all. This course aims to provide a thorough analysis of the intricate relationship between research integrity and the mechanisms for correction of the scientific literature. By means of group discussions, it will address the causes, consequences, and potential mitigation strategies associated with misconduct, questionable research practices (QRP), and honest error. Through a hands-on practical approach, participants, including researchers, graduate students, scientific reviewers, editors, and scholars/faculty members, will examine retraction cases discussed in Retraction Watch, learn to identify mistakes, and differentiate between QRP and honest errors. The discussions will be guided by the principles of research integrity and supported by the Committee on Publication Ethics (COPE) guidelines, the Council of Science Editors guidelines, and the Singapore Statement, among other established global guidelines.

Throughout the course, participants will engage in discussions and debates within small groups, exploring the roles of publication pressure, academic competition, and inadequate peer review processes in compromising research integrity. Moreover, this course is set to provide a new perspective on retractions, aligned with recent discussions in the literature towards a “more positive light”, understanding that the mechanisms for the correction of the scientific literature are an important way to maintain the reliability of the research record. Towards the conclusion, each group will present a set of guidelines and/or best practices aimed at strengthening research integrity and streamlining the retraction process within their respective institutions. These recommendations will contribute to the ongoing discourse on improving the current scientific publication process. It will be emphasized that the correction of scientific publications is an integral aspect of the scientific process, and active participation from all scientists, supported by institutions, is crucial. Research integrity serves as the bedrock for formulating proactive measures that safeguard the credibility and reliability of scientific knowledge, fostering trust in the scientific community. 

Course outline:

Course content and activities: This hands-on practical course uses cases of retraction of flawed articles, as a mechanism of scientific literature correction, to immerse participants in a four-day study of research integrity. Retraction cases available in the literature and discussed in Retraction Watch will be examined in small groups and plenary sessions. As a final assignment, each group will be encouraged to produce a set of guidelines or best practices that could serve as recommendations to foster scientific record integrity.

Course content: principles of research integrity; research misconduct (falsification, fabrication, and plagiarism – FFP); questionable research practices (QRP) and honest errors; predatory journals and publishers; and post-publication scrutiny.

Duration: four days, with sessions of different lengths each day, between 90 and 150 minutes.

Anticipated assignments: Advance reading: Committee on Publication Ethics (COPE) retraction guidelines – https://doi.org/10.24318/cope.2019.1.4. Case studies – a summary of the three cases selected on FFP, QRP, and honest errors will be provided to prepare for discussions prior to the course.

V08: Good Governance for AI in Scientific Publications: Developing Policy for Reliability, Ethics, and Integrity

Early: 9AM Pacific (UTC-7), 3 hours – Unconfirmed and subject to change

Instructors:

  • Mr. Francis P. Crawley, CODATA International Data Policy Committee (IDPC), Leuven, Belgium
  • Dr. Lili Zhang, Computer Network Information Center, Chinese Academy of Sciences, Beijing, China
  • Dr. Gitanjali Yadav, Group Leader, National Institute of Plant Genomic Research (NIPGR), New Delhi, India
  • Professor Perihan Elif Ekmekci, TOBB University, Ankara, Turkey
  • Dr. Chiedozie Ike, Irrua Specialist Teaching Hospital & Ambrose Alli University, Edo State, Nigeria
  • Professor Mara de Souza Freitas, Director, Institute of Bioethics, Universidade Católica Portuguesa (Catholic University of Portugal), Lisbon, Portugal
  • Professor Natalie Meyers, Professor of the Practice, Lucy Family Institute for Data & Society, University of Notre Dame, Indiana, United States
  • Professor Kris Dierickx, The Center for Bioethics & Law, Faculty of Medicine, KU Leuven, Belgium

Abstract:

This course is designed to introduce a global audience to the challenges of developing governance frameworks for artificial intelligence (AI) in scientific publications, with a particular focus on open science platforms. The course has been created by a faculty of international experts in scientific publications, publication policy & ethics, data and AI ethics and integrity, data and AI policy, and open science platforms. Several organizations have contributed to the development of the topics, syllabus, and course materials, including the CODATA International Data Policy Committee (IDPC), the EOSC-Future/RDA Artificial Intelligence & Data Visitation Working Group (AIDV-WG), and CoARA ERIP: Ethics and Research Integrity Policy in Responsible Research Assessment for Data and Artificial Intelligence. We also work with a number of publishers, universities, and science organizations in developing the course content and materials. Guiding documents include UNESCO’s Recommendation on Open Science as well as its Recommendation on the Ethics of Artificial Intelligence.

This course offers a global perspective on developing effective governance policies for AI in scientific research, with a focus on appreciating not only a broad, international context, but also examining regional and local contexts for science publications. It aims to foster an understanding of how international standards can be created while considering regional and national differences in approaches to ethics, reliability, and integrity in AI practices in scientific publications.

The course examines the challenges AI brings to the publication of science, whether in its contribution to scientific outcomes or in the interpreting, reporting, and communicating of science. During a period when scientists, publishers, and policymakers are examining the role and governance of AI in scientific publications, this course examines the role of governance in establishing and promoting reliable and trustworthy open science frameworks for knowledge creation and citizen benefit in our emerging digital societies.

The course discusses the ethical, regulatory, and policy implications arising from the development of AI in the publication of science in the following areas:

  • the publication of algorithms, machine learning (ML) software, and other AI-related tools used for the advancement and development of science;
  • the use of AI and ML in scientific publications as it informs research and writing while also contributing to and/or challenging the integrity, robustness, and accountability of scientific publications and communications;
  • the increasing need to establish rules or modes of governance for the use of AI in scientific publication and research proposals;
  • the need to develop standards for valuing digital contributions to science/knowledge in research assessment procedures and programmes, having regard to differences within and across disciplines; and
  • the use of data/AI as tools for scientific publications and research assessment.

The course leverages both in-person and online learning environments built on interactive frameworks and designed to be inclusive of diverse international audiences of researchers, publishers, and professionals with an interest in scholarly communication and open science practices. The course sessions use presentations to introduce the latest developments in AI ethics and its role in open scholarly publication and communication, while also providing background on ethics, integrity, policy, and governance frameworks for publications.

The emphasis is on discussion and sharing that builds throughout the sessions to refine our understanding of AI ethics in contexts of open scholarship and how this impacts knowledge sharing and the wellbeing of individuals and communities globally. Each course node will develop its own discussion on AI governance within a specific context that relates to current and ongoing revisions to publication practices, publication ethics, AI governance, and AI in research assessment.

The course builds on open scholarship platforms for knowledge sharing and the exploration of the role of ethics in ensuring the positive impact of scientific publications and communications for individuals and their communities. It draws on the experience of the faculty’s participation in global initiatives, including the Strategic Initiative for Developing Capacity in Ethical Review (SIDCER), Preparedness Planning for Clinical Research During Public Health Emergencies (PREP), Force 11, Virus Outbreak Data Network (VODAN) – GOFAIR, the EOSC-Future / RDA Artificial Intelligence & Data Visitation Working Group (AIDV-WG), and the CODATA International Data Policy Committee (IDPC).

This FSCI course presents an international context rooted in a variety of national/regional perspectives that examine both local and international trends in how AI ethics is emerging as a critical element in the governance of open scholarship and scientific communication against the background of an increasingly digitalized economic, cultural, and geo-political world.

V09: Evaluating Open Access Journals: Moving from Provocative to Practical in Characterizing Journal Practices

Late: 4PM Pacific (UTC-7), 1.5 hours

Instructors:

  • Karen Gutzman 
  • Annie Wescott 

Abstract:

In today’s scholarly publishing ecosystem, researchers, librarians, academic institutions, funders, and even publishers have difficulty in identifying and tracking journals that engage in practices ranging from fraudulent and deceptive to questionable and unethical. 

In this course, we will define these specious practices, avoiding the binary “predatory” and “legitimate” classification by exploring the nuances of journal practices and how these practices developed as unintended consequences of the current academic publishing model. We will investigate tools for evaluating journal quality and discuss relevant case studies that will provide helpful context. Finally, we will review recommendations for raising awareness and promoting good practices in scholarly communications. 

This course aims to prepare librarians and other support personnel to offer training and support for researchers in how to understand the norms in open access publishing and how to avoid deceptive or low-quality journals. We will cover useful tools for mitigating the likelihood of publishing in these journals and discuss steps to take to assist researchers who believe they may have published in such a journal. 

This course will take place over three hours with each hour containing a mixture of lecture and discussion based on a case study or investigation of a tool for evaluating journal quality. We encourage students to engage in discussions and share their own experiences.

V10: Applying Strategic Doing, an Agile Strategy Discipline, to Build Collaborations Across Diverse Teams

Late: 4PM Pacific (UTC-7), 1.5 hours

Instructors:

  • Jeffrey Agnoli 
  • Meris Mandernach Longmeier 

Abstract:

Strategic Doing assists teams in answering four basic strategic questions using 10 simple steps. This method leverages a network-of-networks approach to build collaboration, enhance trust, and produce measurable outcomes. Presenters will share how they apply these methods to build research and creative expression initiatives that enable strategic planning, research sustainability plans, ideation, and operations management. Participants will learn how to answer these four basic questions to develop a compelling strategy: what could we do?, what should we do?, what will we do?, and what is our action plan? These concepts map to “the science of team science” competencies, including but not limited to: how to promote psychological safety, transparency, democratic prioritizing, clarifying roles and responsibilities, and supporting more productive teams.

V11: Using the ORCID, Sherpa Romeo, and Unpaywall APIs in R to Harvest Institutional Data

Late: 4PM Pacific (UTC-7), 3 hours

Instructors:

  • Dani Kirsch 
  • Brandon Katzir 

Abstract:

The objectives of this course are to obtain a set of ORCID iDs for people affiliated with your institution, harvest a list of DOIs for publications associated with these iDs, and gather open access information for the articles using Sherpa Romeo and Unpaywall. 

Students will work with a set of pre-written scripts in R, customizing them for their institutions to access the APIs for ORCID, Sherpa Romeo, and Unpaywall, and bring it all together into a manageable data file. 

While some experience using R will be helpful, it is not required. Although the basics of using R and understanding the code will be reviewed, the emphasis of the course will be on running the scripts and on gathering and interpreting the data. In other words, this course is focused not on learning R, but on obtaining a dataset of publications based on institutional affiliation, along with open access information on those publications. It is inspired by a course taught previously at FSCI, available at https://osf.io/vpgbt/. The course will conclude with a discussion of using this data to develop outreach to authors, informing them of their right to deposit author manuscripts.
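The course's scripts are written in R, but the underlying request pattern (search ORCID by affiliation, then look up open access status per DOI) can be sketched in a few lines of Python. The URL shapes below follow the public ORCID and Unpaywall documentation; the organization name, DOI, email address, and sample response are purely illustrative, not real data.

```python
from urllib.parse import quote

ORCID_SEARCH = "https://pub.orcid.org/v3.0/search/"
UNPAYWALL = "https://api.unpaywall.org/v2/"

def orcid_affiliation_query(org_name):
    """Build a public ORCID search URL for a given affiliation name."""
    return f'{ORCID_SEARCH}?q=affiliation-org-name:"{quote(org_name)}"'

def unpaywall_url(doi, email):
    """Unpaywall requires a contact email on every request."""
    return f"{UNPAYWALL}{quote(doi)}?email={quote(email)}"

def summarize_oa(record):
    """Pull the fields most useful for author outreach from an Unpaywall response."""
    loc = record.get("best_oa_location") or {}
    return {
        "doi": record["doi"],
        "is_oa": record["is_oa"],
        "oa_status": record.get("oa_status"),
        "best_oa_url": loc.get("url"),
    }

# Illustrative record shaped like Unpaywall's documented output (not a live result)
sample = {
    "doi": "10.1234/example",
    "is_oa": True,
    "oa_status": "green",
    "best_oa_location": {"url": "https://repository.example.edu/handle/1"},
}
print(summarize_oa(sample))  # fields ready to merge into an institutional dataset
```

In practice, the summarized rows from each DOI lookup would be bound together into the single manageable data file the course describes.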

V12: Empowering Future Trainers in Research Integrity: A Course Integrating Theory and Hands-On Exercises in Tools, Stakeholder Engagement, and Science Outreach

Late: 4PM Pacific (UTC-7), 3 hours

Instructors:

  • Eleonora Colangelo 
  • Sidney Engelbrecht 
  • Christopher Magor 

Abstract:

In the landscape of professional discourse, Research Integrity emerges as a cornerstone theme, yet the role of Research Integrity trainers often remains underappreciated. These trainers play a pivotal role in fostering a culture of awareness around scientific ethics principles among various stakeholders, including research administrators, librarians, publishing professionals, and researchers. This online course, spanning three sessions over three days, is crafted to address this gap and familiarize a diverse global audience with the vital responsibilities of research integrity trainers within the domain of open scholarship movements.

Throughout the course, we will delve into the multifaceted responsibilities of Research Integrity trainers and explore the future prospects for this cadre of guardians of integrity. Will institutions and publishers increasingly demand their presence? What are the expectations of a Research Integrity trainer? These questions, among others, will be thoroughly examined and discussed.

Developed collaboratively with a focus on research compliance, science editing, and publishing, this course is informed by the latest versions of cross-continental Codes of Conduct in Research Integrity endorsed by governments. Furthermore, the methodology employed is rooted in The Embassy of Good Science project, with co-instructors who are certified trainers.

Our primary objective is to underscore the critical role of research integrity training in shaping robust and trustworthy open science frameworks, particularly in our ever-evolving digital environment. Employing a virtue ethics-based approach, the course will feature interactive sessions aimed at enhancing participants’ understanding of research integrity for future trainers.

While the Dilemma Game app remains an integral part of Day 1 to explore real-world scenarios, it is important to note that it is only one aspect of the training. Other facets of Research Integrity training will be covered, including annual reports on research integrity, intellectual property, conflict of interest, and their implications for the use of AI in science production. Additionally, participants will be guided in crafting strategies and action plans for future Research Integrity trainers, tailored to their specific professional environments.

This course is designed to cater to a diverse audience, including researchers, librarians, publishers, faculty/scholars, policymakers, and research management administrators. Upon completion, participants will have the opportunity to receive a certificate issued by The Embassy of Good Science.

V13: No More Dream: Making Transparent Music Scholarly Communications a Reality

Late: 4PM Pacific (UTC-7), 1.5 hours

Instructors:

  • Kathleen DeLaurenti 
  • Matthew Vest 

Abstract:

The commercial music industry is generally acknowledged to be opaque, driven by confidential contracts and arcane legal systems. The global music marketplace is governed by a series of complicated and drastically different national laws. This history of the commercial music business does not just exist outside of academic institutions. Our music colleagues are often balancing multiple competing issues in their scholarly communication concerns: their scholarly output often looks very different from that of colleagues in the sciences and other humanities, and they often rely more on commercial success as an important component of their personal income than a scientist running a lab does.

However, at the same time that the academy is building momentum towards increased transparency and open access, the music industry is making it more difficult for musicians to access publishing and dissemination systems. This can have an outsized effect on musicians in our institutions who may have performance and research practices that do not align with creating a hit record. While musician-scholars have historically lacked trust in collaborating with academic institutions because of concerns that it would have an economic impact on their work, they are now often being excluded from the emerging digital infrastructure that has failed to realize a more democratic music market.

This crossroads creates an opportunity, but also an imperative, for our institutions to create systems that allow our musician-scholars to maintain control of their work, retain academic freedom with their creative research, and engage in transparent, open practices without risking their livelihood.

In this course, we will introduce participants to the unique challenges that researchers in music face and explore ways that institutions can address these unique needs when designing sustainable, transparent scholarly communication ecosystems.

Participants in this course will:

  • Engage with unique aspects of the music scholarly communications landscape in order to compare these systems with those of more traditional scholarly disciplines.
  • Expand their ideas of transparency in scholarly communications to extend to disciplines with different economic incentive systems than traditional scientific publishing.
  • Evaluate their existing scholarly communication services to identify ways in which they can address the needs of musicians for transparent scholarly communication.

V14: Preserving the Digital Future: Advanced Strategies for Web Archiving and Combating Reference Rot

Late: 5PM Pacific (UTC-7), 2.5 hours

Instructors:

  • Rosario Rogel-Salazar 
  • Alan Colin-Arce 

Abstract:

In today’s digital era, the preservation of online information has emerged as a critical challenge in scholarly communication, most notably addressed in the findable, accessible, interoperable and reusable (FAIR) principles. One threat to findable research is reference rot, a phenomenon where links to web-based resources gradually become obsolete. This is an issue that has been identified in disciplines ranging from medicine to political science, and it limits integrity and transparency in research because the claims and data found in the referenced websites cannot be accessed anymore.

Therefore, this course will cover the strategies to combat the pervasive issue of reference rot, the importance of digital preservation, and the fundamental principles of web archiving. The course will be divided into three components:

  1. An overview of the importance of web archiving to preserve online information and an exploration of some of the main web archives in the world.
  2. The possibilities and limitations of creating archival versions of web content using the Internet Archive’s Wayback Machine, Webrecorder, Archive-It, and Perma.cc.
  3. Strategies to prevent reference rot as authors, editors, and librarians.
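One concrete anti-rot tactic is checking whether a cited URL already has an archived snapshot before submitting or publishing. As a minimal sketch, the Internet Archive's Wayback Machine exposes an availability endpoint that can be queried per URL; the parsing below runs against a sample response in the documented shape rather than a live lookup, and the example URLs are illustrative.

```python
from urllib.parse import urlencode

AVAILABILITY_API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build a Wayback Machine availability query; timestamp is YYYYMMDD."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp  # asks for the snapshot closest to this date
    return f"{AVAILABILITY_API}?{urlencode(params)}"

def closest_snapshot(response):
    """Return the closest available snapshot URL, or None if nothing is archived."""
    snap = response.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"]
    return None

# Illustrative response mimicking the documented shape (not a live result)
sample = {
    "archived_snapshots": {
        "closest": {
            "available": True,
            "status": "200",
            "timestamp": "20230115000000",
            "url": "https://web.archive.org/web/20230115000000/https://example.com/",
        }
    }
}
```

A reference list could be swept with this check, and any link that returns no snapshot flagged for archiving via the Wayback Machine, Webrecorder, Archive-It, or Perma.cc.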

V15: Managing Data for Research Transparency and Reproducibility: A Collaborative Class for Basic and Advanced Practitioners

Late: 5PM Pacific (UTC-7), 1.5 hours

Instructor:

  • John Borghi 

Abstract:

Data management is foundational to good research practice. But the link between many data-related topics and the day-to-day practice of conducting research is not always immediately clear. This course will provide new learners, advanced practitioners, and other data stakeholders with information, strategies, and resources to facilitate data management over the short, medium, and long term while also addressing issues related to research transparency and reproducibility. Consisting of a mix of introductory presentations on topics drawn from both the data curation and research communities, discussion, and collaborative work, this course will cover three areas: 

  1. Introduction to data management, research transparency, and reproducibility 
  2. Establishing (re)usability through day-to-day data management 
  3. Making data available to others: Open and restricted data sharing 

At the conclusion of this course, learners will leave with both an understanding of the basic principles of and relationships between data management, transparency, and open science and also with a package of collaboratively developed templates, educational materials, and other tools that they will be able to adapt to their own needs.

V16: Assessing Open Access Journals Using DOAJ Criteria: An Interactive Course For All Levels

Late: 5PM Pacific (UTC-7), 2 hours

Instructors:

  • Ivonne Lujano
  • Muhammad Imtiaz Subhani

Abstract:

This course introduces open-access journal indexing from the perspective of the Directory of Open Access Journals (DOAJ). Participants will engage in discussions and hands-on activities about evaluating journals worldwide from an equitable and inclusive standpoint. 

The DOAJ is a global directory that indexes over 20,000 OA journals, regardless of discipline, geography, or language. For 20 years, the DOAJ has worked to increase the visibility, accessibility, reputation, usage, and impact of quality, peer-reviewed, scholarly journals globally. To that end, the Directory established a set of criteria that have become a gold standard for OA publishing worldwide. In this course, we will navigate through the list of criteria on day one to discuss how they contribute to building trustworthiness in OA journals.

DOAJ criteria have changed over time. The organization has adapted the list of indexing requirements to address some of the community requirements and concerns, especially regarding ethical issues or using technological tools that support journals’ content dissemination. DOAJ recently responded to concerns around the proliferation of special issues in scholarly publishing and added some additional criteria around the use of special issues. The use of preservation services is another example of best practices that are supported by DOAJ’s criteria for inclusion. On day two, participants in this course will practically analyze cases using the DOAJ criteria. They will compare the list with the tool Think.Check.Submit, to discuss some potentially questionable publishing practices that have arisen in the landscape. DOAJ promotes best practices in OA journals globally. Yet, at the same time, it is very much aware that publishing standards can also function as a neocolonial way to promote the Western worldview of how knowledge should be disseminated. Thus, on day three, we will discuss some of the global inequalities that DOAJ has been addressing through outreach and community-led initiatives and how participants may implement inclusive and equitable OA initiatives in their contexts.

C01: The Science of Collaboration: Creating synergies to solve and report solutions to complex research problems

Early

Instructor:

  • Ronald Margolis 

Abstract:

Research problems, particularly in the life and natural sciences, are characterized by increasingly complex technical issues that demand challenging strategies. Often it takes a cross- or trans-disciplinary approach to first understand and then solve such problems. To do so, teams of investigators drawn from several disciplines may find they need to come together to bring their separate, sometimes overlapping, expertise to bear on the problem. 

This course will explore the concept of a team approach to solving a complex problem and help participants see both how they might fit into such a concept and how they might seek to initiate a team science approach to attack a complex, unsolved problem. 

Activities will include determining whether the research problem requires a team science approach; finding and focusing on the needs of such an approach; identifying the expertise needed to address the problem, and whether and which disciplines may be needed; and deciding how to identify, recruit, and meld a team of investigators. Keys to success include how to organize the team around the goals set for addressing the problem, how to manage the team, and how the team can report out its findings and credit or reward participants.

C02: PREreview Open Reviewers Workshop: A Hands-On Ethical Peer Review Training Program For Researchers of All Career Levels

Early

Instructors:

  • Daniela Saderi 
  • Vanessa Fairhurst

Abstract:

Peer review plays a pivotal role in determining which research projects receive funding, which findings get published, and ultimately, which knowledge is disseminated and utilized by the scientific community and the broader public. Despite its critical importance, reviewers often undergo minimal training for this crucial task. Furthermore, that training rarely focuses on mitigating the biases that are ingrained in the peer review process. At PREreview, we work to change this.

Open Reviewers is a three-part, interactive, and hands-on workshop designed for researchers at all career levels who are interested in engaging in ethical and constructive manuscript peer review. With a focus on promoting equity, diversity, and inclusion, the workshop provides participants with the necessary skills and knowledge to conduct equitable peer review using materials from The Open Reviewers Toolkit (https://prereview.org/resources).

Target Audience: Researchers of all career levels. The content is particularly useful for Early Career Researchers looking to participate in the peer review process, but can also be useful for more experienced reviewers looking to gain a new perspective and continue (or begin) on a journey of personal reflection. 

Learning Objectives: Upon completion of the workshop participants will have gained:

  • A general understanding of journal-organized and independent review processes
  • A detailed understanding of how systems of oppression manifest in the manuscript review process
  • Strategies to self-assess and mitigate bias in the context of manuscript review
  • An in-depth understanding of and practical experience with peer reviewing a manuscript in a way that minimizes bias, striving for constructive, clear, and actionable feedback
  • An opportunity to put learning into practice by collaboratively reviewing a preprint and publishing the resulting preprint review on https://prereview.org

C03: Making Research More Transparent with Quarto and RStudio

Early

Instructors:

  • Bella Ratmelia 
  • Dong Danping 

Abstract:

In academic research, the journey from raw data to published findings often lacks transparency, posing significant challenges to reproducibility and trust in scientific research. This situation highlights the need for enhanced transparency and traceability within research methodologies.

The primary objective of this course is to equip participants with the necessary skills to use Quarto and RStudio to publish research in a transparent and reproducible way. These tools enable an integrated research workflow encompassing data cleaning, analysis, visualization, and publishing, producing verifiable research outputs and artifacts. They facilitate validation and replication of research findings, thereby enhancing the integrity and credibility of published work.

Quarto, an open-source scientific publishing system, allows researchers to weave together narrative text, code, mathematical formulas (using LaTeX), and even citations to produce elegantly formatted output as documents, web pages, blog posts, books, and other formats to cater to diverse publishing needs. While Quarto is also compatible with Python and Julia, the course will predominantly focus on its application in R programming. Prior experience with R is recommended for this course.

The course will cover: 

  • A foundational overview of R, RStudio, and Quarto, including a brief overview of their roles in enhancing research openness and transparency. 
  • Utilizing Quarto to seamlessly combine R code, citations, visualization, and research insights in one place. 
  • Converting Quarto documents to other document formats (HTML or Word) or to a presentation (RevealJS or PowerPoint). 
  • Leveraging Quarto Pub or GitHub Pages to create a website to communicate research insights to a broader audience.
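To make the integration concrete, a minimal Quarto source file might look like the following sketch. The filename, citation key, and bibliography file are illustrative placeholders; "cars" is one of R's built-in datasets.

````
---
title: "A Transparent Analysis"
format: html
bibliography: references.bib
---

Braking distance increases with speed [@ezekiel1930].

```{r}
#| label: fig-cars
#| fig-cap: "Speed versus stopping distance in the built-in cars dataset"
plot(cars$speed, cars$dist)
```

The mean stopping distance is `r mean(cars$dist)` feet.
````

Rendering this file with "quarto render example.qmd" executes the R chunk and weaves the narrative, figure, inline result, and citation into one HTML page; changing "format: html" to "docx" or "revealjs" retargets the same source, so the analysis and its presentation never drift apart.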

C04: Unlocking Knowledge: Exploring Open Research Infrastructure and FAIR Principles

Early

Instructors:

  • Gabriela Mejias
  • Xiaoli Chen 

Abstract:

A robust and resilient research infrastructure — one that supports the tools, services, and systems that researchers rely on — is essential to ensuring that the research process is as efficient and effective as possible. Open infrastructure means infrastructure that is open for the research community to use, contribute to, participate in, and benefit from. Such infrastructures are usually available globally but often unevenly adopted. It falls to open infrastructure organizations, typically communally supported non-profits, to maintain the technologies and drive adoption. 

Persistent identifiers (PIDs) and the metadata associated with them are considered the building blocks of open research infrastructure. Through standardized, interoperable metadata schemas and widespread implementation across research-performing and publishing organizations, PIDs facilitate the seamless exchange of information in the scholarly ecosystem, supporting the discovery, reuse, and reporting workflows of researchers, institutions, and funders.

In this course we will introduce and discuss how PIDs and metadata can help make research more open on a global scale. We will share the progress and challenges of DataCite DOI infrastructure adoption, discuss the FAIR principles, and examine how implementing PID and metadata workflows in the research process makes research FAIRer.

In this course we will discuss:

  • Why is it important that infrastructure is open and what does that openness look like?
  • What options do you have to use open infrastructure as part of your daily activities?
  • What is metadata and why is it important for establishing evidence and provenance?
  • How can you create and use metadata to make connections?
  • How can you increase visibility and trust through the use of persistent identifiers and metadata?
  • How to make your research process FAIRer?

C05: Forensic Scientometrics: Safeguarding Scientific Integrity and Trust in Research through Forensic Investigations

Early

Instructors:

  • Leslie McIntosh 
  • Suze Kundu 

Abstract:

The erosion of public trust in science, scientific research, and science policy poses a significant threat to the acceptance and implementation of critical policies, technological innovations, clinical outcomes, and national and global economic and security interests. This three-part course aims to address the pressing issues surrounding the integrity and security of scientific research in the face of technological advancements and recent high-profile cases of misconduct and security breaches.

In the first part of the course, participants will delve into the multifaceted challenges affecting public trust in science. Understanding the current landscape, including the role of public forums such as PubPeer and social media in highlighting research discrepancies, will be a key focus. The session will also explore the limitations of the existing system and the implications for policy, technology innovation, and societal well-being.

The second part will shift the focus towards policies, tools, and common practices that fortify research integrity and security. Participants will examine existing frameworks, explore case studies of successful interventions, and engage in discussions on the ethical dimensions of conducting research in rapidly advancing technological environments. Special attention will be given to recent advancements in detecting research misconduct, such as identifying nefarious networks, image manipulation, and linguistic analysis of scholarly content.

The final part of the course will explore proactive measures to counter scholarly disinformation and uphold the integrity of scientific research. Drawing on insights from platforms like RetractionWatch, participants will learn about the evolution of monitoring, collecting, and reporting retractions. The session will also address the ethical responsibilities of researchers, institutions, and the broader scientific community in fostering a culture of trustworthiness and transparency.

Throughout the course, emphasis will be placed on the collaborative efforts needed within the research community to address these challenges. Participants will gain practical knowledge and skills to contribute to the ongoing work of upholding the integrity of scientific research. By the end of the course, participants will be equipped to navigate the complexities of ethical research conducted in the digital age and contribute to rebuilding and maintaining public trust in science.

Course Outline:

Part 1: Understanding the Landscape of Trust in Science (3 hrs)

Introduction to the erosion of public trust in science

Exploration of public forums and their role in highlighting research discrepancies

Analysis of the limitations of the current system and its implications for policy and societal impact

Part 2: Fortifying Research Integrity and Security (3 hrs)

Examination of existing policies, tools, and common practices

Case studies of successful interventions in addressing research misconduct

Ethical considerations in conducting research in rapidly advancing technological environments

Part 3: Countering Scholarly Disinformation and Fostering Trustworthy Science (3 hrs)

Evolution of monitoring, collecting, and reporting retractions

Insights from platforms like RetractionWatch

Ethical responsibilities of researchers, institutions, funders, and publishers in fostering a culture of trustworthiness and transparency

Collaborative efforts within the research community to address challenges and rebuild public trust

C06: Analyzing Public Access Policy to Guide Scholarly Communications Outreach

Early

Instructors:

  • Jonathan Grunert 
  • Nina Exner 

Abstract:

As research grants increasingly require public access as a component of grant compliance, there are many opportunities for scholarly communication specialists to educate researchers while encouraging and analyzing compliance with public access mandates. Responsibilities in these areas vary across institutions, as do the personnel given those responsibilities: librarians, research offices, administrators, and the researchers themselves. How can these specialists (not only librarians!) approach public access preparedness and compliance among researchers? And how can they analyze compliance to ensure that their institutions do not lose future funding opportunities? 

This course will explore some tactics that scholarly communication advocates can take in educating researchers about grant requirements, while demystifying some of the fears about public access. We will discuss outreach planning, workshop pedagogies, introductory analysis strategies, and static (e.g., LibGuide) presentation. The examples in this course will be drawn primarily from U.S. federal grant funders’ public access policies, especially the 2022 Nelson memo (White House Office of Science and Technology Policy) and 2023 NIH Data Management and Sharing Policy.  By the end of the course, participants will have a plan for engaging with researchers and research administration offices to measure public access compliance and increase awareness of such policies.

C07: Getting Attention and Bringing Others on Board: Applying Basics in Marketing and Communications to Advance Open Research

Late

Instructor:

  • Jennifer Gibson

Abstract:

Getting the attention of faculty, students, decision-makers, and others and convincing them to break out of long-established habits to try something new is a defining aspect of work in scholarly communications. The future of open research is dependent on our ability to change behaviors. 

Putting compelling messages in front of the right audiences is a practiced art and science in marketing and communications. The world’s biggest brands are masters at convincing us that our shampoo is bad for our hair and that we need to buy more sugary soda.

Social marketing, which long precedes social media, is the application of commercial marketing principles and practices to effect social and behavioral change. The same systems for understanding an individual’s needs and pains, for communicating to them in their world, on their terms, and convincing them to attempt a change in behavior can be used to promote adoption of open research practices as well as purchases of bacon double cheeseburgers.

This course will explore the basics of marketing strategy and their application in the research environment – to advance open research or any other type of behavior change. 

Participants will learn how to:

  • Communicate powerfully by separating audiences according to their different interests.
  • Get the most out of an outreach program by prioritizing specific audiences.
  • Build a compelling offering by aligning the service with the audience’s needs and available choices.
  • Cut through the noise by creating messages in the audience’s voice.
  • Develop a comprehensive, impactful outreach program that gets attention from the right people.
  • Monitor the program and make regular improvements to try to increase impact.

Audience: Individuals with the responsibility to promote and advocate for open research practices in the academic community, targeting faculty, students, librarians, publishers, administrators, and disciplinary communities. These may include librarians, community managers, start-ups, publishing staffers, and others.

C08: Good Governance for AI in Scientific Publications: Developing Policy for Reliability, Ethics, and Integrity

Late

Instructors:

  • Francis Crawley 
  • (more to be announced)

Abstract:

This course is designed to introduce a global audience to the challenges of developing governance frameworks for artificial intelligence (AI) in scientific publications, with a particular focus on open science platforms. The course has been created by a faculty of international experts in scientific publications, publication policy & ethics, data and AI ethics and integrity, data and AI policy, and open science platforms. Several organizations have contributed to the development of the topics, syllabus, and course materials, including the CODATA International Data Policy Committee (IDPC), the EOSC-Future/RDA Artificial Intelligence & Data Visitation Working Group (AIDV-WG), and CoARA ERIP: Ethics and Research Integrity Policy in Responsible Research Assessment for Data and Artificial Intelligence. We also work with a number of publishers, universities, and science organizations in developing the course content and materials. Guiding documents include UNESCO’s Recommendation on Open Science as well as its Recommendation on the Ethics of Artificial Intelligence.

This course offers a global perspective on developing effective governance policies for AI in scientific research, with a focus on appreciating not only a broad, international context but also examining regional and local contexts for science publications. It aims to foster an understanding of how international standards can be created while considering regional and national differences in approaches to ethical, reliable, and integrity-preserving AI practices in scientific publications.

The course examines the challenges AI brings to the publication of science, whether in its contribution to scientific outcomes or in the interpreting, reporting, and communicating of science. During a period when scientists, publishers, and policymakers are examining the role and governance of AI in scientific publications, this course examines the role of governance in establishing and promoting reliable and trustworthy open science frameworks for knowledge creation and citizen benefit in our emerging digital societies. The course discusses the ethical, regulatory, and policy implications arising from the development of AI in the publication of science in the following areas:

  • the publication of algorithms, machine learning (ML) software, and other AI-related tools used for the advancement and development of science;
  • the use of AI and ML in scientific publications as it informs research and writing while also contributing to and/or challenging the integrity, robustness, and accountability of scientific publications and communications;
  • the increasing need to establish rules or modes of governance for the use of AI in scientific publication and research proposals;
  • the need to develop standards for valuing digital contributions to science/knowledge in research assessment procedures and programmes, having regard to differences within and across disciplines; and
  • the use of data/AI as tools for scientific publications and research assessment.

The course leverages both in-person and online learning environments, built on interactive frameworks and designed to be inclusive of diverse international audiences of researchers, publishers, and professionals with an interest in scholarly communication and open science practices. The course sessions use presentations to introduce the latest developments in AI ethics and its role in open scholarly communication, while also providing background on ethics, integrity, policy, and governance frameworks for publications.

The emphasis is on discussion and sharing that builds throughout the sessions to refine our understanding of AI ethics in contexts of open scholarship and how this impacts knowledge sharing and the wellbeing of individuals and communities globally. Each course node will develop its own discussion on AI governance within a specific context that relates to current and ongoing revisions to publication practices, publication ethics, AI governance, and AI in research assessment.

The course builds on open scholarship platforms for knowledge sharing and the exploration of the role of ethics in ensuring the positive impact of scientific publications and communications for individuals and their communities. It draws on the experience of the faculty’s participation in global initiatives, including the Strategic Initiative for Developing Capacity in Ethical Review (SIDCER), Preparedness Planning for Clinical Research During Public Health Emergencies (PREP), Force 11, Virus Outbreak Data Network (VODAN) – GOFAIR, the EOSC-Future / RDA Artificial Intelligence & Data Visitation Working Group (AIDV-WG), and the CODATA International Data Policy Committee (IDPC).

This FSCI course presents an international context rooted in a variety of national and regional perspectives that examine both local and international trends in how AI ethics is emerging as a critical element in the governance of open scholarship and scientific communication, against the background of an increasingly digitalized economic, cultural, and geo-political world.

C09: Understanding, Benchmarking, and Tracking Equity and Inclusion in Open Access and Open Science

Late

Instructor:

  • Micah Altman 

Abstract:

Who participates in open-access publications and open-science research? This course, based on ongoing IMLS- and Mellon Foundation-supported research and education projects, is for researchers, practitioners, and administrators wishing to understand, interpret, analyze, or measure participation in open scholarly activities.

Over three sessions, we will examine quantitative measures of open science and open access outputs; measures of international diversity; and measures of gender bias. Each session will include a discussion of core concepts and measures, key summary reports and databases, and quality and reliability measures.

Each session will be divided into three parts, so that attendees can choose to engage the subject at the depth appropriate to their needs. The first part of each session, for all attendees, will cover core concepts and summary sources. This part is sufficient for those who wish to locate, understand, and interpret existing summary reports and interactive websites to identify benchmarks and trends. The second part of each session will focus on hands-on analysis exercises using interactive R notebooks to analyze participation data retrieved from open APIs. This part will be of interest to those who wish to conduct their own data analyses. The third part of the course is intended for those planning to actively collect new data within their own institutions or projects, and will focus on specific data-collection scenarios based on a pre-course survey of enrolled participants.
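The course exercises themselves use interactive R notebooks. Purely as an illustration of the kind of participation analysis involved, here is a minimal Python sketch that computes an open-access share from an OpenAlex-style record structure; the field names and sample records below are assumptions for illustration, not course material, and a real analysis would fetch live JSON from an open API.

```python
# Illustrative sketch only: the sample records mimic the shape of an
# OpenAlex-style works response; in practice the data would be
# retrieved from an open API rather than hard-coded.

sample_works = [
    {"id": "W1", "open_access": {"is_oa": True,  "oa_status": "gold"}},
    {"id": "W2", "open_access": {"is_oa": False, "oa_status": "closed"}},
    {"id": "W3", "open_access": {"is_oa": True,  "oa_status": "green"}},
    {"id": "W4", "open_access": {"is_oa": True,  "oa_status": "hybrid"}},
]

def oa_share(works):
    """Fraction of works flagged as open access (0.0 for an empty list)."""
    if not works:
        return 0.0
    return sum(1 for w in works if w["open_access"]["is_oa"]) / len(works)

print(f"OA share: {oa_share(sample_works):.0%}")  # prints "OA share: 75%"
```

The same aggregation, grouped by institution, country, or author gender, underlies the benchmarking measures discussed in the sessions.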

C10: Evaluating Open Access Journals: Moving from Provocative to Practical in Characterizing Journal Practices

Late

Instructor:

  • Karen Gutzman

Abstract:

In today’s scholarly publishing ecosystem, researchers, librarians, academic institutions, funders, and even publishers have difficulty in identifying and tracking journals that engage in practices ranging from fraudulent and deceptive to questionable and unethical. 

In this course, we will define these specious practices, avoiding the binary “predatory” and “legitimate” classification by exploring the nuances of journal practices and how these practices developed as unintended consequences of the current academic publishing model. We will investigate tools for evaluating journal quality and discuss relevant case studies that will provide helpful context. Finally, we will review recommendations for raising awareness and promoting good practices in scholarly communications. 

This course aims to prepare librarians and other support personnel to offer training and support for researchers in how to understand the norms in open access publishing and how to avoid deceptive or low-quality journals. We will cover useful tools for mitigating the likelihood of publishing in these journals and discuss steps to take to assist researchers who believe they may have published in such a journal. 

This course will take place over three hours with each hour containing a mixture of lecture and discussion based on a case study or investigation of a tool for evaluating journal quality. We encourage students to engage in discussions and share their own experiences.

C11: The Butterfly Effect – Understanding the Big Picture Research Ecosystem to Help Open Practice

Late

Instructor:

  • Danny Kingsley 

Abstract:

The concept of the butterfly effect – that the world is deeply interconnected, such that one small occurrence can influence a much larger complex system – can be directly applied to the research ecosystem. Everything is interconnected, interdependent and interrelated. This course is an attempt to articulate these connections and identify areas where change might be possible.

Many aspects of research operate in isolation from each other yet are part of an interdependent whole. Areas such as research culture, research assessment, open scholarship, research integrity, research support, research infrastructure and research impact can be managed by completely different agents within a research institution (if at all). The training we offer all members of the research endeavour does not currently take a holistic view, which makes effective decision-making and positive change deeply challenging.

In this course, classes will run for two hours on each of the three days and will build on (a small amount of) pre-reading/watching. The classes will combine direct instruction with discussion, small-group work, and whole-group activity. The goal for each lesson will be to collectively develop resources (digital, physical, and conceptual) that help articulate these concepts and issues to a broader audience.

Each day of the course will focus on a different area: 

  • How bad is it really? A dive into the accelerating increase in poor research practice, paper mills, fraud, retractions … (yes this is the depressing lesson!)
  • How did we get to this? Looking back to see forward. What has been happening over the past 20 years in terms of changing research assessment, the ownership of research infrastructure, and the rise of the Open movement
  • What’s working? A celebration of – and critical look at – initiatives that are shifting the dial. The common thread is the connection across different aspects of the ecosystem. A (possibly lofty) goal of the course is to collectively develop a visual representation of ‘the fundamental interconnectedness of things’

This course is aimed at all participants in the research endeavour – from researchers to research administrators, librarians, third space professionals – everyone. It is intended to be interactive and constructive where all participants can contribute to the process. 

C12: Learn to bring an “infinite game” mindset to your daily work to build trustful, generous, effective research collaborations

Late

Instructor:

  • Bruce Caron 

Abstract:

This class will explore scientific research and academic work as an example of infinite game play, and contrast this with those perverse, finite games we now play in the academy. 

Together we will explore how external incentives crowd out science’s own internal motivations. We will interrogate the toxic impacts that these have on your own work and career. We will discuss the role of kindness and generosity within the logic of abundance that powers open science. We will discover the value of conversation, and the need for slow science.

We will venture into the future, and look back on the impact that infinite game play has created on our teams, our universities, and on the academy.

Together we will explore the practical wisdom, the joy, the fun, and love of discovery internal to open science.

Background:

Open science is more than the struggle over the perverse incentives driving commercial scientific publishing. Open science looks to reconnect science with its long-neglected internal motivations, with its required ethical sincerity, and necessary generosity. The more we compete with one another, the less we control our own research. One way to rethink science going forward is through James Carse’s notion of an “infinite game” mindset.

The notion of the infinite play of science may seem foreign to scientists coached to win finite games to secure their careers. And yet all attempts to capture the normative culture of science hint at an underlying, non-finite game home for science. What we find today is an academy trapped in the contradictions between these two mindsets: the poetry of discovery, the awe of nature, the joy of intellection, and the satisfactions of mentoring have been pushed aside, displaced by the rush for reputation in a now-harshly-competitive system of scarce resources and narrow opportunities. 

These contradictions have been noted for decades in articles and books that contrast science’s putative norms with the observable organizational practices of science. Sociologists and critics of science practice point to the realities of doing science in today’s world. “Science claims X, but in practice we find Y.” Ziman (2002) makes this contrast more than seventy times. These observations now share the discourse with a chorus of observations about “bad science:” unreproducible findings, plagiarized and repetitive science articles, ersatz statistics (p-hacking, etc.), systemic biases and conflicts of interest in funding and advancement, public distrust of science findings, and a profiteering publishing industry. 

The reality of doing science today seems fundamentally out of step with how good science needs to happen. “Real science” is still distinguished by normative behaviors and values that are regularly called upon to counter deviation into “bad science”. But when the incentives are perverse and pervasive, resistance is a challenge that overlays and undermines the challenges of doing infinite-game science.

Required Reading:

Chapter 5 of the Open Scientist Handbook (41 pages):

The Infinite Play of Science: Learn how playing the infinite game of science will improve your research, your collaborations, your career, and your life.

Link: https://doi.org/10.21428/8bbb7f85.86723dd6

Suggested Reading: Finite and Infinite Games by James P. Carse

Available at the Internet Archive: 

https://ia601905.us.archive.org/18/items/james-p-carse-finite-and-infinite-games/James%20P%20carse%20Finite%20and%20Infinite%20Games.pdf

C13: Navigating the Open Access Book Landscape

Late

Instructors:

  • Michael Ladisch
  • Jennifer Chan 

Abstract:

While open access publishing of scholarly articles has entered the mainstream, open access monograph publishing still lags behind, though it is catching up fast. An increasing number of scholarly monographs are being published open access, new funding models are emerging, and new projects for an OA book infrastructure are under development.

This course aims to provide an overview of the open access book landscape as it stands today. Day 1 will focus on the perspective of libraries (and other funders), which support OA monographs but must decide where to allocate their usually limited resources. New open access monograph initiatives and programs are launched by publishers regularly, and no library is able to support them all. Course participants are encouraged to discuss a few of those initiatives and their pros and cons, and to share experiences and guidelines their institutions may already have.

On Day 2 we will look at benefits and challenges for publishers when they make monographs available for free. The final report of the TOME Initiative (that had strong university press participation) provides some insights about the impact of open access publishing on distribution and sales. We hope to invite a publisher representative for a short statement. We will then talk about the discoverability of open access publications, and the challenges for libraries to integrate them into their collections. Participants should again provide examples and experiences from their home institutions.

The final day, Day 3, is to a large extent dedicated to the author. Why should a scholar publish a monograph open access? What are the benefits for the author, for readers outside their community, for the wider world? Again, we hope to invite an open access book author to join us and speak briefly about the decision to select this form of publishing and the impact of the monograph. Closely related, we will then look at tools and services to track usage of open access books, which is equally interesting for all stakeholders.

At the end we encourage a more general discussion about the future directions for open access monograph publishing.