What and how we know things about ourselves, each other, and the world around us represents the foundation of our values as members of complex communities of aligned interests. These shared epistemologies shape and define cultures. The languages we use, the stories we tell, the histories we keep, and the beliefs to which we assign value represent the daily, practical basis for belonging.
The way we protect our languages, stories, histories, and beliefs goes beyond the artifacts and objects we preserve in centres of cultural preservation like galleries, libraries, archives, and museums. Protecting cultural works and cultural workers in the twenty-first century is an increasingly urgent, ongoing process: when the safety and security of our cultural legacies are at stake, failure is not an option. Security practices are a largely implicit feature of community values and belonging, but protecting digital cultures from online threats requires strict, formal attention to what we protect and how. A shared understanding of the relative risks associated with the people, things, and data needing protection is an important expression of values and common interests. This shared threat awareness also signals opportunities for broader collaborations and a refined solidarity through security cultures.
Academic culture accretes community values in a similar manner, though the pace of change now runs in step with the rapid evolution of digital technology. Such a broad opening critical frame is warranted given the ubiquitous and emergent quality of digital culture and the internet in general. The economic logic of protecting bank records or corporate secrets often overshadows the cultural, historical logic of protecting digital archives. Lisa Gitelman in Always Already New describes historical logic this way: “The history of emergent media, in other words, is partly the history of history, of what (and who) gets preserved—written down, printed up, recorded, filmed, taped, or scanned—and why” (2006, 26). Although digital archiving practices and research data management processes are well established in many disciplines, the evolving and emergent quality of cybersecurity risks requires a holistic and continual reappraisal of the research life cycle. This reassessment should encompass technical systems and infrastructure as well as human factors.1
Digital culture can be readily characterized by its emergent qualities, embracing iterative processes that recombine and repurpose existing materials while incrementally adding wholly new ideas. Academic cultures and the knowledge they produce are defined in similar terms and further refined through the practices of peer review and scholarly discourse. There are many practical, workaday activities necessary to manage and support successful scholarly work. Open and social scholarship is predicated on access to research materials, yet openness must be counterbalanced by security considerations. In March 2021, the Government of Canada underscored the importance of research security policies by identifying research security as a strategic priority, emphasizing the need to “integrate national security considerations into the evaluation and funding of research partnerships.”2 Increasingly, a measure of success must include the security of research data as well as of researchers. As institutional exposure to cybersecurity incidents continues to grow, security best practices are now a simple reality of developing and operating a scholarly project or research institution.
The internet's fundamental role in numerous research practices renders the core infrastructure of knowledge production, storage, and dissemination susceptible to risk and vulnerability in the event of a cybersecurity incident. Threat modelling and risk assessments must become a condition of both funding and the ongoing safe operation of complex, public-facing research groups. With criminal and nation-state threat actors increasingly targeting critical research and development infrastructure, security culture must also become a key facet of sharing what we know and validating the integrity of our knowledge systems and infrastructure. While archivists have long managed this kind of risk, the social and engaged quality of scholarship today broadens researcher attack surfaces beyond secure data storage.3
For this reason, I aim to situate security best practices within a digital humanities (DH) research sensibility, oriented towards archival risk management and ethical collaboration. Data integrity can be defined for our purposes here through digital archiving protocols like LOCKSS (Lots of Copies Keep Stuff Safe) or the Research Data Management (RDM) processes described by the Portage Network (now folded into the four pillars of the Digital Research Alliance of Canada).4 Software assurance is the practice of ensuring software is free from vulnerabilities or defects that might disrupt research practices anywhere along the research life cycle. Both proprietary and open-source software present unique challenges in verifying the software supply chain and managing complex dependencies that could invalidate, corrupt, or otherwise compromise research outputs.5 Data integrity and software assurance are just two categories within the purview of security practices that will inform fundamental considerations in our research methods. From this perspective, operational security becomes a guiding principle in all research practices.
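Since data integrity is defined here through archiving protocols like LOCKSS and RDM processes, a minimal sketch may help make the idea concrete. The following Python fragment, assuming a hypothetical research_data directory and manifest file name, records a SHA-256 digest for every file and later reports anything that no longer matches; it illustrates fixity checking in general, not any specific LOCKSS or Alliance tooling.

```python
import hashlib
import json
from pathlib import Path

def sha256_digest(path: Path) -> str:
    """Compute a SHA-256 digest by streaming the file in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(data_dir: Path, manifest: Path) -> None:
    """Record a digest for every file so later runs can detect tampering or rot."""
    records = {str(p): sha256_digest(p)
               for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest.write_text(json.dumps(records, indent=2))

def verify_manifest(manifest: Path) -> list[str]:
    """Return the files whose current digest no longer matches the manifest."""
    records = json.loads(manifest.read_text())
    return [name for name, digest in records.items()
            if not Path(name).is_file() or sha256_digest(Path(name)) != digest]

if __name__ == "__main__":
    # "research_data" and "fixity-manifest.json" are illustrative names only.
    write_manifest(Path("research_data"), Path("fixity-manifest.json"))
    print(verify_manifest(Path("fixity-manifest.json")))  # [] means intact
```

Run periodically, such a check turns silent corruption or tampering into a visible, auditable event.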
When taken seriously, security practices require the incorporation of core principles such as data integrity and software assurance into our commitment to collaborative ethics, openness and transparency, preservation, and physical safety. While cybersecurity provides the impetus for integrating formal threat assessments in our research cultures, physical security must also account for the safety of researchers, research participants, physical artifacts, and the built environment. The breadth of our attention to the risks facing researchers must match the urgency of the problem. It is possible to make a specific claim beyond the figurative hesitancy of my title: “Security Culture is an Expression of Values.” Researchers must be ready to reflect on disciplinary values and how best to enact those values in securing our work. By augmenting current lab-based research methods with robust security-first research methods, it is possible to express the ethics at work in digital projects. As a humanities-based scholar of digital media, I can attest that aligning security best practices with humanistic values is neither easy nor simple.
For instance, managing and deploying infrastructure is fundamentally about enforcing authenticated access. It is necessary to describe people, places, and things as assets, risks, and threats. A reductive or simplistic assessment of assets, risks, and threats only serves to increase our risk. Suspicion and paranoia are not generally regarded as productive critical approaches in the humanities or any other research culture (Nachreiner 2014); within a holistic security practice, however, they are recast as the work of critique. Scholarly curiosity is a source of new discovery and insight, whereas suspicion and paranoia are often associated with a type of prejudice or pre-judgement. For example, a realistic account of nation-state level risks may be regarded with incredulity or subject to charges of cultural insensitivity or outright racism.6 Anticipating threats requires forethought and speculation about potential attacks, wherein national political forces set the regulatory and legal frameworks.7 A working hypothesis might hold instead that these attitudinal differences complement each other through the mechanisms of discourse, discussion, and debate. Bridging the gap between security and research cultures has already become an existential question for the validity of, and access to, useful research.8 If productive points of overlap can be found between security best practices and digital research practices, scholarship will remain available and have greater impact over time, while security researchers benefit by better understanding how their systems and behaviours impact online cultures.
A robust security-oriented research culture must also be capable of protecting knowledge stakeholders across the research life cycle. Research participants must be able to give reasonable and informed consent regarding the security of their contributions, researchers of all ranks must be assured of physical security in the face of the increasing politicization of disciplines across the university sector, and citizens must retain access to publicly funded research regardless of external forces, such as science denialism, political extremism, cultural fundamentalism, or criminal profiteering. In such a context, our research security cultures must enact our values in a way that conveys the importance of the people, objects, and ideas that must come together to achieve some new insight or make the next great discovery.
The first articulation of this broader notion that security represents a “goals and values alignment” comes from Eugene Spafford (2019). The alignment of goals and values in security emerges from a holistic notion of trust, wherein researchers align the priorities of developers and users across the supply chain of both software and ideas. Spafford articulates this in his remarks: “When we’re going to say we’re going to trust something in the system, we have to be sure we understand what trust means to each of us as individuals, who have values and goals about the things we want to do and the information we have” (2019, 26:49). Spafford also describes how our research institutions have goals and values that may be out of alignment with our own as researchers. Extrapolated to the vast amount of research occurring globally online, the goals and values of our institutions, governments, colleagues, departments, funding agencies, and others will not necessarily align with our own priorities as individual research groups.
For example, the International Memorial project, first founded in Russia in 1992, has been liquidated by the Russian Supreme Court for its work documenting the human rights abuses of the USSR and of the current Russian government under Vladimir Putin. Because this project runs counter to the Russian government’s cultural narrative and latest imperial ambitions, it was expedient for the corrupt government and courts to destroy the project within Russia.9 Given the cultural and cyber dimensions of the war currently underway in Ukraine, it is important to remember the privileged position national governments possess in destroying politically inconvenient histories (Mauro 2021; 2022). Researchers cannot defend against all adversaries, and new threats will emerge from both technological and political change. New autocratic governments may supplant democratic ones, producing a hostile environment literally overnight. The guerrilla archiving that occurred in the wake of the election of Donald Trump in 2016 is an example of defensive measures precipitated by a change in government.10 Set within the tumultuous first decades of the twenty-first century, researchers must also remember that security best practices are not a durable methodology in themselves; they are mutable practices predicated on a complex network of technological, political, and cultural realities.
The security of research infrastructure and assets may run counter to the ethics espoused by humanistic research in general. What is to be done if our ethics are out of line with a necessarily restrictive security posture? We can first identify areas where security practices do, in fact, align well with the methodological norms of our disciplines. DH has long represented a widening of the methodological scope of the humanities, which has forced a decades-long re-evaluation of the motivations and intentions of humanistic inquiry. As a discipline, DH has been voracious in its re-evaluation of key questions in academic work (including peer review, credit allocation, labour rights, positionality, learning through failure, prototyping, and other process-based experiential learning), which, of course, still says nothing of computation, the internet, or any number of multimedia approaches that are expanding what and how humanities scholars make discoveries and add to the public discourse on human culture in general. The scope and scale of DH as a scholarly project is vast and evolving, which is why security best practices must now be added to this list of procedural, technical, and organizational adjustments.
In closing, it may be productive to have a provisional list of scholarly values that are likely to conflict with security practices. The points listed below are not intended as imperatives; rather, they represent a few of the most salient and broadly applicable points of friction between the assumptions of researchers and those of security professionals. The values of many academic researchers might be summarized as an emphasis on openness, transparency, collaboration, and sharing, among others:
- Threat modelling and risk assessments: Security best practices must become a condition of both funding and the ongoing safe operation of complex, public-facing research groups. The human factors of research security must protect against the misuse of project systems, while also protecting the privacy and physical security of researchers, students, and research participants. Engaging in a threat modelling process that is ongoing and evolving would require an inventory of assets and a risk assessment related to the protection of researchers and research materials, including software and data (see the first sketch following this list).
- Validating open source: Understanding the limits of the open-source movement in security is critical for the operational security of public-facing digital research projects. While there is no security through obscurity, open-source tooling matches well with a scholarly predilection towards open access. However, those same scholarly values of openness may encourage an unexamined trust in open-source tooling. Validating our software supply chain may represent an enormous cost to research projects, but threat assessment processes may prove that such protections are necessary (see the second sketch following this list). Validating research tooling may need to become a community effort among researchers using shared tooling, which would require new approaches to reporting and collaborating on security-related findings.
- Breaking things: DH researchers, at least since Jerome McGann’s (2001) Radiant Textuality, have been interested in “making things.” Researchers may also need to be involved in breaking things as a means of enacting a form of civil disobedience for digital citizens. Encrypting, destroying, and moving data are all important mechanisms for managing risk from this perspective (see the third sketch following this list). If data poses a risk to individuals, it may be necessary to simply delete it. However, we may also be compelled to download and share data without permission. Hack and dump operations may be the only means of preserving incriminating data that risks being destroyed by those eager to cover their tracks. Researchers may find new allies among activist archivists, like Archive Team or Distributed Denial of Secrets.11 Breaking things will require a new form of collegial solidarity to support and protect researchers engaged in quasi-legal data collection in the course of their research.
- Failure is not an option: There is little room for learning through failure in a security-oriented research culture. Defensive security is inherently asymmetrical: defenders must be completely successful in their defensive measures all the time, while an attacker need only succeed once to compromise a digital research project. A project may then be taken offline, deleted, poisoned, or otherwise subverted for political or financial reasons. Shawn Graham’s (2019) Failing Gloriously and Other Essays is an excellent history of the embrace of failure in humanities research. Failing, however gloriously, may not be an option for many researchers in the increasingly hostile threat environment emerging online. Failure must instead be replaced by a more responsive framework that includes identifying risks, protecting systems, detecting breaches, responding effectively, and recovering operations.12 New opportunities for collaboration may exist among researchers interested in validating security measures through penetration testing.
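The first sketch below illustrates the kind of asset inventory and risk assessment named in the first list item. The assets, and the one-to-five likelihood and impact scores, are hypothetical assumptions; the familiar risk = likelihood x impact scoring is one common qualitative convention rather than a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    likelihood: int  # 1 (rare) to 5 (expected) chance of compromise
    impact: int      # 1 (negligible) to 5 (catastrophic) consequence

    @property
    def risk(self) -> int:
        # Conventional qualitative score: risk = likelihood x impact.
        return self.likelihood * self.impact

# Hypothetical inventory for a public-facing research project.
inventory = [
    Asset("participant interview transcripts", likelihood=2, impact=5),
    Asset("public project website", likelihood=4, impact=2),
    Asset("shared credentials for archive server", likelihood=3, impact=4),
]

# Review the highest-risk assets first at each stage of the research life cycle.
for asset in sorted(inventory, key=lambda a: a.risk, reverse=True):
    print(f"{asset.risk:>2}  {asset.name}")
```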
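The second sketch gestures at validating the software supply chain, as raised in the second list item. It audits installed Python packages against a hypothetical allowlist of reviewed versions and verifies a downloaded artifact against a digest recorded at review time; the package names, versions, and digest are illustrative assumptions, not a vetted configuration.

```python
import hashlib
from importlib import metadata
from pathlib import Path

# Hypothetical allowlist: packages the team has reviewed, with versions pinned.
REVIEWED = {"requests": "2.31.0", "lxml": "4.9.3"}

def audit_installed() -> list[str]:
    """Flag installed packages that drift from the versions the team validated."""
    problems = []
    for package, pinned in REVIEWED.items():
        try:
            installed = metadata.version(package)
        except metadata.PackageNotFoundError:
            problems.append(f"{package}: not installed")
            continue
        if installed != pinned:
            problems.append(f"{package}: {installed} installed, {pinned} reviewed")
    return problems

def verify_artifact(wheel: Path, expected_sha256: str) -> bool:
    """Check a downloaded wheel against a digest recorded at review time."""
    return hashlib.sha256(wheel.read_bytes()).hexdigest() == expected_sha256

if __name__ == "__main__":
    for problem in audit_installed():
        print(problem)
```

A shared allowlist of this kind is one place where the community effort described above could begin: reviewed versions and digests can be published alongside project documentation.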
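The third sketch addresses encrypting data at rest, one of the risk-management mechanisms named in the third list item. It is a minimal illustration, assuming the third-party cryptography package and hypothetical file names, not a complete data-handling protocol.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_file(source: Path, destination: Path, key: bytes) -> None:
    """Encrypt a sensitive file so only key holders can read it at rest."""
    destination.write_bytes(Fernet(key).encrypt(source.read_bytes()))

def decrypt_file(source: Path, key: bytes) -> bytes:
    """Recover the plaintext; raises InvalidToken if the data was tampered with."""
    return Fernet(key).decrypt(source.read_bytes())

if __name__ == "__main__":
    key = Fernet.generate_key()  # store this key separately from the data
    # "interviews.csv" is a hypothetical sensitive research file.
    encrypt_file(Path("interviews.csv"), Path("interviews.csv.enc"), key)
    print(decrypt_file(Path("interviews.csv.enc"), key)[:80])
```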
These four points of friction between research methods and security practices demonstrate both the opportunities and the urgency of this project. Yet these examples say little about the cultural values that security practices may impart to our research practices. Cultural criticism is often a highly engaged, activist research practice. Public-facing research is also increasingly visible to antagonistic forces hostile towards institutions of higher education.
University-based researchers are facing increasingly sophisticated, often automated, security threats (Mauro 2022). Cybersecurity best practices for researchers represent a highly technical set of challenges, which are added to an already broad set of expertise areas. The fact remains that the processes and policies followed by researchers and research participants are critical in mitigating a range of risks and vulnerabilities. The security culture that is precipitated by social scholarship would be transparent and participatory. Through a participatory style of security policy adoption and training, researchers at every level can make better choices that impact the security of research projects. Because security threats evolve quickly, institutions with deeply integrated security policies will be able to improve situational awareness and protect researchers, their data, and institutional infrastructure.
The consequences of reviewing and revising the array of concerns encompassed by open social scholarship are significant: to ensure the security of research practices, project goals should now include a comprehensive definition of research security practices that addresses the security of individuals involved in research, the assets utilized or generated during the research process, and the software and technical infrastructure that supports research activities. These research security practices should be continuously measured and assessed at each stage of the research life cycle. Research security policies should be user-centric rather than system-centric. All participants in knowledge creation, including researchers, participants, institutions, and users, should have a ready understanding of the security practices followed during the creation, preservation, and dissemination of research objects. Appropriate tools will be needed to validate and verify these practices and to securely implement new knowledge environments. These tools must reflect the values, goals, and needs of researchers and other participants, thereby fostering a security culture that aligns with the broader objectives of scholarly inquiry.
Bibliography:
Birsan, Alex. 2021. “Dependency Confusion: How I Hacked into Apple, Microsoft and Dozens of Other Companies.” Medium. Accessed September 27, 2023. https://medium.com/@alex.birsan/dependency-confusion-4a5d60fec610.
Currie, Morgan, and Britt S. Paris. 2017. “How the ‘Guerrilla Archivists’ Saved History – and Are Doing It Again Under Trump.” The Conversation. Accessed September 27, 2023. https://theconversation.com/how-the-guerrilla-archivists-saved-history-and-are-doing-it-again-under-trump-72346.
Dewar, Elaine. 2021. On the Origin of the Deadliest Pandemic in 100 Years: An Investigation. Windsor, ON: Biblioasis.
Gitelman, Lisa. 2006. Always Already New: Media, History, and the Data of Culture. Cambridge, MA: MIT Press.
Graham, Shawn. 2019. Failing Gloriously and Other Essays. Grand Forks: The Digital Press at the University of North Dakota.
Mauro, Aaron. 2021. “Review of False Mirrors: The Weaponization of Social Media in Russia’s Operation to Annex Crimea.” Canadian Slavonic Papers 64 (4): 523–24.
———. 2022. Hacking in the Humanities: Cybersecurity, Speculative Fiction, and Navigating a Digital Future. London: Bloomsbury.
———. 2022. “Ukrainian Cultural Artifacts Are at Risk During the Russian Invasion, but Digitizing Them May Offer Some Protection.” The Conversation. https://theconversation.com/ukrainian-cultural-artifacts-are-at-risk-during-the-russian-invasion-but-digitizing-them-may-offer-some-protection-185673.
McGann, Jerome. 2001. Radiant Textuality: Literary Studies After the World Wide Web. New York: Palgrave Macmillan.
Nachreiner, Corey. 2014. “The Perfect InfoSec Mindset: Paranoia + Skepticism.” DarkReading. Accessed September 27, 2023. https://www.darkreading.com/operations/the-perfect-infosec-mindset-paranoia-skepticism.
Pauls, Karen, and Kimberly Ivany. 2021. “Mystery Around 2 Fired Scientists Points to Larger Issues at Canada’s High-security Lab, Former Colleagues Say.” CBC. Accessed September 27, 2023. https://www.cbc.ca/news/canada/manitoba/nml-scientists-speak-out-1.6090188.
Spafford, Eugene. 2019. “Rethinking Cyber Security.” CERIAS Security Seminar. https://www.youtube.com/watch?v=MI6pq4zIBx0.
1. See my Hacking in the Humanities: Cybersecurity, Speculative Fiction, and Navigating a Digital Future (London: Bloomsbury Publishing, 2022) for a fuller treatment of these issues.
2. See the Government of Canada’s “National Security Guidelines for Research Partnerships” for the evolving federal policies: https://science.gc.ca/site/science/en/safeguarding-your-research/guidelines-and-tools-implement-research-security/national-security-guidelines-research-partnerships.
3. For example, see the United Nations and US National Archives frameworks for digital preservation: https://archives.un.org/content/managing-risk and https://www.archives.gov/preservation/digital-preservation/risk.
4. See https://www.lockss.org/ and https://alliancecan.ca/en/services/research-data-management, respectively.
5. Proprietary software is not inherently more secure, since security through obscurity is simply not a viable means of assuring software reliability. Open-source software faces supply-chain attacks in the open, particularly in software repositories like NPM, PyPI, RubyGems, and others. Alex Birsan (2021) coined the term dependency confusion in a blog post that describes the scope of the problem well.
6. Take for example the case of Drs. Xiangguo Qiu and Keding Cheng, who were stripped of their security clearances at Canada’s National Microbiology Lab in July of 2019. While shrouded in secrecy and conflicting reports, CBC reporting implied national security and commercial secrets as possible concerns in their dismissal (Pauls and Ivany 2021). Elaine Dewar (2021) goes into much more detail about the complex issues involved in expelling foreign nationals during a national emergency.
7. The US Patriot Act is an excellent example of the legal contexts that may constrain research security cultures in the humanities because of the sweeping surveillance powers it affords the US government.
8. Consider the security reviews required to collaborate with researchers in Communist-governed China. The Canadian federal government, through Innovation, Science and Economic Development Canada, issued new guidelines and procedures, the National Security Guidelines for Research Partnerships, which describe the conditions necessary to receive government support: https://science.gc.ca/site/science/en/safeguarding-your-research/guidelines-and-tools-implement-research-security/national-security-guidelines-research-partnerships. These guidelines were drafted under principles such as academic freedom; institutional autonomy; freedom of expression; equity, diversity, and inclusion; research in the public interest; transparency; integrity; and collaboration. These principles match well with academic values in Canadian institutions, with the notable omission of decolonization. However, they do not easily match with risk assessments and threat models that must assume hostile action by some potential collaborators.
9. See https://www.memo.ru/en-us/.
10. See Morgan Currie and Britt S. Paris, “How the ‘Guerrilla Archivists’ Saved History – and Are Doing It Again Under Trump,” The Conversation, February 27, 2017, https://theconversation.com/how-the-guerrilla-archivists-saved-history-and-are-doing-it-again-under-trump-72346.
11. See, respectively, https://wiki.archiveteam.org/ and https://ddosecrets.com/wiki/Distributed_Denial_of_Secrets.
12. See the NIST Cybersecurity Framework: https://www.nist.gov/cyberframework.