My new book, Modernist Parasites: Bioethics, Dependency, and Literature, Post-1900, analyzes biological and social parasites in the political, scientific, and literary imagination. Sebastian Williams posits that, with the rise of Darwinism, eugenics, and parasitology in the late nineteenth century, the “parasite” came to be humanity’s ultimate other—a dangerous antagonist. But authors such as Isaac Rosenberg, John Steinbeck, Franz Kafka, Clarice Lispector, Nella Larsen, and George Orwell reconsider parasitism. Ultimately, parasites inherently depend on others for their survival, illustrating the limits of ethical models that privilege the discrete individual over interdependent communities.
We tend to think of parasites as greedy, self-serving figures. Yet Modernist Parasites explores what these figures have given to, rather than taken from, literary writers. From the First World War poetry of Isaac Rosenberg to the experimental writing of Brazilian novelist Clarice Lispector, Williams argues that the figure of the parasite is central to modernist writers’ imagining of a relational, interdependent model of selfhood – one that productively troubles the liberal humanist conception of the self as bounded, singular, and autonomous. Readers of modernism, animal studies, and posthumanism will find much to draw on in this generous and generative study of literary parasitism.
— Rachel Murray, University of Bristol
Modernist Parasites: Bioethics, Dependency, and Literature is a welcome contribution to the current critical conversation on modernism and the posthuman. The analysis moves deftly between national, historical and socio-economic contexts. From the unimaginable squalor of everyday life on the Western Front, through the choked landscapes of the American dust bowl, to the contradictions of twentieth-century metropolitan culture, parasitism defines modernist biopolitics.
The study amasses a wealth of textual evidence and theoretical argument, drawing from the American and the European traditions (Wilfred Owen, Isaac Rosenberg, John Steinbeck, Erskine Caldwell, Franz Kafka, Clarice Lispector, Mina Loy, Nella Larsen and George Orwell) to show that the parasite inhabits the century’s most vivid expressions of the marginalised and the abject.
Instructors can use several methods to detect whether a text has been authored by a chat bot such as ChatGPT, including both heuristic approaches and automated detectors such as GPTZero. Additionally, instructors can limit the use of ChatGPT and other chat bots by modifying assignment prompts, asking for pre-writing (e.g., outlines) and early drafts, or by requiring more handwritten work.
Ultimately, however, faculty should try to integrate these technologies into the learning process. ChatGPT does not fully eliminate the need for problem solving or navigating complex issues, and, in many cases, it frees writers to worry less about grammar or style and more about higher-order issues such as developing original claims or expressing an individual position.
Heuristic Methods for Detection
Many writing instructors can detect ChatGPT-generated text based on its organization, word choice, and the originality of its ideas.
For example, when asked to write a close reading of William Wordsworth’s “The World is Too Much with Us,” ChatGPT authored a relatively formulaic response that moved from the start of the poem to the end (i.e., chronologically) and was unable to produce a coherent argument when asked to organize topically instead. (See Appendix for the sample text created by ChatGPT.)
This reflects another tendency in the current iteration of ChatGPT to organize short essays using a tripartite structure. For example, ChatGPT now tends to write “First, X occurred [. . .] Then, Y happened [. . .] Finally, Z was present.” Though machine-learning programs are able to adapt over time, some writing experts have noted a simple, formulaic organization method (“X, Y, Z”). This may only be true if a user does not modify an original response (e.g., by asking ChatGPT to write more).[1]
Other basic issues indicated that the text was automated: the AI-generated text misquotes the final line of Wordsworth’s poem as “the sea that bares her bosom to the moon; / The winds that will be howling at all hours.” The bot includes two lines (not a single line) from the third quatrain and replaces “This Sea” (with a capital “S”) with “the sea.” Plus, the actual final line of the sonnet is significantly different; these surface-level problems caught my attention even before submitting it to GPTZero.
The chat bot also tends to elide in-text citations (though, again, this may change as the technology adapts). And, perhaps most notably for contemporary writing instructors, the essay reiterates only basic ideas or the most accepted interpretations, similar to those on SparkNotes, CliffsNotes, and Wikipedia. That is, it offers the most accepted reading and the main points easily found elsewhere; in terms of argumentation, the close reading is not an original or unique interpretation. The sample essay probably would not score an “A” based on expectations for college-level students to go beyond the commonplace reading, to balance multiple interpretations at once, or to “read against the grain.”
Formulaic organization, simple interpretations, and basic errors when identifying the lines of a poem are not definitive proof that cheating has occurred, however. A student (or parent) might easily point out that the issues above are reasonable for a student writer.
GPTZero
GPTZero detects AI-generated text based on “perplexity” and “burstiness.” Perplexity is a probabilistic measure of how predictable a text’s word choices are: roughly, how “surprised” a language model is by each successive word, reflecting the text’s randomness and complexity. Burstiness, or what rhetoricians commonly refer to as sentence variation, measures how much that unpredictability fluctuates across sentences (including shifts among complex, compound, simple, and compound-complex sentences). GPTZero detects patterns in word choice and sentence length based on massive databases of stored text. In sum, human writers are often more random than a chat bot, though, obviously, ChatGPT is not a static technology – it “learns” over time based on inputs.
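For readers who want a concrete sense of these two metrics, the toy sketch below approximates them in plain Python. It is emphatically not GPTZero’s actual implementation: real detectors score each token against a large language model, whereas this sketch uses a simple unigram model for “perplexity” and the standard deviation of sentence lengths as a crude stand-in for “burstiness.”

```python
import math
import re
from collections import Counter

def sentence_lengths(text):
    # Split on sentence-ending punctuation; crude, but enough for illustration.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    # "Burstiness" modeled as the standard deviation of sentence lengths:
    # human prose tends to vary sentence length more than machine output.
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return math.sqrt(variance)

def unigram_perplexity(text):
    # Toy perplexity: how "surprising" each word is under the text's own
    # unigram distribution. Higher values mean less repetitive word choice.
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    log_prob = sum(math.log(counts[w] / total) for w in words)
    return math.exp(-log_prob / total)
```

A text made of identically sized sentences scores a burstiness of zero, while varied sentence lengths push the score up; likewise, a text that repeats one word has a perplexity of 1, and more varied vocabulary raises it. The real insight these toys capture is only directional: uniformity is the tell.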
GPTZero is currently free to use and generates a descriptive report that not only evaluates a sample text but also explains terminology.
In the sample text, GPTZero detected that the close reading of Wordsworth’s poem had a comparatively low rate of perplexity in several instances (see Figure 1) – an indication that certain parts of the text were not written by a human.
Fig 1 Perplexity evaluates the unique word choices that writers make. AI-generated text is less perplexing, but, as the program notes, it is also important to consider perplexity across sentences (i.e., relative to the individual text more so than general word use).
Repeated use of GPTZero with the same text found the same results:
Perplexity: 16
Perplexity (across sentences): 46.1
The line with the highest perplexity: 151
GPTZero wrongly concluded that the text was human-generated (see Figure 2).
Fig 2 The green text indicates the false-negative conclusion of GPTZero in this first attempt.
The last two results are related. The most perplexing line in the essay came from a direct quote of the poem (it is actually two sentences, but it is missing punctuation). In other words, the AI-generated text included several quotes from a Romantic poet whose work is highly distinctive (see Figure 3). In summarizing the results, GPTZero produced a false-negative report: the tool concluded that all of the text was written by a human because several parts were in fact written by Wordsworth.
Fig 3 The most perplexing sentences were written by Wordsworth, which, given their frequency in the essay, ultimately led GPTZero to a false conclusion.
Submitting a Text without Direct Quotes
I submitted a close reading of the poem without direct quotations. This involved deleting quotes while still maintaining the integrity of the syntax in the original essay. The results were promising:
Perplexity: 10
Perplexity (across sentences): 32.8
The line with the highest perplexity: 58
GPTZero correctly concluded that the text was AI-generated.
Fig 4 When eliminating direct quotations, GPTZero came to the correct conclusion.
This indicates that GPTZero can be an effective tool for recognizing AI-generated text. However, when using the tool, instructors need to be aware that false negatives are likely in a sample text that quotes other material.
Finally, GPTZero users should acknowledge two important factors. First, GPTZero was “trained” on a previous iteration of ChatGPT and will likely become less effective over time. Second, GPTZero uses a limited, probabilistic method and cannot confirm its results with 100-percent accuracy. Students accused of using ChatGPT could easily point out that GPTZero is subject to errors in several cases.
Other Ways of Addressing AI-Generated Text
Instructors should also recognize that they may need to modify their assignment prompts, ask for pre‑writing or drafts, or require more handwritten work to diminish use of ChatGPT.
For example, ChatGPT is less effective when writing about events, texts, or issues within the last few years (post-2020).[2] So, asking a student to analyze a historical document and then to connect it to current events may limit their ability to use a chat bot in place of original work.
Alternatively, asking students to write assignments by hand or to turn in pre-writing and drafts will likely offset the use of ChatGPT. But students can obviously generate an automated text and work backward from it, either copying it by hand or creating an outline after the fact (which is something many students admit to doing in composition classes anyway).
In short, these are not foolproof methods.
Conclusion: ChatGPT in the Classroom
Perhaps the best method to address automated text in classrooms is to change the mindset that instructors have when approaching the issue. Rather than viewing ChatGPT as yet another tool for plagiarism, instructors can adopt methods for integrating it into classwork.
For example, in a previous writing course, I asked students to generate poetry using Google Verse, a machine-learning tool similar to ChatGPT that is designed specifically to write poetry. One assignment required students to generate a poem using AI and then to analyze what is omitted in the process. The assignment recognizes that these technologies are highly effective, yet it shows students the value of individual thought, critical analysis, and creativity.
In a literary studies class, a similar assignment might ask students to read a ChatGPT-generated close reading of Wordsworth’s poetry. Then, students could write about what the chat bot has omitted (I have used a similar assignment with SparkNotes and Wikipedia entries). This acknowledges the value of balancing multiple, often contradictory, interpretations of a literary text while still finding meaning in imaginative writing. Most importantly, it does not necessarily downplay the value of chat bots in organizing information – especially regarding dominant interpretations or common knowledge.
Ultimately, such technology will likely allow students and instructors to create more complex assignments and courses overall. Rather than viewing ChatGPT as a floodgate for new issues, instructors should use technologies to their advantage to eliminate some of the more tedious aspects of assignments and course design.
As a concluding remark, it seems worth mentioning that the text I chose to sample, Wordsworth’s “The World is Too Much with Us,” is itself a meditation on how technology changes the human imagination. But a twenty-first-century interpretation of the poem might recognize that what Wordsworth laments is not unique to the Industrial Revolution, and that the persona is ultimately critiquing how humankind uses and responds to technology – not necessarily the technology itself.
With the recent announcement that researchers have achieved nuclear fusion, atomic energy is once again in the news. But debates surrounding nuclear energy existed even before fission was fully realized. As H. Bruce Franklin notes, “for fifty years, from the first atomic explosion in Robert Cromie’s 1895 novel The Crack of Doom until 1945, nuclear weapons existed nowhere but in science fiction” (131). H. G. Wells coined the phrase the “splitting of the atom,” and writers such as Karel Čapek explored the impact of atomic energy on sustainability and human progress.
Put differently, many of the debates surrounding nuclear energy — its benefits and dangers — are deeply impacted by literature, film, and other cultural products.
This is important to understand because, as Ann Stouffer Bisconti notes, one of the greatest factors shaping public opinion about nuclear energy is media representation — whether real or fictional stories. For example, a disaster like Chernobyl is relatively rare for nuclear reactors, but it received extensive media coverage and is still widely discussed today. This includes fictional stories about Chernobyl, such as the five-part HBO series of the same name. In short, stories matter, especially when it comes to environmental discourse.
Sci-Fi and Nuclear Disaster
Several scholars explore the cultural fascination with nuclear apocalypse as well as indirect reactions to the Atomic Age. For example, Cyndy Hendershot notes that many of the B-movies from the 1950s and 1960s — such as It Came from Outer Space (1953), The Day the Earth Stood Still (1951), and Them! (1954) — may be a sort of psychological displacement about the horrors of nuclear annihilation. In other words, people struggle to deal with the fact that humans have created world-ending technologies, so “monster flicks” are a way of confronting our fears — at least indirectly.
Susan Sontag similarly wrote about this issue in a now-famous essay called “The Imagination of Disaster.” Why are people so captivated by disaster films, especially in science-fiction? Sontag theorizes that there’s a convergence between “high-brow” philosophical issues about the ethics of nuclear power (and nuclear weapons) and the “low-brow” B-movie genre. We should analyze film as an expression of cultural anxiety, for Sontag, precisely because it’s not rigid, academic discourse.
This view has had some effect: many film viewers now can’t see Godzilla as anything other than an allegory for nuclear war. Looking to cultural products like science fiction is an important way to continue studying these anxieties, including the work of writers like Karel Čapek, who may not be a household name for Americans (yet!).
The Radium Age: Nuclear Before Nuclear
Cultural artifacts such as film and literature offer insights about how we discuss nuclear power today, because many of these films and books deal with the same issues. But these early films and novels are perhaps more notable, because they had these conversations before fission technology even existed.
Joshua Glenn describes an entire movement or “era” that pre-dates fission as the “Radium Age.” H. G. Wells, Robert Cromie, and others knew about radioactivity (radium, in particular, was widely discussed in the early twentieth century), and therefore imagined the many things that could be made from it. Cromie saw disaster as imminent; Wells imagined a utopian vision of a world forced into pacifism by the threat of nuclear holocaust.
For me, though, one writer stands out: Karel Čapek (pronounced cha-pek). Čapek wrote two novels around the same time dealing with nuclear power, The Absolute at Large (1922) and Krakatit (1924). And the former is really about what happens when we do achieve sustainability. Is it really enough to have the technology, or is there perhaps more at play, like politics and rhetoric?
Čapek’s Portrayal of Nuclear Energy
Čapek is perhaps best-known today for introducing the term “robot” along with his brother Josef in the 1920 play R. U. R. (which stands for “Rossum’s Universal Robots”). Čapek wrote extensively, and is one of the most influential Czech-language writers in history. His journalism and nonfiction are just as powerful as his novels and plays, particularly his writings about anti-fascism and anti-authoritarianism.
The Absolute at Large follows the development of nuclear energy through the creation of the “Karburator,” a technology vaguely described in the book as using atomic energy to create virtually unlimited power. There’s one horrible side effect: the technology releases the “Absolute,” a transcendent, possibly spiritual, force that drives humanity to fanaticism.
Čapek uses nuclear energy as a metaphor for political division and authoritarianism, a problem he recognized would become increasingly dire in the twentieth century. But his story’s reliance on atomic energy roughly 15 years before fission was fully realized is important because it shows the role of imagination in shaping public opinion.
In many ways, the technology works well: it creates energy as it should. But the engineer who creates the Karburator realizes that destroying matter has a side effect (at least in this fictional world). The primordial memories of the universe, as well as the power of the Absolute (which seems to have a mind of its own), are no longer constricted by matter. Čapek thus raises a fundamental question: simply because you can create a technology, does that mean you should?
The Absolute at Large is a good read for any sci-fi buff. It’s an interesting concept, and Čapek occasionally uses experimental writing techniques, especially toward the end of the book. He asks big questions about historiography (how we write history after major world events) and what he views as the key problem of environmental sustainability: Sustainability is not simply a technological problem, but it is also an ethical one.
We need humanists, novelists, journalists, and philosophers to help us navigate the ethical and rhetorical dilemmas posed by sustainability. More people should study and read Radium Age fiction if they really want to understand the nuances of the nuclear debate (and also find some really good science fiction), and Čapek is a great place to start.
Sources
Bisconti, Ann Stouffer. “Changing Public Attitudes toward Nuclear Energy.” Progress in Nuclear Energy, vol. 102, 2018, pp. 103–113, doi.org/10.1016/j.pnucene.2017.07.002.
Čapek, Karel. The Absolute at Large. University of Nebraska Press, 2005.
Franklin, H. Bruce. War Stars: The Superweapon and the American Imagination. Oxford UP, 1988.
Glenn, Joshua. “Science Fiction: The Radium Age.” Nature, vol. 489, no. 1, 2012, pp. 204–205.
Hendershot, Cyndy. “From Trauma to Paranoia: Nuclear Weapons, Science Fiction, and History.” Mosaic, vol. 32, no. 4, 1999. Reprinted in Mosaic, vol. 54, no. 2, 2021, pp. 37–54.
Sontag, Susan. “The Imagination of Disaster.” Against Interpretation and Other Essays, Anchor, 1966, pp. 201–225.
The Parasite by Michel Serres combines information theory, poststructuralism, and posthumanism. Originally published in 1980 as Le Parasite, it was translated into English and published by Johns Hopkins University Press in 1982. The book later gained renewed attention when it was reissued as part of the “Posthumanities” series from the University of Minnesota Press, a special book series edited by Cary Wolfe. It is an important work for two major reasons. First, The Parasite challenges the idea of the humanist subject; that is, Serres argues that ideas such as independence, autonomy, and agency obscure the interdependency of humans with their environment. Second, Serres writes in an experimental style, as he aims to write a “new” type of philosophy that’s not hindered by academic conventions. The book is rigorous and playful, and it is sometimes affectionately referred to as “the book of books,” because Serres relies so heavily on intertextual references to structure his writing.
To summarize the text is quite difficult, but you might simply say that this book is about the role of the parasite in social, biological, and informational systems. Serres suggests that the parasite is not simply a key facet of any system, but that it also serves as a “thermal exciter,” as a catalyst for changing the very nature of any system.
In this sense, Serres deconstructs the relationship between host and parasite (note also that l’hôte means both “guest” and “host” in French). The parasite is less a “drain” on the energy of a given system or organism than something that changes the very nature of the host. The radical implication of this idea is that the parasite isn’t necessarily negative—as we commonly conceptualize it today—because it opens a new range of possibilities. So, for example, minority groups who are commonly deemed “parasitic” can make “pests” of themselves to bring about social, political, and other forms of change.
Types of Parasites
There are three types of parasites:
Biological – a parasite is an organism that lives in or on a host, harming it by draining energy (i.e., blood or nutrients) without providing any benefit in return. In the seventeenth century, “parasite” was likely used to refer exclusively to flora (e.g., mistletoe is a common parasitic plant).
Social – a so-called social parasite is a person who drains resources from a society without giving anything in return. Originally, parasitos referred to a specific character trope in Greek drama, and the word “parasite” was later co-opted by biologists.
Informational – le parasite is “static” or “noise” in a system. For Serres, an organized system exists in opposition to noise.
Serres is talking about three different types of parasites, but he emphasizes that these different types are not mutually exclusive. Arguably, one of the key aims of the book is to show that the use of the same word in three different contexts is not coincidental; instead, it informs the cohesion of the theory Serres aims to advance. For example, the biological term comes from the social context, and, in the early twentieth century, the biological context was often used (problematically) to refer to marginalized social groups, such as Jewish people, black people, women, or migrant farmers. There is an inter-relationship between the biological and the social; that is, the parasite is always already a biosocial idea.
The final iteration is likely unfamiliar to most English readers, as it refers to the French word for “static” in information theory. Le parasite in this sense refers to an interruption in a signal, a break in a chain of communication, and so forth. So, in the first parable Serres discusses, about the city and country rats, the noise that frightens the rats is itself a sort of parasite.
Relationship to Posthumanism
Though published before posthumanism was a distinct approach within literary studies, philosophy, and cultural studies, The Parasite provides an early example of posthumanist concepts. On the one hand, it openly destabilizes the ways we conceptualize nonhuman animals such as rats; on the other hand, it is far more significant in that it undermines the idea of an autonomous, individuated subject.
Key to liberal humanism (which posthumanism critiques) is the notion that individuals are autonomous and self-contained; Serres works to push back against this idea to suggest that parasites are inherent in any system, that relationality and dependency are much more widespread than we openly admit. Furthermore, such dependency often isn’t a negative thing; it represents a catalyst, a “thermal exciter,” a possibility for change.
To put it differently, it was quite common in the early twentieth century to describe “undesirables” as parasitic, which happened frequently in eugenics discourse, sociological theories, and so forth. Ergo, a person could be a humanist and still claim that certain groups (e.g., impoverished individuals) were parasites on the so-called productive class (think, for example, of capitalistic critiques of social welfare that so often rely on the image of independence as paramount). What Serres proposes that fits so well with posthumanist discourse is instead the idea that so-called parasites aren’t all bad, that independence is really more a product of problematic discourses than a reality, and that we need to think more critically about the potential limits of humanism.
Bioethics is a broad term used to refer to the ways we assign value to living organisms (human or otherwise). Most commonly used in medical institutions, the term bioethics is gaining wider usage outside of hospitals and medical schools. In particular, bioethical concerns in the humanities and social sciences often revolve around disability studies, animal studies, ecocriticism, and the various ways medical and biological discourses are intimately connected to culture.
In this sense, we might think of bioethics as an important link between the humanities and sciences, one that combines central topics in science and medicine with the major methodologies and critical theories most closely associated with the humanities. In fields such as disability studies and animal studies, many scholars insist on ethical approaches that have little to do with the hospital review boards that are most commonly signified by the word “bioethics.” For instance, scholars in recent years have written about how we represent physical disability in popular literature and culture, the commonplace philosophical assumptions that reinforce the human-animal divide, or the ways environmental science is often subject to popular cultural trends.
Science and Culture
There are sometimes arbitrary distinctions between science and culture, many of which concern the methods and approaches used by various scholars. A biologist and a philosopher might both try to answer the question of what it means to be human, but the former will likely consider the nature of the body or physical evolution while the latter might look at how definitions of the human change over historical periods. Neither is “more right” than the other; instead, what distinguishes the two is largely due to their approach.
When we consider bioethics in literary and cultural studies, we might keep the above example in mind. If we think about how to define “disability,” for instance, a medical practitioner will likely be able to develop a very specific definition; however, literary scholars such as Lennard Davis—to name one among many—will show that “disability” is a relatively new concept, one intimately related to the rise of “normalcy” and the novel in the nineteenth century. (For more information, see Davis’s Enforcing Normalcy.)
The same might be said of animal studies or environmental studies; scientists are hard at work investigating how certain animals feel pain, while others are developing objective methods for showing the devastating effects of the current climate crisis. However, many humanities scholars are aware that when discussing animal experience, we need to investigate the ways language and metaphor often limit our understanding. Or, in the context of climate change, we should consider the rhetorical strategies needed to convince people that we should act now to enact public policy. Facts and figures are important, but many people don’t make decisions based on matters of fact—that’s where rhetorical argumentation (a field of focus in English) becomes an important skill to leverage.
To suggest that science and the humanities (i.e., culture) are mutually exclusive is to ignore the ways discourse operates across disciplinary boundaries. And the humanities scholar who ignores science, like the scientist who ignores the humanities, is less likely to make valuable contributions to our society.
Bioethics in the Text: Jekyll and Hyde
So where does ethics come into play? Recent trends in literary and cultural studies have shifted focus to questions of the body as well as questions of ethics (sometimes called “the ethical turn”). Philosophers such as Martha Nussbaum, in Love’s Knowledge, have reinforced the idea that literature is a valuable site through which we can frame our ethical queries. Literature shows us the real-world complexities of life through which we can raise questions about ethics, an approach that is much more helpful than oversimplified thought experiments. And, considering the impact of authors such as Judith Butler—who revolutionized definitions of gender and performativity with books like Gender Trouble and Bodies That Matter—we now know that bodies are never just there, that they are instead embedded in discourse.
When considering bioethics in literary texts, we might look to Robert Louis Stevenson’s famous book The Strange Case of Dr. Jekyll and Mr. Hyde. (See my essay on medical quackery and Stevenson’s book here.) Stevenson’s book is commonly taught as an allegory for good and evil: the idea is that we all have a struggle between the two within us. While this is an easy interpretation for most high schoolers to digest, it’s also one that doesn’t address the historical context of the book. And the good/evil reading is a bit oversimplified; Stevenson’s writing is much too nuanced for such a cut-and-dried dichotomy.
Stevenson wrote the novella following a series of medical and pharmaceutical acts in the late nineteenth century, and many of the book’s themes reflect a growing ambivalence toward medical institutionalization. For example, Jekyll doesn’t become evil after ingesting his concoction; instead, he seeks an “avatar” to hide his unsavory tendencies from the start. And, Jekyll never actually knows how he created the drug that transformed him into Hyde—it’s implied that it is actually the result of a contaminated shipment of salts. So, the expert doctor in this book is really more of an evil quack.
While there is some deeply harmful skepticism toward medical professionals today (most of which is the result of outlandish conspiracy theories), what Stevenson reveals in Jekyll and Hyde is that mistrust in medical expertise has been around for years. Many medical and pharmaceutical regulations in Britain in the nineteenth century were partly the result of professional spats and political debates (consider, for instance, the establishment of the Pharmaceutical Society in Britain to displace the authority of apothecaries, or the delayed outlawing of opium as a ‘schedule one’ poison), enacting a disciplinary regime that exists to this day. Stevenson exaggerates the mistrust between the public and medical institutions in Jekyll and Hyde, but what he also points to is a growing rift between science and the humanities.
A “bioethical” approach to Stevenson’s iconic story—a novella that’s now been adapted countless times—helps to foreground the ways in which science and the humanities are deeply interconnected. Stevenson traces a growing ambivalence toward medical professionals in the late nineteenth century following a string of medical laws, raising important questions about the power dynamics that often exist between the patient and doctor, between medical institutions and the general public. What we can learn from Stevenson’s book is the ways public perception toward science and medicine is shaped, as well as the ways in which language and law in turn configure science and medicine.
Stevenson is at times vague about the specific medical and pharmaceutical practices of Dr. Jekyll, but what The Strange Case does emphasize is the need for ethical thought in medical and scientific practice. Jekyll’s research is notoriously self-serving; while Hyde is most clearly the villain, Jekyll isn’t much better. Stevenson doesn’t necessarily propose a clear ethical framework in the language of philosophy (e.g., deontology, virtue epistemology, normative ethics, utilitarianism), but what he does provide is a critique of medicine that’s been divorced from humanistic inquiry.
Modernism is a term used to refer to a collection of aesthetic, philosophical, and (in some cases) scientific movements in the late nineteenth and early twentieth centuries. For example, in the arts, movements such as surrealism, expressionism, imagism, and vorticism are usually included under the umbrella term modernism. Today, many scholars use the term broadly, but, since the inception of New Modernist Studies in the late 1990s, many agree that modernism is a loose designation rather than a rigid definition.
For many, modernism encapsulates some of the most significant events in recent history, such as the First World War and the Second World War (and consequently, the Holocaust), as well as Einstein’s theories of relativity and the emergence of quantum physics, the invention of atomic warfare, the shift from primarily rural to urban populations (in the US), and the women’s suffrage movement (in the US and Britain). The list goes on.
Naturally, this is not to suggest the modernist period is more important than, say, the medieval or early modern periods (i.e., the Renaissance), but it is necessary to note that Western and even non-Western societies changed drastically during the modernist era.
Why Should I Care About Modernism?
For starters, modernists address many of the same issues that we still deal with today. One of the key ideas I’ll address here is the alienation of the modern world, which authors in the modernist period sought to critique. Let’s take Franz Kafka as an example.
Kafka was a Czech-born, German-speaking Jewish writer who achieved literary fame only after his death. Today, we have the word Kafkaesque to describe the alienating, seemingly pointless (and often bureaucratic) way of life we experience on a day-to-day basis, and Kafka appeals to so many because he foregrounds precisely the things we might try to hide or avoid: our sense of awkwardness in social situations, our frustrations with work, failed romances, and the chaos of the modern world. Kafka teaches us that it’s OK to feel out of place, that there’s beauty in trivial circumstances, and that there are very real and unexpected threats to our freedom.
One of his best-known novels, The Trial (Der Prozess, literally “The Process” in German), follows the protagonist, K., as he’s unexpectedly arrested for a crime he didn’t commit. As far as we know, at least, he seems innocent. As the story unfolds, K. meets a cast of strange characters, including a painter, a priest, and a lawyer, who seem to know the ins and outs of the system, yet no one can tell K. precisely what he seeks to know. K. never finds the answers he’s looking for, and he ultimately resigns himself to his fate: two men (who originally arrested him) come to his apartment to carry out a death sentence. K. dies “like a dog” in the final pages, and we’re left with a sense of hopelessness.
What Kafka advances in The Trial is a theory of contingency in the modern world. K. has an ordinary job at a bank, and he lives his life in a rather ordinary fashion; he is, for all intents and purposes, a sort of “everyman” (or woman). But the novel is about what happens when our expectations, our understanding of the world, are disrupted. For Kafka, the modern world is chaos, even though we might try to make some sense of it. Kafka jolts us out of complacency, impelling readers to confront the harsh realities of life; that is, K. feels secure before the events of The Trial, but that security is misplaced. The Trial forces its readers to consider not only the contingency of everyday life but also the ways modernity creates an alienating environment. K. is swept up in his trial as if by a wave, an unceasing and incomprehensible process.
While this story might seem depressing, it also serves as a wake-up call. Kafka was writing in the years leading up to the Nazi rise to power (though he died in 1924, before Hitler took control of Germany), and, as critics like Walter Sokel have illustrated, Kafka was also writing in a period when thinkers like Freud questioned the idea that we have conscious control over our actions. Kafka seems to be trying to liberate his readers, to grab them by the shoulders and insist they question everything they take for granted.
Where Do We Go From Here?
In the twenty-first century, we’re still dealing with many of the same issues Kafka raises in The Trial. That’s perhaps one reason that, in the recent film Blade Runner 2049, the protagonist is named K. (a transparent reference to Kafka’s work). It’s easy to be complacent, to settle for our jobs, for the current state of affairs, but what happens when that’s disrupted? Or, as Blade Runner 2049 asks, what if we’re not who we thought we were?
Kafka has a particularly bleak outlook, but some modernists, like Hermann Hesse, T.S. Eliot, and Virginia Woolf, offer some answers. For Hesse, the alienation of the modern world leads to enlightenment, while Eliot insists (in works like “Tradition and the Individual Talent”) that in drawing from history we necessarily change it. Woolf shows us that there’s beauty in the quotidian, everyday stuff of the world, even if it’s something as simple as stopping in a garden or shopping on a busy street.
Modernism is important because it fundamentally asks us to change our perspective, whether that means questioning our surroundings or simply stopping to appreciate the world around us. As philosopher Martin Jay argues, modernism offers many viewpoints of the world, not just one. And this idea, for many, can be quite liberating.
The humanities refer to that group of fields—literature, history, philosophy, and so forth—that investigate the nature of meaning, or how we make sense of the world around us. Unlike some scientific fields, the humanities are less interested in the mechanics of the physical world and more focused on questions of quality, value, language, comparison, law, and religion. While an engineer might develop a ramp to make a building more accessible for wheelchair users, a humanist would be more likely to investigate the assumptions of the architect who designed an inaccessible building in the first place.
Though it’s common in the public and among academics to say, “Naturally, there’s an inherent value to the humanities,” few people can offer indisputable reasons why this is true. To be fair, many of the sciences run up against the same problem when we press the issue—it seems odd to say that number theory has more “real-world” application than professional writing. But in a social climate in which administrators push job training, it can be difficult to convince a hiring manager that a history or philosophy degree has value that isn’t simply intrinsic, beyond explanation. Philosophy requires a high degree of reading comprehension, analytic skill, and often mathematical knowledge, but you’ll rarely see a job post for a “philosopher” position on LinkedIn.
Helen Small’s 2013 book The Value of the Humanities offers some insight on this question of value. It’s a little different from other books with similar titles, primarily because Small’s approach to the question of value is not a polemic—it’s an analysis. She examines several common myths and arguments to gain clarity on the issue, many of which have to do with what we mean when we claim something is “valuable.”
Intrinsic Value?
One common argument goes like this: The humanities have intrinsic value; once you ask about use-value, you automatically frame the humanities in a neoliberal, capitalistic frame with which they are at odds.
This is kind of true if you’re a critic of neoliberalism (and anyone familiar with that word is probably already a critic), but it’s also a limiting answer that doesn’t get at the question of value. A moderate approach might suggest that the humanities do in fact teach you skills, or that the arts have a functional use-value. The argument that something has intrinsic value is problematic because it ignores (1) the ways value is embedded in social meaning and (2) the ways humanists can apply their knowledge broadly.
The common argument against focusing on use-value is intelligent in that it considers ideological frameworks, but it also tends to shut down conversation without actually addressing the point. Yes, concepts like “utility” are derived from eighteenth-century notions of capital, but the humanities also contribute significantly to the GDP and cultural development as well—whether you want them to or not.
Self-Culture and Individual Happiness
Common argument #2: The humanities contribute to social and individual happiness and well-being; it’s a way of understanding the human holistically.
This argument has its roots largely in nineteenth-century thinking, particularly John Stuart Mill’s theory of utilitarianism. But the problem with this idea, as Small notes, is that the humanities won’t necessarily make you happier; instead, they help you better understand what happiness means at any given moment.
Kathryn Hamilton Warren’s essay on “Self-Culture and the Private Value of the Humanities” (2018) touches on this issue, but from a slightly different, less utilitarian angle. Building on the transcendentalists such as Henry David Thoreau, Warren argues that the humanities should emphasize self-criticism and examination. Studying books won’t necessarily bring joy into your world, but it could give you insights about what you value in your life.
The position of “self-growth” is a bit controversial because it’s not always clear how you should make a living when you’re “growing”—Thoreau famously had his mother do his laundry while at Walden Pond, living in a cabin that was built on land borrowed from Emerson—but it goes back to the first question about how we frame value. As Small suggests, the emphasis on happiness and utility in the humanities is the product of social conditioning—something toward which we should retain a healthy dose of skepticism.
Better Citizens—or Elitism?
Common argument #3: Democracy needs the humanities. “Democracy is good; therefore the humanities are good”—that’s how this argument goes. But one premise often escapes scrutiny: how, exactly, the humanities (arguably) make us more democratic.
This idea is rooted in the liberal-arts tradition, and it dates to Socrates’ Apology: “I am the gadfly of the Athenian people, given to them by God.” The “gadfly” argument is that the philosopher is a supreme check on the politicians; the problem, though, is that Socrates thinks a little too highly of himself. The major critique of the notion that the humanities are central to democracy is that it tends to be elitist—who, after all, gets to be a humanist and shape our society?
Additionally, for those in post-secondary education, this argument applies mostly to primary or secondary school. There’s an access problem once we move to college demographics: only 12 to 15 percent of people study the humanities in higher education, so the argument tends toward stewardship, a deeply paternalistic model. Socrates, after all, thinks he was “given by God” to steer the political life of the Athenians.
This argument also puts politics first and the humanities second—the goal of the humanities is, per this logic, to support the political realm. It’s an argument that skips past the humanities directly and emphasizes the political world without clarifying how the two are related.
Martha Nussbaum has a unique take on this argument, and many others suggest that the key to making the humanities good for democracy is universalizing them, making them accessible to everyone. This avoids the paternalistic “sent by God” model, but we also need to recognize that the humanities have always had a troubling history of racism, imperialism, sexism, ableism, and various other types of “isms.”
What’s the Answer?
Per Small, we should avoid saying that the humanities are “absolute needs” in the rudimentary sense of the word, even though some argue “creative expression” is a human right. Calling the humanities an “absolute need” tends to carry more symbolic than real significance, especially from a rhetorical perspective. In the same way that the argument of “art for art’s sake” tends to ignore social context, the claim that the humanities are an “absolute necessity” tends to stretch the meaning of “absolute need” for the assertion to stick.
That said, the value of the humanities can be found in some version of the common arguments above. They all essentially revolve around the idea that we need to change our perspective when it comes to value, and that change in perspective is partly what the humanities offer. We can claim that the humanities are necessary for democracy, as long as we critically investigate that idea. The same goes for the idea that the humanities can’t be measured by use-value, as long as we recognize that, in fact, they do have use-value. Self-culture is important, too, as long as we don’t fool ourselves into thinking happiness is a consequence of humanities scholarship, and as long as we recognize that some degree of privilege is needed to devote one’s time to contemplation.
Small, Helen. The Value of the Humanities. Oxford University Press, 2013.
Warren, Kathryn Hamilton. “Self-Culture and the Private Value of the Humanities.” College Literature, vol. 45, no. 4, 2018, pp. 587–595.