Posthumanism and Michel Serres’s The Parasite

The Parasite: Overview

Michel Serres’s The Parasite, first published in French in 1980 and translated into English in 1982 by Johns Hopkins University Press (it was later republished by the University of Minnesota Press in its Posthumanities series), is a key philosophical text for understanding poststructuralism and its relationship to posthumanism. Serres writes in a disjointed and often cryptic fashion, in prose loaded with puns and wordplay (much of which doesn’t survive translation into English), yet his goal is to develop a theory of systems.

To summarize the text is quite difficult, but you might simply say that this book is about the role of the parasite in social, biological, and informational systems. Serres suggests that the parasite is not simply a key facet of any system, but also that it serves as a “thermal exciter,” as a catalyst for changing the very nature of any system.

In this sense, Serres deconstructs (much like Derrida) the relationship between host and parasite (he also gets a kick out of the fact that French uses the same word, l’hôte, for both “guest” and “host”). The parasite is thus less a “drain” on the energy of a given system or organism than something that changes the very nature of the host. The radical implication of this idea is that the parasite isn’t necessarily negative—as we commonly conceptualize it today—since it opens a new range of possibilities (especially in social contexts). So, minority groups commonly deemed “parasitic” can make “pests” of themselves to bring about social, political, and other forms of change.


Types of Parasites

Serres works with three different types of parasites, but he emphasizes in the first chapter of his book that these uses are not mutually exclusive. Arguably, one of the key aims of the book is to show that the use of the same word in three different contexts is not coincidental; instead, it underpins the cohesion of the theory Serres aims to advance. (A common critique of The Parasite is that it’s overwhelmingly general, but Serres makes a strong case in the opening pages for why his theory must be so general.)

The first use refers to biological parasites. In parasitology, a parasite is typically an organism that lives on or inside a host and drains the host’s energy (blood, nutrients, etc.). Its symbiotic relationship is thus one that harms the host (as opposed to mutualism). In this strict sense, there are no mammalian parasites, so Serres’s use of the term “parasite” to refer to rats, for instance, isn’t “correct” according to most biologists. However, Serres concedes this “misuse” and claims a looser use of the word.

The second iteration refers to social parasites, which is actually the original context for the word. (One reason Serres is able to use “parasite” so loosely in biological contexts is that he gives an effective etymology showing that “parasite” was in a sense co-opted by biologists—in a way, any use of “parasite” is always already an anthropomorphism.) In the first chapter he uses the example of a tax collector as a socio-economic parasite, but in later chapters he also draws on the original Greek parasitos (a stock character or guest who “repays” the host with storytelling), house guests, and so on.

The final iteration is likely unfamiliar to most English readers, as it draws on the French use of “parasite” for “static” in information theory. “Parasite” in this sense refers to an interruption in a signal, a break in a chain of communication, and so forth. So, in the first fable Serres discusses, that of the city rat and the country rat, the noise that frightens the rats is itself a sort of parasite.


Relationship to Posthumanism

Though published before posthumanism emerged as a distinct approach within literary studies, philosophy, and cultural studies, The Parasite provides an early example of posthumanist concepts. On the one hand, it openly destabilizes the ways we conceptualize nonhuman animals such as rats; on the other hand, and more significantly, it undermines the idea of an autonomous, individuated subject.

Key to liberal humanism (which posthumanism critiques) is the notion that individuals are autonomous and self-contained; Serres pushes back against this idea, suggesting that parasites are inherent in any system and that relationality and dependency are far more widespread than we openly admit. Furthermore, such dependency often isn’t a negative thing; it represents a catalyst, a “thermal exciter,” a possibility for change.

To put it differently, it was quite common in the early twentieth century to describe “undesirables” as parasitic, as happened frequently in eugenics discourse, sociological theories, and so forth. Ergo, a person could be a humanist and still claim that certain groups (e.g., impoverished individuals) were parasites on the so-called productive class (think, for example, of capitalist critiques of social welfare that so often hold up independence as paramount). What Serres proposes, and what fits so well with posthumanist discourse, is the idea that so-called parasites aren’t all bad, that independence is more a product of problematic discourses than a reality, and that we need to think more critically about the potential limits of humanism.


Source: https://www.upress.umn.edu/book-division/books/the-parasite 

Bioethics, Books, and Robert Louis Stevenson

Bioethics is a broad term used to refer to the ways we assign value to living organisms (human or otherwise). Most commonly used in medical institutions, the term bioethics is gaining wider usage outside of hospitals and medical schools. In particular, bioethical concerns in the humanities and social sciences often revolve around disability studies, animal studies, ecocriticism, and the various ways medical and biological discourses are intimately connected to culture.

In this sense, we might think of bioethics as an important link between the humanities and sciences, one that combines central topics in science and medicine with the major methodologies and critical theories most closely associated with the humanities. In fields such as disability studies and animal studies, many scholars insist on ethical approaches that have little to do with the hospital review boards that are most commonly signified by the word “bioethics.” For instance, scholars in recent years have written about how we represent physical disability in popular literature and culture, the commonplace philosophical assumptions that reinforce the human-animal divide, or the ways environmental science is often subject to popular cultural trends.

Science and Culture

The distinctions between science and culture are sometimes arbitrary, and many of them come down to the methods and approaches scholars use. A biologist and a philosopher might both try to answer the question of what it means to be human, but the former will likely consider the nature of the body or physical evolution, while the latter might look at how definitions of the human change across historical periods. Neither is “more right” than the other; what distinguishes the two is largely their approach.

When we consider bioethics in literary and cultural studies, we might keep the above example in mind. If we think about how to define “disability,” for instance, a medical practitioner will likely be able to develop a very specific definition; however, literary scholars such as Lennard Davis—to name one among many—will show that “disability” is a relatively new concept, one intimately related to the rise of “normalcy” and the novel in the nineteenth century. (For more information, see Davis’s Enforcing Normalcy.)

The same might be said of animal studies or environmental studies; scientists are hard at work investigating how certain animals feel pain, while others are developing objective methods for showing the devastating effects of the current climate crisis. However, many humanities scholars are aware that when discussing animal experience, we need to investigate the ways language and metaphor often limit our understanding. Or, in the context of climate change, we should consider the rhetorical strategies needed to convince people that we should act now to enact public policy. Facts and figures are important, but many people don’t make decisions based on matters of fact—that’s where rhetorical argumentation (a field of focus in English) becomes an important skill to leverage.

To suggest that science and the humanities (i.e., culture) are mutually exclusive is to ignore the ways discourse operates across disciplinary boundaries. And the humanities scholar who ignores science, like the scientist who ignores the humanities, is less likely to make valuable contributions to our society.

Bioethics in the Text: Jekyll and Hyde

So where does ethics come into play? Recent trends in literary and cultural studies have shifted focus to questions of the body as well as questions of ethics (a shift sometimes called “the ethical turn”). Philosophers such as Martha Nussbaum, in Love’s Knowledge, have reinforced the idea that literature is a valuable site through which we can frame our ethical queries. Literature shows us the real-world complexities of life through which we can raise questions about ethics, an approach that is much more helpful than oversimplified thought experiments. And, considering the impact of authors such as Judith Butler—who revolutionized definitions of gender and performativity with books like Gender Trouble and Bodies That Matter—we now know that bodies are never just there; they are instead embedded in discourse.

When considering bioethics in literary texts, we might look to Robert Louis Stevenson’s famous novella The Strange Case of Dr. Jekyll and Mr. Hyde. (See my essay on medical quackery and Stevenson’s book here.) Stevenson’s book is commonly taught as an allegory of good and evil: the idea is that we all have this struggle between the two within us. While this is an easy interpretation for most high schoolers to digest, it’s also one that doesn’t address the historical context of the book. The good/evil reading is also a bit oversimplified; Stevenson’s writing is much too nuanced for such a cut-and-dried dichotomy.

Stevenson wrote the novella following a series of medical and pharmaceutical acts in the late nineteenth century, and many of the book’s themes reflect a growing ambivalence toward medical institutionalization. For example, Jekyll doesn’t become evil after ingesting his concoction; rather, he seeks an “avatar” to hide his unsavory tendencies from the start. And Jekyll never actually learns how he created the drug that transformed him into Hyde—it’s implied that the transformation was the result of a contaminated shipment of salts. The expert doctor in this book, then, is really more of an evil quack.

While there is some deeply harmful skepticism toward medical professionals today (much of it the result of outlandish conspiracy theories), what Stevenson reveals in Jekyll and Hyde is that mistrust of medical expertise is nothing new. Many medical and pharmaceutical regulations in nineteenth-century Britain were partly the result of professional spats and political debates (consider, for instance, the establishment of the Pharmaceutical Society to displace the authority of apothecaries, or the delayed classification of opium as a “schedule one” poison), and they enacted a disciplinary regime that persists to this day. Stevenson exaggerates the mistrust between the public and medical institutions in Jekyll and Hyde, but he also points to a growing rift between science and the humanities.

A “bioethical” approach to Stevenson’s iconic story—a novella that’s now been adapted countless times—helps to foreground the ways in which science and the humanities are deeply interconnected. Stevenson traces a growing ambivalence toward medical professionals in the late nineteenth century following a string of medical laws, raising important questions about the power dynamics that often exist between the patient and doctor, between medical institutions and the general public. What we can learn from Stevenson’s book is the ways public perception toward science and medicine is shaped, as well as the ways in which language and law in turn configure science and medicine.

Stevenson is at times vague about Dr. Jekyll’s specific medical and pharmaceutical practices, but what The Strange Case does emphasize is the need for ethical thought in medical and scientific practice. Jekyll’s research is notoriously self-serving; while Hyde is most clearly the villain, Jekyll isn’t much better. Stevenson doesn’t necessarily propose a clear ethical framework in the language of philosophy (e.g., deontology, virtue ethics, utilitarianism), but what he does provide is a critique of medicine divorced from humanistic inquiry.


Why Does Modernism Matter?

What is Modernism?

Modernism is a term used to refer to a collection of aesthetic, philosophical, and (in some cases) scientific movements in the late nineteenth and early twentieth centuries. In the arts, for example, movements like surrealism, expressionism, imagism, and vorticism are usually included under the umbrella term modernism. Many scholars still use the term broadly, but, since the inception of the New Modernist Studies in the late 1990s, many agree that modernism is more a loose designation than a rigid category.

For many, modernism encapsulates some of the most significant events in recent history: the First and Second World Wars (and, consequently, the Holocaust), Einstein’s theory of relativity and the emergence of quantum physics, the invention of atomic warfare, the shift from primarily rural to urban populations (in the US), and the women’s suffrage movement (in the US and Britain). The list goes on.

Naturally, this is not to suggest the modernist period is more important than, say, the medieval or early modern periods (i.e., the Renaissance), but it is necessary to note that Western and even non-Western societies changed drastically during the modernist era.

Why Should I Care About Modernism?

For starters, modernists address many of the same issues that we still deal with today. One of the key ideas I’ll address here is the alienation of the modern world, which authors in the modernist period sought to critique. Let’s take Franz Kafka as an example.

Kafka was a German-speaking Jewish writer, born in Prague, who never achieved literary fame until after he died. Today we have the word Kafkaesque to describe the alienating, seemingly pointless (and often bureaucratic) way of life we experience on a day-to-day basis, and Kafka appeals to so many because he foregrounds precisely the things we might try to hide or avoid: our sense of awkwardness in social situations, our frustrations with work, failed romances, and the chaos of the modern world. Kafka teaches us that it’s OK to feel out of place, that there’s beauty in trivial circumstances, and that there are very real and unexpected threats to our freedom.

One of his best-known novels, The Trial (Der Prozess, which can also mean “the process” in German), follows the protagonist, K., as he’s unexpectedly arrested for a crime he didn’t commit. As far as we know, at least, he seems innocent. As the story unfolds, K. meets a cast of strange characters (a painter, a priest, a lawyer) who seem to know the ins and outs of the system, yet no one can tell K. precisely everything he seeks to know. K. never finds the answers he’s looking for, and he ultimately resigns himself to his fate: two men come to his apartment to carry out a death sentence. K. dies “like a dog” in the final pages, and we’re left with a sense of hopelessness.

What Kafka advances in The Trial is a theory of the contingency of the modern world. K. has an ordinary job at a bank, and he lives his life in a rather ordinary fashion; he is, for all intents and purposes, a sort of “everyman” (or woman). But the novel is about what happens when our expectations, our understanding of the world, are disrupted. For Kafka, the modern world is chaos, however much we try to make sense of it. Kafka jolts us out of complacency, impelling readers to experience the harsh realities of life: K. feels secure before the events of The Trial, but that security is misplaced. The Trial forces its readers to consider not only the contingency of everyday life but also the ways modernity creates an alienating environment. K. is swept up in his trial as if by a wave, an unceasing and incomprehensible process.

While this story might seem depressing, it also serves as a wake-up call. Kafka was writing in the years leading up to the Nazi rise to power (though he died in 1924, before Hitler took control of Germany), and, as critics like Walter Sokel have illustrated, Kafka was also writing in a period when thinkers like Freud questioned the idea that we have conscious control over our actions. Kafka seems to be trying to liberate his readers, to grab them by the shoulders and insist they question everything they take for granted.

Where Do We Go From Here?

In the twenty-first century, we’re still dealing with many of the same issues Kafka raises in The Trial. That’s perhaps one reason the protagonist of the film Blade Runner 2049 is named K. (a transparent reference to Kafka’s work). It’s easy to be complacent, to settle for our jobs and the current state of affairs, but what happens when all that is disrupted? Or, as Blade Runner 2049 asks, what if we’re not who we thought we were?

Kafka has a particularly bleak outlook, but other modernists, like Hermann Hesse, T.S. Eliot, and Virginia Woolf, offer some answers. For Hesse, the alienation of the modern world can lead to enlightenment, while Eliot insists (in works like “Tradition and the Individual Talent”) that in drawing from history we necessarily change it. Woolf shows us that there’s beauty in the quotidian, everyday stuff of the world, even if it’s something as simple as stopping in a garden or shopping on a busy street.

Modernism is important because it fundamentally asks us to change our perspective, whether that means questioning our surroundings or simply stopping to appreciate the world around us. For the philosopher Martin Jay, modernism offers many viewpoints on the world, not just one. And this idea, for many, can be quite liberating.

What is the Value of the Humanities?

The humanities are that group of fields—literature, history, philosophy, and so forth—that investigate the nature of meaning-making, or how we make sense of the world around us. Unlike some scientific fields, the humanities are less interested in the mechanics of the physical world and are more focused on questions of quality, value, comparison, law, and religion. While an engineer might develop a ramp to make a building more accessible for wheelchair users, a humanist would be more likely to investigate critically the assumptions of the architect who designed an inaccessible building in the first place.

Though it’s common, both in public discourse and among academics, to say that there’s an inherent value to the humanities, few people can offer indisputable reasons why this is true. To be fair, the sciences run up against the same problem when we press the issue—it’s hard to see how abstract mathematics has more “real-world” application than professional writing. But in a social climate in which administrators push job training, it can be difficult to convince a hiring manager that a philosophy degree has value that isn’t simply intrinsic and beyond explanation. Philosophy requires a high degree of reading comprehension, analytic skill, and often mathematical knowledge, but you’ll rarely see a job post for a “philosopher” on LinkedIn.

Helen Small’s 2013 book The Value of the Humanities offers some insights on this question of value. It’s a little different from other books with similar titles, primarily because Small’s approach to the question of value is not a polemical rant but an analysis. She examines several common arguments (or myths) to gain some clarity on the issue, many of which turn on what we mean when we claim something is “valuable.”

Intrinsic Value?

One common argument goes like this: The humanities have intrinsic value; once you ask about use-value, you automatically frame the humanities in a neoliberal, capitalistic frame with which they are at odds. 

This is kind of true if you’re a critic of neoliberalism (and anyone familiar with that word is probably already a critic), but it’s also a very limiting answer that doesn’t get at the question of value. A moderate approach might suggest that the humanities do in fact teach you skills, or that the arts have a functional use-value. The claim that something has intrinsic value is problematic because it ignores (1) the ways value is embedded in social meaning and (2) the ways humanists can apply their knowledge broadly.

The common argument against focusing on use-value is intelligent in that it considers ideological frameworks, but it also tends to shut down conversation without actually addressing the point. Yes, concepts like “utility” are derived from eighteenth-century notions of capital, but the humanities also contribute significantly to GDP and to cultural development—whether you want them to or not.

Self-Culture and Individual Happiness

Common argument #2: The humanities contribute to social and individual happiness and well-being; they offer a way of understanding the human holistically.

This argument has its roots largely in nineteenth-century thinking, particularly John Stuart Mill’s utilitarianism. But the problem with this idea, as Small notes, is that the humanities won’t necessarily make you happier; instead, they help you better understand what happiness means at any given moment.

Kathryn Hamilton Warren’s essay “Self-Culture and the Private Value of the Humanities” (2018) touches on this issue, but from a slightly different, less utilitarian angle. Building on transcendentalists such as Henry David Thoreau, Warren argues that the humanities should emphasize self-criticism and self-examination. Studying books won’t necessarily bring joy into your world, but it can give you insights into what you value in your life.

The position of “self-growth” is a bit controversial because it’s not always clear how you should make a living while you’re “growing”—Thoreau famously had his mother do his laundry while at Walden, which was built on land borrowed from Emerson—but it goes back to the first question about how we frame value. As Small suggests, the emphasis on happiness and utility in the humanities is the product of social conditioning, something we should regard with a healthy dose of skepticism.

Better Citizens—or Elitism?

Common argument #3: Democracy needs the humanities. “Democracy is good; therefore the humanities are good”—that’s how this argument goes. But one part of it often gets little scrutiny: how, exactly, the humanities (arguably) make us more democratic.

This idea is rooted in the liberal-arts tradition, and it dates to Socrates’s defense in Plato’s Apology: “I am the gadfly of the Athenian people, given to them by God.” The “gadfly” argument holds that the philosopher is a supreme check on the politicians; the problem, though, is that Socrates thinks a little too highly of himself. The major critique of the notion that the humanities are central to democracy is that it tends to be elitist—who, after all, gets to be a humanist and shape our society?

Additionally, this argument applies mostly to primary and secondary school, where everyone has access to the humanities. Once we move to college demographics, there’s an access issue: only 12 to 15 percent of people study the humanities in higher education, so the argument tends toward stewardship, a deeply paternalistic model. Socrates, after all, thinks he was “given by God” to steer the political life of the Athenians.

This argument also puts politics first and the humanities second: the goal of the humanities, per this logic, is to support the political realm. It skips past the humanities themselves and emphasizes the political world without clarifying how the two are related.

Martha Nussbaum has a unique take on this argument, and many others suggest that the key to making the humanities good for democracy is universalizing them, making them accessible to everyone. This avoids the paternalistic “sent by God” model, but we also need to recognize that the humanities have always had a troubling history of racism, imperialism, sexism, ableism, and various other types of “isms.”

What’s the Answer?

Per Small, we should avoid saying that the humanities are “absolute needs” in the rudimentary sense of the word, even though some argue that “creative expression” is a human right. Calling the humanities an “absolute need” tends to have more symbolic than real significance, especially from a rhetorical perspective. Just as the argument of “art for art’s sake” tends to ignore social context, the claim that the humanities are an “absolute necessity” tends to stretch the meaning of “absolute need” in order for the assertion to stick.

That said, the value of the humanities can be found in some version of the common arguments above. They all essentially revolve around the idea that we need to change our perspective when it comes to value, and that change in perspective is partly what the humanities offer. We can claim that the humanities are necessary for democracy, as long as we critically investigate that idea. The same goes for the idea that the humanities can’t be measured by use-value, as long as we recognize that the humanities do, in fact, have use-value. Self-culture is important, too, as long as we don’t fool ourselves into thinking happiness is an automatic consequence of humanities scholarship, and as long as we’re aware that we need some degree of privilege to be able to use our time for contemplation.

Small, Helen. The Value of the Humanities. Oxford University Press, 2013.

Warren, Kathryn Hamilton. “Self-Culture and the Private Value of the Humanities.” College Literature, vol. 45, no. 4, 2018, pp. 587–595.