What’s in a Genre?

The Uses of Genre: Is There an “Adam Smith Question”?

by Ryan Healey, Ewan Jones, Paul Nulty, Gabriel Recchia, John Regan, and Peter de Bolla

The essay begins:

This paper sets out a novel computational method of testing the uses to which generic membership can be put to help us understand large-scale movements in the history of ideas. It does so by taking a well-known test case, the so-called Adam Smith Question, as an easily identifiable (and well-researched) problem in generic consistency. In brief, the problem is this: Smith proposes one version of human nature based on sympathy in his Theory of Moral Sentiments (TMS) and another, completely orthogonal to it, based on self-interest in his Wealth of Nations (WN). This incoherence (if one assumes that Smith worked hard at creating a unified theory of human nature, which in itself is contestable) is said to be one of genre, the difference between moral philosophy and political economy.

The wider context of Smith’s intellectual project—let us say the second half of the British eighteenth century—also provides us with a background in which the question of genre is itself problematic or undergoing conceptual construction. It has long been recognized that over the course of the century the contours of emerging genres—for example, prose fiction, political economy, moral philosophy, aesthetics—would ossify around different and sometimes contradictory sets of moral, social, and epistemological premises. Literary critics have largely investigated this generic instability via the conspicuously hybrid genre of the novel, with particular attention to the early novel’s seeming inattention to modern distinctions of “fact” and “fiction,” within what Mary Poovey calls the “fact/fiction continuum.”

A significant fellow traveler in this epistemological crisis can be identified in the uneven and incompatible development of concepts of economic morality across different genres, in what might be termed the “self-society continuum.” In Commerce, Morality, and the Eighteenth-Century Novel, Liz Bellamy argues that economic texts privileged the second term, “society,” as they compressed individuals and their personal faculties into an indiscriminate mass of homo economicus, while, conversely, contemporaneous texts of moral philosophy addressed a unique individual steeped in elite civic humanist rhetoric that exempted him or her from the rational maximization of money and naked self-interest. The new commercial morality was understood as peculiar, destructive, and “far from being overwhelmingly accepted or embraced” by ruling classes whose traditionalism could not easily comprehend and accommodate the burgeoning intangible property of financial instruments. This unease was then reflected in a parallel discordance between works of moral philosophy and the fledgling genre of economics. Bellamy explicitly identifies this inconsistency in generically separate works by David Hume and Smith, who seem to negate the ethical directives of their moral philosophy with their economic texts and vice versa.

Yet “negates” is perhaps too strong a way of putting things. After thirty years without significantly revising the work, Smith began to alter TMS in the last year of his life, adding a sixth part titled “Of the Character of Virtue” that “appealed to the citizenry to place the interests of society ahead of the interest of any faction to which they might be attached.” Crucially, however, this revision did not radically alter the precepts of the theory to align more explicitly with the selfish personality exhibited by WN. For Smith, there seems never to have been an urgent need to bring the two works into dialogue. To complicate matters still further, the alleged contradiction may well arise, at least in part, from the anachronistic imposition of generic differences that at the time were not perceivable. What would only later be recognized as political economy was still, at the point of Smith’s writing, in the process of coming into being. As Margaret Schabas notes, “Economic phenomena were viewed as contiguous with physical nature” up until the mid-nineteenth century, when the notion of “the economy” as a delimited entity first arose.

It is in large part due to these complex compositional, generic, and historical contexts that scholars have, over the past three decades, increasingly tired of the Adam Smith Problem, with its binary options. In 1998, Amos Witztum declared briskly that, “for modern readers this is not a real problem.” More recently, David Wilson and William Dixon claimed, “The old Das Adam Smith Problem is no longer tenable. Few today believe that Smith postulates two contradictory principles of human action.” Close readings of the concepts of sympathy, prudence, and self-interest in TMS and WN have led critics to conclude that Smith does not openly recommend two polar opposite theories of human motivation, albeit “there is still no widely agreed version of what it is that links these two texts, aside from their common author; no widely agreed version of how, if at all, Smith’s postulation of self-interest as the organizing principle of economic activity fits in with his wider moral-ethical concerns.”

This paper applies a novel computational mode of analysis to the large question of genre and to the more specific issues that arise in Smith’s work. We do so, however, not to flog the dead horse of das Adam Smith Problem; we do not believe that such a debate could ever be decisively “settled” one way or another. We do, however, believe that the computational analysis of large corpora (and subcorpora) permits us to discern both the continuities and discontinuities of conceptual usage across different works—continuities and discontinuities to which more standard modes of intellectual history, for all their many virtues, remain blind. We thus interrogate two interlocking questions: first, to what extent does the sympathy so cardinal to TMS, and the self-interest so essential to WN, participate in broader conceptual networks, whose existence is statistically verifiable? Second, to what extent do such local continuities or discontinuities prove representative of broader generic differences in the culture at large? Chief among the virtues of such a computational approach, we believe, is the critical vantage it offers with regard to genre. Rather than simply accepting the markers that authors or publishers apply to the texts at hand (“political economy,” and so on), we use patterns of lexical collocation to investigate whether such distinctions are indeed valid. Continue reading …

In this article authors Ryan Healey, Ewan Jones, Paul Nulty, Gabriel Recchia, and John Regan join Peter de Bolla in using the methodologies of de Bolla’s The Architecture of Concepts to uncover the complex conceptual networks in which lexical items are embedded. For de Bolla, because concepts are “units of ‘thinking’ that cannot be expressed in words without remainder,” they may be stretched across constellations of collocations circulating in a “common unshareable” domain of the textual culture at large.
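The excerpt does not reproduce the Concept Lab’s actual measures, but the basic move—ranking the words that habitually keep company with a term such as “sympathy” or “interest” in a given corpus—can be illustrated with a minimal sketch. The snippet below is only a toy illustration under our own assumptions (a fixed co-occurrence window and a simple pointwise mutual information score), not the authors’ method; the token lists, window size, and frequency cutoff are placeholders.

    import math
    from collections import Counter

    def collocation_pmi(tokens, target, window=10, min_count=2):
        """Rank the collocates of `target`: words appearing within +/- `window`
        tokens of it, scored by pointwise mutual information, estimated here as
        log2( p(word | near target) / p(word) )."""
        word_counts = Counter(tokens)
        near_counts = Counter()
        n = len(tokens)
        for i, tok in enumerate(tokens):
            if tok != target:
                continue
            lo, hi = max(0, i - window), min(n, i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    near_counts[tokens[j]] += 1
        total_near = sum(near_counts.values())
        scores = {}
        for w, c in near_counts.items():
            if w == target or word_counts[w] < min_count:
                continue
            scores[w] = math.log2((c / total_near) / (word_counts[w] / n))
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Placeholder token lists; in practice one would tokenize full plain-text
    # editions of TMS and WN (and a wider eighteenth-century corpus) and compare
    # the ranked collocates of "sympathy" and "interest" in each.
    tms_tokens = ("we are concerned for the sympathy of others and enter into "
                  "the sentiments of others").lower().split()
    wn_tokens = ("every man pursues his own interest and the interest of the "
                 "society follows from it").lower().split()
    print(collocation_pmi(tms_tokens, "sympathy", window=5, min_count=1)[:5])
    print(collocation_pmi(wn_tokens, "interest", window=5, min_count=1)[:5])

Comparing the ranked collocates of the same term across TMS, WN, and a larger contemporaneous corpus is one way of asking, in the spirit of the article, whether generic labels track real differences in the conceptual company a word keeps.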

PETER DE BOLLA is Professor of Cultural History and Aesthetics at the University of Cambridge where he also directed the Cambridge Concept Lab. His most recent monograph is The Architecture of Concepts: The Historical Formation of Human Rights (Fordham University Press, 2013), and he is currently writing a book on the artist Pierre Bonnard.


Distant Reading and the Blurry Edges of Genre

Ted Underwood, contributor to Representations’ recent special forum Search, continues his engagement with digital questions on his own blog, The Stone and the Shell, with “Distant Reading and the Blurry Edges of Genre.”

Having just spent two years attempting to subdivide an entire digital library by genre, Underwood encountered some interesting problems. “In particular, the problem of ‘dividing a library by genre,’” he says, “has made me realize that literary studies is constituted by exclusions that are a bit larger and more arbitrary than I used to think.”

Underwood’s contribution to the Representations Search forum, “Theorizing Research Practices We Forgot to Theorize Twenty Years Ago,” asks what it means to say that search plays an “evidentiary role in scholarship”:

“Quantitative methods have been central to the humanities since scholars began relying on full-text search to map archives. But the intellectual implications of search technology are rendered opaque by humanists’ habit of considering algorithms as arbitrary tools. To reflect more philosophically, and creatively, on the hermeneutic options available to us, humanists may need to converse with disciplines that understand algorithms as principled epistemological theories. We need computer science, in other words, not as a source of tools but as a theoretical interlocutor.”

Full text of this article can be found here.