The Cornell Journal of Architecture 11
Parametricism, Digital Scholasticism, and the Decline of Visuality[1]



Mario Carpo teaches architectural history and theory at the School of Architecture of Paris-La Villette, at the Yale School of Architecture, and at the Georgia Institute of Technology (Atlanta, GA). Carpo’s research and publications focus on the relationship among architectural theory, cultural history, and the history of media and information technology. Among his publications, Architecture in the Age of Printing (The MIT Press, 2001) has been translated into several languages. His latest monograph, The Alphabet and the Algorithm, was published by The MIT Press in 2011.
Porcelain Teapot with Oriental Ornamentation, Manufacturer Ginori Doccia, 1750–1755. Photo: A. Dagli Orti © DeA Picture Library /Art Resource, NY


Human-made objects are no longer what they used to be. Digital parametricism has changed the way we make objects, what we make with them and what we make of them. For the last few centuries, and until recently, we had two ways of making objects: hand-making, and mechanical machine-making. Objects that are made by hand tend to be visually different from one another, even when they are serially reproduced, because that is the way freehand making, and free, unconstrained human bodies, work. Think of a signature: no two signatures made by the same hand are identical, even though all signatures made by the same person are expected to be similar (otherwise they could not be recognized as autographs). Then came mechanical machines. Mechanical machines use casts, molds, stamps, or matrixes, and all imprints of the same matrix are the same. But mechanical matrixes are expensive to make, and once made, their cost must be amortized by using them as many times as possible. This is the world of mass-production, economies of scale, and standardization; at its core, this is the technical logic of industrial modernity.

Digital making does not work that way. Digital notations are in a permanent state of drift, and they can change all the time, randomly, automatically, or by the intervention of some external, unpredictable agency. Indeed, digital scripts are increasingly designed for variability, or open-endedness, right from the start. And digital fabrication in most cases does not use mechanical matrixes, hence, in a non-standard series, variations can be mass-produced, in theory, at no extra cost. In short, in a digital design and fabrication chain, product standardization is technically and culturally irrelevant; and in a digital design environment, modern authorship is replaced by some new format of hybrid or participatory agency—a joint venture of sorts among any combination of human and technical actors and networks.[2]

Whether we like it or not, this “hybridization of agency” is an essential aspect of digital parametricism, because it is embedded in its very technical nature. In mathematics, a parametric function is a function where the value of some parameters can vary. Likewise, parametric notations in digital design and fabrication contain terms that are left indeterminate—only the limits of their variations are set. These open values can be determined, or finalized, by the same person who wrote the original script, or by others. Some values may also evolve all alone, or almost: they may emerge, adapt, and self-organize. The possibilities are infinite: but most of them are deliberately left outside the control of the authors of the first, original script. A parametric function is an open-ended algorithm; a generative, incomplete notation. At the beginning of the digital turn, Gilles Deleuze and Bernard Cache invented a special term to define this new kind of technical object—they called it an objectile.[3] In philosophical terms, an objectile is a generic object: it is a general script that defines an open-ended class of individual or specific objects, which will all be different, as individuals, but also all similar, as they all have something in common. What they have in common is the code or script that was used to make them, and which is, in a sense, inscribed in them.
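
By way of a purely illustrative sketch, an objectile can be modeled in a few lines of Python: a generic script fixes only the admissible limits of some open values, and each individual object is obtained by finalizing those values, whether by the original author, by someone else, or automatically, at random. The object chosen here (a teapot), the parameter names, and the numerical ranges are invented for the example; they are assumptions of this sketch, not Deleuze’s or Cache’s.

```python
# A minimal, hypothetical sketch of an "objectile": a parametric script that
# sets only the limits of variation; every finalized set of values yields one
# individual object, different from all the others yet of the same class.
import random
from dataclasses import dataclass

@dataclass
class Teapot:
    """One individual object: a specific set of finalized parameter values."""
    height: float
    spout_curve: float
    handle_offset: float

class TeapotObjectile:
    """The generic object: a script in which only the limits of variation are set."""
    # Open parameters: the first author fixes only their admissible ranges.
    LIMITS = {
        "height": (12.0, 20.0),       # cm
        "spout_curve": (0.2, 0.8),    # dimensionless curvature factor
        "handle_offset": (1.0, 3.0),  # cm
    }

    def instantiate(self, **chosen) -> Teapot:
        """Finalize the open values: by the original author, by someone else,
        or automatically (here, at random) for any value left unspecified."""
        values = {}
        for name, (low, high) in self.LIMITS.items():
            value = chosen.get(name, random.uniform(low, high))
            if not low <= value <= high:
                raise ValueError(f"{name}={value} falls outside the scripted limits")
            values[name] = value
        return Teapot(**values)

# A non-standard series: every individual is different, yet all belong to the
# same class, because the same script is, in a sense, inscribed in each of them.
objectile = TeapotObjectile()
series = [objectile.instantiate() for _ in range(5)]
for teapot in series:
    print(teapot)
```

Every run of such a script produces a different series; none of its members is identical to any other, yet all are recognizably instances of the same objectile.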

The morphogenetic metaphor in this theory is as evident as is its Aristotelian and Scholastic provenance. In parametric making, the script is the genus, and the sets of individual objects are species; the script is the definition, and the series of events it creates is the extension. Scholasticism has already been related to one architectural style. Gottfried Semper was famously the first to define the Gothic as “petrified Scholasticism,”[4] which the great classicist did not mean as a compliment; with different nuances, the notion was taken over by Wilhelm Worringer,[5] and many others, and is today a commonplace. In 1951, Erwin Panofsky outlined several types of isomorphisms between Gothic architecture and Scholasticism:[6] first and foremost, Panofsky argued, both the Gothic and the Scholastic minds cherished the intricacy of articulation (in logic, an arborescence of definitions and divisions), and both Gothic and Scholastic articulations are a game of variations within the same class (e.g., all capitals at the same level in the nave of the same cathedral are often similar, but seldom identical). But the analogies between the Gothic, Scholasticism, and today’s parametricism are vast and wide-ranging, and are not limited to form or style. Scholasticism was the most successful “comprehensive unified theory” (in Patrik Schumacher’s words)[7] in the history of the West; additionally, participatory, collective, and often anonymous making, that mantra of today’s Web 2.0, was the rule on most medieval building sites, even though medieval builders could not see it that way, because the modern idea of individual authorship did not yet exist at the time. As Lars Spuybroek has eloquently pointed out, today’s parametricism is indeed in many ways a revival of the Gothic Revival—or a John Ruskin 2.0.[8]


Yet another pioneer of the digital turn, the already mentioned Bernard Cache, has recently concluded a thorough investigation of Vitruvius’s design method, and proven with compelling arguments that the method of Vitruvius was also quintessentially parametric.[9] Yet Vitruvius was no Gothic. The game here is a bit different, because in classical architecture most capitals in the same row in the same Greek temple, for example, are often virtually identical to one another; but all Doric capitals manufactured following the instructions that Vitruvius outlines in his Book IV[10] can only be, once again, similar (but not identical) to one another. They all belong to the same class—they are all defined as “Doric Capitals,” and they can be visually recognized as such. But Vitruvius’s algorithmic design process does not control the final finished shape of each individual Doric capital; each maker will need to add something at will—and at the same time in compliance with Vitruvius’s rules.
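
The same logic can be sketched for a Vitruvian rule, again only as an assumption-laden illustration: the ratios, the module, and the one choice left to the maker below are invented for the example and are not the proportions actually prescribed in Book IV. What the sketch shows is the structure of the rule: the written recipe fixes the class (“Doric capital”) as a set of ratios of a module, while leaving some values to the discretion of each maker, within prescribed bounds.

```python
# A schematic, hypothetical sketch of a Vitruvius-like rule; the ratios and the
# limits below are invented for illustration and are NOT those of
# De Architectura, Book IV.
from dataclasses import dataclass

@dataclass
class DoricCapital:
    module: float         # the column's lower diameter (any unit)
    height: float         # fixed by the rule as a ratio of the module
    abacus_width: float   # fixed by the rule as a ratio of the module
    echinus_curve: float  # left to the maker, within prescribed bounds

def doric_capital(module: float, echinus_curve: float = 0.5) -> DoricCapital:
    """Every capital produced by this rule is recognizably of the same class,
    yet no two need be identical: the module and the maker's choices vary."""
    if not 0.0 <= echinus_curve <= 1.0:  # hypothetical limit of the maker's freedom
        raise ValueError("the maker's choice must stay within the rule")
    return DoricCapital(
        module=module,
        height=0.5 * module,         # illustrative ratio, not Vitruvius's
        abacus_width=1.25 * module,  # illustrative ratio, not Vitruvius's
        echinus_curve=echinus_curve,
    )

# Two makers, one written rule: similar, but not identical, capitals.
print(doric_capital(module=90.0, echinus_curve=0.4))
print(doric_capital(module=75.0, echinus_curve=0.7))
```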

Indeed, based on this evidence, one would be tempted to conclude that all premechanical theories of manufacturing are bound to be parametric; just as most postmechanical, digital theories of manufacturing are. Evidently, hand-produced variations are slower to make and fewer in number than those we can mass-produce today using digital tools. But, with the exception of speed and performance, and a few other details, both hand-making and digital making generate designed variations because, when making by script, both follow rule-based, generative notations (alphabetic or algorithmic, based on text or on computation, logocentric or logiconcentric). This is the opposite of mechanical making, which mostly reproduces identical copies of archetypal visual models using mechanical matrixes, stamps, casts, or molds.

This parallel between hand-making and digital making, and between the alphabet and the algorithm, may be a truism—too evident to be meaningful; but if it is true, its consequences, including aesthetic consequences, are staggering. For it appears that we are now moving out of a visual universe of exactly repeatable visual imprints and into a new world of endlessly changing, invisible algorithms: either designed for change (customization, or evolution) from the start, or not, but then often tweaked and changed all the same. In this new world, objects and their outward and visible forms are only the occasional and ephemeral epiphany of a script embedded in them, of which every manifestation can be different, randomly or automatically or by design—or by the design of some random human agent. In the mechanical age, our apprehension of human-made objects used to be based on the identification of identical copies: in a world of mechanical prints, either one visual form is identical to another, and then it has the same meaning, or it is different, and then it has another meaning, or no meaning at all. In a parametric environment, on the contrary, identification is not based on identicality, but on similarity and resemblance—just like in nature, where the reproduction of similarities is the rule (as in the classical topos of the resemblance between father and son), and identical copies are the exception (as in the case of two monozygotic twins).

Alessi Tea and Coffee Pots, Plans. © Greg Lynn/FORM.



In the old mechanical world, indexical signs, and some categories of iconic signs, were particularly powerful, because they were predicated on immediate visual identification—on our capacity to relate the copy to the original or archetype that it reproduced. In the new parametric world, only some form of social consensus can bestow conventional meaning on endlessly variable visual signs. Charles Sanders Peirce famously called this class of signs “symbols,” and once again, not coincidentally, symbolic meaning was powerful in premechanical times, for example in the Middle Ages, when only society at large could attach stable meaning to unstable signs. In his famous study of medieval imitation, Richard Krautheimer tried to account for the medieval capacity to identify objects of all sorts and shapes as equally valid copies of the same model—for example, the Shrine of the Holy Sepulchre in Jerusalem, of which each medieval city had one or more look-alikes; except that none of these look-alikes “looked alike,” as most copies were different from one another and all were different from the original, which by the way no one at the time had seen. Krautheimer concluded, with some perplexity, that, by our standards, that is, by modern standards, medieval visuality appeared to be “almost emphatically non-visual.”[11]

The same also applies, and will increasingly apply, to today’s postmodern visuality, born of the new digital and parametric environment. In an environment where visual forms can change so fast and so often, appearances will count for less and less, and our capacity to interpret them will count for more and more. The meaning of parametrically generated, variable forms will be increasingly contingent on our capacity to see them in context, and to relate them to others, by comparison and selection, generalization and abstraction, recognizing variations, inferring patterns, and sorting events into classes; choosing some, discarding others, and making some sense out of them—trying to make cosmos out of chaos.


This may sound disturbing, but it is not far-fetched. After all, parametricism is only a mild and much domesticated version of the universal paradigm of variability, which is inherent in all things digital. Unlike media objects, such as texts, images, sounds, or software, which can morph and change endlessly and effortlessly, physical objects must cope with physical constraints. Parametricism is a way (and, incidentally, not the only one)[12] to impose some form of curatorship and supervision on the otherwise open-ended drifting of digital notations: parametric systems only allow variations within given limits; versioning must stop at some point in time, so that fabrication may ensue; and these limits in scope and time are set by authors. But media objects have already transcended these contingent limitations. And we are already striving to come to terms with the formal and functional consequences of unbridled and systemic “aggregatory” versioning. We are learning, by trial and error, that digital images (that is, the totality of today’s reproduced images) are no longer indexical traces of the originals they represent; that all electronic documents can be edited anytime and often anonymously by almost anyone; and that digital texts and documents, technical and literary alike, are increasingly destined from the start to an endless meandering of unpredictable accruals, deletions, and revisions. We know that any Wikipedia entry can read differently today from what it read yesterday, but we also know that if the next change is a blunder, someone, somehow, will soon set it right. Likewise, we know that the software we are using today may work a bit differently tomorrow, after the next, inevitable, automatic “update”—which we never signed on for, but from which there seems to be no way to unsubscribe. But we also know that changes will be incremental and, most of the time, not catastrophic. And the same applies to all that is run or managed through networked digital systems—that is, to the quasi-totality of our technical environment. As this environment is ever more based on permanent versioning by participatory scripting, or even by automatic self-adjustment, we are getting used to that inevitable, diffuse, quintessential ricketiness which is inherent in all things that can change all the time because they are scripted by many and by no one in particular—a new, evolutionary, participatory, messy, confusing digital style of many hands. We can try to make it look like something we are familiar with, something we feel we can still control using the same good old authorial tools we once knew. Historically speaking, this is seldom a winning strategy.


Endnotes

1. An earlier version of this text was presented at the symposium “The Eclipse of Beauty: Parametric Beauty,” Harvard GSD, March 9, 2011.

2. See Carpo, The Alphabet and the Algorithm (Cambridge, MA: MIT Press, 2011), in particular chapter 4, “Split Agency,” 123–128, and Carpo, “The Craftsman and the Curator,” Perspecta 44, Domain, T. Gharagozlou and D. Sadighian (eds.) (2011): 86–91.

3. Gilles Deleuze, Le pli: Leibniz et le baroque (Paris: Editions de Minuit, 1988); The Fold: Leibniz and the Baroque, Tom Conley (trans.) (Minneapolis: University of Minnesota Press, 1993).

4. Gottfried Semper, Der Stil in den technischen und tektonischen Künsten oder praktische Ästhetik (Frankfurt a.M.: Verlag für Kunst und Wissenschaft, 1860), XIX (“Eben so war der Gothische Bau die lapidarische Uebertragung der scholastischen Philosophie des 12. und 13. Jahrhunderts.”) and 509 footnote 1 (the Gothic as “steinerne Scholastik”).

5. Wilhelm Worringer, Formprobleme der Gotik (Munich: R. Pieper, 1912); translated as: Form Problems of the Gothic (New York: G.E. Stechert, 1920; London, G.P. Putnam’s Sons, 1927), Form in Gothic, Sir H. Read (authorized trans.) (New York: Schocken, [1957] 1964; London: Tiranti, 1957).

6. Erwin Panofsky, Gothic Architecture and Scholasticism (Latrobe, PA: Saint Vincent Archabbey, 1951).

7. Patrik Schumacher, “Parametricism and the Autopoiesis of Architecture,” Log 21 (2011): 63–79; see, in particular, 63. See also Schumacher, The Autopoiesis of Architecture: A New Framework for Architecture, vol. I (London: Wiley, 2011).

8. See Lars Spuybroek, The Sympathy of Things: Ruskin and the Ecology of Design (Rotterdam: NAi, 2011).

9. Bernard Cache’s doctoral dissertation on Vitruvius’s design method, “Fortuito supra acanthi radicem,” was defended in January 2009, and is unpublished to this day. See also: “Vitruvius Machinator Terminator,” in Cache, Projectiles (London: Architectural Association, 2011), 119–139.

10. Vitruvius, De Architectura. IV: 3, 4.

11. Richard Krautheimer and Trude Krautheimer-Hess, Lorenzo Ghiberti (Princeton, NJ: Princeton University Press, 1956), 294 (“To Petrarch […] it mattered little whether or not a site was commemorated by a monument, or merely haunted by memories. His approach was entirely literary, almost emphatically nonvisual.”) See also the famous (and controversial) notion of a “non-visual” form of imitation in the Middle Ages in Krautheimer, “Introduction to an ‘Iconography of Mediaeval Architecture,’” Journal of the Warburg and Courtauld Institutes 5 (1942): 1–33, in particular, 17–20. Reprinted in Studies in Early Christian, Medieval, and Renaissance Art (New York: New York University Press, 1969), 115–151; see, in particular, 117–127 and nn. 82–86.

12. See Carpo, “Digital Style,” Log 23 (2011): 41–52.



