Mechanism and Holism in Modernity

A Holistic Approach to Making Sense of the Modern World

While science has been an important avenue through which humans have attempted to explore and understand their surroundings since the time of the Greeks, it was not until the late nineteenth century that its methods, across the increasingly specialized and defined scientific disciplines, began to take on a single, well-defined appearance. The mechanical worldview — I use the word worldview here because, as this essay will examine, its basic components began to appear in more and more aspects of human life — is characterized by attempts to reduce and simplify the universe into quantitative units, and then to analyze and use those units to understand and manipulate nature (and later, people) in ways previously impossible. The method’s success in the “harder” sciences in the eighteenth and nineteenth centuries — physics, chemistry, and some aspects of biology — led many scientists to attempt to apply it to other areas of human inquiry as well. As the twenty-first century approached, however, the mechanistic outlook’s inability to deal with the complex problems of the life and social sciences became increasingly apparent.

In this essay, I want to examine the mechanistic methodology’s entrance into the softer sciences and to discuss the problems inherent in such a reductionist approach to the complicated questions the life and social sciences attempt to answer. How did it influence the types of questions that scientists asked, and what would alternative questions (with a more holistic basis) have looked like? Finally, I want to end with a brief discussion of how humanity is still firmly in the grip of the mechanistic worldview, and how it continues to shape the way we understand our surroundings and ourselves. The questions that scientists ask, I want to argue, are influenced by the methods (and philosophical understandings of those methods) to which they subscribe, and the implications of this association for the kind of science being done affect far more than just the scientific community.

Scientific management provides a good point of departure in a discussion about mechanically influenced social science. Frederick Winslow Taylor’s 1911 book on the subject, The Principles of Scientific Management, elucidates his ideas on the topic; he proposes a managerial system in which the knowledge of the worker is systematized so that the manager can plan his laborers’ tasks in the most efficient way possible. By scientifically calculating how productive a man can be, the manager can optimize his employee’s productivity and, Taylor asserts, also the worker’s satisfaction and happiness. An obsession with gathering, quantifying, and analyzing knowledge, so characteristic of mechanical methodology, underlies Taylor’s solution to inefficient work. The workingman is objectified, made into a machine whose output can be optimized. Although Taylor stresses the individuality of each worker in the sense that every man has different working strengths, he certainly is not implying that what makes a man individual are his ideas or preferences; like any machine, every man was made for a certain kind of task.

Taylor is attempting to answer the question, how can labor be made as efficient, as mechanical, as possible? How can humans be optimized? The humanity of the humans being reduced so as to be made efficient is not a factor in his carefully outlined methodology, just as it is not addressed in his question. Mechanical thought had no room for humanity because many parts of humanity are difficult, if not impossible, to quantify. As a result, Taylor did not ask the question, how can we improve the quality of life of workers and managers in the workplace? or how can we further individualize the workplace experience so that everyone feels equally valued and appreciated? Instead of asking questions aimed at improving the lives and experiences of human actors in a human production, Taylor’s questions focused on how to make man the machine more efficient. His solutions also fell victim to the methods of mechanization because they were framed by the questions they were attempting to answer.

The same fundamental mistake — removing the humanity from very human endeavors in order to simplify and control them — was made in an experiment conducted by Herbert S. Terrace and his research team in the 1970s. The researchers were venturing to answer the question, embedded in a much larger scientific quest to discover what exactly makes humans human, of whether language acquisition was possible for a chimpanzee raised in a human environment. Researchers involved in the project spent extended time with the chimpanzee, Nim, and they naturally developed relationships with the intelligent test subject. Problems arose in the research program from many different sources: certain members of the team had different ideas of how the research should be conducted, attachment to Nim and to other members of the team problematized the “objectivity” of the research, and Nim’s growth and development rendered him volatile and dangerous as he reached adolescence.[1]

The issues, again, began with the type of questions asked and the methods implicit in mechanistic research programs. The most fundamental lines of inquiry in Project Nim — What is it exactly that makes humans human? Is language what distinguishes humans from other animals? Can we define humanity by the ability to verbally communicate? — assume that the mechanical experimental process can provide definitive answers to such questions. Humanity, Terrace et al. believed, could be quantifiably explained by how many vocabulary words could be memorized and placed in the context of sentences. These researchers, taking mechanistic thought to a level beyond even Taylor’s scientific management, were trying to simplify and categorize humanity itself instead of simply leaving it out of consideration. Reducing humanity’s complexity to language abilities, they failed to see the human being as a complex and multi-faceted whole. Questions outside of the mechanistic way of thinking could have been more along the lines of, how do different aspects of humanity interact with one another, and how does this create a uniquely human experience? How do the complex interactions of language, technological development, and emotional capacity affect humanity’s interaction with its surroundings? The experiment conducted by Terrace and his team would have no purpose in this line of inquiry, because the need to reduce and define humanity is absent; in its place, an emphasis on the interactions of parts within wholes takes precedence.

Through these brief analyses of Taylor’s and Terrace’s mechanistically informed work, we can understand how mechanical thinking manifested itself in the types of questions asked in twentieth-century science, and by asking questions more informed by a holistic worldview, we can see that mechanistic thinking is far from all-encompassing or inevitable. But I want to take that point a step further and discuss how this method of research still influences not only the way science is conducted, but also the way institutions are run and how this filters down into the everyday human experience. This will also support a claim I made earlier: the kinds of questions that scientists ask, informed as they are by their methodological foundations, can (and often do) have major effects on human thinking as a whole.

The modern example I am most able to grapple with — as I have been participating in it for almost the entirety of my twenty-two years — is the education system. Beginning in the twentieth century, educators and psychologists introduced standardized testing, initially as a way to provide individuals with the education most suited to their needs.[2] The testing craze quickly evolved into something far more institutionalized, however, and became the basis for creating aptitude hierarchies of students all vying for increasingly competitive places in schools and universities. Like Taylor’s ideas on scientific management, the tests have provided a way for the education system to be streamlined and made more efficient; based on ACT scores, institutions and individuals can quickly decide who is and is not worth the time and money required for a college education. Certain machines are best suited for certain tasks, after all. Resources spent on trying to make a deep fryer capable of space travel would indeed be wasted.

By treating students like cogs in the machine of efficient production, the educational system has largely removed human individuality in favor of a single, mechanistic idea of what intelligence and aptitude are. Standardized testing is a solution to the questions, how can we homogenize intelligence? How can we quickly and efficiently decide who is and is not worth the resources of education? A different approach, one more informed by thinking of humans as dynamic and individual, might grapple with the issues of education in a different manner: what sort of educational environment (including the people involved, the goals set, the curriculum followed) is most conducive to helping students excel? How can we create a more inclusive educational experience? By embracing human variety and interconnectivity, these sorts of questions might offer up very different means of addressing the question of who should or should not be given the benefit of educational privilege.

In a die-hard quest to optimize our world, we have repeatedly employed a reductionist, mechanistic approach to understanding and shaping our surroundings. It has clearly permeated the realm of the scientific, as evidenced by the kind of work done by Frederick Winslow Taylor and Herbert S. Terrace. Equally evident in their studies is the increasing tendency of these ideas to encroach upon the social and soft sciences closer to humans’ understandings of themselves. Finally, the mechanistic worldview’s influence on institutionalized education provides a modern example of how the questions and methodologies employed have shaped human lives and individuals’ beliefs about themselves and their capabilities. In all of these instances, as I hope to have shown by posing questions from an alternate standpoint, reductionism was not inevitable, nor has it been, or is it now, the best approach available for the more nuanced issues with which the softer sciences concern themselves.

[1] Project Nim, directed by James Marsh (2011).

[2] Katherine Pandora, “Disciplining Science in the Search for the Control of Nature,” (lecture, HSCI 5533, Norman, OK, October 27, 2016).

Darwin Demystified

Darwin Demystified: Two Constructivist Analyses of the “Revolutionary” Evolutionist

Peter J. Bowler’s biography of Charles Darwin betrays its distinctive approach in its physical appearance before the reader ever opens it. The book itself is small, a departure anyone familiar with the Darwin industry would find immediately anomalous. The cover shows a picture of young Charles, before he had acquired his iconic, wizardly white beard. Who is this awkward twenty-something-year-old Victorian? Certainly not the grandfatherly and wise-looking Charles Darwin perpetuated by most biographers.

The general editor’s preface elucidates what exactly will be different about this evidently atypical biography. Part of the Cambridge Science Biographies series, Bowler’s work will concentrate on placing Charles Darwin in his context. Instead of focusing on Darwin the man — which is admittedly part of every biography and will not be completely eliminated — the book will pay special attention to those who came before him and to his influence from immediately after The Origin’s publication on into the modern world. It will thus attempt to take Darwin the legend, the exceptional genius who single-handedly revolutionized biology and provided the base for an entire scholarly industry, and place him in his time, surrounded by his influences. It will show that Darwin is not who we have been told he is; he is not the godlike, bearded messiah of biological enlightenment. He, like all of us, was a product of his time, and his ideas were not entirely his own but built off of a complex network of cultural, social, economic, and scholarly influences. Darwin was created twice: as an intellectual being before The Origin, and afterward by those who told his story. Charles Darwin was constructed just as science was and is constructed — and both are undeniably human. Peter Bowler will deconstruct Darwin and reveal his humanity.

By presenting Darwin as someone both influential and influenced, Bowler opens up an avenue through which he can explore the scientific environment leading up to The Origin and how the community was changed after its publication. He does this by first laying out the basics of Darwin’s life at the beginning of each chapter, and then delving into the Victorian and scientific trends that played into Darwin’s thoughts and their reception — or condemnation. As an example, in chapter three, “Young Darwin,” after an outline of available sources helpful in analyzing Darwin’s life, Bowler gives a brief synopsis of the Darwin family (highlighting Erasmus’s contributions to early thought on transmutation) and Darwin’s early schooling before moving straight into a discussion of the state of natural history during the young Darwin’s training.

The author begins his investigation by discussing why many natural philosophers, physicists, anatomists, physicians, and members of a plethora of other scientifically inclined professions (at a time before “scientist” was an occupation) felt driven to raise questions concerning species generation or fixity. An outline of the fossil record had been constructed in the first half of the century, a development that required explanation. Some members of the scientific community, namely Jean-Baptiste Lamarck, publishing in 1809, Robert Chambers in 1844, and Robert Grant around early and mid-century, proposed the radical idea of “transmutation” — that is, species changing over time. This stood in stark ideological contrast to the previous and prevailing opinion of the fixity of species: God created each organism at the beginning of time, and they had remained in the same form to the present. Bowler argues that the fossil record forced even those most fundamentally minded in regard to the fixity of species to alter their beliefs. Instead of maintaining that every species remained exactly as it was when God created it in the beginning, they began to admit that extinction was a reality. Therefore, in a series of catastrophic extinction events, earlier species were destroyed and replaced by God in a new round of life forms.

In talking about Darwin’s predecessors, Bowler delves not only into the scientific issues with which Victorians were concerning themselves; these problems touched on political matters pertinent to the general populace as well. A time of political anxiety, Victorian England was a place very concerned with hierarchy. The French Revolution had overthrown the oldest and most structured European form of social order, and political revolutionaries of the same inclination were attempting to depose the British aristocracy. As is so often the case, science provided an avenue through which these anxieties were manifested in the form of attempted control. If a natural, stable hierarchy could be proven in the context of species fixity, an argument could be made that every human has his or her place, and in that place he or she must stay. It would be unnatural, for example, for a deer, a lower form of life, to transcend natural barriers and become a mighty lion, powerful and authoritative. In the same way, it was unheard of for a poor farmer to eclipse social impediments and become a duke. It was not, and could not, for the sake of social order, be possible. When Robert Grant or Robert Chambers suggested the transmutation of species, it was the transmutation of humans in the social hierarchy that struck fear in many Victorians. If transmutation were made natural, even imperative for survival, the political implications for the aristocracy were very real and very disagreeable.

By deconstructing Darwin, Peter Bowler paints a picture of Victorian England that addresses many of the issues its citizens were tackling. This insightful project will no doubt prove useful to historians of nineteenth-century science and culture by contextualizing ideas, trends, and practices developed and accepted or rejected in the Victorian atmosphere. Rebecca Stott, in the first three chapters of her book Darwin and the Barnacle, offers a similar approach; using Charles Darwin’s time at Edinburgh, she artfully brings to the fore important aspects of the Victorian scientific landscape that created and influenced its many scientific men. She does this in a distinct way, however, quite different from Bowler’s methods. Instead of a deconstruction, Stott offers a construction. She starts from the influences and works her way into Darwin, in much the same way the natural philosopher himself would have experienced them. While Bowler’s biography is far broader, covering Darwin’s entire life and even the intellectual playing field before and after his major contribution, Stott’s focus is more on his formative years at Edinburgh and their effect on his studies thereafter. Using more of a microhistorical approach, Stott spends a great deal of time setting the scene; she describes the beaches and tide pools where Darwin and his colleagues in the Plinian Society found specimens, the relationships Darwin had with Robert Grant and John Coldstream, and his sickly episodes on The Beagle. Instead of deconstructing Darwin to find his influences, she uses his influences to create the researcher and theorist he would become.

The focus of Grant, and later of Darwin and Coldstream, on the sea and its less notable inhabitants (to most modern readers, at least) at first seems odd. Stott quickly brings the reader into the mind of a Victorian naturalist, however, when she describes the theories they would have been reading about — that the origins of life could have emerged from a “primordial ocean” — and the fascination with which they examined sea sponges in order to understand them. These ambiguous creatures existed on the fringes of classification, neither truly plant nor truly animal. The fossil record, the major impetus according to Bowler, certainly stimulated research into how species developed and populated the planet. But these organisms, defying classification before the very eye of the scientist, promoted impassioned and novel thoughts about biodiversity and the geographical distribution of species. Were these strange sea creatures reminiscent of the earliest life forms? Did the sea, the theoretical breeding ground for the earth’s first life, still contain its secrets? What were these creatures, and how did they fit into the strict categorization of earthly life forms? By transcending natural boundaries in a much more concrete way than fossils, these organisms demanded attention and ultimately a theory to explain their problematic existence. Stott solidifies the powerful influence of Darwin’s time at Edinburgh by noting that a good portion of his work aboard The Beagle was devoted to the gathering and study of sea creatures.

Both authors treat Darwin’s time at Edinburgh as formative, but Stott uses these years and the professional relationships acquired during them to highlight research trends and thought patterns that would later prove influential not only to Darwin but to natural historians everywhere — and also, to some extent, to the general population. The sea (especially the deep, previously completely inaccessible part of it) was taking on an important role in the nineteenth century as the last earthly frontier, and the organisms that made this vast body of little-understood water their home were of particular interest to many.[1] Stott’s approach offers a more minute, in-depth look at why and how sea creatures inspired evolutionary thought, a useful observation for understanding the part the ocean played in promoting scientific investigation into species formation, extinction, and differentiation. Her work illuminates the scientific process on a more personal, understandable level. Bowler, in contrast, hits the main points of Darwin’s Edinburgh experience, namely his professional associations and interest in oceanic life as research material, but primarily uses other periods of Darwin’s life for his purposes of elucidating the Victorian scientific atmosphere. His approach is more all-encompassing, focusing on major ideological trends and how they played into the generation and reception of scientific work, but in its breadth, it feels more mechanical. It is certainly useful in understanding Victorian English science, but some of the intrinsic motivations of those who practiced it remain mysterious.

By constructing Darwin, specifically during his time at Edinburgh, Rebecca Stott presents a way of experiencing the natural philosopher’s influences through his own eyes. We can understand why he constructed his own theory by seeing and experiencing what he did. By deconstructing him, Peter Bowler provides a bird’s-eye view of Darwin’s life, placing him in context from an outsider’s perspective. We can understand how, in the context of his time, he came to produce a work like The Origin. Both authors focus on what makes Charles Darwin, and both approaches demystify the man, making him once again human. They attack the assumption, so often inherent in less sophisticated studies of Darwin’s life, that he transcended his time to propose a theory so revolutionary it put him on a pedestal for eternity. Like Newton, he too stood on the shoulders of giants, and like all human beings he was influenced by the culture and scientific environment of his time. This idea of a constructed Darwin brings to light an important point in the history of scientific theory, especially as far as appropriate methodology is concerned — the men and women who created these theories were themselves created, and we should never forget that they instill their humanity into their formulations.

[1] Proteus: A Nineteenth Century Vision, directed by David Lebrun (Night Fire Films, 2004).

Alchemy in an HSCI Classroom

Alchemy in a History of Science Classroom

Alchemy has been a contentious space in which historians of science have engaged in the recurrent debate over what exactly constitutes “science,” and because of its spiritual and religious components, alchemy was often placed in the pseudoscientific category. Recent scholarship, however, has reaffirmed its position among other Medieval and Early Modern subsets of natural philosophy. Authors have cited its experimental programs of research, theoretical underpinnings, and lab-based analysis and synthesis as evidence for its inclusion in the narrative of the history of science, and because of this, I believe that it should not be left out of a history of science survey course. Aside from its “scientific” characteristics, alchemy also provides avenues through which to discuss the interaction between practical and theoretical knowledge, the entrepreneurial motivations that influenced natural philosophical inquiry, and the complicated relationship between science and religion.

I would begin a lecture on alchemy by discussing it in its eighteenth-century context; this is when it was marginalized in order to legitimate the developing profession of chemistry. I would talk about how, previously, the two investigative subsets were part of a larger research program focused on changing and understanding the properties of materials — notably metals, minerals, and other substances. The discussion would touch on who exactly was engaged in the pursuit (many of the “Big Men” made famous by their contributions to other fields, women, and craftsmen), and on the methods employed by these practitioners. Alchemists would read ancient (and not-so-ancient) alchemical texts, comment on them, perform the experiments, and sometimes modify the recipes, and they worked in labs where they performed distillations and synthesized compounds. I would highlight how these analytical and laboratory strategies are still used by scientists today.

Next, I would discuss what makes alchemy unique — its convoluted, secretive language and its association with the spiritual and religious. While at first this might feel quite anti-scientific to many students, it puts alchemy (and Medieval and Early Modern scientific inquiry generally) into its context. Knowledge production was not always a secular affair, and alchemy’s engagement with the metaphysical, instead of a weakness, was a strength at the time it was being practiced. People sought deeper meanings for natural phenomena, and the philosophical framework from which they were working (Christianity with heavy Aristotelian influences) encouraged the search for final causes, symbols, and forms. Alchemy thus provides an excellent conduit for a discussion about what natural philosophy included in its lines of inquiry and elucidates the difference between natural philosophy and our modern construction of knowledge-gathering, science.

Alchemy’s significance to the history of science is therefore quite pronounced. Far from pseudoscientific magic, alchemy was a research program with goals, theories, and methods, and its practitioners were widespread and influential. As such, any survey of the history of science should include it and capitalize on the opportunity to discuss the issues alchemy brings to the forefront: early experimentalism, the relationship between practical and theoretical knowledge, and science and religion’s strong association.

Galileo Courtier

Galileo Courtier

Galileo, Courtier recasts Galileo Galilei (1564-1642) as a member of the court, a role which allowed him to self-fashion a new socioprofessional identity as a mathematical astronomer/philosopher. Author Mario Biagioli argues that the identity Galileo created was a new one, and that it was made possible by the social world of patronage systems and Galileo’s skilled maneuvering through them. Biagioli traces Galileo’s trajectory through multiple patronage networks; beginning with Galileo’s time as professor at the University of Padua, he goes on to explain how the mathematician presented himself and his discoveries to the powerful Medici in order to gain their support. The latter half of the book covers Galileo’s transition into the Roman court, where different practices and customs made the game of patronage an altogether new one. While Galileo was successful there as well in the beginning, it was a crisis of patronage, Biagioli argues, that ultimately led to his condemnation in 1633. It was the patronage system that brought Galileo professional and financial success, and it was the patronage system that brought about his ruin.

I can find no fault with the first third of Biagioli’s work. The arguments run smoothly, and the evidence is plentiful; the footnotes are well done, and it is obvious that the author did an immense amount of research. Even if his thesis is problematic, the book must at least have some value in bringing to light many aspects of Galileo’s life previously under-researched. That being said, the rest of the book has some notable problems. Chapter four, “The Anthropology of Incommensurability,” seems out of place. It attempts to analyze court disputes in Kuhnian terminology, and what appears to be the conclusion — that scientific bilinguality is unique to proponents of the new “paradigm” — is arguably irrelevant to Biagioli’s narrative. There is additionally the issue that many of Galileo’s most important scientific contributions, including the Two New Sciences and his pre-Florentine work in mechanics and mathematics, fall outside the restricted years of analysis that Biagioli sets up.

Also notably missing from Biagioli’s analysis of Galileo’s career as a courtier is the ethical dimension of court life as elucidated by contemporary political commentators. In his chapter on the topic, Robert Harding outlines what was seen as morally correct behavior for patrons, which included a preference for men of noble birth to be placed in the role of client before less noteworthy candidates. Men of power were supposed to perpetuate the social hierarchy as the natural state of affairs. Lesser nobles, or men who found fame through alternate routes (such as Galileo through his discoveries), were given varying degrees of approval by different commentators.[1] Gifts like Galileo’s distribution of telescopes could also reek of corruption if they were meant to entice beneficiaries away from their “prior loyalties and obligations.”[2] How could these ethical dynamics have influenced Galileo’s career as a courtier, and could they have contributed to his downfall in the more cosmopolitan court of Rome? Could part of the reason he fell so far be that, ethically speaking, he was out of line in being there in the first place?

[1] Robert Harding, “Corruption and the Moral Boundaries of Patronage in the Renaissance,” in Patronage in the Renaissance, ed. Guy Fitch Lytle and Stephen Orgel (Princeton: Princeton University Press, 1981): 54.

[2] Ibid., 56.

The Early Modern Microscope

The Early Modern Microscope

The invention of the microscope is shrouded in mystery and contention; often overshadowed by its more celebrated counterpart, the telescope, microscopy was slow to catch on and quick to die off in the seventeenth century (although it would be revived in the nineteenth-century biological world). In their brief time in the scientific limelight, however, microscopes extended human knowledge in the direction of the minuscule and at the same time contributed to the downfall of the Aristotelian worldview. They provided access to a swarming, active world of “animalcules” that had previously been invisible, and the implications of this revelation would be major for the natural sciences for years to come.

Since the Hellenistic era, humans had been using various materials to magnify their world, oftentimes to aid those with poor eyesight. Seneca, in the first century AD, described using water globes to magnify the lettering in texts, and Pliny chronicles Emperor Nero’s use of a concave emerald to enhance his view of gladiatorial contests. Florentines in the thirteenth century were using eyeglasses.[1] Because of these examples of early magnification, it is difficult for the historian to point to a particular development as the “invention” of the “microscope.” Some attribute its development to the Dutch father-and-son duo Hans and Zacharias Jansen, and some claim Hans Lippershey deserves the title; either way, it was the lens crafters of Middelburg, Netherlands, in the last decade of the sixteenth century who were the first to produce a new, distinct instrument of magnification potentially worthy of being classified as an early microscope.

Men engaged in the study of the natural world had, up to the seventeenth century, not put much thought into what might be too small for their senses to glean. C. H. Lüthy describes why in his article on the early microscope’s relation to the telescope; Aristotle was an anti-atomist, believing that “when several elements combine to form further compounds… they lose their individual forms or qualities in favor of one single and homogeneous new form.”[2] Under this assumption, magnifying matter would be rather useless and uninformative. It would take peering into the realm of minutiae to debunk this belief and return to the atomist, or corpuscularian, theories of antiquity. At a time when many scientists were already questioning Aristotle’s philosophy, microscopic observations provided yet another nail in the coffin.

One such observer was Antoni van Leeuwenhoek (1632-1723), a relatively poor Dutch draper with excellent eyesight. His accomplishments seem fantastic to a modern student of biology — he is credited as the first observer of protozoa, algae, yeast, bacteria, and human sperm — and he used very simple, single-lens microscopes that he ground and created himself. Each microscope was created for a single specimen, and at his death, several hundred microscopes with specimens still mounted were among his possessions.[3] Though he spoke only Dutch, he interacted regularly with the Royal Society in London, ensuring his work’s dissemination among the European scientific community.[4]

Although the microscope was not an invention bred of a passionate curiosity to uncover the mysteries of the minute, its rise coincided with and reinforced the fall of Aristotle’s dominion over natural philosophy. After the initial discoveries, it quickly fell out of the scientific landscape until its revival in the nineteenth century, in large part due to the lack of practical applicability it offered medical and natural philosophical men. But its contributions were important and would become more so in the centuries to come.

[1] William J. Croft, Under the Microscope: A Brief History of Microscopy (Singapore: World Scientific Publishing Pte. Ltd, 2006), 4-5.

[2] C. H. Lüthy, “Atomism, Lynceus, and the Fate of Seventeenth-Century Microscopy,” Early Science and Medicine 1, no. 1 (1996): 12.

[3] A. D. S. Khattab, “Dances with microscopes: Antoni van Leeuwenhoek (1632-1723),” Cytopathology 6, no. 4 (1995): 216.

[4] Ibid.

The Islamic World & the Copernican Revolution

The Islamic Phase of the Copernican Revolution

The story of the European adoption of a heliocentric universe is normally told through Western astronomers; Nicolaus Copernicus (1473-1543) serves as the beginning of the tale in most cases, his De revolutionibus orbium coelestium of 1543 hailed as a revolutionary tome whose ideas were primarily original, or at the very least a result of Western influence. Recent work by historians such as Noel Swerdlow, Otto Neugebauer, George Saliba, and F. Jamil Ragep has challenged this interpretation, however, suggesting instead that Copernicus was heavily influenced by a group of Islamic astronomers known collectively as the Marāgha School.[1] The evidence for such an association is formidable. Many of Copernicus’s techniques, thought processes, and mathematical proofs are strikingly similar to those of his Islamic predecessors.

The most obvious evidence linking Copernicus’s work to that of the Marāgha School lies in the mathematical strategies both use to simplify the Greek, Ptolemaic model, a goal the two had in common. Nasir al-Din al-Tūsī (1201-1274) devised a mathematical device, known as Tūsī’s Couple, proved it, and used it to describe lunar motion (by way of generating linear motion from multiple circular motions) in his 1260-61 Tadhkira fi ‘ilm al-hay’a. Copernicus uses the exact same theorem — and includes a proof of it in the same format, using the same letters, as his Islamic predecessor — in De revolutionibus.[2] Copernicus also makes use of a mathematical technique termed ‘Urḍī’s lemma, named after its inventor, Mu’ayyad al-Din al-‘Urḍī (1200-1266), to eliminate the need for Ptolemy’s cumbersome equants in accounting for the planetary motion of the upper celestial spheres. Scholars hypothesize that he was exposed to this approach via some rendition of or commentary on Ibn al-Shātir’s work, al-Shātir (1304-1375) being one of the many astronomers who utilized ‘Urḍī’s lemma in his cosmology.[3]
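To give a sense of what Tūsī’s Couple accomplishes, here is a minimal modern restatement (not the geometric, lettered proof format that al-Tūsī and Copernicus actually used): take a point carried on a circle of radius r whose center itself rides on a circle of the same radius, with the two rotations running at equal rates in opposite senses. Writing the point’s position as a complex number gives

\[
z(\theta) = r e^{i\theta} + r e^{-i\theta} = 2r\cos\theta ,
\]

which is purely real: the point simply oscillates back and forth along a straight segment of length 4r, a diameter of the enclosing circle of radius 2r. Two uniform circular motions thus combine to produce rectilinear motion, which is precisely the behavior al-Tūsī exploited in his lunar model and that Copernicus would later reuse.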

Additional evidence can be found in the more subtle traces of Islamic logical processes extant in Copernicus’s work. Both Copernicus and his Islamic colleagues use comets as a way to explain the possibility of a moving earth in line with observational physics, and, as F. Jamil Ragep argues in an article on the subject, Islamic sources were commenting on the possibility of a moving earth if natural philosophical epistemologies could be produced to explain the physics behind such a notion — an idea that medieval Westerners viewed as impossible.[4] Thus, the groundwork for Copernicus’s theories appears to have been laid not by his Western predecessors but by his Islamic ones.

Based on this evidence, I believe that the Marāgha School, and in particular al-‘Urḍī, al-Tūsī, and al-Shātir, belong in the narrative of the Copernican Revolution, having laid important mathematical and intellectual groundwork for the advances Copernicus and his European colleagues would expand upon. A notable impediment to this interpretation is the lack of a solid connection between Copernicus and the Marāgha School, but more work is being done to elucidate this association, and hopefully new evidence will reveal more than just a methodological link between the two parties.

[1] George Saliba, “Islamic Science and Renaissance Europe: The Copernican Connection,” in Islamic Science and the Making of the European Renaissance (Cambridge: MIT Press, 2011).

[2] George Saliba, “Islamic Science and Renaissance Europe,” 197-199.

[3] Ibid., 204-205.

[4] F. Jamil Ragep, “Tūsī and Copernicus: The Earth’s Motion in Context,” Science in Context 14 (2001): 160.

Medieval University Medicine

“The Faculty of Medicine,” Nancy Siraisi

In her summary of the medical faculties of medieval universities, Nancy Siraisi begins by discussing the various reasons why a unified medical program emerged at universities, while at the same time stressing that university-educated physicians were not the only medical healers, nor were they the only ones with access to the knowledge they held. The influx of Classical and Islamic knowledge in the eleventh century and the demographic growth of twelfth- and thirteenth-century Europe both proved important to the establishment and proliferation of medical faculties; the ancient texts gave them a theoretical backing, and the population spike stimulated a propagation of schools and of people willing (and able) to pay medical fees. It must be noted, however, that even during this time period and for a long time afterward, university-trained physicians were a minority of the total practicing population. Often the different classes of physicians served different classes of clientele, but most sects of practitioners had access to a similar literary repertoire and used many of the same techniques. What was unique to university medicine was its establishment of “institutional and intellectual characteristics that would continue to influence medical education well into the early modern period,” including most notably the establishment of a medical elite.[1]

There were relatively few major centers for medical education, and most of their differences lay not in their curricula but in the numbers of students they attracted and their reputations. Salerno was the earliest major center because its location in southern Italy provided it early access to translated ancient medical sources. In the twelfth and thirteenth centuries, medical authors at Salerno compiled the articella, or “short treatises,” that included basic Hippocratic and Galenic tenets — these early textbooks served as an introduction to the study of medicine. Professors at Salerno were also the first to associate medicine closely with natural philosophy. As Salerno faded in prominence in the early thirteenth century, Montpellier was rising as another fashionable medical university, especially after 1220, when the school received papal recognition. The medical faculties at Paris and Bologna came into the picture around the same time, and smaller medical centers, notably Padua, began to pop up in northern Italy in the middle of the thirteenth century.

Siraisi, in her next section on social and economic considerations, begins by stating that university-educated doctors were viewed as the elites of their profession. They earned their living primarily from their practice, although some enjoyed court patronage or professorships (the latter two often found in conjunction). From their new positions of power, these men and the medical faculties in general were given novel responsibilities by the state, such as licensing power and advising the leaders and population on medical matters. Their student bodies were made up of pupils from a variety of locations, and many, as members of the secular clergy, funded their schooling through church institutions. After obtaining their degrees in four or five years of study, university-trained physicians found themselves quite employable, either as practitioners or professors for fledgling medical schools.

In its relation to other faculties at universities, medicine was most closely linked with the arts. Linked traditionally and practically — those studying medicine had to read Latin and have a grasp of logic, astrology, and natural philosophy — the arts and medicine were almost always studied in conjunction (although the arts normally came first). As far as what students of medicine actually studied, Galen and Hippocrates were prominent, but Islamic writings also made up a major component of the curricula. Their ideas were often taught out of the articella, and the content was frequently mixed with commentary by later intellectuals. The daily routine, symptoms of disease, and remedies were the main subjects that most medical students studied during their varied years at university; the time required to complete a medical degree ranged from three to six years, most often followed by at least six months of actual practice. Students were also generally exposed to dissections for the purpose of familiarization with the internal structure of the human body. Surgery was a separate entity in medieval times, viewed as a lower form of art due to its lack of theoretical backing, but it nonetheless was taught at many universities (although sometimes not under the medical faculty). The reading and course requirements for surgeons were probably very similar to those of physicians.

Medicine was taught in much the same way as other academic disciplines, through lectio and disputatio. This tradition, according to Siraisi, had a hand in grounding medicine in the philosophical way of thinking. Questions were asked, disputes held, and conclusions were formed based upon the discourse. The linking of natural philosophy and medicine through these methodological similarities helped legitimize medical theory at a time when philosophy held a higher place in the echelons of university faculties.

After establishing medicine as closely linked with natural philosophy, Siraisi goes into a brief description of humoral theory and the theory of complexion. According to the theory, bodies contain four humors in equilibrium, but their natural balance can be disrupted by illness or trauma. It is the doctor’s duty to prescribe changes in regimen and diet, surgery, or medicine to restore balance. Medicines were produced containing elements that together might produce a shift in the balance of hot, dry, wet, and cold in the body. Additionally, it was taught that the movement of the heavens had direct impacts on the body’s equilibrium, and different doctors and medical schools either stressed or merely considered this notion when healing.

Siraisi closes her chapter on medieval medical universities by discussing the implications of the scholastic method of educating medical practitioners and how university-trained medical men differed from their more philosophically inclined colleagues at the universities. She asserts that the methods employed in teaching encouraged students to question ancient medical doctrine and note discrepancies between Classical and Islamic medical scholars. In their hands-on practices, Siraisi believes, doctors were also more likely to compare their experiences to the theories they were taught and modify their beliefs accordingly. Thus, medieval physicians employed empiricism in a more concrete way than those studying more metaphysical subjects. The institutions and teaching methods established in the medical faculties of the mid-thirteenth and early fourteenth centuries had a lasting impact on the medical profession, producing elite medical men who engaged in a book-based, theoretical practice and forming an educational tradition in the medical field that outlasted even the Black Death.

[1] Nancy Siraisi, “The Faculty of Medicine,” in A History of the University in Europe ed. Hilde de Ridder-Symoens (Cambridge: Cambridge University Press, 1992), 364.