Science of the Marginalized: Women in the Age of Scientific Authority

The nineteenth and twentieth centuries witnessed a transformation in the status of scientific authority. With authority comes power, and with power comes the ability to dictate what falls inside the realm of value and acceptability and what lies outside that constructed space. As scientific disciplines and their respected members began to gain cohesion and recognizable authority, they began to draw distinctions between what and who did and did not belong to their research programs and acceptable practices. Members of the scientific community especially susceptible to exclusion were (and are) those who had historically been viewed as outsiders — the most studied groups being women and people of color.[1] In this essay, I will examine how this systematic marginalization at various points in science’s ascension to ever greater political, cultural, and intellectual authority has changed the way that women have practiced science, paying special attention to how the subjects of study and questions asked by female scientists center on different issues than those of their male colleagues. A similar study of African American science would be equally valuable but would extend the breadth of this essay beyond what I can reasonably discuss.

Maria Mitchell’s successful career as an astronomer spanned the middle third of the nineteenth century and provides an excellent point of departure. Born in 1818, Mitchell trained and did her early work in the context of a scientific community that was still quite fragmented; the big names that would contribute to science’s nineteenth-century prestige — Charles Darwin, James Clerk Maxwell, Michael Faraday, Louis Pasteur — were a development of mid-century. As her biographer Renée Bergland argues, Mitchell established herself as a scientist at a time when “studying science was ‘womanly,’ safely outside the potentially dangerous ideological realms of law or history or theology.”[2]

Lack of ideological authority placed science in a space that women could, at the time, acceptably occupy, and Mitchell’s science reflected her acceptance into the community. Like her male colleagues, she scanned the night sky for comets and made her entrance into the astronomical discourse with her discovery of one in 1847.[3] She was given credit for it and felt that she could become “a woman scientist… who could chart out her own course of research,” unlike her heroine Caroline Herschel, who constantly diverted credit to her brother.[4] She acquired a job as the computer of Venus and published her astronomical work in various journals.[5] Mitchell was thus a scientist in her own right, asking her own questions that reflected her relatively secure position within the discipline of astronomy. She needed neither to justify her participation in knowledge-production nor to rely on a man’s help to solidify her position in the community.

Major changes were soon to alter the situation for women in science, however. Around the 1860s, America was professionalizing on many fronts, and science too felt this pull. With their newfound authority, professional scientists began to relocate the practice of science to the university — an institution from which women were usually excluded.[6] They also began to construct a view of the scientist that was uniquely male in order to further assert their professional authority. Women practitioners, they thought, would weaken their professional image.[7] Authority, institutionalization, and increased disciplinary cohesion (brought on by advances in theory and methodology) thus gave a particular class of scientist — advantaged by their social and economic position — the power to create spaces of exclusion that left whole sections of the community outside of scientific discourse. This would have profound implications for female scientists and their work in the twentieth century.

One such scientist was Helen Thompson Woolley. Born in 1874, she would face a far different scientific environment than Maria Mitchell had. She graduated with a Ph.D. from the University of Chicago before beginning her research on sex differences; her thesis “compared the performance of 25 men and 25 women on motor, sensory and intellectual tests,” and her subsequent research and reviews centered on the same issues surrounding gender differentials in mental capacities.[8] Her frustration with contemporary scholarship on sex differences is evident in Psychological Literature: A Review of the Recent Literature on the Psychology of Sex, where she reviews recent work and, in a powerful and convincing conclusion, scientifically repudiates many of the arguments made by male scientists for why women do not belong in their profession. “There is perhaps no field aspiring to be scientific where flagrant personal bias, logic martyred in the cause of supporting prejudice, unfounded assertions, and even sentimental rot and drivel, have run riot to such an extent as here,” she states in a particularly ardent passage.

Woolley was fighting against the current that was sweeping many of her female colleagues out of science and into domesticity, and her research reflects her tenuous position. She chose to pursue issues related to her gender’s capacity to reason, and by extension to participate in knowledge-creation. Instead of engaging with other lines of inquiry in her field at the time, Woolley chose to home in on one in which she had a vested interest; the scientific community’s consensus on whether women were intellectually on par with men would have a direct effect on Woolley’s ability to assert her own authority within her discipline. Therefore, because of the authoritative exclusion of her gender from science, Woolley’s research took on a very particular identity — one connected to her identification as a marginalized professional scientist and one based on legitimizing her participation in scientific discourse.

We have now seen how two female scientists’ work differed before and after the marked rise of scientific authority. Maria Mitchell pursued her own interests, relatively unaffected by her role as a female scientist. Helen Thompson Woolley, on the other hand, pursued a research program that attempted to authorize her participation in science; her identity as a woman in science played a central role in her research interests. As the twentieth century wore on, the situation for women in science improved only marginally. Two more scientists’ work will now elucidate how scientific authority has continued to marginalize women and thus inform their research agendas.

Margaret W. Conkey and Janet D. Spector founded a new field in archaeology — the archaeology of gender — in 1984 with a groundbreaking article. In it, they highlighted the propensity of archaeologists to make gendered assumptions about past populations. Conkey and Spector found that archaeologists maintained gender biases when interpreting symbolism and explaining divisions of labor and social hierarchies, and their solution was to begin “a systematic program of feminist research on questions about women and gender.”[9] Though it took seven years for anyone to act on their criticism, conferences began to proliferate in the late 1980s and early 1990s. Alison Wylie links the increasing interest in feminist archaeology to “a parallel, and, in most areas, antecedent interest in questions about the roles, status, and contributions of women in archeology.”[10]
In this late twentieth-century scenario, female archaeologists drew attention to the gender biases rampant in their field. This kind of research differed from Helen Woolley’s in that it did not attempt to legitimize women in general as potential authorities within a discipline; while female archaeologists still suffered from unequal treatment in academia, they had at least affirmed their right to be there (more or less) by the 1980s. Conkey and Spector did, however, wage war against the gender biases still inherent in archaeological analytical techniques, pointing out that contemporary methodologies were problematic. Perhaps they pursued these research interests because, as women within a scientific framework that was still masculinized in method, they remained outsiders. The authority of male archaeologists, so ingrained in the profession, was still implicit in the way that archaeology was practiced. While the role of women in science has improved overall, the barriers to equitably assigned intellectual value have remained strong, though often implicit.

Thus, while scientific authority has come with many benefits, it has also provided the impetus for marginalizing some, with effects on the kinds of research they conduct. I am by no means making the deterministic argument that all women in science have conducted gender-influenced research — that would be overly simplistic. I am asserting, rather, that scientists’ work is profoundly shaped by the socio-scientific environment in which they practice, and that the marginalization resulting from centralized scientific authority has had implications for some women’s work. This idea could be further researched and expanded to include other groups on the fringes; perhaps a comparison of the work produced by scientists occupying different positions in the institutional hierarchy would prove fruitful. In any case, as we have seen, different levels of authority in different periods have produced distinct research agendas. For women scientists, mounting scientific authority has not always resulted in their work being taken more seriously, and it has left a distinctive mark on their research.

[1] By studied, I mean in the discipline of the history of science. These are two obvious examples, but the list goes on and on: those with disabilities, with alternative religious orientations (even as science’s power was eclipsing that of religion), homosexuals (e.g., Alan Turing), foreigners (some more threatening than others), etc.

[2] Renée Bergland, Maria Mitchell and the Sexing of Science (Boston: Beacon Press, 2008), xvi.

[3] Ibid., 53.

[4] Ibid., 114.

[5] Ibid., 155.

[6] Ibid., 156–157.

[7] Ibid., 174.

[8] Katharine S. Milar, “An Historical View of Some Early Women Psychologists and the Psychology of Women,” Classics in the History of Psychology Special Collections, accessed November 18, 2016.

[9] Alison Wylie, “Doing Social Science as a Feminist: The Engendering of Archaeology,” in Feminism in Twentieth-Century Science, Technology and Medicine, eds. Angela Creager, Elizabeth Lunbeck, and Londa Schiebinger (Chicago: University of Chicago Press, 2001), 24.

[10] Ibid., 25.

Mechanism and Holism in Modernity

A Holistic Approach to Making Sense of the Modern World

While science has been an important avenue through which humans have attempted to explore and understand their surroundings since the time of the Greeks, it was not until the late nineteenth century that its methods, across the increasingly specialized and defined scientific disciplines, began to take on a single, well-defined appearance. The mechanical worldview — I use the word worldview here because, as this essay will examine, its basic components began to appear in more and more aspects of human life — is characterized by attempts to reduce and simplify the universe into quantitative units, and then to analyze and use those units to understand and manipulate nature (and later, people) in ways previously impossible. The method’s success in the “harder” sciences in the eighteenth and nineteenth centuries — physics, chemistry, and some aspects of biology — led many scientists to attempt to apply it to other areas of human inquiry as well. As the twenty-first century approached, however, the mechanistic outlook’s inability to deal with the complex problems of the life and social sciences became increasingly apparent.

In this essay, I want to examine the mechanistic methodology’s entrance into the softer sciences, and I want to discuss the problems inherent in such a reductionist approach to the complicated questions the life and social sciences attempt to answer. How did it influence the types of questions that scientists asked, and what would alternative questions (with a more holistic basis) have looked like? Finally, I want to end with a brief discussion of how humanity is still firmly in the grip of the mechanistic worldview, and how it continues to shape the way we understand our surroundings and ourselves. The questions that scientists ask, I want to argue, are influenced by the methods (and philosophical understandings of those methods) to which they subscribe, and the implications of this association for the kind of science being done affect far more than just the scientific community.

Scientific management provides a good point of departure in a discussion about mechanically influenced social science. Frederick Winslow Taylor’s 1911 book on the subject, The Principles of Scientific Management, elucidates his ideas on the topic; he proposes a managerial system in which the knowledge of the worker is systematized so that the manager can plan his laborers’ tasks in the most efficient way possible. By scientifically calculating how productive a man can be, the manager can optimize his employee’s productivity and, Taylor asserts, also the worker’s satisfaction and happiness. An obsession with gathering, quantifying, and analyzing knowledge, so characteristic of mechanical methodology, underlies Taylor’s solution to inefficient work. The workingman is objectified, made into a machine whose output can be optimized. Although Taylor stresses the individuality of each worker in the sense that every man has different working strengths, he certainly is not implying that what makes a man individual are his ideas or preferences; like any machine, every man was made for a certain kind of task.

Taylor is attempting to answer the question, how can labor be made as efficient, as mechanical, as possible? How can humans be optimized? The humanity of the humans being reduced so as to be made efficient is not a factor in his carefully outlined methodology, just as it is not addressed in his question. Mechanical thought had no room for humanity because many parts of humanity are difficult, if not impossible, to quantify. As a result, Taylor did not ask the question, how can we improve the quality of life of workers and managers in the workplace? or, how can we further individualize the workplace experience so that everyone feels equally valued and appreciated? Instead of asking questions aimed at improving the lives and experiences of human actors in a human production, Taylor’s questions focused on how to make man the machine more efficient. His solutions also fell victim to the methods of mechanization because they were framed by the questions they attempted to answer.

The same fundamental mistake — removing the humanity from very human endeavors in order to simplify and control them — was made in an experiment conducted by Herbert S. Terrace and his research team in the 1970s. The researchers were venturing to find an answer to the question, embedded in a much larger scientific quest to discover what exactly makes humans human, of whether or not language acquisition was possible for chimpanzees if raised in a human environment. Researchers involved in the project spent extended time with the chimpanzee, Nim, and they naturally developed relationships with the intelligent test subject. Problems arose in the research program from many different sources; certain members of the team had different ideas of how the research should be conducted, attachment to Nim and other members of the team problematized the “objectivity” of the research, and Nim’s growth and development rendered him volatile and dangerous as he reached adolescence.[1]

The issues, again, began with the type of questions asked and the methods implicit in mechanistic research programs. The very fundamental lines of inquiry in Project Nim — What is it exactly that makes humans human? Is language what distinguishes humans from other animals? Can we define humanity by the ability to verbally communicate? — assume that the mechanical experimental process can provide definitive answers to such questions. Humanity, Terrace et al. believed, could be quantifiably explained by how many vocabulary words could be memorized and placed in the context of sentences. These researchers, taking mechanistic thought to a level beyond even Taylor’s scientific management, were trying to simplify and categorize humanity itself instead of simply leaving it out of consideration. Reducing humanity’s complexity to language abilities, they failed to see the human being as a complex and multi-faceted whole. Questions outside of the mechanistic way of thinking could have been more along the lines of, how do different aspects of humanity interact with one another, and how does this create a uniquely human experience? How do the complex interactions of language, technology development, and emotional capacity affect humanity’s interaction with its surroundings? The experiment conducted by Terrace and his team would not have a purpose in this line of inquiry because the need to reduce and define humanity is absent, and in its place an emphasis on interactions of parts of wholes takes precedence.

Through the brief analyses of Taylor’s and Terrace’s mechanistically informed work, we can understand how mechanical thinking manifested itself in the types of questions asked in twentieth-century science, and by asking questions more informed by a holistic worldview, we can see that mechanistic thinking is far from all-encompassing or inevitable. But I want to take that point a step further and discuss how this method of research still influences not only the way science is conducted but also the way institutions are run, and how this filters down into the everyday human experience. This will also support a claim I made earlier: the kinds of questions that scientists ask, informed as they are by their methodological foundations, can (and often do) have major effects on human thinking as a whole.

The modern example I am most able to grapple with — as I have been participating in it for almost the entirety of my twenty-two years — is the education system. Beginning in the twentieth century, educators and psychologists introduced standardized testing as a way, initially, to provide individuals with the education most suited to their needs.[2] The testing craze quickly evolved into something far more institutionalized, however, and became the basis for creating aptitude hierarchies of students all vying for increasingly competitive places in schools and universities. Like Taylor’s ideas on scientific management, the tests have provided a way for the education system to be streamlined and made more efficient; based on ACT scores, institutions and individuals can quickly decide who is and is not worth the time and money required for a college education. Certain machines are best suited for certain tasks, after all. Resources spent on trying to make a deep fryer capable of space travel would indeed be wasted.

By treating students like cogs in the machine of efficient production, the educational system has largely removed human individuality in favor of a single, mechanistic idea of what intelligence and aptitude are. Standardized testing is a solution to the issue of, how can we homogenize intelligence? How can we quickly and efficiently decide who is and is not worth the resources of education? A different approach, one more informed by thinking of humans as dynamic and individual, might grapple with the issues of education in a different manner; what sort of educational environment (including the people involved, the goals set, the curriculum followed) is most conducive to making students excel? How can we create a more inclusive educational experience? By embracing human variety and interconnectivity, these sorts of questions might offer up very different means of addressing the question of who should or should not be given the benefit of educational privilege.

In a die-hard quest to optimize our world, we have repeatedly employed a reductionist, mechanistic approach to understanding and shaping our surroundings. It has clearly permeated the realm of the scientific, as evidenced by the kind of work done by Frederick Winslow Taylor and Herbert S. Terrace. Equally evident in their studies is the increasing tendency of these ideas to encroach upon the social and soft sciences closer to humans’ understandings of themselves. Finally, the mechanistic worldview’s influence on institutionalized education provides a modern example of how the questions and methodologies employed have shaped human lives and individuals’ beliefs about themselves and their capabilities. In all of these instances, as I hope to have shown by posing questions from an alternate standpoint, reductionism was not inevitable, nor has it been, or is it now, the best approach available for the more nuanced issues with which the softer sciences concern themselves.

[1] Project Nim, directed by James Marsh (2011).

[2] Katherine Pandora, “Disciplining Science in the Search for the Control of Nature,” (lecture, HSCI 5533, Norman, OK, October 27, 2016).

Darwin Demystified

Darwin Demystified: Two Constructivist Analyses of the “Revolutionary” Evolutionist

Peter J. Bowler’s biography of Charles Darwin betrays its distinctive approach in its physical appearance before the reader ever opens it. The book itself is small, a departure anyone familiar with the Darwin industry would find immediately anomalous. The cover shows a picture of young Charles, before he had acquired his iconic, wizardly white beard. Who is this awkward twenty-something Victorian? Certainly not the grandfatherly and wise-looking Charles Darwin perpetuated by most biographers.

The general editor’s preface elucidates what exactly will be different about this evidently atypical biography. Part of the Cambridge Science Biographies series, Bowler’s work concentrates on placing Charles Darwin in his context. Instead of focusing on Darwin the man — which is admittedly part of every biography and will not be completely eliminated — the book pays special attention to those who came before him and to his influence from immediately after The Origin’s publication into the modern world. It thus attempts to take Darwin the legend, the exceptional genius who single-handedly revolutionized biology and provided the base for an entire scholarly industry, and place him in his time, surrounded by his influences. It shows that Darwin is not who we have been told he is; he is not the godlike, bearded messiah of biological enlightenment. He, like all of us, was a product of his time, and his ideas were not entirely his own, but built on a complex network of cultural, social, economic, and scholarly influences. Darwin was created twice: as an intellectual being before The Origin, and afterward by those who told his story. Charles Darwin was constructed like science was and is constructed — and both are undeniably human. Peter Bowler will deconstruct Darwin and reveal his humanity.

By presenting Darwin as someone both influential and influenced, Bowler opens up an avenue through which he can explore the scientific environment leading up to The Origin and how the community was changed after its publication. He does this by first laying out the basics of Darwin’s life at the beginning of each chapter, and then delving into the Victorian and scientific trends that played into Darwin’s thoughts and the reception — or condemnation — of them. As an example, in chapter three, “Young Darwin,” after an outline of available sources helpful in analyzing Darwin’s life, Bowler gives a brief synopsis of the Darwin family (highlighting Erasmus’s contributions to early thought on transmutation) and Darwin’s early schooling before moving straight into a discussion on the state of natural history during the young Darwin’s training.

The author begins his investigation by discussing the reason many natural philosophers, physicists, anatomists, physicians, and a plethora of other scientifically inclined professionals (at a time before “scientist” was an occupation) felt driven to raise questions concerning species generation or fixity. An outline of the fossil record had been constructed in the first half of the century, a development that required explanation. Some members of the scientific community — namely Jean-Baptiste Lamarck, publishing in 1809, Robert Chambers in 1844, and Robert Grant around early and mid-century — proposed the radical idea of “transmutation,” that is, species changing over time. This stood in stark ideological contrast to the previous and prevailing opinion of the fixity of species: God created each organism at the beginning of time, and they had remained in the same form to the present. Bowler argues that the fossil record forced even those most fundamentally minded with regard to the fixity of species to alter their beliefs. Instead of every species being exactly as it was when God created it at the beginning, they began to admit that extinction was a reality. Therefore, in a series of catastrophic extinction events, earlier species were destroyed and replaced by God in a new round of life forms.

In discussing Darwin’s predecessors, Bowler delves not only into the scientific issues with which Victorians were concerning themselves but also into political matters pertinent to the general populace. A time of political anxiety, Victorian England was a place very concerned with hierarchy. The French Revolution had overthrown the oldest and most structured European form of social order, and political revolutionaries of the same inclination were attempting to depose the British aristocracy. As is so often the case, science provided an avenue through which these anxieties were manifested in the form of attempted control. If a natural, stable hierarchy could be proven in the context of species fixity, an argument could be made that every human has his or her place, and in that place they must stay. It would be unnatural, for example, for a deer, a lower form of life, to transcend natural barriers and become a mighty lion, powerful and authoritative. In the same way, it was unheard of for a poor farmer to eclipse social impediments to become a duke. It was not, and could not, for the sake of social order, be possible. When Robert Grant or Robert Chambers suggested the transmutation of species, it was the transmutation of humans in the social hierarchy that struck fear in many Victorians. If transmutation were made natural, even imperative for survival, the political implications for the aristocracy were very real and very disagreeable.

By deconstructing Darwin, Peter Bowler paints a picture of Victorian England that addresses many of the issues its citizens were tackling. This insightful project will no doubt prove useful to historians of nineteenth-century science and culture by contextualizing ideas, trends, and practices developed and accepted or rejected in the Victorian atmosphere. Rebecca Stott, in the first three chapters of her book Darwin and the Barnacle, offers a similar approach; using Charles Darwin’s time at Edinburgh, she artfully brings to the fore important aspects of the Victorian scientific landscape that created and influenced its many scientific men. She does this in a distinct way, however, disparate from Bowler’s methods. Instead of a deconstruction, Stott offers a construction. She starts from the influences and works her way into Darwin, in much the same way the natural philosopher himself would have experienced it. While Bowler’s biography is far broader, covering Darwin’s entire life and even the intellectual playing field before and after his major contribution, Stott’s focus is more on his formative years at Edinburgh and their effect on his studies thereafter. Using more of a microhistorical approach, Stott spends a great deal of time setting the scene; she describes the beaches and tide pools where Darwin and his colleagues in the Plinian Society found specimens, the relationships Darwin had with Robert Grant and John Coldstream, and his sickly episodes on The Beagle. Instead of deconstructing Darwin to find his influences, she uses his influences to create the researcher and theorist he would become.

The focus of Grant and later Darwin and Coldstream on the sea and its less notable inhabitants (to most modern readers at least) at first seems odd. Stott quickly brings the reader into the mind of a Victorian naturalist, however, when she describes the theories they would have been reading about — that the origins of life could have emerged from a “primordial ocean” — and the fascination with which they examined sea sponges in order to understand them. These ambiguous creatures existed on the fringes of classification, neither truly plant nor truly animal. The fossil record, the major impetus according to Bowler, certainly stimulated research into how species developed and populated the planet. But these organisms, defying classification before the very eye of the scientist, promoted impassioned and novel thoughts about biodiversity and the geographical distribution of species. Were these strange sea creatures reminiscent of the earliest life forms? Did the sea, the theoretical breeding ground for the earth’s first life, still contain its secrets? What were these creatures, and how did they fit into the strict categorization of earthly life forms? By transcending natural boundaries in a much more concrete way than fossils, these organisms demanded attention and ultimately a theory to explain their problematic existence. Stott solidifies the powerful influence of Darwin’s time at Edinburgh by noting that a good portion of his work aboard The Beagle was devoted to the gathering and study of sea creatures.

Both authors treat Darwin’s time at Edinburgh as formative, but Stott uses these years and the professional relationships acquired during them to highlight research trends and thought patterns that would later prove influential not only to Darwin but to natural historians everywhere — and also, to some extent, to the general population. The sea (especially the deep, previously completely inaccessible part of it) was taking on an important role in the nineteenth century as the last earthly frontier, and the organisms that made this vast body of little-understood water their home were of particular interest to many.[1] Stott’s approach offers a more minute, in-depth look at why and how sea creatures inspired evolutionary thought, a useful observation for understanding the part the ocean played in promoting scientific investigation into species formation, extinction, and differentiation. Her work illuminates the scientific process on a more personal, understandable level. Bowler, in contrast, hits the main points of Darwin’s Edinburgh experience, namely his professional associations and interest in oceanic life as research material, but primarily uses other periods of Darwin’s life for his purposes of elucidating the Victorian scientific atmosphere. His approach is more all-encompassing, focusing on major ideological trends and how they played into the generation and reception of scientific work, but in its breadth, it feels more mechanical. It is certainly useful in understanding Victorian English science, but some of the intrinsic motivations of those who practiced it remain mysterious.

By constructing Darwin, specifically during his time at Edinburgh, Rebecca Stott presents a way of experiencing the natural philosopher’s influences through his own eyes. We can understand why he constructed his own theory by seeing and experiencing what he did. By deconstructing him, Peter Bowler provides a bird’s-eye view of Darwin’s life, placing him in context from an outsider’s perspective. We can understand how, in the context of his time, he came to produce a work like The Origin. Both authors focus on what makes Charles Darwin, and both approaches demystify the man, making him once again human. They attack the assumption, so often inherent in less sophisticated studies of Darwin’s life, that he transcended his time to propose a theory so revolutionary it put him on a pedestal for eternity. Like Newton, he stood on the shoulders of giants, and like all human beings he was influenced by the culture and scientific environment of his time. This idea, Darwin constructed, brings to light an important point in the history of scientific theory, especially as far as appropriate methodology is concerned — the men and women who created these theories were themselves created, and we should never forget that they instill their humanity into their formulations.

[1] Proteus: A Nineteenth Century Vision, directed by David Lebrun (Night Fire Films, 2004), film.

Alchemy in a History of Science Classroom

Alchemy has long been a contested space in which historians of science engage the recurrent debate over what exactly constitutes “science,” and because of its spiritual and religious components, alchemy was often relegated to the pseudoscientific category. Recent scholarship, however, has reaffirmed its position among other Medieval and Early Modern subsets of natural philosophy. Authors have cited its experimental programs of research, its theoretical underpinnings, and its lab-based analysis and synthesis as evidence for its inclusion in the narrative of the history of science, and because of this, I believe it should not be left out of a history of science survey course. Aside from its “scientific” characteristics, alchemy also provides avenues through which to discuss the interaction between practical and theoretical knowledge, the entrepreneurial motivations that influenced natural philosophical inquiry, and the complicated relationship between science and religion.

I would begin a lecture on alchemy by discussing it in its eighteenth-century context; this is when it was marginalized in order to legitimate the developing profession of chemistry. I would talk about how, previously, the two investigative pursuits were part of a larger research program focused on changing and understanding the properties of materials — notably metals, minerals, and other substances. The discussion would touch on who exactly was engaged in the pursuit (many of the “Big Men” made famous by their contributions to other fields, as well as women and craftsmen) and on the methods these practitioners employed. Alchemists would read ancient (and not-so-ancient) alchemical texts, comment on them, perform the experiments, and sometimes modify the recipes; they worked in labs where they carried out distillations and synthesized compounds. I would highlight how these analytical and laboratory strategies are still used by scientists today.

Next, I would discuss what makes alchemy unique — its convoluted, secretive language and its association with the spiritual and religious. While at first this might feel quite anti-scientific to many students, it puts alchemy (and Medieval and Early Modern scientific inquiry generally) into context. Knowledge production was not always a secular affair, and alchemy’s engagement with the metaphysical, far from a weakness, was a strength at the time it was practiced. People sought deeper meanings for natural phenomena, and the philosophical framework from which they were working (Christianity with heavy Aristotelian influences) encouraged the search for final causes, symbols, and forms. Alchemy thus provides an excellent conduit for discussing what natural philosophy included in its lines of inquiry, and it elucidates the difference between natural philosophy and our modern construction of knowledge-gathering, science.

Alchemy’s significance to the history of science is therefore quite pronounced. Far from pseudoscientific magic, alchemy was a research program with goals, theories, and methods, and its practitioners were widespread and influential. As such, any survey of the history of science should include it and capitalize on the opportunity to discuss the issues alchemy brings to the forefront: early experimentalism, the relationship between practical and theoretical knowledge, and science and religion’s strong association.

Galileo Courtier

Galileo Courtier recasts Galileo Galilei (1564-1642) as a member of the court, a role which allowed him to self-fashion a new socioprofessional identity as a mathematical astronomer/philosopher. Author Mario Biagioli argues that the identity that Galileo created was a new one, and that it was made possible through the social world of patronage systems and Galileo’s skilled maneuvering through them. Biagioli traces Galileo’s trajectory through multiple patronage networks; beginning with Galileo’s time as professor at the University of Padua, Biagioli goes on to explain how the mathematician presented himself and his discoveries to the powerful Medicis in order to gain their support. The latter half of the book covers Galileo’s transition into the Roman court, where different practices and customs made the game of patronage an altogether new one. While Galileo was successful there as well in the beginning, it was a crisis of patronage, Biagioli argues, that ultimately led to his condemnation in 1633. It was the patronage system that brought Galileo professional and financial success, and it was the patronage system that brought about his ruin.

I can find no fault with the first third of Biagioli’s work. The arguments run smoothly, and the evidence is plentiful; the footnotes are well done, and it is obvious that the author did an immense amount of research. Even if his thesis proves problematic, the book at least has value in bringing to light many previously under-researched aspects of Galileo’s life. That being said, the rest of the book has some outstanding problems. Chapter four, “The Anthropology of Incommensurability,” seems out of place. It attempts to analyze court disputes in Kuhnian terminology, and what appears to be its conclusion — that scientific bilinguality is unique to proponents of the new “paradigm” — is arguably irrelevant to Biagioli’s narrative. There is additionally the issue that many of Galileo’s most important scientific contributions, including The Two Sciences and his pre-Florentine work in mechanics and mathematics, fall outside the restricted years of analysis that Biagioli sets up.

Also notably missing from Biagioli’s analysis of Galileo’s career as a courtier is the ethical dimension of court life as elucidated by contemporary political commentators. In his chapter on the topic, Robert Harding outlines what was seen as morally correct behavior for patrons, which included a preference that men of noble birth be placed in the role of client before less noteworthy candidates. Men of power were supposed to perpetuate the social hierarchy as the natural state of affairs. Lesser nobles, or men who found fame through alternate routes (such as Galileo through his discoveries), were given varying degrees of approval by different commentators.[1] Gifts like Galileo’s distribution of telescopes could also reek of corruption if they were meant to entice beneficiaries away from their “prior loyalties and obligations.”[2] How could these ethical dynamics have influenced Galileo’s career as a courtier, and could they have contributed to his downfall in the more cosmopolitan court of Rome? Could part of the reason he fell so far be that, ethically speaking, he was out of line in being there in the first place?

[1] Robert Harding, “Corruption and the Moral Boundaries of Patronage in the Renaissance,” in Patronage in the Renaissance, ed. Guy Fitch Lytle and Stephen Orgel (Princeton: Princeton University Press, 1981): 54.

[2] Ibid., 56.

The Early Modern Microscope

The invention of the microscope is shrouded in mystery and contention; often overshadowed by its more celebrated counterpart, the telescope, microscopy was slow to catch on and quick to die off in the seventeenth century (although it would be revived in the nineteenth-century biological world). In its brief time in the scientific limelight, however, the microscope extended human knowledge in the direction of the minuscule and at the same time contributed to the downfall of the Aristotelian worldview. It provided access to a swarming, active world of “animalcules” that had previously been invisible, and the implications of this revelation would be major for the natural sciences for years to come.

Since the Hellenistic era, humans had been using various materials to magnify their world, oftentimes to aid those with poor eyesight. Seneca, in the first century AD, described using water globes to magnify the lettering in texts, and Pliny chronicled Emperor Nero’s use of a concave emerald to enhance his view of gladiatorial contests. Florentines in the thirteenth century were using eyeglasses.[1] Because of these examples of early magnification, it is difficult for the historian to identify a particular development as the “invention” of the “microscope.” Some attribute its development to the Dutch father-and-son duo Hans and Zacharias Jansen, and some claim Hans Lippershey deserves the title; either way, it was the lens crafters of Middelburg, Netherlands, in the last decade of the sixteenth century who first produced a new, distinct instrument of magnification potentially worthy of being classified as an early microscope.

Men engaged in the study of the natural world had, up to the seventeenth century, not put much thought into what might be too small for their senses to perceive. C. H. Lüthy describes why in his article on the early microscope’s relation to the telescope: Aristotle was an anti-atomist, believing that “when several elements combine to form further compounds… they lose their individual forms or qualities in favor of one single and homogeneous new form.”[2] Under this assumption, magnifying matter would be rather useless and uninformative. It would take peering into the realm of minutiae to debunk this belief and return to the atomist, or corpuscularian, theories of antiquity. At a time when many scientists were already questioning Aristotle’s philosophy, microscopic observations provided yet another nail in the coffin.

One such observer was Anton van Leeuwenhoek (1632-1723), a relatively poor Dutch draper with excellent eyesight. To a modern student of biology his accomplishments seem fantastic — he is credited as the first observer of protozoa, algae, yeast, bacteria, and human sperm — and he used very simple, single-lens microscopes that he ground and assembled himself. Each microscope was created for a single specimen, and at his death, several hundred microscopes with specimens still mounted were among his possessions.[3] Though he spoke only Dutch, he corresponded regularly with the Royal Society in London, ensuring his work’s dissemination among the European scientific community.[4]

Although the microscope was not an invention bred of a passionate curiosity to uncover the mysteries of the minute, its rise coincided with and reinforced the fall of Aristotle’s dominion over natural philosophy. After the initial discoveries, it quickly fell out of the scientific landscape until its revival in the nineteenth century, in large part because of the lack of practical applicability it offered medical and natural philosophical men. But its contributions were important and would become more so in the centuries to come.

[1] William J. Croft, Under the Microscope: A Brief History of Microscopy (Singapore: World Scientific Publishing Pte. Ltd, 2006), 4-5.

[2] C. H. Lüthy, “Atomism, Lynceus, and the Fate of Seventeenth-Century Microscopy,” Early Science and Medicine 1, no. 1 (1996): 12.

[3] A. D. S. Khattab, “Dances with microscopes: Antoni van Leeuwenhoek (1632-1723),” Cytopathology 6, no. 4 (1995): 216.

[4] Ibid.

The Islamic Phase of the Copernican Revolution

The story of the European adoption of a heliocentric universe is normally told through Western astronomers; Nicolaus Copernicus (1473-1543) serves as the beginning of the tale in most cases, his De revolutionibus orbium coelestium of 1543 hailed as a revolutionary tome whose ideas were primarily original, or at the very least a product of Western influence. Recent work by historians such as Noel Swerdlow, Otto Neugebauer, George Saliba, and F. Jamil Ragep has challenged this interpretation, however, suggesting instead that Copernicus was heavily influenced by a group of Islamic astronomers known collectively as the Marāgha School.[1] The evidence for such an association is formidable: many of Copernicus’s techniques, thought processes, and mathematical proofs are strikingly similar to those of his Islamic predecessors.

The most obvious evidence linking Copernicus’s work to that of the Marāgha School lies in the mathematical strategies both used to simplify the Greek, Ptolemaic model, a goal the two shared. Nasir al-Din al-Tūsī (1201-1274) proved a mathematical theorem, known as the Tūsī couple, and used it to describe lunar motion (by generating linear motion from multiple circular motions) in his 1260-61 Tadhkira fi ‘ilm al-hay’a. Copernicus uses the exact same theorem — and includes a proof of it in the same format, using the same letters, as his Islamic predecessor — in De revolutionibus.[2] Copernicus also makes use of a mathematical technique termed ‘Urḍī’s lemma, named after its inventor, Mu’ayyad al-Din al-‘Urḍī (1200-1266), to eliminate the need for Ptolemy’s cumbersome equants in accounting for the planetary motion of the upper celestial spheres. Scholars hypothesize that he was exposed to this approach via some rendition of, or commentary on, the work of Ibn al-Shātir (1304-1375), one of the many astronomers who utilized ‘Urḍī’s lemma in his cosmology.[3]

Additional evidence can be found in the more subtle traces of Islamic logical processes extant in Copernicus’s work. Both Copernicus and his Islamic colleagues use comets as a way to explain the possibility of a moving earth consistent with observational physics, and, as F. Jamil Ragep argues in an article on the subject, Islamic sources were entertaining the possibility of a moving earth, provided natural philosophical epistemologies could be produced to explain the physics behind such a notion — an idea that medieval Westerners viewed as impossible.[4] Thus, the groundwork for Copernicus’s theories appears to have been laid not by his Western predecessors but by his Islamic ones.

Based on this evidence, I believe that the Marāgha School, and in particular al-‘Urḍī, al-Tūsī, and al-Shātir, belong in the narrative of the Copernican Revolution, having laid important mathematical and intellectual groundwork for the advances Copernicus and his European colleagues would expand upon. A notable impediment to this interpretation is the lack of a solid connection between Copernicus and the Marāgha School, but more work is being done to elucidate this association, and hopefully new evidence will reveal more than just a methodological link between the two parties.

[1] George Saliba, “Islamic Science and Renaissance Europe: The Copernican Connection,” in Islamic Science and the Making of the European Renaissance (Cambridge: MIT Press, 2011).

[2] George Saliba, “Islamic Science and Renaissance Europe,” 197-199.

[3] Ibid., 204-205.

[4] F. Jamil Ragep, “Tūsī and Copernicus: The Earth’s Motion in Context,” Science in Context 14 (2001): 160.