How to Be a Better Reactionary: Time and Knowledge in Technology Regulation

Lee Vinsel
Oct 17, 2023

--

I wrote this essay for a workshop that will be held at George Washington University this week titled “Politics of Controlling Powerful Technologies.” The workshop has been organized by Professor Jeffrey Ding, and I am very grateful to participate in it. While I have been thinking about many of the ideas in this piece for years, it’s the first time I have played with some of them in writing, so I have decided to post it publicly.

Calling somebody a reactionary usually isn’t very nice. This is especially true in academic circles, where most people fall somewhere on the left side of the political spectrum. Here, “reactionary” is a downright slur. It’s a dysphemism used to tar and feather your enemies. But in this little think piece, I want to argue that, in most cases of technology regulation, the best thing we can possibly hope for is to be high-quality, attentive, responsive, and moral reactionaries, people who respond to problems as they arise rather than pretend they can see them in advance.

[Image: High-Resolution Scientific Imaging of an Anticipatory Governor’s Galaxy Brain]

Before I get going, I feel the need to say something that should be obvious but, sadly, requires peating and repeating and re-repeating in conversations like this: We live in a world full of tremendous problems, including wars and violence, hunger, death from easily-treatable diseases, people being exposed to the elements because they lack adequate housing, runaway environmental crises, and so many other avoidable forms of pain. (As someone who works in Eastern meditation and spiritual traditions, I make a distinction between pain, which is an unavoidable part of life, and suffering, which arises from our relationship to reality, including our stories and expectations about it.) All of this pain has some sort of material basis. That is, it arises either from using human-made things, also known as “technologies,” in harmful ways or from lacking access to the fruits of these human-made things, like food, shelter, medicines, and systems that remove our pee and poop from our vicinity, systems that so many of us take for granted.

My reflections in this essay will focus on policy-making in the United States, so I will give a few concrete examples of material problems in this nation for us to keep in mind: Thousands of American communities have lead in their drinking water, but there is no serious call for large-scale remediation. There are literal homeless camps in many American cities, especially ones in California. Building adequate housing is no technical mystery, but we do not build it. The MacArthur “genius prize”-winning activist Catherine Coleman Flowers, a great moral hero, has taught us that so-called tropical diseases, like hookworm, have reemerged in the US South because citizens there, especially black people, lack access to septic tanks or sewers and, therefore, live in direct contact with raw human waste.¹ This problem requires no high-tech solution but, nevertheless, goes unsolved. Journalists and advocates often draw attention to the lack of basic maternity care for poor and rural people, again especially black people, with the result that black women are three times as likely to die from pregnancy-related causes as white women. Environmentalists have been warning us about global climate change for as long as I have been a conscious being, but we have passed no truly serious legislation to address it. United Way’s ALICE program, which studies affordability and financial hardship at the county level, routinely finds that about 40% of American households can barely make ends meet.² Financial stress and poverty have all kinds of dreadful health effects and other negative life outcomes.³ But there is no mass movement today to ensure, for example, that all Americans have access to basic needs.

Material problems get all up in our faces for decades, for literal generations, and, collectively, we do not react. In this context, paying great attention to so-called emerging technologies is often a distraction. At its worst, attending to speculative problems around emerging technologies can be a form of what historian David C. Brock has called “wishful worries,” which are, as he puts it, “problems it would be nice to have” and which he contrasts with actual agonies.⁴ A classic in the wishful worries genre: “Hacked Sex Robots Could Murder People, Security Expert Warns.”⁵ Looking at historical examples, I believe that the powerful emotional and social energies that surround emerging technologies–yes, including purported but very often intellectually-thin reflections on their potential moral, political, and social ramifications–are typically out of balance with the actual effects those technologies end up having in the world. And–All. The. While. 🙌.–we continue failing, in the most exemplary fashion, to react to the pile of actual agonies.

My embrace of the reactionary way is defined in part in opposition to calls for “anticipation,” “foresight,” and such in the regulation of technologies, especially new or so-called emerging technologies. Such calls have been common in Science and Technology Studies and related fields for at least a decade and probably significantly longer.⁶ To some degree, the reasoning that leads folks to call for anticipation is understandable. A basic version goes something like this: The adoption of technologies has, in some cases, demonstrably led to various harms, including pain and death, for the people who did the adopting, for others (including non-human animals and the natural environment), or for both. Given this–again, totally demonstrable and true–reality, it would be ideal if we could foresee problems before they arise, and, therefore, we should do any number of activities to preempt them.

But there are a number of problems with thinking this way, of which I believe two are the most important. First, we simply cannot foresee most problems that arise from the adoption of technologies. Advocates for anticipation respond, “Yeah, yeah, yeah, we know that, but still yadda, yadda, yadda,” but any such reaction underestimates just how radically ignorant we are about the future. Our fundamental cluelessness about future technological developments is confirmed by a long historical track record of lousy predictions about them. I could go on for a long time here, but I will boil it down to several points:

  • In his book Energy at the Crossroads, Vaclav Smil examined a series of historical energy price projections and found their accuracy to be dreadful.⁷
  • Qualitative examinations of past projections of technological futures–such as the recurring numbered technological revolutions (1st, 2nd, 3rd, 4th, etc.) whose definitions changed over time–find little relationship between them and actual subsequent developments.⁸
  • Expertise in a technical area does not lead to better predictions. In fact, the opposite can be true: expertise can lead people to overestimate their own knowledge of the time to come. As a grad student at Carnegie Mellon University, I was taught that researchers doing expert elicitations there found that experts were all over the map when it came to future developments.⁹ There was no consensus. Answers to questions of the form “when will technology capacity x become readily available?” would range from “very soon” to “never.”
  • Perhaps the most fundamental problem with predicting technological futures is that we cannot foresee how technologies evolve or, more important, how they will be adopted and used. Ultimately it is how technologies are used that has the largest impact on society.¹⁰ For this and other reasons, Farzaneh Badiei and Bradley Fidler have argued that promoters of the “values in design” paradigm, which, for example, assumes that “Internet protocols can be designed such that their use will necessarily and durably promote human rights,” deeply misunderstand how greatly protocols and their uses shifted after their initial conception.¹¹ Even their original creators could not see where they were going to go. No one who was commercializing the Internet in the early 1990s foresaw, or could foresee, that social media use would apparently be making teens depressed in the 2010s, probably because, among other things, social media platforms broadcast, reify, and stick users’ faces in objective measures of social desirability and hierarchy and, thereby, exacerbate status anxieties.
  • My hunch is that philosopher and Frankfurt School leader Max Horkheimer was simply correct when, building from Kantian and Neo-Kantian foundations, he argued that the human faculty of imagination is fundamentally reproductive, or we could say conservative (in a non-political sense), because it simply shuffles around the furniture of images and ideas that are already within us.¹² (Monsters are parts of creatures reorganized in an “unnatural” and, thus, disturbing way.) In contrast to people like Herbert Marcuse who assert that the imagination has liberatory potential because it opens new alternatives for us, Horkheimer argued that the imagination is, as sociologist John Levi Martin put it, “something that strengthened the hold of the existent on us, not the reverse.” The imagination cannot give us any grip on the future because the future is not already within us; beyond incremental steps forward, the imagination cannot see or know what it does not already know. In this way, activities of “foresight” and “anticipation” mostly end up reinforcing our biases. (When I was starting to think through these issues out loud in public on Twitter, someone said I should check out Monica Byrne’s novel, The Actual Star, if I wanted to see a work of radical imagination that disproved Horkheimer’s argument. I found the book a fun read but not remotely intellectually radical; indeed, I thought it was wonderful evidence, basically proof, of Horkheimer’s insight. I came to feel that the book was basically a totemic representation of my very own politics, which have the general aesthetic resonance and even smell of a kale and tofu fart in a Brooklyn, New York hot yoga class aimed at clearing traumas from the energy body. That is, there is nothing at all surprising about it. It is an imaginative projection of what we already think anyway. Our minds and hearts go unchallenged.)
  • And as a bonus, here is a fun little anecdote: I once heard a general in the US military say that, of the past four wars the country had been in, leaders had zero inkling that they would be involved in them even a year beforehand.

To summarize, any worthwhile conception of technology governance must recognize and build forward from our foundational and irremediable ignorance about the future. Begin with uncertainty and assume it is enormous. It is a safe bet. Contrary to futurists’ claims, there is no demonstrable discipline for thinking well about the future, and that is because the future does not challenge you; it does not push back. The future is not really an object. (Thinking requires something to think about.) Dream all you want.

This is not the place to go on at length about it, but I will say in passing that attempts to shift the goals of “anticipatory governance” processes from foresight to other ends, like democratic deliberation, fall prey to the same problems I just listed. For example, having groups think through and react to science fictional scenarios of future technological circumstances assumes that the scenarios are plausible, when there is no reason to believe they are. As Granger Morgan and David Keith have argued, “rather than expanding people’s judgment about the range of uncertainty about the future, scenario-based analysis is more likely to lead to systematic overconfidence, to an underestimate of the range of possible future outcomes.”¹³ Stories about the future, as much as we nerds love them (I have a sizable collection of vintage sci-fi paperbacks that lightly tilts towards the British New Wave), are lame policy tools. In conversation, promoters of using future scenarios to think through policies have told me that having experts react to such narratives is “better than nothing,” but to the degree that such prognostications may lead us to overestimate our knowledge of the future or even distract us from the real problems that develop in unforeseen and orthogonal spaces, this defense is simply false. It can, in fact, be worse than nothing.

These reflections bring us to the second big problem with the line of reasoning that leads to calls for anticipatory governance: Such calls almost always come in the context of enormous hype, which I define as unrealistic expectations of near-term technological change. As Brent Goldfarb and David Kirsch explain in their book, Bubbles and Crashes: The Boom and Bust of Technological Innovation, technologies that go through bubbles–that is, their market value loses touch with any reasonable assessment of actual value–are surrounded by seductive narratives about how, because they are supposedly so powerful, they will end up reordering society.¹⁴ For an example, all we need to do is look at the recurrent but ultimately deluded fantasies of profound social and economic change, including so-called technological unemployment, that have surrounded technologies of automation, robots, and artificial intelligence since at least the early 20th century and perhaps even earlier. To give an even sillier example, in his 1975 book, The Third Industrial Revolution, G. Harry Stine, writing at a time of continued excitement about the space race, argued that a “Third Industrial Revolution” had begun when humans took their “first halting steps into space.” He predicted, “The next century or two will see all of earth’s industry move into space, where it can pollute to its heart’s content, and earth will once again become the green paradise it was 100,000 years ago.” How’s that goin’, ya think?

The sad reality is that, at least since the time that the model of “studying” the “Ethical, Legal, and Social Implications” (ELSI) of emerging technologies arose around the Human Genome Project in the early 1990s, playing into hype has become a business model for several social scientific and humanistic academic fields. Granting agencies, like the National Science Foundation, and science and engineering organizations, such as the National Academies of Sciences and Engineering, themselves function as interest groups looking to increase funding. Hype is one big way they pitch their promise. (A paper from the last few years that I lost sight of argued that grant proposal processes incentivize researchers to overplay and even downright lie about the potential impacts of their work.) Granting agencies are more than happy to give money to humanities and social science researchers willing to play up the potential social effects of new technologies–willing even after decades of such predictions not playing out, a track record that should have raised red flags. Such researchers render scientists and engineers a real service. (Because of the emergence of this business model, it is no surprise that these are the parts of Science and Technology Studies most likely to embrace worlds of corporate consulting, including “design” and futurism, worlds not of demonstrable evidence of improved outcomes but of vibes and feels. Again, be wary of futurists’ attempts to shift the purpose of their work to things like “reframing” and “sense-making.”) ELSI may provide nice make-work for underemployed Science and Technology Studies scholars but has created little in the way of societal benefit.

At their worst, hyped emerging technologies create clear conflicts of interest for social science and humanities researchers. The researchers have incentives to play up the potential social effects of the technology when the moral act of truth-seeking and even basic critical thinking suggests that such claims are overblown. A researcher studying genetic engineering once told me the subfield she was looking at was full of hype. When I told her, “Well, THAT is what you need to write then,” she responded that, if she did, she would lose access to the researchers she was studying. The truth would never end up in print, at least not from her.

More than a decade ago, I developed a standup routine that I use in my classes and at nerd parties in which I do dramatic readings of publications focused on the “anticipatory governance” of nanotechnologies.¹⁵ I have experimentally validated that, in retrospect, many claims made in these publications are literally hilarious. Here is a good example: “Nanotechnologies are likely to open gaps by gender, ethnicity, race, and ability status, as well as between developed and developing countries, unless steps are taken now to create a different outcome.”¹⁶ EGAD! NANOTECHNOLOGIES!

Another classic in the nanotech hype joke genre is Michael M. Crow and Daniel Sarewitz’s 2001 essay, “Nanotechnology and Societal Transformation.”¹⁷ Near the beginning of their piece, the pair state, “A single technological innovation can remake the world. When the metal stirrup finally migrated from Asia to western Europe in the 8th century, society was transformed to its very roots.” The authors then cite Lynn White’s 1962 book, Medieval Technology and Social Change, a great work for sure, but one filled with problematic and technologically determinist interpretations. By the time Crow and Sarewitz cited White’s book, it was forty years old, and they ignored forty years of more nuanced thinking about the social dimensions of technology. For decades, no serious and self-respecting student of technology studies would have repeated White’s claims without significant caveats. Crow and Sarewitz use the stirrup example to set up the potential profound impacts of nanotechnologies–profound impacts that they do not themselves claim but that they do not really challenge either–and argue that social science and humanities researchers should help in “Preparing the Revolution.” This text does not require a lot of dramatic work to get chuckles. The authors themselves wrote the jokes, but that is not surprising given that Crow is the co-author of two humor collections, Designing the New American University and The Fifth Wave: The Evolution of American Higher Education. (In case you don’t get my satire, those books are actually neoliberal fantasies of making universities even more like corporations. For one analysis of dark sides of these visions, see Davarian Baldwin’s book, In the Shadow of the Ivory Tower: How Universities are Plundering Our Cities.¹⁸) As you can see, I believe that in such contexts one of the few moral avenues available to us is to perform a long ritual and cast a famed Dungeons & Dragons spell known as Vicious Mockery.

Sadly, I have come to believe that hype around emerging technologies has become so woven into the DNA of Science and Technology Studies that there simply is no possibility the situation will improve anytime soon. I have given up hope. On this score at least, I have turned my mind and heart elsewhere. (Every year, I peruse the conference program for the Society for Social Studies of Science [4S] and find — once again, once again, ah once again — that the vast majority of named technologies are extremely hyped emerging technologies, so maybe I just like being sad, I say, as I turn on yet another Joy Division record.)

To summarize my argument to this point, we are terrible at predicting the future, and when we try to look forward into the tomorrow of new technologies that are surrounded by a great deal of emotional energy (importantly, not all are), our imaginations end up reproducing the hype-filled narratives that permeate us. We can demonstrate this by looking at our fields’ own histories.

It stands out that many advocates for anticipation are also advocates for increasing democratic deliberation. Often this quest is cast as creating new venues wherein executive branch agencies will learn more about what citizens value and desire, as if such venues will not just become yet another avenue for the continued domination of monied interests, as has happened again and again with such institutions since at least the early 20th century (see standards-making processes; Administrative Procedure Act comment periods, maybe especially environmental ones; and NIMBY control of local housing development policy). There are many ironies here, but one of them is that advocates of deliberation frame themselves as being opposed to “technocracy,” but they themselves urge us to embrace a kind of technical institutional solution for the deep social problems of our material world. Their position is a studied avoidance of actual politics, especially class politics and the politics of poverty. But the deepest issue with this approach, at least in the context of the United States, is that it fundamentally misunderstands the nature of governance. Nearly all of the serious problems I listed near the front of this essay cannot remotely be solved by executive branch action alone but require legislation. We have already known this for decades as US presidents of opposing political parties undid their opponent-predecessors’ executive orders and agency rulemaking as soon as they entered office, but it has become even more apparent since 2022 with the SCOTUS decision in West Virginia et al. v. Environmental Protection Agency et al., which ruled that there are serious limits to how the Environmental Protection Agency can regulate greenhouse gas emissions related to climate change. I am not in any way opposed to the creation of executive branch deliberation institutions — go nuts, brah — but the only real way forward on the biggest issues we face is through Congress (or, if it needs saying, through a Constitutional revolution).

I bet you can see where this story is going and just how fucking dark it is about to get, though. Because thinking down this road leads us to our actual political system. I write this at a moment when the US House of Representatives lacks a speaker, and the reigning party in the House, the Republicans, are unable to move forward and find a representative to unite behind. Things ain’t going so hot. For several years, I have been struck that seemingly most reflections on “democracy” in Science and Technology Studies rely on philosophical and idealized conceptions of democracy and basically altogether ignore empirical studies of actually-existing democracies from Political Science and other academic fields. (It’s also noteworthy that anticipatory governors often do no real empirical research, do not study the world.) I have heard my new Virginia Tech colleague, Monamie Bhadra Haines, make a similar point. She argues that classic works in Science and Technology Studies, such as books by Sheila Jasanoff and Yaron Ezrahi, present a Western and crypto-essentialist definition of the relationship between science and democracy that not only does not apply to non-Western democratic nations, like India, but, when we look deeper, also misunderstands the politics of the United States and other nations in which the field of Science and Technology Studies arose. Our only choice is to study and work in democracy as it really exists. Turning to empirical works leads down dark alleys, though. The subtitle to a book that helpfully synthesizes one line of both empirical and theoretical thinking about actually-existing democracies bleakly captures things pretty well: Democracy for Realists: Why Elections Do Not Produce Responsive Government.¹⁹ Perhaps we should broaden the subtitle and make it even more depressing: Forget Elections, Hardly Anything Else Produces Responsive Governments Either!

When we examine the actual history of technology regulation, we see long periods of inaction in the face of strong evidence that things are going really wrong. We are terrible, non-responsive reactionaries, and our aspiration should be to become better ones. To give one example I have written about before, roughly twenty years passed between the time a scientist identified automobiles as the primary cause of smog and the moment Congress passed a law (the Clean Air Act Amendments of 1970) that finally forced automakers to do something about it.²⁰ And as I said earlier, scientists, environmentalists, and many others have been warning us about the dangers of climate change and calling us to use law and regulation to change the nature of our technological landscape for as long as I have followed the news, but, in all that time, we have barely gotten cracking. Examples of inaction in legislation, organizational routines, and our daily lives can be multiplied to include cybersecurity, disaster and disease mitigation and preparedness, consumer product safety, and so many other material domains. Of course, most of the time our political system does not react to such problems because the status quo benefits powerful monied interests. No surprises!

My oldest friends might ask me in the patter of our hometown, “Damn, boy, do you be hopeless?” No. Not at all. I firmly believe that history also provides us many examples of social movements that have defied all expectations and created beneficial change. In the United States, we can look to the Civil Rights Movement and the environmental movement of the 1960s-1970s, and globally we can look to freedom and environmental movements in many nations, including India, South Africa, and so many more. (It’s noteworthy that none of these movements primarily centered on richly-imagined, science fictional visions of the future, and, again, that’s because such visions recapitulate the existent rather than open truly new possibilities.) What you have to hope for, if you want to believe there’s a road forward, is that new, unforeseen political alliances will open up that now seem impossible and that will include (logically, there is simply no other way) collaboration between people who at this moment see each other as opponents, even as enemies. I am not a member of any theistic faith, but if you want to pray to God that we figure a way to work things out, I will stand by your shoulder in affirming silence.

But again, these thoughts lead us back to perfectly ordinary politics. There is no way to avoid them, try as many academics might.²¹ And this is why when I find heroes in the regions of Science and Technology Studies, it is people like Professor Lilly Irani of the University of California San Diego and Meredith Whittaker, the President of the Signal Foundation, and others of their hue who take part in and call us all to movement politics. (I do think we need to steadily apply critical thinking to claims not of folks like Irani and Whittaker but of some members of the Anti-Big Tech Mutual Admiration Society who congregate on platforms like Twitter and who tend to lack perspective, traffic in hyperbole, engage in weak forms of research or none at all, and make claims without strong evidence. I look at the human pain in places like Flint, Michigan and rural West Virginia, try to gauge how much of that pain is attributable to “Big Tech,” and think to myself, “Y’all have lost touch.”) I am also excited and energized by writers such as Aaron Benanav, who not only has some of the best takes on the actual effects of automation technologies but in a forthcoming book will argue that we should move to a post-scarcity society in which everyone is entitled to basic human needs, including things like housing, food, transportation, and healthcare.²² Such thinking calls us not to anticipation but to conversations about ends and then to plans (which I will say more about in a moment).

In an argument like the one I’ve put forward here, you should expect caveats, but it is important that they are caveats that don’t give everything away. The first is that, when we are creating policies around technologies, involving knowledgeable people in the process can avoid some future problems because they understand how such systems operate. Friends who work on the history of computer networking tell me that computer scientist John Day and other critics of the Internet saw issues with technical architecture, including traffic flows and the like, that called for more research and experimentation quite early in the technology’s lifecycle, but it is also true that even these critics probably did not have a full sense of issues until they saw these systems operate at scale. Similarly, systems engineer and entrepreneur Robert Charette has written a 12-part series of articles, later republished as an ebook titled The EV Transition Explained, that lays out a number of potential systematic problems with mass adoption of electric vehicles, ranging from lack of available minerals to an inadequate electricity grid, that are largely ignored by current state-level policymaking.²³ But ultimately here I am saying nothing more surprising than that we should, like, have blueprints and plans for bridges and buildings inspected by experts so that they do not fall down and kill people.

I also want to make clear that nothing I have said in the preceding suggests that we have to give up planning, which is one way of relating to the future. Making plans, which are step-wise schemes based on the strongest available existing knowledge for reaching some desired end state, is a basic human capacity that has obvious and enormous benefits. The whole history of modern large-scale projects demonstrates that we can initiate and execute plans with huge outcomes, though such projects also later (sometimes justly, sometimes not so justly) came under fire for being top-down and harming many. But if we want to turn Iowa or wherever into a giant solar and wind farm, let’s go for it. My point, though, is that planning is fundamentally distinct (likely on a cognitive level) from “anticipating” future developments, which cannot be foreseen. And there are many other policy tools, from technology-forcing performance standards to carbon taxes, that are much more open-ended and are explicitly designed to allow surprising and hitherto unknown solutions to emerge. And if we do not like something about a technology, we can outlaw it. In 2022, the Electronic Frontier Foundation called for a ban on behavioral advertising because . . . well, because many people feel it is damn creepy.²⁴ Perhaps I could be convinced this is a bad idea, but for now, I say, Amen. We would have to see if such a ban would survive judicial review, and who knows with this Court. There is also a large body of literature on so-called responsive or adaptive regulations — rules designed to evolve as circumstances change.²⁵ But again, none of these tools–from plans to performance standards, from taxes to bans–requires foresight or anticipation, just that we be the best possible reactionaries.

I suppose advocates for “anticipation” and various “futures” work would respond to me that their tools could be useful precisely for forming the kinds of new political coalitions that we require to move forward on things like climate change legislation. My first response would be that they have provided no strong argument on this front that their tools are better than others. Second, we are going on two decades of thinking this way, and I see no real evidence of significant change, just a lot of chatter. Which brings me to my third response: If you really think this is the road forward for politics, then, for crying out loud, . . . show us.

1 Flowers, Catherine Coleman. Waste: One woman’s fight against America’s dirty secret. The New Press, 2020.

2 https://unitedforalice.org/meet-alice. For my interview with ALICE’s creator and director, Stephanie Hoopes, another hero of mine, see: https://newbooksnetwork.com/034-stephanie-hoopes-on-alice-and-economic-hardship-in-the-united-states

3 Mathew, Johan. “Embodied capital in the history of inequality.” History Compass 20, no. 4 (2022).

4 David C. Brock, “Our Censors, Ourselves: Commercial Content Moderation”: https://lareviewofbooks.org/article/our-censors-ourselves-commercial-content-moderation/

5 https://www.newsweek.com/hacked-sex-robots-could-murder-people-767386

6 Guston, David H. “Understanding ‘anticipatory governance’.” Social Studies of Science 44, no. 2 (2014): 218–242.

7 Smil, Vaclav. Energy at the crossroads: global perspectives and uncertainties. Cambridge, Mass.: MIT Press, 2003. How terrible humans are at prediction has since become a running theme of Smil’s work, including in his 2019 book, Growth: from microorganisms to megacities. MIT Press, 2019.

8 Vinsel, “Prophecy and Politics, or What Are the Uses of the ‘Fourth Industrial Revolution’”: https://medium.com/whats-at-stake-in-a-fourth-industrial-revolution/prophecy-and-politics-or-what-are-the-uses-of-the-fourth-industrial-revolution-30710f349ee9 See also David Edgerton, The Shock of the Old.

9 I will track down a citation for this, and if I have it wrong, I will publicly eat crow.

10 Edgerton, David. “From innovation to use: Ten eclectic theses on the historiography of technology.” History and Technology, an International Journal 16, no. 2 (1999): 111–136.

11 Badiei, Farzaneh, and Bradley Fidler. “The Would-Be Technocracy: Evaluating Efforts to Direct and Control Social Change with Internet Protocol Design.” Journal of Information Policy 11 (2021): 376–402.

12 Martin, John Levi. “Critical Theory, the Imagination, and the Critique of Judgment: Horkheimer’s Vision Reconsidered.” In Society in Flux, vol. 37, pp. 59–87. Emerald Publishing Limited, 2021.

13 Morgan, M. Granger, and David W. Keith. “Improving the way we think about projecting future energy use and emissions of carbon dioxide.” Climatic Change 90, no. 3 (2008): 189–215.

14 Goldfarb, Brent, and David A. Kirsch. Bubbles and crashes: The boom and bust of technological innovation. Stanford University Press, 2019.

15 Alfred Nordmann was an early critic of speculative moralizing around emerging technologies. See his essay, “If and then: A critique of speculative nanoethics.” Nanoethics 1 (2007): 31–46.

16 Cozzens, Susan E., and Jameson Wetmore, eds. Nanotechnology and the challenges of equity, equality and development. Vol. 2. Springer Science & Business Media, 2010.

17 Crow, Michael M., and Daniel Sarewitz. “Nanotechnology and societal transformation.” Societal implications of nanoscience and nanotechnology (2001): 45.

18 Baldwin, Davarian L. In the shadow of the ivory tower: How universities are plundering our cities. Bold Type Books, 2021.

19 Achen, Christopher, and Larry Bartels. Democracy for realists: Why elections do not produce responsive government. Princeton University Press, 2017.

20 Vinsel, Lee. Moving violations: automobiles, experts, and regulation in the United States. Johns Hopkins University Press, 2019.

21 Here we could insert a long line of thinking about the unavoidability of real politics, including thinkers like Carl Schmitt but also his leftist interpreters, such as Chantal Mouffe. Mouffe, Chantal, ed. The Challenge of Carl Schmitt. Verso, 1999.

22 Benanav, Aaron. Automation and the Future of Work. Verso Books, 2020.

23 The ebook is available for download here: https://spectrum.ieee.org/the-ev-transition-explained-2659623150

24 https://www.eff.org/deeplinks/2022/03/ban-online-behavioral-advertising

25 See for example, the work of John Braithwaite (https://johnbraithwaite.com/responsive-regulation/) as well as Jonathan Wiener and Lori Bennear (https://law.duke.edu/news/wiener-designing-regulations-built-learn/).

--

Lee Vinsel

I do technology studies, co-organize @The_Maintainers, and profess Science, Technology, and Society at Virginia Tech.