I studied abroad after my junior year of college, and returned home too late to enter the fray of NYC summer internships. I’d spent past summers racking up professional experience—doing strategic marketing for Univision during the World Cup, and later branding and licensing for the Biggie Smalls estate—so I wasn’t quite sure what to do with two months in rural Connecticut. Although I didn’t have a job, that summer turned out to be one that defined my career.
Slowly, I warmed to my dad’s suggestion that I check out his book collection. Ambivalently, I flipped through the I Ching and A Brief History of Time. And then I discovered Neuromancer by William Gibson—the science fiction novel in which he famously coined the term “cyberspace.” The rest is a blur. I made my way, voraciously, through the rest of my dad’s sci-fi collection, including Gibson’s entire oeuvre and much of Margaret Atwood’s, too. When I returned to Barnard College in the fall, I had decided to write my senior thesis on emerging biotechnology approaches and the challenges they present to the human species’ fraught relationship with “nature.”
Since that summer, I’ve only become more convinced that this is a critical moment in our species’ relationship with our planet, and that we have a real opportunity to reposition our structures, institutions, and communities to carry us forward in increasing harmony. William Gibson is still my favorite author, and one of his best-known quotes is often recalled by the tech fellows at Ford: “The future is already here; it’s just not evenly distributed.” It’s an observation that inextricably binds us and our decision-making to the past and the future. And it suggests that how we behave today, in an increasingly polarized and divided world, matters tremendously.
For a long time, I’ve been passionate about ensuring that technology empowers people to pursue a more equitable and sustainable path. My first job out of grad school was in an area sometimes called “technology futurism”: I wrote about the intersection of tech development, business, culture, and regulation for multinational companies and government intelligence agencies. The firm I worked for specialized in scenario planning and pattern spotting, which was a great fit for my fascination with systems thinking. From there I kept moving closer to the work of change, first to a planning and project management role in the development of open innovation ecosystems, and now to advising Ford Foundation teams and grantees on strategy and technological change in the social justice space.
Today, as one of the Ford Foundation’s tech fellows, I see part of my role as embracing interconnection. The problems our world faces often can’t be solved in a linear way: they are complex, with multiple angles and entry points, and they can’t be reduced to simply good or bad, right or wrong. In the world today we have separated and specialized everything: theory, language, media, realms of scientific inquiry, and so on. And yet we are moving into a period of much greater convergence, as we recognize that all of these areas are functionally connected. When we treat complex problems as “solvable,” declaring them fixed without addressing their underlying causes, we risk recasting them in new and potentially more pernicious forms. Structural racism is a good example: we know it wasn’t solved by abolishing slavery, and it has continued to flourish in the criminal justice system and other parts of society.
A similar binary, oversimplified narrative surrounds the role of technology: It will save us, or it will ruin us. Robots will be our slaves or our masters. The future might need us, or it might not. I’m interested in bringing nuance to these debates. Increasingly, I’m noticing that what seem to be technical conversations quickly become conversations about much more than technology. Whether the people talking are policymakers, developers, or designers, it’s a quick leap from discussing algorithmic engineering to reflecting on algorithmic fairness, from transparency to accountability, from potential use cases to equity considerations, and from end-user experience to stakeholder engagement and inclusion. I’m driven to keep making these conversations more inclusive and centered on values and ethics, and to make new technologies easier to adapt to and more in service of sustainable goals.
Much of the work in public interest technology today focuses on pushing back against “technosolutionism,” the ideology that a technology-first approach can “solve” social problems of all kinds, from educational inequality to rampant misinformation. As a discipline, public interest technology draws on diverse fields to reframe problems so that the public, in all of its complexity, diversity, and disorder, is at the center of our efforts to pursue “progress.” To do this, we need to move fluidly among many modes of thinking and analysis. As a technologist, researcher, and strategist with training in anthropology, evolutionary biology, systems thinking, and strategic planning, I’ve found my own interdisciplinary background to be a real asset.
The good news is that your starting point hardly matters—almost any background can contribute to a strong foundation for this work. This work is, by definition, interdisciplinary and cross-sectoral. There is no standard job title, no list of qualifications, no obvious degree requirements, and no one field or sector where public interest tech jobs show up. Many companies don’t yet know they need public interest technology—but they know they need research, strategy, insight, and innovation. So my advice to any aspiring change-maker is to find a place to dig in, and help an organization transform as you find your own voice in public interest technology. And if you don’t already? Read science fiction.