Our Corporate Overlords, Power and Privilege, Tech and Society

The Commodification of People

Among the many ways so-called Big Data is influencing our lives, quantification and predictive analytics are beginning to play a significant role in how people are selected for opportunities, such as jobs, homes, romance, sex, and insurance, replacing the vagaries of human judgment with seemingly objective and reliable analytic scorecards and labels. The same profusion of data that flows from your interactions with the networked and surveilled world, and which results in all those “personalized” ads you routinely encounter, can also be used to evaluate and grade you as a person. Your daily experiences and interactions with websites, mobile apps, credit card processors, eBook readers, cell-phone carriers, security cameras, and the like leave data trails that are routinely and tirelessly hoovered up to supply the information economy with the raw material of user profiling (but you already knew that, right?). But beyond the now familiar goal of simply selling you stuff lies a larger information dream: using data about you to thoroughly understand what makes you tick and then using that understanding to predict your future. Opportunity gatekeepers, such as landlords and employers, find this dream very attractive. Business objectives drive gatekeepers to seek out any and all means to maximize efficiency in their operations and reduce their levels of uncertainty and risk. Quantifying people into gradable categories, like bushels of rice of consistent and predictable quality, is an intoxicating product offering for decision makers, and the data industry is prepared to meet (and create) that demand.
By aggregating your prior preferences and behaviors and comparing them to the preferences, behaviors, and choices of thousands of similar people, a motivated data processor and her algorithms attempt to make a range of predictions about your life, getting out ahead of the uncertainties of evaluating people based on what they self-report or what their chosen references provide.
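The aggregate-and-compare logic described above can be sketched in deliberately toy form as a nearest-neighbor prediction: find the people whose recorded behaviors most resemble yours, and assume their labels apply to you. This is my own illustrative sketch, not any vendor's actual system; the behavior tags and risk labels are invented for the example.

```python
# Toy sketch (not any real scoring product): predict an unknown label for a
# target person by polling the most behaviorally similar already-graded people.
from collections import Counter

def similarity(a, b):
    """Count of behaviors two people's data trails have in common."""
    return len(a & b)

def predict_label(target_behaviors, profiles, k=3):
    """profiles: list of (behavior_set, label) pairs for people already 'graded'.
    Returns the majority label among the k most similar profiles."""
    ranked = sorted(profiles,
                    key=lambda p: similarity(target_behaviors, p[0]),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical behavior tags and labels, purely for illustration.
profiles = [
    ({"gym_app", "credit_card", "ebooks"}, "low_risk"),
    ({"gym_app", "ebooks"}, "low_risk"),
    ({"payday_loan_search", "late_fees"}, "high_risk"),
    ({"late_fees", "credit_card"}, "high_risk"),
]
print(predict_label({"gym_app", "ebooks", "credit_card"}, profiles))
```

The unsettling part is visible even in the toy: the prediction says nothing about you directly; it says only what people who left similar trails turned out to be.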

But there’s a problem. Quantifying people is not nearly as easy as quantifying grain. Quantification requires standardization, but people aren’t standardized, and the data collection methods we have for analyzing them aren’t perfect. So shortcuts have to be made and proxies used to reduce the rich complexity of human experience into discrete buckets. The first reduction comes in the form of the data that is used. Despite the fact that our lives are increasingly observed and analyzed, the domains and methods of observation come pre-loaded with certain biases. Tracking what books you read with a Kindle (or other eBook reader) requires, first, that you own a Kindle or use the Kindle app. This already eliminates that data point from consideration for all the people who stubbornly continue to read printed books or who choose to spend their limited incomes in more practical ways. Here we see how the way one chooses to engage with the data ecology might shape her profile. The varied choices people make about participating in social media are similarly influential in profile development, as evidenced by the increasing number of data products that use social media data as inputs (see this for a chilling example).

The data industry also makes use of the open records policies of government agencies to build its profiles. Some types of public records, such as arrest records, tend to reflect negatively on people of color and the poor. For example, there is an abundance of evidence that drug and weapons laws are routinely violated by people across demographic lines, but African American men are more likely to be arrested and convicted for violations (see this and this for examples). As a result, evaluating people based on their criminal histories doesn’t tell the kind of nuanced story that leads to complete knowledge. These two examples (and there are many others) suggest that the data regime may not be quite as objective and reliable for judging people as we think. In fact, it appears to favor people of privilege – those who can afford to participate richly in the data economy (and choose to) and those for whom readily available derogatory data is less likely to be discovered.

In addition to understanding how the formation of user profiles might be flawed and unfair, I am also interested in why economic and social gatekeepers are so keen on using analytics to make decisions about people in the first place. And this brings me to the work of Albert Borgmann, who writes about the “hyperactivity” of modern society. Borgmann describes a hyperactive society as one that is constantly “mobilized” against the perceived threat of economic ruin. This mobilization has three key features: the suspension of civility, rule of a vanguard, and the subordination of civilians. It is in that third feature that I detect what I would label the “precarity” of the modern worker. Despite our cultural mythologies in the U.S. and elsewhere about how hard work and dedication inevitably lead to riches and success, and in spite of the tremendous wealth our society has created, we have seen in recent decades increasing social and economic inequality and the loss of stable work opportunities for ordinary people due to changes in a variety of structural economic conditions. There are many reasons for these changes, but one of the results is that those with the power to make important decisions about our lives seem to have considerably more power and incentive now to exploit what Borgmann refers to as the “disposability of the noncombatant work force.” In short, the incentives are high to reduce the work force as much as possible, and the moral precepts of capitalism do not offer much resistance to doing so. The resulting precarity of work in our society leads to increased competition among workers. In order to survive in this mobilized society, we are basically forced to compete for increasingly scarce resources rather than to join together to challenge the sources (real and imagined) of the scarcity.

While Borgmann tells us something about societal forces that contribute to interpersonal competition for scarce opportunities, another author, James Carey, sheds light on how information systems have provided the means to commodify human beings. Writing in 1989 (but eerily prescient), Carey examined the dramatic social and economic changes wrought by the first electronic mass communication medium: the telegraph. The telegraph was the first technology capable of detaching information from physical objects and constraints, increasing the ability for traders of every stripe to abstract physical objects into symbols for exchange. With the telegraph, information about the world could travel much faster than any messenger or machine, breaking down prior barriers of time and space. This change in the temporal and physical reach of communication increased a business person’s pool of potential partners, making direct personal experience with each one impossible. As a result, new methods of evaluating strangers had to emerge. This can be linked to another of Carey’s observations about a separate byproduct of electronic communication: the commodification of goods. Carey argues that the emergence of the commodities futures markets was tied to the linking of buyers and sellers regionally and nationally by the telegraph. It became possible to trade goods, such as bushels of wheat, by lots aggregated from dozens or hundreds of sources rather than dealing directly with the individual producers. This practice required the development of standardized grading systems that could be applied to quantities of goods from diverse sources. These seemingly unrelated byproducts of communications technology–the emergence of impersonal business dealings requiring new methods of personal assessment and the invention of the commodities trade that massed and standardized diverse goods into quality categories–set the stage for the emerging commodification of people.
In the modern setting, the ability to post a job ad or a dating profile potentially viewable by millions of people means that the poster must be able to rapidly sort through dozens, hundreds, or thousands of applicants. Judging candidates individually becomes impossible. Here we see the origins of the reputation industry and the commodification of people: Why not employ algorithms to sort them into quality categories as if they were bushels of grain?

How this operates in practice is complex, but one thing is certain: the precarity of position and the perception that resources are scarce motivate people to sacrifice their own freedoms to gain an edge. People will give up their privacy and otherwise adjust their lives to please opportunity gatekeepers in order to get ahead. A telling example comes from the insurance market where, in exchange for rate reductions, people install data devices in their cars that monitor and report their driving habits to insurers. Even more invasive, people are sharing the data collected by their health tracking wearables for similar incentives. Economists call this practice “signaling.” While granting explicit consent to monitor specific activities is a very obvious type of signaling, there are other means of signaling that are a bit more complex, but not too complex for analytics algorithms to notice. Social media activity provides a rich assortment of signals about one’s life, including family composition, health events, employment satisfaction, and financial stability, among others. A few banks are confident enough about what they can learn from social media that they are basing credit decisions on it (see this and this). As the practice of monitoring social media use to assess one’s worthiness for loans and other opportunities becomes commonplace, it’s not hard to imagine how it may influence how people use social media and therefore how they socialize in general.

There are many reasons why this matters. For one, it represents a progressive rebalancing of information flows. Economists have long rued the “information asymmetry” in buyer-seller transactions, in which the seller uses her deeper knowledge of a good for sale to the potential disadvantage of the buyer. However, one man’s market inefficiency is another’s defense in a world of outsized power imbalances. If the seller is an applicant for a job at a large corporation, she is arguably arrayed against the titanic power of the modern firm. Being able to assume some measure of control over the hiring process could be the last semi-free act of her career. Meanwhile, the corporation’s goals are to avoid risk, by choosing the candidate least likely to harm the firm, and to increase efficiency, by streamlining the process of choosing from among a pool of candidates. Commodification of the candidate serves the corporation well, but may disadvantage the candidate if she cannot control the sources and biases of the information used to categorize her. As the reputation industry matures and more and more choices about who gets what opportunity are determined by abstracting people into symbols and treating them like graded commodities, the risk that people seeking opportunities become increasingly disempowered will emerge as the crowning achievement of information technology: the commodification of precarious lives.

Our Corporate Overlords, Tech and Society

Blame the Election On Facebook (in part anyway)

Donald Trump won the U.S. presidential election last night. This is terrible news for the country and I am horrified by his victory. In particular, I’m having trouble processing his obviously widespread support given the many negative attributes he has displayed throughout his life and during the election. While we’re looking around for whom to blame for how things turned out (and there will be plenty of finger-pointing), I believe Facebook, Twitter, Google, etc. and the entire culture of information “personalization” should be counted among the blameworthy. True, there are a number of complex sociological factors in play in any election, and the combination of Trump’s celebrity appeal and populist messaging seems to have had a powerful effect on a lot of people. But here’s the thing: Many of us were unprepared for this result. We looked on with wonder as Trump won in the primaries and went on to become a popular candidate in the general election. Did you, like me, experience ongoing shock and disbelief over Trump’s consistently competitive poll numbers even after allegations of sexual assault and the array of other deeply negative revelations about him? If so, it might be because you and I live in a media bubble built out of algorithmic profiling, an echo chamber designed to soothe us with an overwhelming number of messages that we agree with, or that come pretty dang close.

When you view your Facebook “newsfeed,” you’re not viewing every post of every person you are connected to on the network. Facebook’s learning algorithms access thousands of data points about your past behavior on Facebook and your interactions with other websites, merchants, and mobile services to identify your tastes and preferences. The resulting newsfeed you and I see contains only the posts that Facebook believes are the most “relevant” to each of us. On Facebook, we’re connected mainly with people we have identified as “friends,” and what we hear from them (and they from us) is winnowed down into a preference-focused feed that may not even include the contrary views of people we know. When we perform searches on Google, another collection of personalization algorithms massages our search results to conform to what the system believes each of us wants to see. Twitter users can choose whose posts to follow, enabling them to curate their information sources into narrowly defined subjects and communities. In my Twitter feed, I only follow academics and research institutions working in my field, plus a few journalists and news sources whose reporting appeals to me.
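The winnowing described above can be illustrated with a deliberately simplified sketch. This is not Facebook’s actual algorithm (which weighs thousands of signals); it is my own toy model in which each post is scored against the viewer’s inferred interests and only the top scorers survive, which is enough to show how dissenting posts quietly vanish. All names and weights are invented.

```python
# Toy sketch of preference-based feed filtering (not Facebook's real system):
# score each post by how well its topics match the viewer's learned interests,
# then show only the highest-scoring posts.
def relevance(post_topics, interest_weights):
    """Sum the viewer's learned weight for every topic the post touches."""
    return sum(interest_weights.get(t, 0.0) for t in post_topics)

def build_feed(posts, interest_weights, limit=2):
    """posts: list of (post_id, topic_set). Returns the post ids the viewer sees."""
    ranked = sorted(posts,
                    key=lambda p: relevance(p[1], interest_weights),
                    reverse=True)
    return [post_id for post_id, _ in ranked[:limit]]

# A viewer who engages warmly with candidate A and negatively with candidate B.
viewer = {"candidate_a": 0.9, "sports": 0.6, "candidate_b": -0.5}
posts = [
    ("p1", {"candidate_a", "rally"}),
    ("p2", {"candidate_b", "policy"}),
    ("p3", {"sports"}),
]
print(build_feed(posts, viewer))  # the post about the disfavored candidate is filtered out
```

Nothing here censors the dissenting post outright; it is simply outcompeted for limited feed slots, which is exactly how an echo chamber can form without anyone deciding to build one.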

Getting news this way is completely different from traditional journalism, where the goal, ideally, is to provide readers and viewers with a diverse range of ideas and multiple viewpoints. On commercial information services, the information we receive is narrowly restricted and designed to please each of us individually. (Much of this will not be news to anyone who has read Eli Pariser’s “The Filter Bubble.”) The goal of customizing our various search results, feeds, and follows is to keep us online, staying engaged with whichever service we’re using, clicking links, viewing ads, buying things. The more time we remain engaged this way, the more information about our preferences and inclinations can be translated into advertising dollars. The result of all this customization is that each of us experiences information flows very different from those of people who disagree with us – flows designed to keep each consumer engaged and to limit any feelings of discomfort. If you think Hillary Clinton is dishonest, it’s likely reflected in your online media choices and personalizations, and you’re unlikely to see posts or articles that champion her as a person of integrity.

Democratic deliberation requires the airing of a plurality of ideas and room for meaningful debate on the merits. It is still true that people are more likely to find common ground and back down from extreme positions if given the chance to truly understand each other. It is also true that customized information sources are as likely as not to include easily disputed rumors and distortions that would become apparent if more viewpoints were available for consideration. This is not what is happening. Unfortunately for democratic deliberation, the discomforting effects of stories and worldviews that don’t conform to our biases are bad for the online business model. If the goal is to keep people where they are, engaged and consuming what you’re offering, it doesn’t make business sense to question or challenge them and their version of reality.

Our media elites used to do a decent job of providing us with a plurality of views. Traditional journalism is far from perfect; media biases and filters are not new. But there were (and still are) journalistic institutions dedicated to reporting more fact than rumor and to presenting multiple viewpoints on contentious questions. When that system was more functional than it is now, while I might not agree with an opposing viewpoint, I could at least come to understand and engage with it. Similarly, people on the other side of an issue might come to understand a piece of my truth. But traditional journalism is in decline. Fewer of us are relying on well-established media sources that can legitimately claim to be objective or balanced. One outgrowth of this is that some of the remaining media institutions have become clownish and shallow, more interested in salacious gossip and in pleasing political leaders in return for “access” than in soberly analyzing their views and statements.

As my old friend David Newhoff points out in his blog, viewing the world through the filter of commercial information platforms, including social media, makes it “very hard to distinguish between being vigilantly informed and hysterically manipulated.” As more of us come to get most of our political news from these platforms, whose shared mission is to harvest and monetize information from us, not to inform us, we will continue to fail at gaining a thorough understanding of what comes blasting out of the fire hose. Still more problematic, we’ll also continue to fail to truly understand what the other side actually thinks, resorting instead to caricatures and hyperbole. We are going to see this filtering effect repeated again and again, and it will make us weaker advocates for our causes and candidates. This is not making us smarter. It is making us naive and vulnerable.

While we’re busy pontificating (myself included) on social media about our views and sharing our carefully curated information tidbits with our online followers and friends, remember that this narrowly focused information sharing is a central problem for political discourse. Despite the potential for sharing our views with more people than most of us could have hoped to before these platforms existed, the intentional limiting of our feeds and searches by platform operators means that what we say, do, and seek in the information space is not likely to escape the comfort of our individual echo chambers. We’re just yelling at ourselves while generating revenue for others and carving out ever-tinier slices of an increasingly subjective reality.

Our Corporate Overlords, Tech and Society

Groundwork to a Rhetoric of Technology

I recently had the opportunity to learn about a field known as the “rhetoric of science,” which is the study of the discourse around scientific topics. While the word “rhetoric” is often used as a pejorative, here it is a neutral term that broadly describes how we go about trying to persuade each other to a point of view using words and, quite often, do so by targeting emotions and assumptions. Each of us uses rhetoric pretty much daily, from technical arguments about politics to mundane negotiations about household responsibilities. We spend a great deal of time in conversation “making a point,” which is another way of saying that we try to persuade others that we’re right about something. In studying the rhetoric of science, scholars seek to understand how science is described, debated, and understood (and frequently misunderstood). Rhetorics of science frequently affect how specific research or an entire science is perceived, and they can affect how future work is conducted. One example is the examination of how topics like climate change and human evolution, both of which are firmly settled questions within the scientific community, have been successfully portrayed by activists as ongoing debates. Another is the study of how simple metaphors are used to describe deeply technical topics like genomics. (Is DNA really a “map” or a “blueprint” of a gene?) Consider the recent controversy concerning Planned Parenthood and the alleged sale of fetal tissue to researchers for profit. A fiercely politicized discourse has been employed to depict a fairly routine activity—the use of human tissue for research—as something deeply nefarious. The rhetoric of science is a fascinating research area, one that we are all engaged in whenever we consume media on scientific topics, which happens with increasing frequency thanks to the ease with which information and misinformation rapidly spread via social media and cable news programs.

My brief introduction to the rhetoric of science caused me to redirect my thinking about how we talk about information systems and technology. If you know me or have been reading this blog, it probably won’t surprise you to learn that I have been labeled a technology “skeptic.” There’s an example of a rhetorical move right there. I think “skeptic”—which has a mix of connotations, some of them pretty negative—isn’t quite the right word. What I feel is that common portrayals of modern technology in our public discourse lack a satisfying amount of questioning and thoughtfulness. This shouldn’t be too surprising since, after all, most of what we hear about popular technologies comes, either directly or through proxies, from the giant corporations that make them. We learn most of what we know about iPhones from Apple (and its many allies), social media from Facebook (and its many admirers), and so on. Those who craft the messages we most often hear on these topics exploit the fact that most of us are readily impressed by sleek designs and technological novelty. While the twittersphere may contain an abundance of contrarian voices on technology topics, you kind of have to want to hear them to find them, and even then, credibility is hard to establish. It’s easy to dismiss critics as uninformed, puritanical, or simply “no fun” (consider this blog, for example). I think it’s safe to say that the most consistent and well-crafted information we hear about most technologies comes from marketers. Whether it arrives in the form of a slick advertisement, or through something more viral, like blog posts and cable news appearances by various spokespersons and consultants, the major channels of communication still favor those with the deepest pockets and largest marketing infrastructures. There are people who get paid really well to spin great stories about a direct link between new technologies and human flourishing and they do it very well. 
Even seemingly neutral information sources like news programs often lack introspection, favoring instead breathless “reviews” of new technologies that avoid any criticism that might alienate an audience generally awestruck by the latest gadgets and apps.

What I’m planning to do with all this is to start looking at some of the common tropes and stylistic moves that tech evangelists use to convince their audiences of the seeming promise and inevitability of tech-mediated living. Examining the metaphors is one way to do this. Words like “interactive” and “disruptive” deserve closer inspection. So does the term “social media,” for that matter. In each case, we should be asking: what do these words mean to an audience? And do they accurately describe the states and changes they are employed to describe? Terms that invoke freedom and choice have a long history of association with market-based thinking, and they have become even more pervasive in the rhetoric of Silicon Valley. What is a “free” app exactly? What range of “choices” do we actually have in selecting and using information technology?

My main concern is that as we move from living and interacting in physical space and in real time—on the street, in the park, in the auditorium—to online existences where we interact using social media, augmented reality, gaming, and so on, we are moving into spaces not only mediated by technology, but easily manipulated by the corporations that make the technology. Interactive spaces made by corporations are not agenda-less spaces. They contain (and are) rhetorics designed to persuade. One look at the default screen of an iPhone offers numerous clues as to the priorities of Apple, which likely do not conform entirely to yours. Wherever possible, companies that can hold your attention will seek to convince you to use more of their products and services and, as often as possible, will reinforce their tech-focused, consumerist worldviews. As more and more of the information we receive is “curated” for us by the algorithms that select, say, your personalized Google search results, there is a real risk that powerful voices will dominate and hijack your access to information. Consider for a moment what could happen if Eric Schmidt, Google’s longtime executive chairman, decided that he really wanted Donald Trump to be the next president. How much tinkering would it take to subtly change your search results to present the most sympathetic accounts of Trump and his views? Technology companies have access to enormously powerful rhetorical tools. Our actual freedoms and choices may well depend on how attentive and aware we are of that.

This is a topic I plan to return to. For now, I invite you to do what I’m doing, which is to listen closely to the words that get used in any conversation about information systems and technology (including mine) and seek out the meta in the conversation. What shorthand is used to describe complex, socially impactful developments? How are contrary voices characterized? You may start to make some very interesting observations. You may find yourself becoming something of a critic. Who knows? You might even become a “Luddite.”