Essay III: The Rise of Subconscious Technologies

Democracy and citizenship continue to evolve, spurred by new threats and new opportunities. In order to exert some control over these changes, we need to understand the nature of the threats and how they developed, the shifts in how people are thinking about politics and community, and the democracy innovators and innovations that are emerging today. This series of essays, to be released weekly in advance of The Future of Citizenship: The 2023 Annual Conference on Citizenship, will help set the stage for a national discussion on where our country is headed.


By Matt Leighninger and Quixada Moore-Vissing

Too often, the people working to strengthen democracy have been caught flat-footed by the pace of new trends and innovations. All kinds of changes, many of them driven by technology, are affecting how we live, work, vote, interact, and get information. It has always been difficult to understand the implications of trends in the moment, but it is even harder today because knowledge is so vast and specialized: the experts on each individual trend are often isolated from one another, and there is no overarching map for everyone to see.

Many of the dangers and opportunities we face have to do with the growing sophistication of what could be considered “subconscious technologies” and the increasing determination among citizens to make their actions and opinions matter in public life, an impulse we are calling “conscious engagement.” These two forces are rampant, and the ways in which they conflict with or complement one another may be critical to the future of politics and democracy. (Conscious engagement will be the topic of the next essay in this series.)

What are subconscious technologies?

Some of today’s fastest-moving and least-understood trends are based on two new features of our technological landscape. The first is the massive amount of data that is now available, generated by both people and devices. About 90 percent of all data available today was generated in the last two years; we collectively churn out 328 quintillion bytes of data per day, and the number continues to grow exponentially. The second is the capacity of different forms of artificial intelligence (AI) to make use of this data. These new realities have made possible six uses of subconscious technologies, described below.

A caveat: it seems almost clichéd to keep asserting how rapidly these technologies are developing, but it is true. Six months ago, the Healthy Democracy Map team started compiling our list of organizations working to improve democracy; we are now using capacities of generative AI that weren’t available at the beginning of our project. The first five uses of subconscious technologies described below are now well-established, perhaps even old news. By this time next year, the influence of the sixth may have produced a whole new set of emerging uses.

1. Anticipating wants and needs

One of the functions that all this data and computational capacity can serve is to help determine what people want and what they need without actually asking them. These assessments are based on everything from how people talk, to the text they write in social media posts, to their recorded blood pressure levels. A whole host of technologies are being used in this way, including natural language processing (NLP), sentiment analysis, computational linguistics, biometrics, and digital phenotyping.

Advertisers were among the early pioneers in this work, and much of their activity and investment is still focused on anticipating what people will buy and when. One of the most frequently cited examples is how companies like Target are able to identify pregnant women on the basis of their purchases and social media posts. But governments are also using these technologies, in partnership with other organizations, to suggest public services or even do little things like helping people find open parking spaces. “It is quite possible that providing these kinds of assistance will cause people to value public institutions more,” says Darrell West of the Brookings Institution.

Some governments are also starting to use these technologies as part of their approach to policymaking. For example, the Canadian government used an NLP tool to collect news articles and tweets about the G7, and then to identify and analyze the context, subjectivity, and tone of each piece of text. The results were then presented to the public and used as part of the discussion material for 320 face-to-face and online deliberations on what Canada should do during its presidency of the G7. Jaimie Boyd, who led the effort as part of her role as Canada’s Director of Open Government, sees this form of opinion analysis as superior to traditional polling. “It is a brave new world for government,” she says.
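
The specifics of the Canadian government’s tool aren’t described here; as a rough sketch of how this kind of tone and subjectivity scoring can work, the example below uses the open-source TextBlob library on a few invented posts. The sample texts and the simple averaging step are illustrative additions, not part of the Canadian project.

```python
# A rough sketch of sentiment and subjectivity scoring with the open-source
# TextBlob library. The sample posts and the averaging step are invented for
# illustration; this is not the tool the Canadian government used.
from textblob import TextBlob

posts = [
    "The G7 summit was a huge waste of money.",
    "Canada's G7 presidency is a real chance to lead on climate policy.",
    "Summit logistics were announced for Charlevoix this week.",
]

for text in posts:
    sentiment = TextBlob(text).sentiment
    # polarity runs from -1 (negative) to +1 (positive);
    # subjectivity runs from 0 (factual) to 1 (opinionated)
    print(f"polarity {sentiment.polarity:+.2f}  subjectivity {sentiment.subjectivity:.2f}  {text}")

# Averaging across a large corpus gives a crude picture of overall tone.
average_polarity = sum(TextBlob(t).sentiment.polarity for t in posts) / len(posts)
print(f"average polarity: {average_polarity:+.2f}")
```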

2. Forecasting and risk assessment

In a range of fields, governments and corporations are using machine learning to recognize patterns in data so that they can make predictions about behavior and assessments of risk. This practice has had a particular impact on health and health care, but it is also emerging in criminal justice, corrections, and other fields.

Justifying his country’s investment in AI, French President Emmanuel Macron said, “The innovation that AI brings into health care systems can totally change things: with new ways to treat people, to prevent various diseases, and a way—not to replace the doctors—but to reduce the potential risk.” The predictions made possible by these analyses are helping to guide valuations by insurance companies, but they can also suggest decisions and actions that will improve overall health. For example, Crisis Text Line, a text-based mental health service, uses NLP to determine whether the person texting is distraught and needs immediate help or is simply seeking information. “The data can show the overall factors leading to congestive heart failure, and also the steps we can take to prevent it,” explains Peter Eckart of the Illinois Public Health Institute. But the data can also embed and perpetuate inequities. “The flood of personally generated data – from Microsoft platforms, insurance-based health programs, Fitbits – tends to produce inequitable analyses because more of the data is coming from higher-income people,” says Eckart.

This can also have dramatic effects in criminal justice and corrections when inequitable data is used to inform bail and parole decisions. “At their most powerful, algorithms can decide an individual’s liberty, as when they are used by the criminal justice system to predict future criminality,” writes Jim Dwyer of the New York Times, who reports that in one case, risk scores for recidivism were wrong about 40 percent of the time, with “blacks more likely to be falsely rated as future criminals at almost twice the rate of whites.”
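
Dwyer is describing audits of proprietary tools, so the sketch below is purely hypothetical: it trains a simple logistic regression on synthetic “arrest” data in which one group is labeled as reoffending more often because it is policed more heavily, then compares false-positive rates by group. Every variable and number is invented, but it illustrates how inequitable data alone can reproduce the kind of disparity he reports.

```python
# Hypothetical sketch: how a risk model trained on skewed data can produce
# unequal false-positive rates. All data here is synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# group 1 is policed more heavily, so its members are arrested (and labeled
# as "reoffending") more often at the same underlying level of behavior.
group = rng.integers(0, 2, n)
behavior = rng.normal(0, 1, n)                       # unobserved true risk
arrest_rate = 1 / (1 + np.exp(-(behavior + 0.8 * group)))
label = rng.random(n) < arrest_rate                  # biased outcome label

# The model never sees "behavior", only a proxy correlated with group.
proxy = behavior + rng.normal(0, 1, n) + 0.5 * group
X = proxy.reshape(-1, 1)
model = LogisticRegression().fit(X, label)
predicted_high_risk = model.predict(X)

for g in (0, 1):
    mask = (group == g) & (~label)                   # people who did not reoffend
    false_positive_rate = predicted_high_risk[mask].mean()
    print(f"group {g}: false-positive rate = {false_positive_rate:.2f}")
```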

3. Micro-targeting and messaging

In addition to finding out what people want and predicting their behavior, subconscious technologies can also be deployed to exert influence on individuals. AI can discern patterns in data to determine which people are prone to shifting their opinions and how they might be swayed. Some observers feel that the kind of “fake news” messaging that was so prevalent in the 2016 election is a manageable problem. “Ultimately, fake news will be dealt with by some synthesis of human fact-checkers and algorithms that weed out bots and bad actors,” says David Lazer of Northeastern. He points to spam prevention as a model, saying that “current email systems now deal with spam pretty well.”

Virtual reality and “deep fakes” may pose a larger challenge, since it is almost impossible for viewers to distinguish the fake images and footage from what is real. Until recently, it required advanced skill and knowledge to create deep fakes, but new platforms have made it possible for even casual users to produce them. There is now a kind of arms race going on between the people developing deep fake production technology and those inventing technology for detecting deep fakes, with no end in sight.

If the individual responds positively to a message by clicking on links or merely staying on the page, that feedback then provides more data about the person’s interests and passions. And so, the microtargeting technologies and the messaging technologies can inform one another, continually focusing in with greater precision on what is most compelling to the individual, creating an experience that becomes more and more addictive. The futurist Jon Barnes provides examples: “Instagram drip feeds you ‘likes’ so you keep going back, Twitter’s loading icon varies in duration to give a variable reward dynamic (like slot machines), Facebook’s algorithm censors towards cognitive bias, Google gives you searches based on our existing narrow view (even in incognito mode).” Barnes calls this “addictive design.”
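
The platforms’ actual ranking systems are proprietary, so the sketch below is only a toy illustration of the feedback loop Barnes describes: a simple “explore/exploit” rule updates a per-topic click-rate estimate after every simulated view and increasingly serves whatever the user has already responded to. The topics and click probabilities are invented.

```python
# Toy sketch of an engagement feedback loop (epsilon-greedy selection).
# Each click updates the estimate for that topic, so recommendations narrow
# toward whatever the user already responds to. All values are invented.
import random

topics = ["local politics", "celebrity gossip", "outrage bait"]
true_click_prob = {"local politics": 0.1, "celebrity gossip": 0.3, "outrage bait": 0.6}
estimates = {t: 0.0 for t in topics}
shows = {t: 0 for t in topics}

for step in range(5_000):
    # Mostly exploit the best-performing topic so far, occasionally explore.
    if random.random() < 0.1:
        topic = random.choice(topics)
    else:
        topic = max(topics, key=lambda t: estimates[t])
    clicked = random.random() < true_click_prob[topic]
    shows[topic] += 1
    # Incrementally update the running click-rate estimate for this topic.
    estimates[topic] += (clicked - estimates[topic]) / shows[topic]

for t in topics:
    print(f"{t:16s} shown {shows[t]:4d} times, estimated click rate {estimates[t]:.2f}")
```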

4. Automating interactions

More and more of our interactions online, for things like shopping, booking hotels, or accessing public services, are with computers that are able to ask and answer questions. They have become nimble enough that it is becoming harder to tell if you are dealing with a human being or a machine. Google’s Duplex, a technology used to conduct conversations over the phone, caused controversy this year because it is so lifelike. The bot interjects sounds like “um” and “uh” to better mimic human speech.

These kinds of technologies can be useful, argue Allison Fine and Beth Kanter. They point to examples like the nonprofit Invisible People, which uses bots to provide “virtual case management” for homeless people. In that case, the bot provides information about services and gathers information from the user to build a case file, thereby reducing the workload of human case managers. Abhi Nemani has proposed using the data gathered through 311 calls to create new local government bots. “Historically, innovators have focused on the issue-reporting functionality of 311, building apps to streamline reporting of potholes, graffiti, etc.,” reports Nemani. “Data suggests, however, that these make up just a small fraction of 311. Instead, most is taken up by questions about city operations, ranging from office hours to council meetings.” Compiling and crunching all these questions and answers could lead to “cheaper, automated citizen support systems,” he suggests.
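
Nemani doesn’t prescribe an implementation; one simple way to build the kind of automated support he envisions would be to index past 311 questions and answers and return the closest match to each new query, as in the sketch below using scikit-learn’s TF-IDF vectorizer. The sample questions and answers are invented.

```python
# A minimal sketch of an automated 311-style answer lookup: index past
# question/answer pairs with TF-IDF and return the closest match to a new
# query. The sample Q&A pairs are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = [
    ("What are city hall's office hours?", "City hall is open 8am-5pm, Monday through Friday."),
    ("When is the next council meeting?", "Council meets the first and third Tuesday of each month."),
    ("How do I report a pothole?", "Report potholes through the public works request form."),
]

questions = [q for q, _ in faq]
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(query: str) -> str:
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, question_vectors)[0]
    return faq[scores.argmax()][1]

print(answer("what time does city hall open"))
```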

However, Fine and Kanter warn that these kinds of systems can leave citizens frustrated and confused if organizations or institutions rely solely on bots to handle all of their communication. They contrast two mental health nonprofits to illustrate their point: Crisis Text Line uses texting bots to get information to people who need it, offer a less intimidating option to people who aren’t ready for a phone conversation about their mental health issues, and determine whether the individual needs immediate help. The second step after the interaction with the bot is a conversation with a live human mental health professional. Their other example is Woebot, a chatbot that can be accessed through Facebook Messenger. Woebot offers no interaction with live humans and collects users’ data for analysis by Facebook.

The ethical and administrative questions related to bots are not being examined in a comprehensive way, assert Fine and Kanter. “We are unprepared for this moment, and it does not feel like an understatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.”

5. Monitoring and surveillance

Perhaps the most frightening and controversial trend on this list is the growing use of subconscious technologies to monitor individuals. Technologies for facial recognition, geo-location, geo-fencing, DNA profiling, and other types of analysis can determine who we are, where we go, and even how we are feeling. Some of these technologies, such as facial recognition, rely on data that doesn’t accurately reflect the diversity of our population. Darrell West reports that “facial recognition is 90 percent accurate for whites, 70 percent for African Americans – if the data is inequitable, the analysis will be as well.”

Lesser-known technologies such as affective computing can use the images captured through webcams on our computers to analyze an individual’s emotional state based on their facial expressions and the tone, pitch, and rate of their speech. Individuals can be tracked not only by their physical appearance, but by the “digital identifiers” they leave whenever they operate one of many devices connected to the internet, from their washing machine to their home alarm system.

Many laws and ethical guidelines govern whether and how any of this data can be used, but these of course vary from place to place, and most commentators agree that the rules are increasingly difficult to interpret and enforce given the pace of innovation. “We cannot continue on the current path without stopping to build in necessary human rights protections to mitigate harm,” writes Brett Solomon.

6. Generative AI

The latest forms of generative AI combine many of the functions described above, in a package that is easier for humans to use. You can give ChatGPT a prompt like “Look at this list of organizations and this list of goals and determine which of the organizations are pursuing which of the goals.” (In fact, the Healthy Democracy Map team used generative AI in this way in the course of exploring methodologies for our mapping process.) You can also ask platforms like DALL-E or Midjourney to generate images based on the ideas you want to depict and the styles you want the technology to use in the image.
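
As a rough illustration of this kind of use (not the Healthy Democracy Map team’s actual workflow), a request like that can also be scripted against a chat-completion API, as in the sketch below using the OpenAI Python client. The organizations, goals, and model name are placeholders.

```python
# Rough sketch of using a generative AI API to match organizations to goals.
# The organizations, goals, and model name are placeholders; this is not the
# Healthy Democracy Map team's actual methodology.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

organizations = [
    "Civic Tech Collective: builds open-source tools for city governments.",
    "Voter Voice Project: registers first-time voters on college campuses.",
]
goals = ["increase voter participation", "improve government transparency"]

prompt = (
    "For each organization below, say which of these goals it is pursuing, "
    f"and briefly explain why.\n\nGoals: {goals}\n\nOrganizations:\n"
    + "\n".join(f"- {org}" for org in organizations)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would work
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```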

Because their interfaces do not require advanced knowledge or jargon, tools like ChatGPT are making AI less “subconscious” in the sense that a much larger number of people are becoming aware that these technologies exist and that they can use them. But the exact ways in which generative AI accomplishes the tasks you set are as inscrutable as ever.

“The large language models that power AI tools like ChatGPT feel magical to the average citizen,” says Cameron Hickey, CEO of the National Conference on Citizenship. “So magical in fact that they not only seem sentient and capable of agency, but so valuable that we can’t help but want to use them everywhere to save us time and increase the quality of the work we do. However, to have a grounded understanding of both the risks and threats posed by these tools as well as their potential to improve society, the average person needs a better understanding of what is actually going on inside of these systems, to know what they are and are not actually doing.”

There are some situations in which AI has been used in more transparent ways to support participatory processes, such as the use of the platform Pol.is to help citizens and policymakers in Taiwan come to shared agreements on a number of policy issues through the vTaiwan process. Tiago Peixoto of the World Bank also argues that participatory processes can support and guide uses of AI. He points to the U.S. national kidney transplant matching algorithm, which was developed through the contributions and deliberations of a wide range of citizens, scientists, doctors, and government officials.
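
Pol.is’s actual pipeline is more sophisticated, but the core idea of reducing a matrix of participants’ agree/disagree votes and clustering participants into opinion groups can be sketched with standard tools, as below. The vote matrix is invented, and this is not Pol.is’s implementation.

```python
# Sketch of the core idea behind Pol.is-style opinion mapping: reduce a
# participants-by-statements vote matrix (1 = agree, -1 = disagree, 0 = pass)
# and cluster participants into opinion groups. The votes are invented, and
# this is not Pol.is's actual implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

votes = np.array([
    [ 1,  1, -1,  0],   # participant 1's votes on four statements
    [ 1,  1, -1, -1],
    [ 1,  0, -1, -1],
    [-1, -1,  1,  1],
    [-1, -1,  1,  0],
    [-1,  0,  1,  1],
])

# Project participants into two dimensions, then look for opinion clusters.
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

for participant, group in enumerate(groups, start=1):
    print(f"participant {participant}: opinion group {group}")
```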

It is possible to create a sort of virtuous cycle, as Peixoto puts it, of “AI for democracy, and democracy for AI.” But to do this will require a far more proactive (and conscious) approach than we have seen thus far.

This essay is adapted from Rewiring Democracy, a publication of Public Agenda.
