Essay X: Directions for Democracy Innovation in a Conscious/Subconscious World

 


Democracy and citizenship continue to evolve, spurred by new threats and new opportunities. In order to exert some control over these changes, we need to understand the nature of the threats and how they developed, the shifts in how people are thinking about politics and community, and the democracy innovators and innovations that are emerging today. The League released a series of essays in advance of last week’s The Future of Citizenship: The 2023 Annual Conference on Citizenship to help set the stage for a national discussion on where our country is headed.


By Matt Leighninger and Quixada Moore-Vissing 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    – Isaac Asimov, I, Robot

The science fiction writer Isaac Asimov wrote his “Three Laws of Robotics” in 1942. For at least that long, we have been concerned about the inevitable onset of human creations with the capacity to – at least in some ways – outthink humans. How should we govern intelligences that can absorb, analyze, and use so much more data than we can?

A real-life incident in 2017 seemed to illustrate some of these concerns. When researchers at Facebook directed two chatbots to negotiate with one another, the machines actually developed their own modifications to English, continually repeating certain words and inventing new constructions outside the rules of grammar in order to come to agreements. Essentially, they developed their own language that the human researchers couldn’t understand. Since the idea was to make the computers better able to communicate with humans, not each other, the researchers ended the experiment.

It may be overly simplistic to think of autonomous robots when we are confronted with the modern challenges of “subconscious technologies” – the capacity of artificial intelligence to process huge amounts of data in ways that most people do not track or understand. The 2004 action movie I, Robot, starring Will Smith, was based loosely on one of Asimov’s stories, but it played up the ethical dilemmas of robotics to maximum theatrical effect. The fact is that robots are not about to start murdering humans and using secret languages to take over the world. In movies and books, robot characters are often just a foil or tool for exploring human emotions.

Plus, “artificial intelligence (AI) is still fairly stupid at this point,” points out Peter Eckart of the Illinois Public Health Institute. Though machines gain more and more sheer calculating power with each advance in computer technology, there is a great deal of debate over whether an AI can ever be as innovative, instinctual, or creative as a human mind.

Still, even in their current stupid state, subconscious technologies already have tremendous power to injure human beings, or by inaction allow them to come to harm – and many of those harms are most likely to occur to populations that are already vulnerable or marginalized. The use of subconscious technologies is affecting our elections, our health care, and our criminal justice system.

Meanwhile, the advance of conscious engagement, driven by a desire to matter in public life, is also a rampant force in society. “We are in the midst of a profound global Great Push Back against concentrated, monopolized, hoarded power,” writes Eric Liu. This impulse has driven the diversification and expansion of the ways in which we engage, from social media to crowdsourcing to hyperlocal online networks. “The space for civic participation has grown enormously, and power has shifted away from traditional political structures and actors,” agrees Burkhard Gnärig of the International Civil Society Centre.

The desire to matter sometimes leads to opportunities for direct democracy and sometimes to more intensive forms of engagement, like participatory budgeting or citizens’ assemblies.

Sometimes, the desire to matter, in an apparent paradox, can lead to public support for authoritarian figures. Gnärig sees the appeal of “strongmen” as part of “the responses of traditional political elites to their loss of power in both local and global directions.” While these authoritarian figures may actually curtail the power and human rights of ordinary people, the voters who support strongmen don’t seem to see it that way. Many citizens already feel politically powerless, and because the strongman promises to bash the bureaucrats or beat back a threatening tide of immigrants, voters may feel that electing an authoritarian actually increases their own power and freedom. Though the desire to matter is an understandable motivation, we should recognize that in certain circumstances it can be extremely destructive: though it may help us improve democracy, it can also help autocrats dismantle democracy entirely.

What, then, shall we do about these potentially harmful, potentially beneficial forces? The cases, trends, and intersections in this report suggest several possibilities:

  • Conscious engagement can help set the terms for the use of subconscious technologies.
  • Subconscious technologies can help scale conscious engagement.
  • Conscious engagement can contribute to and capitalize on data.
  • Subconscious + Conscious = Deliberation + Power?

Conscious engagement can help set the terms for the use of subconscious technologies. Subconscious technologies raise important questions about how AI should be used in decision-making, how to balance potential harms and potential benefits, and how to draw the line between the privacy of individuals and the interests of society as a whole. In the past, we relied on government regulation as the main tool for making and enforcing these sorts of decisions about how technologies can be used; lawsuits then became a second tool, primarily by punishing corporations that did not follow the law. Various forms of enforced transparency – for example, requiring automakers to disclose the fuel efficiency of their vehicles – represent a third way of governing technologies. All of these approaches can be effective, but none of them seems sufficiently trusted, equitable, or effective on its own.

Conscious forms of engagement can be employed alongside these other tools, or as a way of informing them. Through conscious engagement we can encourage the public, technology leaders, scientists, and government officials to share information and deliberate about how subconscious technologies should be rolled out, and to think about the implications such technologies have for our personal lives and for democracy.

For this to work, the conscious engagement should have “thick” components that allow people to learn about the issue or technology, connect its use to their experiences, weigh different options, and decide what they think. It should also have “thin” components so that large numbers of people have opportunities to get information, suggest ideas, and indicate their approval or disapproval. One example of this combination is the Citizens’ Initiative Review (CIR) in Oregon. The CIR, established by the Oregon legislature in 2010, is a process by which 25 citizens, chosen at random, come together to study a ballot initiative that will be coming before the voters. The group studies the issue, hears from advocates for and against the initiative, and writes a Citizens’ Statement containing the information they feel people need to know when voting on the initiative. The Statement is included in the voter guide that is sent to every household in the state. Because large numbers of Oregon voters find out about the work of the CIR, because the jury is made up of their peers, and because they trust the process, the CIR recommendations have had an effect on how voters make their choices at the polls.

The example of the Icelandic constitution suggests another approach worth considering: the use of crowdsourcing to help people generate and refine tenets of an agreement, followed by in-person deliberation by a smaller number of people (perhaps a demographically representative group, as in Oregon) to sort out conflicts and present a proposed document to elected officials. The resulting charter, and the process used to develop it, could be featured in end-user agreements in order to give them greater clarity and legitimacy in the eyes of the people using the technologies.

In the case of smart cities, engagement could take place both at the neighborhood level and the citywide level. Approaches like vTaiwan, which combine online commenting, clustering of comments through AI, and face-to-face deliberation, could help people understand and decide how smart-city technologies should use their data and improve the quality of life. If this sort of engagement were conducted on a regular basis, as vTaiwan has been, it could become part of the civic infrastructure that smart cities need. In their guide to “Making a Civic Smart City,” Eric Gordon and his co-authors illustrate this goal by saying that “a civic smart city works with publics to define problems, and reflect on potential solutions, before implementing new technologies.”
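
To make the AI step less abstract, here is a minimal sketch of how open-ended comments might be grouped into rough themes before a face-to-face deliberation. It is an illustration only, not a description of vTaiwan’s actual tools; the comments are invented, and the approach (TF-IDF vectors plus k-means clustering) is just one common option among many.

```python
# Minimal sketch of AI-assisted comment clustering (illustrative only;
# not vTaiwan's actual method). Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical public comments about a smart-city sensor proposal.
comments = [
    "I worry about who can see the traffic camera footage",
    "Cameras at intersections could reduce accidents",
    "Sell the data only with resident approval",
    "Who owns the data the sensors collect?",
    "Fewer accidents near schools would be a huge win",
    "Make the data retention policy public",
]

# Represent each comment as a TF-IDF vector, then group similar comments.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Print the rough groupings so facilitators can seed the in-person agenda.
for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for comment, label in zip(comments, labels):
        if label == cluster:
            print("  -", comment)
```

In practice, facilitators would still review and label the groupings themselves; the algorithm only proposes clusters for humans to deliberate over.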

On health and health care, we should keep in mind the capacity of engagement to strengthen social networks and build social capital, since those qualities have a direct impact on people’s physical health. Loneliness is in fact a leading risk factor for serious illness and premature death. Sustained, broad-based forms of engagement such as “On the Table” could be particularly appropriate because they reach so many people, because they encourage the formation and strengthening of relationships, and because they focus on food (which is itself a central aspect of good health). As part of regular “On the Table” meetings, participants could address questions of how health data should be used in ways that both protect privacy and improve population health.


Subconscious technologies can help scale conscious engagement. The most productive, inclusive, deliberative examples of engagement have been local ones. Of the thousands of engagement processes conducted each year, the majority occur in cities, towns, or neighborhoods. This is especially true of “thick” forms of engagement, in which people spend time in small groups learning, comparing experiences, considering options, and planning for action. Thick engagement is productive, but intensive.

Local engagement is more common because these initiatives typically require a diverse, critical mass of participants to succeed, and the number of people needed to create that critical mass is smaller at the local level. You need a sufficiently large web of relationships so that potential participants are approached by people they already know, and you need to give people some assurance that their participation will make an impact. Both things are easier to achieve in communities, towns, and neighborhoods.

Twenty years ago, when widespread internet use began to reshape how we organized and thought about engagement, it was tempting to assume that online communication would immediately solve the problem of scale. But while some examples of digital engagement have managed to involve tens of thousands of people, those numbers don’t always seem to matter if the participants are dispersed across a state or the country. Legislators usually ask if their constituents are taking part, and whether those people are a small, like-minded group or a large and diverse one.

Subconscious technologies can help scale engagement in several ways. First, the most fundamental challenge in engagement is recruitment: figuring out who needs to be at the table, who might like to be at the table if invited, and actually convincing them all to take part. Some of the same technologies used for micro-targeting and messaging could be used to identify the people who are interested in a particular issue or have the most at stake, and then to craft messages that will appeal to them.

This approach to recruitment would be particularly appropriate for some engagement approaches that utilize smartphones as a way of structuring and connecting face-to-face discussions. One example is “Text, Talk, Act,” first developed as part of President Obama’s National Dialogue on Mental Health. Participants in the discussions are recruited primarily through social media and asked to form groups of 3-4 people. They text “start” to a pre-assigned code and then receive a series of text messages, including: discussion questions for the group; process suggestions; polling questions that can be answered from their phones; and requests to respond with action ideas and commitments they will make to increase engagement with their audiences. Throughout the process, participants also receive links that allow them to see how participants across the country have responded to the polling and action questions. Text, Talk, Act has involved over 50,000 people. Using subconscious technologies as part of this approach would seem to make sense, since the strategy already relies on appeals through social media, and since the engagement (though face-to-face) can happen whenever and wherever 3-4 people can get together with at least one smartphone.
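
As a concrete illustration of the mechanics, the sketch below shows the kind of keyword-driven message flow such a process depends on. The steps, wording, phone numbers, and URL are invented; this is not the actual Text, Talk, Act script or platform, and a real deployment would sit behind an SMS gateway rather than in-memory Python.

```python
# Illustrative sketch of a keyword-driven text flow (invented steps;
# not the actual Text, Talk, Act scripts or platform).
STEPS = [
    "Welcome! Gather your group of 3-4 people, then reply NEXT.",
    "Discussion question: When have you or someone you know needed support? Reply NEXT when done.",
    "Poll: On a scale of 1-5, how easy is it to talk about mental health in your community? Reply with a number.",
    "Reply with one action your group commits to taking this month.",
    "Thanks! See how groups across the country responded: https://example.org/results",
]

# Track each phone number's position in the flow (in-memory for the sketch).
progress: dict[str, int] = {}
responses: dict[str, list[str]] = {}

def handle_incoming(phone: str, text: str) -> str:
    """Return the next outgoing message for a participant's reply."""
    if text.strip().lower() == "start":
        progress[phone] = 0
        responses[phone] = []
        return STEPS[0]
    step = progress.get(phone, 0)
    responses.setdefault(phone, []).append(text)   # store poll answers / commitments
    step = min(step + 1, len(STEPS) - 1)
    progress[phone] = step
    return STEPS[step]

# Example exchange for one (hypothetical) group.
print(handle_incoming("+15555550123", "start"))
print(handle_incoming("+15555550123", "NEXT"))
```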

Second, technologies like natural language processing (NLP) and sentiment analysis can be used to get at least some sense of what people are thinking about a topic – not as a proxy for conscious engagement, but as a way to inform it. This is essentially what Canada’s “My G7” process has done, scraping and analyzing social media and other web content, and then presenting the most common themes and questions to citizens in 320 face-to-face and online deliberations.
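
A minimal sketch of this kind of analysis appears below. It is a generic illustration, not a description of how “My G7” actually processed content: the posts are invented, the “theme” step is a crude word count standing in for real topic modeling, and the sentiment step uses the off-the-shelf VADER scorer from NLTK.

```python
# Illustrative sketch of surfacing themes and sentiment from public posts
# (generic approach; not a description of how "My G7" actually worked).
# Assumes NLTK is installed; the posts are invented.
from collections import Counter
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

posts = [
    "Climate policy should be the top priority at the summit",
    "Worried the summit will ignore jobs in manufacturing towns",
    "Hopeful about new climate commitments",
    "Trade rules keep hurting small farms",
]

# Crude theme extraction: count recurring words (real systems use topic models).
stopwords = {"the", "should", "be", "at", "will", "in", "about", "new", "keep"}
words = [w.lower() for post in posts for w in post.split() if w.lower() not in stopwords]
print("Most common terms:", Counter(words).most_common(5))

# Sentiment: VADER's compound score runs from -1 (negative) to +1 (positive).
sia = SentimentIntensityAnalyzer()
for post in posts:
    print(round(sia.polarity_scores(post)["compound"], 2), post)
```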

Finally, the proliferation of hyperlocal online spaces represents another opportunity for scaling engagement. These forums are situated “where the people are,” they enable people to solve everyday problems like how to find lost cats, they help people build relationships by promoting the neighborhood barbecue, and they provide a way to mobilize around shared goals and concerns. And because these spaces are online (and contributing to yet another rapidly expanding pool of data), their members are reachable and networkable in ways that the PTAs and neighborhood associations of yesteryear were not. Thomas Jefferson famously argued that we should “divide the counties into wards,” each of them small enough for successful participation, but sufficiently linked to one another that they could function as a nation. Suddenly, we may actually have a country of linkable wards.


Conscious engagement can contribute to and capitalize on data. The practice of “participatory action research” (PAR), which was first developed in the 1970s, illustrates another direction for combining subconscious technologies with conscious engagement. In those kinds of research efforts, which have focused on a wide range of topics from substance abuse prevention to watershed management to disaster relief, citizens and researchers work together to set research goals, collect data, and analyze the results. A core part of the philosophy of PAR is establishing trust and shared agreements between citizens and researchers on the knowledge they want to gain, why they think it will be valuable, and how they can achieve it.

The capacity of citizens to engage in participatory action research is perhaps most obvious in health and health care. A wide range of devices now allows us to measure aspects of our own health (blood pressure, diet, exercise) or the health-affecting qualities of the environments we live in (air quality, water quality, workplace safety). Much of this data can be collected easily or even subconsciously. Existing PAR efforts show that when citizens and researchers trust each other and agree on why and how they are building knowledge, they can produce research that serves the public good. If PAR could be scaled up through initiatives like “On the Table,” so that communities can establish the necessary trust and coordination, it might fuel further breakthroughs in disease prevention and health promotion.

In addition to helping direct research and collect data, citizens can also use, analyze, and interpret data. The “open data” movement strives to give ordinary people access to information, particularly data collected and owned by governments and other institutions. Many open-data advocates and beneficiaries have been tech-savvy, entrepreneurial types who have used that data to create bus schedule apps or other helpful tools. But while there still may not be that many people with those sorts of skills, the number of people with the basic numeracy and analytical skills necessary to understand data-based research has grown steadily. The popularity of data-focused forms of journalism, from sports websites to FiveThirtyEight, illustrates the desire of many media consumers to use numbers to help them understand the world. So, while not many of us have the capacity or will to build an app, many of us are able to capitalize on the increasingly data-rich world we live in.

One way in which the data, and our capacity to use it, might be particularly valuable is in efforts to reduce inequality. One example is the history of participatory budgeting (PB) in Brazil, which has always had an explicit focus on equity. From an early stage, analyzing the equity of the process (who was participating, and was that group broadly representative of the population?) and the outcomes (how was funding distributed among neighborhoods?) was built into the functioning of PB. By and large, this has worked: examples of sustained PB in Brazil have helped alleviate poverty, expand access to public services, reduce corruption, raise tax compliance, increase the number of civil society organizations, and improve the social well-being of a wide range of citizens. As Wampler and Touchton argue, “Brazil has reduced inequality incrementally.” Some observers, such as Tiago Peixoto of the World Bank, have wondered whether American PB processes can replicate these achievements if they do not uphold the need to address inequality and incorporate ways of measuring it. Practitioners and researchers like Madeleine Pape and Josh Lerner have suggested a range of ways in which equity data might be used to inform PB and other engagement opportunities.
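
As an illustration of the outcome side of that equity analysis, the sketch below compares per-capita PB funding in lower-income and higher-income neighborhoods. All of the numbers and neighborhood names are invented; a real analysis would draw on actual budget and census data, and on participation records as well.

```python
# Sketch of the kind of equity check described above, with invented numbers:
# compare per-capita PB funding in lower- vs. higher-income neighborhoods.
neighborhoods = [
    # (name, median_income_usd, population, pb_funding_usd)
    ("Riverside", 28000, 12000, 480000),
    ("Hillcrest", 72000,  9000, 180000),
    ("Eastgate",  31000, 15000, 525000),
    ("Lakeview",  85000, 11000, 220000),
]

citywide_median_income = 50000

def per_capita(rows):
    """Total PB funding divided by total population for a set of neighborhoods."""
    total_funding = sum(funding for _, _, _, funding in rows)
    total_pop = sum(pop for _, _, pop, _ in rows)
    return total_funding / total_pop

lower = [n for n in neighborhoods if n[1] < citywide_median_income]
higher = [n for n in neighborhoods if n[1] >= citywide_median_income]

print(f"Per-capita PB funding, lower-income areas:  ${per_capita(lower):.2f}")
print(f"Per-capita PB funding, higher-income areas: ${per_capita(higher):.2f}")
print(f"Equity ratio (lower / higher): {per_capita(lower) / per_capita(higher):.2f}")
```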

Subconscious technologies provide us with ever-growing capacities to collect and analyze data. Conscious engagement informed by the history of PAR and PB could ensure that those capacities are being used in ways citizens want, and capitalize on their potential for promoting population health, addressing social and economic inequalities, and other worthwhile aims.


Subconscious + Conscious = Deliberation + Power? The Oregon CIR and the history of PB illustrate possibilities for combining conscious engagement and subconscious technologies to inform voting. By convening small, intensive deliberations or by providing useful data to large numbers of participants, they help ensure that voters understand the implications of their decisions. There are many combinations of engagement and data that could make the exercise of power more deliberative and deliberative processes more powerful.

Because blockchain and other technologies can potentially make it easier to validate voting processes by verifying voters’ identities, they may make direct democracy easier, more flexible, and more widespread. But that may not be the only way in which blockchain can affect voting. In most of the existing cases where deliberation and power are combined, the two experiences are separate: people engage with one another, or look at information that other engaged people have provided, and then go into the voting booth to make their decisions.
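
For readers unfamiliar with the underlying idea, the toy sketch below shows the tamper-evidence property that blockchain advocates point to: each record carries a hash of the one before it, so an edit to an earlier ballot is detectable. It is emphatically not a voting system; it omits identity verification, ballot secrecy, distributed consensus, and everything else that makes real elections hard, and all of the ballot data is invented.

```python
# Toy hash chain to illustrate the tamper-evidence idea behind blockchain
# voting claims; NOT a real voting system.
import hashlib
import json

def add_block(chain, record):
    """Append a record linked to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

ledger = []
add_block(ledger, {"ballot_id": "b-001", "choice": "Option A"})
add_block(ledger, {"ballot_id": "b-002", "choice": "Option B"})
print(verify(ledger))                       # True
ledger[0]["record"]["choice"] = "Option B"  # tampering...
print(verify(ledger))                       # ...is detected: False
```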


Asimov wrote I, Robot while the horrors of World War II were raging around him. In the book’s final story, the governance of the world has been entrusted to a set of computers, the Machines, so enormous and complex that no human can fully understand how they function or communicate. Conscious engagement continues, however, some of it in the form of anti-robot protests or attempts to sabotage the system.

The main protagonist of the book, the “robopsychologist” Dr. Susan Calvin, has been asked to examine a set of seemingly illogical decisions made by the Machines. She finds that the computers have internalized the Three Laws of Robotics so completely that they are making illogical decisions on purpose in order to expose or offset the actions of the saboteurs, thus preserving the ability of the Machines to make public decisions on behalf of the public. In Asimov’s imagination, the Machines are the custodians of a self-correcting system that can save human beings from themselves.

Around the world, people now seem increasingly frustrated with political systems in which elected officials, experts, and civil servants make decisions on behalf of the public. I, Robot is a work of fiction, and our reality may never come close to it, but we should use our own imaginations to think carefully about what might happen to us next. Immersed in the flow of technological innovation are some incredibly significant questions of power, governance, and rights. Even in our most optimistic vision of progress, it seems unwise to cede our judgement and autonomy to the Machines.

In an increasingly conscious/subconscious world, people have different expectations and can make new kinds of contributions to public decision-making and problem-solving. They can also be manipulated and maltreated in ways so subtle they cannot be easily recognized. To meet these challenges and opportunities, we need to build things – agreements, charters, technologies, institutions – that balance the needs of the individual with the good of society.

Can democracy become a self-correcting system that will save us from our worst impulses? Even in its faltering, incomplete forms, even in its beta versions, it remains our most ingenious invention. To keep it, we will need to upgrade it – and to do that, we must be as realistic, constructive, and imaginative as possible.

 

This essay is adapted from Rewiring Democracy, a publication of Public Agenda.
