Translation Technology Pilot: Early Lessons and Ethical Guidance from Rochester, MN 

 During a public forum in Rochester, Minnesota, on December 4, 2025, the Center for Democracy Innovation piloted a new accessibility tool to expand participation in civic conversations: live AI translation using a pair of smart glasses. 

The forum brought together elected officials and residents to discuss what makes civic engagement effective, the challenges people experience when participating in public meetings, and how communities can better elevate resident voices in decision-making. 

As part of the experiment, one resident wore the smart glasses throughout the roundtable discussions. The glasses provided live English-to-Spanish translation: as residents spoke in English, the wearer heard a spoken Spanish translation through the glasses. Overall, the participant found the tool helpful and encouraged continued experimentation, while also offering thoughtful feedback on its limitations. 

What Worked 

  • Meaningful access: The participant reported that the glasses helped him stay engaged and follow the discussion more easily. 
  • Accuracy: The participant estimated translation accuracy at roughly 80–90%, which felt sufficient for understanding both content and tone. 
  • Comfort and presence: The hands-free format allowed the participant to remain focused on the conversation rather than managing a separate device. 

Key Limitations and Lessons 

  • Battery life: With continuous translation, a full charge lasted about 60 minutes. While this aligned with the length of the roundtable, it would limit use in longer meetings without charging breaks. 
  • Translation lag: At times, translation continued after the conversation had shifted topics. In fast-moving discussions, this made it harder to stay oriented. 
  • Listening-focused access: With only one pair of glasses, the tool supported comprehension but not full bilingual exchange. The wearer could listen in Spanish but still needed to respond in English. 
  • Pace matters: The participant emphasized that the technology works best when speakers slow down and pause, allowing time for translation and response. 

One crucial insight was that for native Spanish speakers who do not speak English, rapid conversation shifts can significantly disrupt participation. When dialogue moves too quickly or pivots frequently, participants may lose their train of thought or choose not to interject. 

Ethical and Practical Considerations 

Alongside the Rochester forum, the project team explored the ethical and practical implications of using AI-powered live translation in public engagement settings. While the technology shows promise as a tool for accessibility and inclusion, early conversations with practitioners, researchers, and public-sector leaders underscored the importance of proceeding thoughtfully, especially when a public agency or facilitator introduces such tools. 

Engagement practitioners and experts on this topic emphasize the need for upfront consent from all participants when AI-enabled devices are used in public forums. This includes explaining: 

  • What the technology does and does not do 
  • Whether audio or visual data is stored 
  • Who has access to any recorded or processed information 

Even in public meetings, participants may have different expectations when technology capable of recording or analyzing speech is present. 

A key concern is how large language models handle conversational data, and what legal or ethical responsibility a municipality may assume if personal information is shared through such tools. While community conversations are inherently public, the possibility that audio data could be stored, analyzed, or used to improve proprietary AI systems raises governance questions, particularly in the absence of a formal municipal AI policy. 

Understanding default settings, data retention practices, and opt-out options was identified as essential before broader use. Practitioners note that proactive communication about these safeguards can help build public trust. 

While AI translation can reduce language barriers, some have raised concerns about whether live translation may unintentionally mediate or appropriate a participant’s voice in ways that feel different from human interpretation. This reinforces the importance of facilitation practices that slow the pace of conversation, allow time for response, and center the agency of participants using the technology. 

Early Guidance for Responsible Use 

The Rochester pilot suggests that AI translation tools may be most effective when they are: 

  • Used in scoped, time-limited settings 
  • Paired with intentional facilitation and slower-paced dialogue 
  • Framed clearly as assistive tools, not replacements for inclusive meeting design 
  • Introduced transparently, with opportunities for participants to ask questions or opt out 

We are encouraged to continue piloting the tool, as the lack of real-time translation remains a major access barrier in many civic and health-related settings. Such experimentation still requires clear ethical standards, informed consent, and ongoing evaluation to guide its use. 

The resident noted that additional pairs of glasses could enable two-way translation and better support multilingual dialogue, though this would require more resources and coordination. Higher-end versions of the glasses can also display visual translation on the lenses, which could create an additional accessibility use case for people who are hard of hearing. While the technology is still in its early days, the Rochester experience demonstrated real potential to improve language access, particularly when paired with thoughtful facilitation and inclusive meeting design. 
