Conversational AI for Non-Human Representation in Decision-Making

SHU YANG LIN

Institutional Problem or Research Question

Describe the open institutional problem or research question you’ve identified, what features make it challenging, and how people currently deal with it.

Solving today’s complex, multi-stakeholder challenges requires input from diverse perspectives. Although non-human entities, such as natural features, cannot directly express their perspectives, they have interests and can possess knowledge and experience that offer insights on what is important and worth protecting. Integrating these perspectives into decision-making processes could improve outcomes for both human and non-human entities.

Today, non-human perspectives are frequently disregarded in collective decision-making processes, leading to adverse consequences such as the exacerbation of climate change, endangerment of species, and degradation of ecosystems. These issues, in turn, can negatively impact human well-being, for instance by depleting crucial resources like clean air and water.

Existing efforts to incorporate non-human perspectives into decision-making – such as advocacy by civil society groups – often rely on human interpretation and subjective judgments. This can complicate efforts to capture the complexity and interconnectedness of the relevant ecosystems and to incorporate their perspectives.

Some legal systems have begun to recognise the legal rights of rivers, such as the Whanganui River in New Zealand and the Ganges and Yamuna Rivers in India. This progressive shift in legal and philosophical thinking provides a foundation for the goals of this project and highlights the growing importance of including non-human perspectives in decision-making.

Addressing these challenges will require adopting more holistic, systems-based approaches to governance. Doing so could ensure that processes are comprehensive and inclusive, and foster a more sustainable and equitable future for all entities.

Possible Solution

Describe what your proposed solution is and how it makes use of AI. If there’s a hypothesis you’re testing, what is it? What makes this approach particularly tractable? How would you implement your solution?

To address the neglect of non-human perspectives in decision-making, we need a holistic, systems-based approach to governance. Such an approach should recognise the interconnectedness of natural systems, with the goal of promoting the well-being of humans and non-humans. Doing so requires a shift in mindset and values, and the development of new decision-making tools and methods.

AI can facilitate dialogue between humans and non-human entities. Such discussions can serve as an interface of care, allowing non-human entities to express views and concerns. Uses for conversational AI include:

Data collection: conversational AI can collect data on the well-being of non-human entities through voice or text-based conversations. This information can inform decision-making processes and deepen our understanding of the impact of human activities on non-human entities.

Cultural translation and communication: conversational AI can enable non-human entities to express perspectives, state interests, co-create plausible solutions with other entities (e.g., in the context of a deliberative workshop), and/or communicate with decision-makers directly.

Education and awareness: conversational AI can educate people about the importance of non-human perspectives in decision-making and raise awareness about the effects of human activities on non-human entities.

Simulation: conversational AI can model different scenarios and their effects on non-human entities, helping decision-makers comprehend ecosystem interactions.

First, we plan to focus on cultural translation and communication (the second bullet point above), and to open up the project to others who might find it interesting. We aim to create a conversational AI interface that could eventually be integrated into decision-making and policy-making processes.
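To make the cultural translation idea concrete, below is a minimal sketch of such an interface. The entity name, the facts, and the keyword-matching stub backend are all hypothetical placeholders: in practice the `llm` callable would be backed by a large language model, and the fact base would come from stakeholder data. The key design point is that the entity's "voice" is grounded in documented knowledge rather than free-form generation.

```python
from dataclasses import dataclass, field

@dataclass
class EntityPersona:
    """Grounds the conversational interface in stakeholder-supplied facts,
    so the entity 'speaks' only from documented knowledge."""
    name: str
    facts: list[str] = field(default_factory=list)

    def system_prompt(self) -> str:
        fact_lines = "\n".join(f"- {f}" for f in self.facts)
        return (
            f"You speak on behalf of {self.name}. Answer only from the "
            f"facts below; say 'unknown' otherwise.\n{fact_lines}"
        )

def ask_entity(persona: EntityPersona, question: str, llm=None) -> str:
    """`llm` is any callable (system_prompt, question) -> str.
    The default is a toy whole-word matcher standing in for a real model."""
    if llm is None:
        llm = lambda sys, q: next(
            (f for f in persona.facts
             if any(w in f.lower().split() for w in q.lower().split())),
            "unknown",
        )
    return llm(persona.system_prompt(), question)

# Hypothetical entity and facts, for illustration only.
river = EntityPersona(
    name="the Example River",
    facts=[
        "Dissolved oxygen has declined 15% since 2015.",
        "Eel migration is blocked by the lower weir.",
    ],
)
print(ask_entity(river, "What is blocking eel migration?"))
```

Separating the persona (the grounding data) from the backend (the model) lets advocacy groups curate the fact base independently of whichever AI system renders it conversational.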

Method of Evaluation

Describe how you will know if your solution works, ideally at both a small and large scale. What resources and stakeholders would you require to implement and test your solution?

We plan to test this interface in the context of decision-making workshops and policy-making processes. We see several potential methods of evaluation for our proposed solutions.

Small scale:

Interviews: interview workshop participants before and after a workshop to gauge their understanding of non-human perspectives and how this affected their decisions.

Surveys: use feedback forms to gather participants' thoughts on the inclusion of non-human agents in decision-making processes, and to assess how it influenced outcomes.

Observation: observe workshop dynamics and the interactions between human and non-human agents to see if there was genuine dialogue and if the non-human perspectives were taken seriously.

Large scale:

Monitor policy outcomes and assess whether they reflect a more holistic, systems-based approach to governance that promotes the well-being of non-human entities.

Analyse data on the well-being of non-human entities to see whether more inclusive decision-making processes led to improvements.

Conduct surveys or interviews with stakeholders and decision-makers to gauge their understanding and acceptance of these new approaches to governance.
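As a concrete illustration of the survey-based checks above, a minimal pre/post comparison might look like the sketch below. The Likert question and the scores are hypothetical; a real evaluation would also need a significance test and a larger sample.

```python
from statistics import mean

# Hypothetical 1-5 Likert responses to "How well do you understand the
# river's interests?", collected before and after a workshop.
pre  = [2, 3, 2, 1, 3, 2]
post = [4, 4, 3, 3, 5, 4]

def mean_shift(pre_scores: list[int], post_scores: list[int]) -> float:
    """Average per-participant change; positive = improved understanding."""
    return mean(b - a for a, b in zip(pre_scores, post_scores))

print(f"Mean shift: {mean_shift(pre, post):+.2f} points")
```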

Risks and Additional Context

What are the biggest risks associated with this project? If someone is strongly opposed to your solution or if it is tried and fails, why do you think that was? Is there any additional context worth bearing in mind?

The project carries three main risks: a lack of legitimacy in integrating the conversational AI interface into the decision-making process, inaccuracies in the interface’s representation of non-human entities, and human resistance to the inclusion of non-human perspectives.

Next Steps

Outline the next steps of the project and a roadmap for future work. What are your biggest areas of uncertainty?

Key next steps include development and testing of the conversational AI interface, and the implementation of the interface into decision-making and policy-making processes.

We aim to begin by creating an interface for a specific non-human entity and then working with stakeholders, such as local advocacy groups, who have been collecting data on and speaking for this entity. Through such consultations, we hope to arrive at an ethical interface of care that can suitably represent the entity. For real-world implementation and testing, we will also seek opportunities to integrate the interface into decision-making processes.

Future work would include continual improvement to the interface and shaping a new approach to governance – a form of democracy that directly incorporates non-human perspectives. Aware of the rapid advancement of AI, we are also eager to share experiences and learnings from this project in the hope of contributing more broadly to the development of AI standards.
