So here’s the thing… AI is both awesome and intimidating!
It’s wonderfully amazing, exciting and fun, and also worrying, overwhelming and scary… and it’s coming towards us at a pace our brains are finding hard to keep up with.
Just like working online was in 2020, finding a way to embrace AI in our engagement and facilitation practice is one of the most important questions of our time! We will remember 2024 as the second ‘shakabuku’ (a swift spiritual kick to the head that changes our perspective on things) in a decade!
Our facilitators are already making the most of this new AI era – delving in while anchoring everything we do with AI to our core principles and foundational values. We’re already seeing incredibly exciting outcomes for participants and processes. As we work with this new technology, we’re constantly learning and testing and challenging ourselves as we go.
Let’s explore some of what’s coming up for us so far, including how our natural human biases are coming into play, the risks and opportunities we’re identifying, and the three key principles we see as critical going forward.
cognitive biases
As our brains grapple with this technological disruption, we know from all our engagement work that there are some critical defaults (aka cognitive biases) that can short circuit our thinking (sometimes for good and sometimes not). So, let’s unpack a few biases that might crop up and what they mean:
Anthropomorphism Bias
This is when we attribute human characteristics, intentions or emotions to AI, thinking of it more like a human than a machine. It's like assuming your chatbot has a bad attitude on Mondays, or even just the act of saying ‘please’ and ‘thank you’ to the machine (a normal human behaviour), which is, in essence, ‘humanising’ it.
Automation Bias
We tend to over-rely on automated systems, thinking they're infallible. It's the digital equivalent of putting all your eggs in one basket and then handing that basket to a robot.
Technophobia
The fear side of the tech coin. It's a bias against AI and technology, assuming they'll lead to negative outcomes like job losses or Terminator-style scenarios. It's the "technology is moving too fast, and I don't like it" vibe.
Technophilia
The opposite of technophobia, this is an uncritical love and enthusiasm for technology, believing AI can solve all our problems. It's like seeing every new gadget and thinking it's the answer to world peace.
Confirmation Bias
An oldie but a goodie! Not exclusive to AI, but definitely relevant. We tend to favour information that confirms our pre-existing beliefs about AI, whether those are utopian or dystopian visions. If you think AI is going to save the world, you'll notice every positive article and ignore the downsides, and vice versa.
AI Effect
This is a quirky one. Once AI successfully solves a problem, we no longer consider it AI. It's the "Oh, that's not so special" effect, diminishing AI achievements because they become part of the norm.
Moral Machine Bias
When we debate the ethics of AI decisions (like in autonomous vehicle dilemmas), our biases can shape what we think AI should or shouldn't do, often reflecting our own moral and ethical frameworks. A great example I learnt at a conference last week: with driverless cars, we (society) accept hundreds of accidents daily from human error, but after one mistake from a driverless car we never trust the machine again.
Bias Bias
Last but not least, the recognition that AI systems can inherit biases from their human creators or the data they're trained on. It's the realisation that AI isn't just a mirror; it's a mirror that can reflect and even amplify our own biases.
Risks and opportunities
To confidently walk into this AI future, it feels important to consider both the risks and opportunities it brings to our work. Here are a few:
Bias and Inequality: AI can unintentionally amplify biases present in its training data, potentially leading to unequal representation or treatment of different community groups. This can skew engagement efforts and reinforce societal inequities, rather than bridging divides.
Enhanced Data Analysis: AI can process and analyse vast amounts of data from engagement activities, identifying patterns, trends, and insights that might be missed by humans. This can lead to more informed decision-making and tailored community initiatives.
Privacy Concerns: AI systems often require large datasets to learn and improve, raising concerns about the collection, storage, and usage of personal data from community members. Mishandling this data could breach privacy and damage trust.
Overreliance and Depersonalisation: Leaning too heavily on AI for community facilitation can lead to depersonalised engagement processes. The nuanced understanding, empathy, and interpersonal skills humans bring to these roles are hard for AI to replicate, risking a loss of the personal touch that fosters genuine connection and trust.
Efficiency and Scalability: AI can handle routine tasks, manage large-scale data collection and analysis, and even facilitate initial stages of community engagement, freeing up human facilitators to focus on deeper, more meaningful interactions. This can expand the reach and impact of engagement efforts without proportionally increasing the workload.
Accessibility and Inclusivity: AI-driven tools can make community engagement more accessible to a broader audience. For example, real-time translation services and accessibility features for those with disabilities can help ensure everyone has a voice.
Miscommunication and Misinterpretation: AI, especially in its current state, might misinterpret complex human emotions, sentiments, or cultural nuances, leading to misunderstandings. Relying on AI for communication or data interpretation in sensitive community contexts could misrepresent community voices or issues.
Innovative Engagement Methods: AI opens up new avenues for engaging communities, such as using predictive models to identify emerging community concerns, deploying chatbots to provide information and gather feedback 24/7, or providing custom GPTs to help community members analyse data. These tools can create more dynamic, responsive, and engaging processes.
Key principles
So, to keep our thinking balanced in this new age, we’ve found it useful to clarify some big principles about AI and our use of it going forward. We’re fully aware that these principles work for now, but they will definitely need to evolve as we learn more and as AI itself adapts and changes.
Here are three of the key principles we’re seeing as important:
Fairness
Ensure AI systems do not embed or propagate biases that could lead to unfair treatment of individuals or groups.
Transparency
Maintain transparency about the use of AI, including the algorithms, data sources, and decision-making processes.
Privacy and Security
Uphold stringent measures to protect personal and sensitive data, adhering to applicable data protection laws (e.g., GDPR, CCPA).
And in case you are wondering… yes, AI helped me with this article :)
WANT TO JOIN US ON THIS AI JOURNEY?
If you’re grappling with an engagement issue, about to embark on an engagement process, or simply want to touch base about what might be possible, we’d love to chat about the ways we could integrate AI into your project or process.
stay in the know + get lots of free engagement stuff
We share tips, ideas, news, free resources and more through our monthly e-newsletter.