Programmed Values: The Role of Intention in Developing AI

The advent of artificial intelligence (AI) seems to have sent shockwaves through the world. Not a day passes without my clients and students contemplating its effects. James, a young journalist, told me, “Maybe AIs will be wonderful assistants when I do research,” while Ravi, an electrical engineering student, is less optimistic. “In five years, our society will be completely changed by the influence of AI.” He adds, “The problem is that AI will develop in ways we can’t foresee, with exponential speed.” He looks downcast. “I find that deeply depressing, and it leaves me sleepless at night.”

AI is a widespread source of anxiety. Many worry that their jobs will become obsolete since AI can do some tasks faster and better than humans. Tristan Harris, of the Center for Humane Technology, warns that privacy can easily be violated in many areas of our lives. He also suggests that AI systems can aggravate existing societal biases and discrimination. Malicious actors can exploit vulnerabilities in AI systems to manipulate public opinion, with negative psychological impacts on marginalized individuals and communities. He is especially concerned about AI’s potential to increase polarization: algorithms designed to capture attention could radicalize moderate consumers.

Juan, a young scientist from our local university who is interested in existential and spiritual concerns, offers a more hopeful view. “I wish we could program a longing for wisdom and goodness into our AI. Then AI could influence our society in a positive, compassionate way.”

Leike, Schulman, and Wu describe the AI “alignment problem”: the degree of correspondence between the values and motivations of humans and AI. They tell us, “Unaligned AGI could pose significant risks to humanity, and solving the AGI (Artificial General Intelligence) alignment problem could be so difficult that it will require all humanity to work together.”

Philosopher Nick Bostrom notes that “to build an AI that acts safely while acting in many domains, with many consequences, including problems engineers never explicitly envisioned, one must specify good behavior in such terms as ‘X, such that the consequence of X is not harmful to humans.’” In brief, many AI researchers and thinkers believe good, human-compatible intentions must be explicitly built into how AI systems are designed. AI systems carry the intentions of the people who create them, whether coded intentionally or not.

AI reflects the intention of the person who created it. If the intention is to make a profit, then that is what the AI will do. If the intention is to replace a human’s work, then that is what the AI will do. If the intention is to mimic a human’s voice and expression, then that is what the AI will do. AI has no inherent sense of caring, intuition, or intrinsic conscience.

To tackle the enormous problem of how to integrate AI into society ethically and safely, we need to build both alignment and intention. Alignment refers to a state of congruence between values, beliefs, and goals; it involves bringing our actions and plans into accord with a deeper sense of purpose in life. Intentions play a crucial role in shaping our experiences and outcomes, helping us stay focused, motivated, and in tune with our goals, even when we face challenges. By being fully present with our inner values, we bring our intentions into alignment with them. When alignment and intention work together, they create a powerful positive synergy.

That is why we have to clarify our intention, tune into our intuition, and be conscious when programming AI. We need the utmost clarity and self-awareness, whether as individuals, groups of scientists, societies, or international decision-making bodies.

Buddhist psychology can help us here. It emphasizes the importance of cultivating wholesome intentions. Wholesome intention originates in our generosity, loving kindness, compassion, and the absence of harmful desires. This leads indirectly to positive outcomes by promoting behaviors aligned with those intentions. Meditation and mindfulness allow individuals to make conscious choices that lead to well-being and spiritual progress.

It is important to note that Buddhist psychology does not focus solely on intentions but also considers the actual consequences of actions. Intention alone is insufficient to determine an action’s ethical value, as the outcome and impact on oneself and others are also considered. However, intention serves as a crucial starting point and a significant factor in determining the ethical quality of an action.

I recommend that we lean on positive psychology and Buddhist psychology to develop a playbook for making AI useful to individuals, groups, and societies in ways that support our health and well-being. The potential threats of unaligned AI are undoubtedly immense, but we are not powerless in their midst. Carefully aligning and encoding positive human values in AI systems requires us to understand our own values, intentions, and motivations. Here, Buddhist psychology can serve as a guide, offering practices to listen, discover our intentions, and align them more deeply with our best interests.

References

Bostrom, N., & Yudkowsky, E. “The Ethics of Artificial Intelligence.” https://nickbostrom.com/ethics/artificial-intelligence.pdf

