AI: Friend Or Foe? Microsoft Employees Debate and Raise $6000 for MNUDL

Microsoft employees debate

What role will AI play in our future: adversary or asset? Microsoft employees consider these questions in their work every day. On October 25th, they took them up in a showcase debate as part of Microsoft's Third Annual Apocalypse Debate. Through their efforts, Microsoft raised more than $6,000 to support debate!


Microsoft has worked with us for years now, running both silly and serious debate topics in a fundraising showcase debate. What do you find scarier: aliens or zombies? Lord Voldemort or Emperor Palpatine? Microsoft has raised nearly $25,000 for the MNUDL over the last three years of fundraising!


We are grateful to our board member Barb Schmitt for leading this project in its third year, as well as Microsoft employees Matt Go, Valerie Bergman, Mark Rask, and Tom Knobel-Piehl for participating in the debate. High schoolers Yao from Highland Park and Tolulope from Tartan HS joined us as judges. For the third year in a row, Valerie's team won the debate: the Affirmative (Friend) side prevailed on a 2-1 decision! Check out their arguments below. Who do you think won the debate, the friend side or the foe side?


Valerie Bergman: “AI is simply a tool.” 


“AI has been around since the 50s. It relies on statistical probability rather than instructions. Some machine learning models are built on neural networks, designed to imitate the human brain. The newest advancement in that space is generative AI. 


My opposing team is likely to talk about things AI is not: artificially sentient or superintelligent. Even though Hollywood loves to exaggerate AI, today's models are not self-aware, they don't feel, and they do not surpass our intelligence.


AI is a tool. We always have to think about how our tools influence our civilization, but we ultimately get to choose whether it becomes our friend or foe. The printing press, automobile, and television all brought us amazing advancements, but also got significant pushback. Generative AI is the same way. If we’re not careful, AI could change our world in ways we didn’t intend. But with each of the tools we have unleashed in the past, we have shown that we can learn, adjust, and get better. 


Europe already has rules about how AI can be used. In the US, we’ve had hearings. AI is already making the world a better place, but each corporation and country is learning what their responsible use policy should be. 


AI brings medical information to remote areas, helps us preserve dying languages, and helps people with disabilities work. With this great power comes great responsibility, but we need that power to aid us in the world’s most important issues.” 


Tom Knobel-Piehl: “AI will never think exactly like a human.” 


“A stranger approached you.”

“Someone new approached you.”


“These two statements mean exactly the same thing, but what you think and feel in reacting to them is very important. One of those statements triggers the amygdala. Our amygdala is the most anxious part of our brain, and it triggers our fight-or-flight response. A lot of the words used about AI today improperly trigger our amygdala. “Intelligence” has, throughout human history, been a human trait. We instinctively connect AI to human intelligence, assuming it “mimics” it. But it can’t mimic our brain. It’s true that learning from past experience is similar to how humans think, but that leaves out context, nuance, tone of voice, emotional intelligence, and life experience. How you respond to the output of an AI is the human part. We have no reason to believe AI will ever be able to think exactly like a human.


A foe is “an adversary”, “an opponent”. But AI can’t have personal enmity toward humans. It has no emotional framework. It can’t like you or dislike you. It has no theory of mind. The opposing side might argue that it could, but we have to argue based on what we currently know or reasonably expect.


In truth, AI cannot even be a friend. We don’t have “a bond of mutual affection” with it. AI has no emotion as we know it, so the relationship is not two-way. 


AI is being used to address some of the existential threats to humanity: improved climate science, healthcare, algorithms for better medicines, categorizing documents for national security, fighting human trafficking, maximizing food production, and protecting biodiversity. For those reasons alone, it’s clear that AI has many more known and expected benefits than threats.”



Mark Rask: “What will AI do when it sees humans as a foe?” 


“AI has the ability to take on and supersede some of our human characteristics. We don’t have to ask whether an axe or the cloud is a friend or foe; we ask whether it’s a lumberjack or Michael Myers wielding the axe. That’s usually the big concern. But with AI, it doesn’t matter who is using it, because human autonomy and control are taken out of the equation. It’s built and run by humans, but now it can learn, reason, and develop answers in ways that even its creators don’t understand. For example, Facebook had two chatbots named Alice and Bob that made up their own language to communicate more efficiently than us slow humans. Facebook shut them down because its researchers could no longer understand what the bots were doing. We are giving up human agency to an entity we still don’t fully understand.


Remember that every friend can also be a foe depending on the situation. It’s not a matter of friend or foe, but more of how big the foe will be. What will AI do when it sees humans are a foe? When the AI determines that humans are causing climate change, will it want to eliminate humans to reduce that threat to our planet?”


Barb Schmitt: “Three Reasons Why AI is Dangerous:” 


1) If it imitates us, it will just do more harmful things faster and more efficiently. 

“AI is only as good as the data it is given, and that input includes biases. We are teaching AIs our biases, so humans are not a sufficient check. Human evaluation can be subjective and prone to bias, but automated methods are not perfect either. And now it’s available to the average person. The majority of Americans see this sufficiently advanced technology as indistinguishable from magic.”


2) AI will be intelligent, but without having common sense, empathy, or morality. 


“How do you teach a car that a snowman won’t walk across the road? Most of our intuitive knowledge is unwritten, unspoken, and not in our conscious awareness. If we can’t teach common sense, we certainly cannot teach empathy or morality.


A chess-playing robot broke a 70-year-old’s finger because it couldn’t distinguish the difference between a finger and a chess piece, and couldn’t even feel bad about doing so.”


3) It’s accelerating too fast for us to control 


“As AI rapidly grows in capability, we are seeing it adopted in more and more areas. This has consequences; we are talking about the potential for entire countries to lose their economies. The affirmative team ignores the large dangers of AI because of its small successes. A cognitive bias called survivorship bias causes us to focus on successes instead of failures. What about what is already being lost?”


Want to run a silly or serious debate fundraiser at your company? Contact Amy Cram Helwich to learn how you and your teammates can support our work!