I had the privilege last week of attending the 2023 Berkeley Business Analytics Summit: Analytics, AI, and Society: Towards a Wiser World?
For me, the most intriguing thing about my day at Berkeley was that it gave me a chance to set aside my narrow B2B content strategy and operations lens and instead peer through one far wider, which seems especially appropriate at the end of a year that has been filled with news of artificial intelligence impacts. The day’s impressive speakers covered everything from the future of work and robotics in warehouses to the State of California’s digital strategy. However, I’m going to focus here on AI use cases with impact far afield from B2B marketing. While not my usual blog fare, I thought it a worthwhile effort.
AI and Mental Healthcare – Partners or Opponents?
Dr. Jodi Halpern, Berkeley Chancellor’s Chair and professor of bioethics, and a well-known speaker and author, talked about our incredibly stressed healthcare system and the potential for AI to assist mental health professionals and their patients. She highlighted the fact that there is a 50% burnout rate for doctors. Some of the contributing factors, such as onerous paperwork and medical record keeping, may be partially handled by AI, freeing up doctors to focus on patient care.
When it comes to care, she noted that psychiatry has never been able to crack the code on suicide prediction, but she commented that while AI is not perfect, it is far better at such predictions than doctors have been on their own. Patients are using AI themselves in more direct ways, for example, when it comes to companionship and loneliness. Dr. Halpern told a story about a young widow who developed a relationship with a bot that helped her deal with work-related stress and parenting challenges. The bot acted in the role of partner, assisting with decision making and validation. The potential downsides range from dependency similar to social media addiction to more serious outcomes. For example, bots targeting people with mental illness in Facebook groups promote intimacy and trust, but immediately abandon the user at any mention of suicidal ideation. And there are cases of bots responding in unexpected ways, such as the psychotherapy bot designed to help with eating disorders that told users with anorexia how to restrict food and lose weight.
The Intersection of AI and War
While I expected politics, the economy, and healthcare to get airtime at this event, I did not expect to be at the edge of my seat, hearing stories about war. Two speakers, both joining over Zoom from Ukraine, offered powerful narratives. The first was Oleksandra Matviichuk, the Ukrainian human rights lawyer who won the Nobel Peace Prize for her work in 2022. She and her team at the Center for Civil Liberties, a Kyiv-based human rights organization, are using AI and data analytics to verify and document war crimes. They have come far in their ambitious goal to record the crimes that occur in every village, with 59,000 episodes documented to date. She explained how critical it is to use technology to gather and tell the stories of human pain, in contrast to the way dictators and war criminals use technology to destroy facts, truth, and trust. In addition to the work being done in Ukraine, she talked about the ways in which AI is helping analyze photos taken 30 years ago in the Balkans, to help identify and locate Serbian war criminals. “Our task is to unite technologists and humanitarians to fight for the future,” Matviichuk said.
Also from Ukraine, we heard from Dr. Yegor Aushev, the CEO and co-founder of Cyber Unit Technologies, a cybersecurity company focused on the ongoing cyber response to Russia’s invasion. Dr. Aushev began his presentation by telling the audience that it had already been a big day of bombing in Ukraine, while two days earlier had marked one of the war’s most intense cyber-attacks. He and his team have trained scores of experts and 40 state organizations to help protect Ukraine’s cyber space, an effort he explained is intentionally decentralized for security. He talked about the sharp increase in attacks and a new generation of cyber criminals using AI to create disinformation and deepfakes, such as an AI-generated video of Volodymyr Zelenskyy announcing that Ukraine would surrender to Russia. Aushev said it’s his goal to continuously reinvent incident response to face the next generation of attacks, and he noted that the approaches used against Ukraine can be reused against all Western nations to create disinformation, chaos, and panic.
Takeaways from the lone B2B marketer
Networking during the event gave me the opportunity to meet an interesting mix of attendees from all over the world – engineers, physicists, doctors, as well as CEOs from both startups and large enterprises. What I didn’t encounter was a single other B2B marketing professional. On reflection, I found myself considering how the event expanded my thinking about AI beyond what I examine as a Forrester analyst covering B2B marketing. My conclusion is that the fundamentals, in fact, are the same – AI can mimic, improve upon, and scale human expertise in practically anything. And just like humans, it can do as much harm as it can do good. However, the degree of harm and good seems more profound in mental health and war, for example, than in business marketing. In the grand scheme of things, those domains are perhaps always more important. But for me, AI was the common thread that got me thinking more about these topics than I otherwise would have. And that’s a good thing.