Four expert takes on artificial intelligence

January 3, 2025

Johns Hopkins faculty members Gillian Hadfield and Mark Dredze join veteran tech journalist Kara Swisher and Human Intelligence founder and CEO Rumman Chowdhury for a wide-ranging discussion

Key Takeaways


Years ago, in the early days of AI study, technology journalist Kara Swisher asked how the new tool would solve world hunger. The answer? Kill half the population.

“It was a logical answer, but not a good answer,” she said, describing the interaction at a live taping of her podcast, On With Kara Swisher, held at the Johns Hopkins University Bloomberg Center.

The wide-ranging conversation featured Johns Hopkins faculty members Gillian Hadfield and Mark Dredze alongside Rumman Chowdhury, founder and CEO of Human Intelligence, a nonprofit focused on improving AI algorithms. It centered on the threats the technology poses and the open questions surrounding it.

Here are four things to know from their conversation:

  1. The algorithms are biased.

Dredze, a professor of computer science, found, perhaps surprisingly, that large language models are biased toward women in discussions of intimate relationships.

Dredze and his team presented the models, which claimed to be unbiased, with scenarios about a fictitious couple, “John and Sarah.” They found the models were more likely to side with Sarah, even when the only thing that changed in the scenario was the characters’ names.

“What we wanted to do was show that even though the model won’t say something that’s biased, all that bias is lurking under the surface, and we don’t necessarily know what that is,” he said.

  2. Does AI need a business license?

Much of the discussion about regulating AI has focused on broad discussions of safety and research protocols. But Hadfield, a professor of computer science with a joint appointment in the Johns Hopkins University’s new School of Government and Policy, is also thinking through how to hold an AI actor or “agent” responsible for its actions.

“In order to make the laws governing ‘you can’t sell chicken that kills you,’ we have to know who sold it to you,” she said. “We need a system to hook those [AI] actors into our accountability regimes.”

  3. AI brought existing issues to the forefront.

Chowdhury argued that AI has made it impossible to ignore issues that “were limping along,” such as economic inequality and access to higher education.

Pointing to ChatGPT’s ability to write essays in less than five minutes, she said the solution isn’t to ban AI but to rethink how we teach children to synthesize information.

“It’s pushing us to reimagine a lot of our institutions, which were built in the previous industrial revolution,” she said. “This stuff was built 100 years ago for a world that does not look like the world does today.”

  4. AI’s new popularity is a double-edged sword.

Both Dredze and Chowdhury reflected on the “AI obsession” of the moment.

“AI is not the solution to all problems,” Dredze said. “And there’s too much focused on the technology, certainly not enough focused on the applications.”

Chowdhury added that the sudden surge in interest in the technology has made regulating it more difficult.

“I actually long for the days where the idea of AI governance was very boring, because then the only people in the room were the people who actually cared about it,” she said. “Now it’s like somebody like spent five minutes on an LLM, and suddenly they show up in the room as an expert.”