5 things every reporter needs to know about AI
Understanding the potential—and pitfalls—of artificial intelligence is crucial for journalists today
Key Takeaways:
- AI isn’t confined to tools like ChatGPT; it has much broader applications and can be incorporated into a range of technologies across many industries.
- While AI presents significant opportunities in fields such as health care and public policy, challenges around data bias must be considered.
- The growing number of myths and misconceptions surrounding AI can distort public perception, underscoring the need for informed reporting.
The rapid rise of artificial intelligence can feel both thrilling and alarming. With AI seemingly everywhere today, it’s critical that reporters grasp the complexities, implications, and nuances of this transformative technology across a wide range of sectors and issues, from health care to public policy and governance to climate change.
“AI has taken over our lives in every way you can imagine, and the press has a very important role in how this technology is understood and interpreted by the public at large,” Rama Chellappa, interim co-director of Johns Hopkins University’s Data Science and AI Institute, said at a recent Hopkins Bloomberg Center event.
As AI continues to evolve, here are five things every reporter should know:
- AI is more than just ChatGPT
When most people think of AI, their minds immediately go to tools like ChatGPT. However, AI is much broader and far more versatile than a simple chatbot or virtual assistant. It encompasses a multitude of technologies, including machine learning, natural language processing, and autonomous systems, and it’s reshaping fields from health care to public policy to climate change and urban planning.
“ChatGPT and other large language models are a form of AI, but AI is much, much more broad than that,” said Mark Dredze, interim deputy director of JHU’s Data Science and AI Institute. “AI is not just about large language models; it’s about diverse applications in fields like health care and public policy. ChatGPT shouldn’t be identifying sepsis or driving your car, but AI is doing those things.”
- AI presents opportunities and risks in health care
AI has the potential to revolutionize health care, improving patient health, streamlining workflows, and reducing costs. For example, AI can play a role in early detection of conditions like sepsis.
“There is a huge value creation opportunity with AI, especially in improving sepsis outcomes—one of the leading causes of hospital mortality,” Suchi Saria, a leading AI researcher and associate professor at Johns Hopkins University, said.
However, she also pointed out that challenges remain, noting that data bias, the complexity of model training, and the necessity for human oversight are significant concerns. “We are at an inflection point—there’s a lot of hope, but there’s also a lot of work to be done to solve real problems and gain user trust,” she said.
- AI myths persist, distorting public perception
Several misconceptions about AI persist, including the idea that it is a new technology, when in fact it has been around for decades. What has changed is that recent advances have made AI better at performing tasks, and programs such as ChatGPT have made it easier for the average person to use AI on a daily basis.
Other myths include that AI will keep getting better on its own and that large language models have mysterious emergent abilities that we cannot control—both of which give the technology credit for far more agency than it actually has.
“The ways the systems will develop are completely dependent on the choices that we make,” such as which datasets, applications, or regulations are used, Dredze said. “These models are just computer programs, and the way they behave depends on the data they use. Believing that the model taught itself gives it a degree of autonomy it does not have.”
- AI isn’t 100% accurate or unbiased
AI systems are not infallible and can reproduce human prejudices or provide incorrect results due to biased datasets, an inadequate amount of data, and/or unexpected inputs the technology has difficulty deciphering, said Anton Dahbura, executive director of the Johns Hopkins Information Security Institute.
“It’s really hard to get rid of that kind of bias because the machine learning model, even if you remove one biased parameter, can find proxies for it,” he said. “We have to be more thoughtful about modeling and where AI fits in, so that we don’t rely too much on AI, and we also remember where to put the human in the loop.”
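Dahbura’s point about proxies can be illustrated with a toy sketch (the data and the rule below are entirely hypothetical, not drawn from any real system): even after a sensitive attribute is deleted from a dataset, a correlated “neutral” feature can let a model reconstruct it.

```python
# Toy illustration of proxy bias. All data here is invented for the example.
# Each record has a sensitive attribute and a seemingly neutral proxy feature
# (think of a zip-code-like value) that happens to correlate with it.
records = [
    {"sensitive": 1, "proxy": 90210, "score": 0.9},
    {"sensitive": 1, "proxy": 90212, "score": 0.8},
    {"sensitive": 0, "proxy": 10001, "score": 0.7},
    {"sensitive": 0, "proxy": 10003, "score": 0.6},
]

# "Debiasing" by simply deleting the sensitive column...
scrubbed = [{k: v for k, v in r.items() if k != "sensitive"} for r in records]

# ...does not help if a simple rule on the proxy recovers it anyway.
def infer_sensitive(record):
    """A stand-in for the correlation a trained model can learn implicitly."""
    return 1 if record["proxy"] >= 50000 else 0

recovered = [infer_sensitive(r) for r in scrubbed]
original = [r["sensitive"] for r in records]
print(recovered == original)  # the proxy leaks the removed attribute
```

The sketch shows why removing one biased parameter is not enough: as long as a correlated proxy remains in the data, the model can rediscover the same pattern.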
- AI can play an important role in public policy and governance
AI is increasingly being used to improve government operations, from optimizing city services to managing climate resilience. Already, places like Buenos Aires and Washington, D.C., are using AI in critical city services like 311 systems, helping to streamline responses to public needs.
“AI has obviously emerged as one of the most incredible opportunities for government to think differently,” said Beth Blauer, a Johns Hopkins University professor who advises governments on technology innovation. “If you’re not using these tools every day, you’re getting lapped by the private sector.”