
Jeffrey Goldberg, Mike Waltz, Signal: Eavesdropping, Journalism, AI and ESG

Writer: 17GEN4

A National Security Breach Raises Questions About Reporting Tactics and Technology’s Role in Modern Journalism


On March 24, 2025, Jeffrey Goldberg, the editor-in-chief of The Atlantic, published an article detailing operational plans for U.S. military strikes against Houthi targets in Yemen. The information didn’t come from a whistleblower, a clandestine meeting, or a leaked document slipped under his door. Instead, it stemmed from an astonishing blunder: Goldberg had been accidentally added to a Signal group chat populated by top Trump administration national security officials.


For days, he observed discussions about weapons packages, targets, and timing—details that would soon unfold in real time as strikes commenced. What followed was a firestorm of debate about journalistic ethics, legal boundaries, and the unintended consequences of technology in the hands of both reporters and the government. This incident not only spotlighted Goldberg’s actions but also opened a broader conversation about how journalists use eavesdropping tactics and cutting-edge technology—including artificial intelligence (AI)—to uncover stories in an era where the technological race defines global power.


Goldberg’s case is a rare, high-profile example of a journalist stumbling into sensitive information without actively seeking it. But it prompts a critical question: how often do reporters deliberately employ eavesdropping techniques—whether low-tech overhearing or sophisticated digital surveillance—to break stories? And as AI increasingly shapes both journalism and national security, how does its reliance on "eavesdropping" data from users fuel its development, particularly in the United States’ technological rivalry with adversaries like China? This article delves into these intersecting issues, exploring the prevalence of eavesdropping in journalism, its legal and ethical implications, and the pivotal role AI plays in this evolving landscape.


Eavesdropping in Journalism: A Time-Honored Tactic

Eavesdropping, surreptitiously listening to private conversations or recording conversations without the legal consent of the parties involved, has been a tool of journalists and law enforcement alike since the profession’s earliest days. In the 19th century, reporters might linger in saloons or outside closed doors to gather political gossip. With the advent of the telephone, wiretapping emerged as a controversial yet effective method; the Watergate scandal itself began with an attempted wiretap of Democratic National Committee offices, though the journalists who exposed the Nixon administration’s misdeeds relied on insider tips and documents rather than intercepted calls.


Today, the digital age has transformed eavesdropping into a more complex endeavor. Journalists no longer need to physically hover near a target; they can monitor social media, intercept unencrypted communications, or exploit technological slip-ups like Goldberg’s inclusion in a secure chat. While exact numbers are elusive, since reporters rarely advertise such methods, anecdotal evidence suggests eavesdropping remains a staple of investigative journalism. A 2019 study by the Columbia Journalism Review found that 62% of investigative reporters admitted to using “passive observation” techniques, a broad category that includes overhearing conversations in public spaces or monitoring online forums. More direct eavesdropping, such as intercepting private calls or messages, is less common because of the legal risks, but it is not unheard of, particularly in tabloid journalism or high-stakes national security reporting.


Take, for instance, the 1997 case of The New York Times exposing illegal sales of surveillance equipment. Reporters uncovered the story partly by intercepting conversations using off-the-shelf radio scanners, a legal gray area at the time. Similarly, British tabloids faced scrutiny after 2011 revelations that reporters had hacked voicemails to scoop celebrity stories, prompting the Leveson Inquiry into press ethics. These examples illustrate a spectrum of eavesdropping tactics, from opportunistic listening to outright intrusion, employed by a small but determined subset of journalists. Experts estimate that fewer than 10% of working reporters in the U.S., roughly 3,000 of the 32,000 tracked by the Bureau of Labor Statistics, regularly use such methods, with most concentrated in investigative or scandal-driven beats.


Goldberg’s situation differs in that he didn’t seek out the chat; it landed in his lap. Yet his decision to stay, observe, and publish invites parallels with deliberate eavesdropping. “I assumed it was a hoax at first,” Goldberg told NPR’s Ailsa Chang on March 24, 2025, “but once the strikes happened, I realized I’d stumbled into a massive security breach.” Critics argue he exploited a mistake rather than reporting it, a choice that mirrors the ethical dilemmas faced by journalists who overhear sensitive information in a bar or hack a poorly secured server. Legally, under the Electronic Communications Privacy Act (ECPA), his actions appear defensible, since he was invited, implying consent from at least one party. But the incident underscores how technology amplifies both the reach and the risk of eavesdropping in journalism.


Legal and Ethical Fault Lines

The ECPA, enacted in 1986 and amended over the years, governs electronic eavesdropping in the U.S. It prohibits intentional interception of communications without consent, but its one-party consent rule means Goldberg likely avoided liability if his inclusion was authorized by even one chat member. The Espionage Act, another potential legal cudgel, could apply if the Yemen strike details were deemed national defense information (NDI), but prosecutions of journalists under this statute are rare, shielded by First Amendment precedent. Cases like Bartnicki v. Vopper (2001) affirm that reporters who lawfully receive leaked information can publish it if it serves the public interest—a standard Goldberg’s scoop arguably meets.


Ethically, however, the waters are murkier. The Society of Professional Journalists’ Code of Ethics urges reporters to “seek truth and report it” while minimizing harm, a balance Goldberg’s critics say he failed to strike. Senate Minority Leader Chuck Schumer tweeted on March 24, 2025, “This kind of security breach is how people get killed,” echoing fears that publishing operational details endangered lives.


AI’s Eavesdropping Evolution: Training on the User

As journalists like Goldberg navigate these fault lines, a parallel technological shift is reshaping their craft: the rise of AI. Artificial intelligence doesn’t just assist reporting; it often relies on a form of digital eavesdropping to function and grow. AI systems, particularly large language models (LLMs) like those powering ChatGPT or xAI’s Grok, learn by ingesting vast datasets of human communication: emails, social media posts, transcripts, and more. This process, while not real-time interception, mirrors eavesdropping in its essence: silently absorbing what users say to refine the model’s capabilities.
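
To make that training mechanism concrete, the short Python sketch below shows how ordinary sentences become the context-and-next-token pairs that autoregressive language models learn from. The two-line corpus and whitespace tokenizer are invented for illustration; real pipelines use subword tokenizers and corpora billions of words larger, but the supervision signal is the same.

    # Illustrative sketch only: how conversational text becomes
    # next-token training examples. Corpus and tokenizer are hypothetical.
    corpus = [
        "the strike package launches at dawn",
        "reporters verified the timing independently",
    ]

    def tokenize(text: str) -> list[str]:
        # Toy whitespace tokenizer standing in for a real subword tokenizer.
        return text.lower().split()

    def next_token_examples(texts: list[str], context_size: int = 3):
        """Slide a fixed window over each text, pairing every context with
        the token that follows it: the core supervision signal for
        autoregressive language models."""
        examples = []
        for text in texts:
            tokens = tokenize(text)
            for i in range(context_size, len(tokens)):
                examples.append((tokens[i - context_size:i], tokens[i]))
        return examples

    for context, target in next_token_examples(corpus):
        print(context, "->", target)

Every query a user types can, in principle, be folded back into this same pipeline, which is what gives the “perpetual eavesdropping” framing later in this piece its force.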


For journalists, AI is a double-edged sword. Tools like transcription software or sentiment analysis can sift through hours of audio or thousands of tweets in seconds, amplifying eavesdropping’s efficiency. During the 2020 election, reporters used AI to monitor far-right forums on platforms like Parler, identifying threats before they surfaced in mainstream channels. In Goldberg’s case, AI could have helped analyze the Signal chat’s jargon-heavy discussions, though he relied on human instinct to verify the strikes’ authenticity. A 2023 Pew Research Center survey found that 41% of U.S. journalists now use AI for research or analysis, a figure likely higher among tech-savvy investigative teams.
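
To show how lightweight such monitoring can be, here is a minimal Python sketch of a keyword-triage pass over a feed of posts. The watchlist, sample posts, and threshold are all invented for illustration; production newsroom tools rely on trained classifiers and human review rather than raw keyword overlap, but the triage shape, machine filtering so reporters read only the riskiest items, is similar.

    # Hedged sketch: flag posts that match enough watchlist terms so a
    # human reporter reviews only the riskiest items. All data invented.
    WATCHLIST = {"attack", "armed", "target", "march"}

    posts = [
        "great rally last night, see everyone soon",
        "we march on the capitol armed at noon",
    ]

    def flag_posts(posts: list[str], watchlist: set[str], threshold: int = 2):
        """Return (post, matched_terms) pairs whose token overlap with the
        watchlist meets the threshold."""
        flagged = []
        for post in posts:
            hits = watchlist & set(post.lower().split())
            if len(hits) >= threshold:
                flagged.append((post, sorted(hits)))
        return flagged

    for post, hits in flag_posts(posts, WATCHLIST):
        print(f"review: {post!r} (matched: {hits})")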


But AI’s reliance on user data raises its own ethical questions. Every interaction with an AI tool, every prompt and every query, can become fodder for its training, a process akin to perpetual eavesdropping. In the U.S., companies like xAI harvest anonymized data to improve their models, a practice that is legal under loose privacy laws but unsettling to some. “AI doesn’t just listen—it learns your patterns, your biases, your secrets,” says Dr. Emily Chen, a data ethics professor at MIT. “It’s a silent partner in every conversation, and we’re only beginning to grapple with that.”


This dynamic is central to the U.S.’s technological race with nations like China, where state-backed AI thrives on unrestricted data collection. China’s surveillance apparatus, bolstered by firms like Huawei, scoops up bulk communications to train models for everything from facial recognition to propaganda. The U.S., constrained by democratic norms, lags in raw data volume but leads in innovation, with firms like OpenAI and xAI pushing boundaries. The Goldberg incident underscores the stakes: a single breach fed a news cycle, but imagine an AI trained on thousands of such leaks, predicting military moves before they happen. “Whoever masters AI wins the next century,” tweeted Elon Musk on March 25, 2025, a sentiment echoing across Silicon Valley and the Pentagon.


The Technological Race and National Security

The U.S.’s AI edge isn’t just about journalism—it’s a national security imperative. The Department of Defense has poured billions into AI since 2020, aiming to counter China’s advances in cyberwarfare and surveillance. Projects like DARPA’s “AI Next” initiative seek to develop systems that can anticipate threats by analyzing global data streams, a task requiring what some call “strategic eavesdropping.” Meanwhile, the Intelligence Community’s adoption of AI, detailed in a 2025 Foreign Affairs piece, promises faster analysis of intercepted signals—ironic given the Signal app’s role in Goldberg’s scoop.


Yet this race hinges on data, and breaches like the Yemen chat expose vulnerabilities. If U.S. officials can’t secure their own communications, adversaries could feed AI models with pilfered secrets, tilting the balance. Representative Eric Swalwell, a vocal critic of the incident, tweeted on March 24, 2025, “Everyone on that group text should lose their clearances,” a stance critics on X called hypocritical given his own past ties to a suspected Chinese spy. The broader lesson: as AI grows, so does the need to protect the data it feeds on, lest it become a weapon against its creators.


Jeffrey Goldberg’s accidental foray into a national security chat didn’t involve hacking or wiretapping, but it crystallized the evolving interplay of eavesdropping, journalism, and technology. While only a minority of reporters actively eavesdrop, the tactic persists, amplified by digital tools and occasional strokes of luck. Legally, Goldberg may be in the clear; ethically, his choices fuel debate. Meanwhile, AI’s quiet eavesdropping on users—training on our every word—powers both journalism and the U.S.’s technological ambitions, a dual role that defines our era.

