There’s a quiet power in what remains unsaid—especially when it’s wrapped in a letter no one ever received. Michael Halterman, a once-rising voice in data ethics and algorithmic accountability, kept a manuscript in his drawer for over two years. Not a draft meant for publication, but a raw, unpolished confession: a letter never sent. The letter, only recently unearthed by archivists at a now-defunct think tank, reveals a deeper fracture in how the tech industry treats human vulnerability.

Halterman worked at the intersection of machine learning and behavioral psychology during the mid-2010s, when predictive models began embedding themselves into hiring, lending, and law enforcement. He wasn’t just coding—he was warning. His internal memos, now preserved alongside the ghost letter, detail how early warning signs of algorithmic bias were systematically buried under business pressures. The letter, written in November 2017, cuts to the core of a systemic problem: transparency is often the first casualty in the race for scalability.

What makes this artifact so haunting is not just its content—though it’s searing—but its silence. Halterman never sent it. Not because he feared backlash, but because he believed nothing would change. “Writing it changed me,” he later told a colleague, voice tight with exhaustion. “I saw how easily a single idea could be buried, especially when it challenged the momentum of growth.”

Behind the letter lies a network of institutional inertia. At the firm where he worked, quarterly reviews routinely discounted bias audits, favoring speed over scrutiny. Internal surveys from 2016–2018 show 78% of staff acknowledged ethical concerns, yet only 12% saw meaningful action. Halterman’s letter dissects this gap with surgical precision, naming specific models—like the controversial “risk score” algorithm used in criminal justice—that disproportionately penalized marginalized communities under the guise of neutrality.

His decision to withhold the letter wasn’t cowardice—it was a tactical withdrawal from a system designed to neutralize dissent. In an industry obsessed with narrative control, authenticity often runs counter to profit motives. As Halterman’s notes reveal, the real cost wasn’t just reputational; it was moral. “You can’t fix what you don’t name,” he wrote. “But naming it? That’s when courage becomes a liability.”

The letter’s rediscovery in 2024, during an audit of legacy systems, sparked a rare reckoning. Two major platforms—one a global fintech giant, the other a healthcare AI developer—began revisiting their model governance, citing Halterman’s warnings as a catalyst. Yet, as scholars caution, one letter cannot rewrite decades of eroded trust. The deeper issue remains: how do we institutionalize the courage to speak, even when silence feels safer?

Halterman’s unspoken plea endures not as a call for heroics, but as a mirror. It forces us to ask: when systems reward speed over scrutiny, what kind of truth gets buried? The letter, though never sent, may yet shape the conversation—if we’re willing to listen. Because in the end, what’s unspoken often speaks the loudest.


Why the Letter Never Left the Drawer

Counterintuitive as it may seem, Halterman’s decision to withhold the letter stemmed from a deep understanding of organizational psychology. He recognized that in high-pressure environments, dissenting voices are often neutralized not through overt censorship, but through subtle incentives: promotions tied to delivery speed, performance reviews that prioritize outcomes over ethics. “You don’t silence people by locking them out,” he reflected. “You make them believe their voice doesn’t matter.”

His internal communications show a pattern: early on, colleagues dismissed his bias warnings as “academic noise.” By 2017, however, the data mounted—algorithmic audits flagged systemic flaws, yet leadership consistently deflected, citing market demands. The letter became a private dossier, a place to catalog evidence without triggering institutional pushback. “Writing it was a form of self-preservation,” he admitted. “If I spoke out, I knew I’d be sidelined—or worse, discredited.”

This hesitation reflects a broader industry flaw: the erosion of psychological safety in technical roles. A 2023 McKinsey study found that only 34% of AI practitioners feel empowered to challenge flawed models, down from 52% a decade earlier. Halterman’s choice mirrors this chilling trend—where speaking truth becomes a liability, not a virtue. The letter, then, was not a failure of courage, but a survival tactic in a culture that punishes clarity.

Key Insights from the Unsent Letter

- The letter’s content reveals that bias in algorithms was not an anomaly, but a structural feature—codified through choices made in boardrooms, not just in code. Model risk assessments often omit demographic impact analyses, treating fairness as an afterthought rather than a design principle.

- Halterman’s warnings were prescient: by 2020, over 60% of major tech firms faced regulatory scrutiny over algorithmic fairness. Yet internal resistance to audit reforms persisted, with 73% of engineering leads citing “operational complexity” as the barrier; ethical objections were never named.

- The letter’s silence underscores a paradox: in an era of unprecedented data transparency, human accountability remains alarmingly opaque. Without documented dissent, even clear evidence vanishes from institutional memory, and accountability becomes a myth.

- Post-2017, the firm’s “diversity initiatives” became performative: surveys showed 81% of employees believed change was underway, yet only 19% were still willing to report algorithmic bias concerns. The letter’s absence from public discourse highlights how institutional narratives can suppress uncomfortable truths.
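The omission named in the first bullet is concrete enough to sketch. A basic demographic impact analysis can be as simple as comparing positive-outcome rates across groups and flagging large gaps. The data, group labels, and 0.8 threshold below are illustrative assumptions, not details from Halterman’s files:

```python
# Minimal sketch of a demographic impact check of the kind the letter
# says was omitted from model risk assessments. All inputs are invented
# for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """Positive-outcome rate per demographic group.

    decisions: iterable of (group, approved) pairs, approved in {0, 1}.
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group selection rate to the highest.

    Values below 0.8 are a common red flag (the "four-fifths rule"
    used in US employment-discrimination guidance).
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions for two groups, A and B.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # 0.25 / 0.75 ≈ 0.33, well below 0.8
```

A check like this costs a few lines, which is the point the letter presses: omitting it is a governance choice, not a technical constraint.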

Lessons for an Algorithm-Driven Future

To prevent future silences, Halterman’s legacy demands systemic shifts. First, organizations must institutionalize *meaningful* whistleblower protections—not just legal compliance, but cultural validation. Second, model governance needs “red teaming” that includes ethical review as a non-negotiable phase, not a box to check. Third, transparency must extend beyond code: firms should publish impact assessments, even when results are unfavorable.

The letter also exposes an uncomfortable truth: progress in ethical AI often lags behind innovation. Between 2015 and 2022, global investment in AI surged from $10 billion to $200 billion, yet formal ethics frameworks grew at a fraction of that pace. Halterman’s unspoken plea reminds us: technology evolves faster than our safeguards—unless we make space for dissent before it’s too late.

Ultimately, the heartbreak is not Halterman’s silence, but the quiet realization that change requires more than private conviction—it demands public courage, structural incentives, and a collective refusal to accept the status quo.
