The Minority Table: Race, Upbringing, and the Bias We Build Into the Future
In high school, I remember sitting at the back of the refectory at Lakeside during lunch—what we all quietly called the “minority table.” There wasn’t a sign, and no one assigned us seats. But still, somehow, we ended up there. Black, Asian, Latino—it didn’t matter where exactly you came from, just that you weren’t part of the majority. It wasn’t enforced. It was inherited.
I grew up in the projects in Seattle, in a Cambodian immigrant community. We didn’t have much, but we had each other. It was a tight-knit, culturally rich place—also a place quietly segregated from the rest of the city. Then I got a scholarship to Lakeside, the private school known for producing Bill Gates and Paul Allen. The place where Microsoft’s founders first met. A symbol of innovation.
Lakeside prided itself on being progressive and inclusive. And in many ways, it was. But even in a space like that, where opportunity flowed freely, the social code remained. Minority students often clustered together—not out of hostility or division, but out of a subtle, shared understanding. There’s comfort in proximity when you spend your days navigating difference.
What strikes me now is how these kinds of early social patterns—where people sit, who they talk to, how they’re seen—don’t just fade away. They evolve. They show up in our institutions, our decisions, our technologies.
And now, they’re showing up in our code.
We often talk about artificial intelligence as objective—as cold, logical, fair. But AI doesn’t make decisions in a vacuum. It learns from data. From us. And if we’re not careful, it learns our biases—our seating charts, our hiring patterns, our policing disparities, our historic inequalities—and bakes them into something that feels permanent, automated, and unchallengeable.
Facial recognition that misidentifies people of color. Hiring algorithms that screen out “non-traditional” names. Predictive policing models that double down on already over-surveilled communities. These aren’t anomalies—they’re the natural outcome of a system trained on incomplete or biased information.
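To make that concrete, here is a minimal sketch in Python. The data is entirely synthetic and the groups ("A" and "B") are hypothetical; nothing below refers to any real system or dataset. It simply shows how an ordinary, "neutral" model, trained on data dominated by one group while the other is scarce and distributed a bit differently, ends up far less accurate for the group it barely saw.

```python
# Minimal, synthetic illustration -- hypothetical groups "A" and "B", no real data.
# A standard classifier trained mostly on group A is far less accurate on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; each group's true decision boundary sits at a different offset.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh, equally sized samples from each group.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    error_rate = 1 - model.score(X_test, y_test)
    print(f"group {name}: error rate = {error_rate:.1%}")
```

Nothing in that model is malicious. It simply never saw enough of group B to learn it, which is what "incomplete or biased information" looks like in practice.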
The problem isn’t just that AI can go wrong. The problem is that it reflects the world we’ve built—only faster, quieter, and at scale.
That’s why this story of a lunch table matters. It shows how early, invisible cues shape how we move through the world. Those same cues are being replicated in machine learning models. When an algorithm decides who gets a loan, or a job, or a second chance, it’s not making a neutral decision. It’s predicting based on the past—and the past has not been equal.
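A second hedged sketch, again with made-up data rather than any real lender: the protected attribute is dropped before training, but a correlated stand-in (a fictional "neighborhood" field) carries the history forward, and the model reproduces the old approval gap anyway.

```python
# Synthetic "historical loan" data -- every field here is invented for illustration.
# The protected attribute is dropped before training, but a correlated proxy
# (neighborhood) lets the model reproduce the historical disparity anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)                    # protected attribute (0 or 1)
neighborhood = (group + (rng.random(n) < 0.1)) % 2    # proxy, ~90% aligned with group
income = rng.normal(50 + 10 * group, 15, size=n)      # a historical income gap

# Historical approvals: income mattered, but group 0 was also penalized directly.
approved = ((income - 15 * (group == 0) + rng.normal(0, 10, size=n)) > 45).astype(int)

# Train WITHOUT the protected attribute -- only income and the proxy remain.
X = np.column_stack([income, neighborhood])
model = LogisticRegression(max_iter=1000).fit(X, approved)

predicted = model.predict(X)
for g in (0, 1):
    rate = predicted[group == g].mean()
    print(f"group {g}: predicted approval rate = {rate:.1%}")
```

Deleting the sensitive column is not the same as deleting the history it encodes; the proxy quietly puts it back.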
I don’t bring this up to rehash debates about DEI. Those conversations are shifting and evolving in real time. But I do believe we need to focus on something deeper: how bias lives in systems, not just in people. How easily we can embed prejudice into tools that feel impersonal. How crucial it is that we examine not just the outcomes of our algorithms—but the data, assumptions, and histories that feed them.
If we’re building the future with code, we need to ask: whose story is in the training data? Whose isn’t? Who ends up at the center—and who’s still sitting at the back of the room?
Because if we don’t question it, the bias becomes the standard. And the systems we design will only reinforce the same patterns we grew up with.