When I first started leading a Data Governance Council, my friend Claire said it sounded like something you’d need a ceremonial robe for: something where people chant in unison before declaring that, yes, Excel files stored on individual desktops are still a problem. Instead, it turned out to be six people, three of whom never turned on their cameras, discussing the ethics of machine learning while a golden retriever barked in the background.
The topic that first day was data ethics, which, for those outside the club, is the art of making sure your company doesn’t accidentally build the Terminator. Claire was the facilitator, and she started the meeting with, “Our goal today is to make sure our data is accessible, accurate, and ethically used.”
“Like a priest’s confession log,” I offered.
Claire smiled politely. “Not quite.”
The problem, she explained, was that everyone in our client’s company wanted to use Artificial Intelligence but no one wanted to do the unsexy work required to feed it clean, fair data. They wanted self-driving insights, but their data was closer to a rusted tricycle with one training wheel missing.
“AI is only as good as the data it learns from,” Claire said. “If our data is biased, incomplete, or poorly governed, our AI will be too.”
Which made perfect sense, though it reminded me of my childhood report cards. Garbage in, garbage out.
The Council’s role, apparently, was to stop that garbage. Ethically, of course. That meant defining who could access what, ensuring that data was accurate, and establishing availability rules that balanced innovation with privacy. In other words, we were the digital version of the neighborhood watch. Only instead of suspicious strangers, we were keeping an eye on spreadsheets and other exciting file types.
At first, it felt absurd. But as the meetings went on, I began to see the point. Data, in the modern corporate world, is a lot like gossip: powerful, easily misused, and always traveling faster than it should. And just like gossip, the danger comes from stripping away context: the twisting, the casual sharing with people who shouldn’t have it.
That’s where the Governance Council came in. We started small, naming data owners, cataloging sources, and defining what “good” data actually meant. You’d think this would be obvious, but when three departments define “customer” differently, you start to understand why your AI thinks half your clients are imaginary.
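For the curious, “defining good data” is less mystical than it sounds once it leaves the whiteboard. Here’s a minimal sketch, in Python, of the kind of artifact a council actually produces: one shared definition of “customer” and a check that flags records failing it. The field names and the rule, a verified email plus at least one completed transaction, are my illustrative assumptions, not our client’s actual policy.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical canonical definition a council might agree on:
# a "customer" has a verified email and at least one completed
# transaction. Fields and rules are illustrative, not real policy.
@dataclass
class CustomerRecord:
    customer_id: str
    email: Optional[str]
    email_verified: bool
    completed_transactions: int

def is_valid_customer(rec: CustomerRecord) -> bool:
    """True only if the record meets the shared definition of 'customer'."""
    return (
        rec.email is not None
        and rec.email_verified
        and rec.completed_transactions >= 1
    )

records = [
    CustomerRecord("c-001", "a@example.com", True, 3),
    CustomerRecord("c-002", None, False, 0),  # one of the "imaginary" clients
]
valid = [r for r in records if is_valid_customer(r)]
print(f"{len(valid)} of {len(records)} records meet the shared definition")
```

The point isn’t the twenty lines of code. The point is that three departments finally signed off on the same twenty lines.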
We built rules for access, deciding who gets to see what and when, and set up guardrails for data quality. We discussed the ethics of anonymization, retention, and model transparency. There were arguments about bias mitigation and consent. One particularly heated exchange involved whether our client’s AI chatbots should be allowed to suggest financial products based on age.
“So, like, no more ‘Because you’re 40, here’s a midlife crisis loan?’” I asked. Claire did not laugh.
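Jokes aside, the guardrails themselves are mostly mundane plumbing. Here’s a hedged sketch of the access-rule pattern: a small policy table mapping roles to the columns they may see, with everything else masked on the way out. The roles, the columns, and the masking choice (a truncated SHA-256 hash, which is pseudonymization rather than true anonymization) are assumptions for illustration, not the Council’s real policy.

```python
import hashlib

# Hypothetical policy table: the columns each role may see in the clear.
# Everything else gets masked before the data leaves the warehouse.
ACCESS_POLICY = {
    "analyst": {"age_band", "region", "product"},
    "data_steward": {"customer_id", "email", "age_band", "region", "product"},
}

def mask(value: str) -> str:
    """Replace a sensitive value with a short, stable one-way hash."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def apply_policy(row: dict, role: str) -> dict:
    """Return a copy of the row with unauthorized columns masked."""
    allowed = ACCESS_POLICY.get(role, set())
    return {k: (v if k in allowed else mask(str(v))) for k, v in row.items()}

row = {
    "customer_id": "c-001",
    "email": "a@example.com",
    "age_band": "35-44",
    "region": "EU",
    "product": "savings",
}
print(apply_policy(row, "analyst"))  # identifiers come back hashed
```

The analyst still gets their insights; the identifiers just don’t come along for the ride.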
Over time, I realized the Council wasn’t just the bureaucratic nuisance I had always assumed it to be. It was a moral compass for a machine age. Rather than deciding whether AI COULD do something, we were deciding whether it SHOULD. In the process, we rediscovered something quaint and almost human: responsibility.
Setting up a Data Governance Council doesn’t make your company holy. It won’t save the world from bad algorithms or prevent a rogue intern from uploading confidential data to ChatGPT. But it creates a space, a simple pause button, where smart people can ask hard questions before the code starts running. Questions like: Should we use this data at all? Do the people it represents know we’re using it? If we’re wrong, who gets hurt?
Those aren’t IT questions. They’re ethical ones. And for all our dashboards and machine learning pipelines, ethics doesn’t live in code. It lives in conversation: the uncomfortable, slow, and deeply human act of deciding what’s right.
After a few months, we had frameworks, policies, and more acronyms than the Pentagon. But we also had something rarer: trust. We could measure it in KPIs like uptime percentages and SLA compliance, sure, but what mattered more were the intangibles. People believed that the data, the systems, and most importantly the people behind them, were worthy of belief.
Sometimes, during our meetings, I imagined the AI we were nurturing. It didn’t yet exist, but I imagined that, when it looked back on its creators, the weary Council members on Teams calls, it would find us competent (hopefully) but more than that: decent. That’s the real goal of governance. Access, control, quality, and security are the easy part. Being moral and decent is not. Decency, in a world increasingly run by machines that don’t know what the word means, is a lofty and challenging but ultimately worthy goal.
If we can manage that, if our Data Governance Councils can help us use data with integrity and care, maybe we deserve to call ourselves intelligent too.