AI governance is a function of trust
A conversation with Merve Hickok, President & Policy Director of the Center for AI and Digital Policy
Merve Hickok saw the risk of algorithmic bias long before most had heard of machine learning. Now President & Policy Director of the Center for AI and Digital Policy, she works at the intersection of ethics, public policy, and frontier tech, advising governments, companies, and multilateral organizations on how to align AI systems with democratic values.
We spoke about the global misalignment between where AI is being developed and where it’s actually used, the systemic risks of regulatory gaps, and why trust, not model size, will determine AI’s long-term impact.
How did you get started in AI governance?
I used to be in the corporate world, leading recruitment technologies at Bank of America Merrill Lynch across Latin America, Canada, Europe, Africa, the Middle East, and Asia. Back then, in the early 2010s, we were already talking about unconscious bias and algorithmic bias in hiring.
It made me think: if these systems are built on biased data and flawed design decisions, where else could this be happening? The answer was everywhere. That curiosity led me to leave the bank and found AIethicist.org, a consulting and training company focused on responsible AI. A few years later, I co-founded the Center for AI and Digital Policy (CAIDP), where we work on policy and governance aligned with democratic values.
What do you think most people get wrong in today’s AI ethics and governance conversations?
The field is too focused on large language models. Since 2023, public attention has overwhelmingly shifted to foundation models, language and image systems, at the expense of more embedded, decision-making AI. These older systems still drive credit decisions, job screening, pricing, healthcare triage, and housing access. They haven’t gone away.
The second issue is an imbalance between hype and reality. We hear a lot about what these systems might do, but not nearly enough about what they can’t do or the real use cases already delivering value. That distorts both policy and investment decisions.
What would a more holistic AI governance approach look like?
First, we need public AI literacy. Most people don’t realize how often they’re interacting with algorithmic systems, whether it’s content feeds, pricing, or automated eligibility decisions. That makes informed engagement impossible.
Governments have a role in embedding AI literacy into education, ideally starting in high school, as part of civic education. And they need to balance power between consumers and companies, especially when market incentives fail.
Corporations, meanwhile, should build transparency into their AI systems and invest in internal literacy. You can’t deploy AI top-down. You need people on the ground who understand their workflows to identify valuable use cases. That requires training at every level.
But the incentive structures at every level (individual, corporate, sovereign) pull toward short-term AI adoption. How do we align that with long-term governance and stability?
There’s a real “shiny object syndrome” right now. AI is framed as a solution to everything. But it’s not a plug-and-play upgrade. It’s a long-term investment across infrastructure, compute, talent, and governance.
I’m seeing a lot of organizations pursue AI adoption because competitors announced it, not because they’ve identified a real use case. That leads to disillusionment later. The question should be: “Do we have a real problem AI could solve? And are we ready to test it responsibly?”
And yes, there’s also the risk in not adopting AI. If your competitors or adversaries are deploying it, whether in business or cyber, your ability to respond matters. So even if you don’t lead in AI, you need AI readiness: talent, strategy, and contingency planning.
What do you think the next 10–20 years of AI might look like?
I see two competing trajectories.
On one hand, hype could outpace results. If we don’t see enough real-world use cases, especially in healthcare, science, or public services, then trust in AI will erode. If people only see companies pushing the technology without safety, privacy, or cybersecurity guardrails, adoption will suffer, which in turn can stall investment. We’ve seen this before in other tech cycles.
On the other hand, if we build trust and show AI can deliver responsibly, adoption will grow, and so will its benefits. That means more transparency, more collaborative research, and more openness in how models are built and evaluated.
My fear is that the AI “race” narrative, especially at the geopolitical level, will channel too many resources into military or cyber domains, starving the societal applications that matter most.
If there is a correction or “burst” in the AI cycle, what survives?
Transparency. That’s what builds trust and resilience. Right now, we don’t know what’s in most models, especially the data, the design decisions, and the safety thresholds. Even researchers are reverse-engineering them from the outside.
We need open systems and engagement, not black boxes. That includes research transparency, open data, and open-source infrastructure, so that labs and researchers worldwide can build on trustworthy foundations. We cannot simply expect those inside the major companies to have all the answers on how to improve these systems, their evaluations, and their governance.
I also think the real inflection point will come when companies (not just big tech) have the talent and infrastructure to adopt AI meaningfully. We are still in a transition period where many businesses are interested in adopting AI but do not have the talent to manage this in a way where benefits outweigh the risks. That’s when you’ll see confidence grow across sectors. But we’re not there yet.
You’ve said regulation doesn’t have to be restrictive. What should operators and investors watch for in jurisdictions?
Clarity. Companies want to invest, but they also want to know the rules, or at least the boundaries of what is acceptable. Large firms can treat fines as a cost of doing business; small and medium enterprises can’t. Regulatory uncertainty is a blocker to investment as well.
Japan and Vietnam are good examples where regulation is being used to encourage responsible AI adoption, offering guidance, not just penalties. In contrast, the UK is seeing AI investment stall partly because of ambiguity.
Also, remember: many jurisdictions already have laws that apply to AI, like civil rights protections or consumer laws in the U.S. We do not need to duplicate those regulations just for AI. Discrimination, product safety, deceptive practices, and the like are already covered. The question is: where are the gaps, like deepfakes, transparency mandates, or human oversight? Can we be more surgical and precise when we create legal clarity for businesses and consumers alike? That’s where smart regulation can make a difference.
If you could give one piece of advice to a younger version of yourself just entering the AI space, what would it be?
Build trust. And invest in foundational science.
I’m extremely optimistic about AI’s potential for scientific breakthroughs in healthcare, materials, space, and energy. Tools like AlphaFold or materials-simulation models like Microsoft’s MatterGen can be paradigm-shifting. They are also more impactful because they are open-source systems built on trustworthy data.
So if I were investing today, I’d put money into two things: foundational AI for science, and open-source infrastructure that democratizes access. You can’t have responsible AI without transparency or the engagement of the wider community.