The Enthusiastic AI Builder, Buyer, and the Missing Framework

A Call to CHROs on AI Readiness

· AI, Compliance, HR Tools, Governance

The problem hiding in the excitement of the new frontier.

There is a particular kind of post making the rounds on LinkedIn lately. A proud People or Talent team member shares that they just vibe-coded an AI tool to streamline their screening process, automate their onboarding workflow, or generate performance review summaries. The comments fill with fire emojis and enthusiastic follow-up questions from others who want to do the same.

It is a genuinely exciting time. AI is leveling the playing field and confirming what a VC friend told me recently: “Anyone can cook.” Curious and motivated practitioners are using newly accessible tools to solve real problems that affect their daily work. That fresh energy is good for the profession and for the people it serves.

Yet…

If you have spent enough time in HR to know what you know and do not know, something about the current environment, where an idea becomes a deployment at lightning speed, may give you pause. Not because the intention is wrong, but because the framework and guardrails are missing.

Building an AI tool in HR is not like building one in marketing or operations. The data feeding the engine is highly sensitive, and the outputs carry far more human consequence. A flawed recommendation engine for ad targeting quickly reveals itself as a revenue problem; a flawed screening tool built on historical hiring data is a potential EEOC problem, and the harm stays quietly hidden until it may be too late.

The stakes of getting this wrong are not hypothetical. Workday currently faces an ongoing class action lawsuit (Mobley v. Workday) alleging age discrimination tied to its AI screening tool, a suit it unsuccessfully attempted to have dismissed. Workday has not been found liable, but that is beside the point. The time and resources spent defending the case are enough to be disruptive to the company and its customers. Every organization that deployed that tool and trusted it is sitting with potential exposure it may not fully understand yet. The harm does not stay with the vendor; it rolls downstream to every customer who outsourced their judgment along with their budget.

That is not a reason to stop building or buying, but it is a reason to know what you are building, what you are bringing in, what data it is touching, and what could go wrong before you ship it to your leaders on a Tuesday when it was merely an idea two weeks ago.

Right now, in most organizations, nobody has given that enthusiastic IC building the cool new tool the framework they need. This is not their failure; it is a leadership gap. The same gap exists when HR leaders bring in new vendors, sign contracts, and trust that somebody upstream has done the hard thinking. Often, nobody has.

What this moment calls for is a framework that applies equally to internal tool building and AI vendor procurement. Not as a brake on progress, but as the infrastructure that makes progress defensible, durable, and worth keeping.

The stakes are different here.

HR holds some of the most sensitive and consequential data in the company. Compensation. Performance history. Medical accommodations. Demographic information. Age, gender, and nationality. Candidate assessments. The full lifecycle of one’s employment, from application to exit. These are the records that shape people’s livelihoods.

The legal exposure is not abstract. Employment decisions are governed by frameworks with real teeth: Title VII, the ADEA, the ADA, EEOC guidance on algorithmic bias, GDPR, CCPA, and an expanding body of state-level AI legislation that is moving faster than most procurement cycles. The agencies enforcing these rules do not care whether the discrimination or data misuse was intentional. Disparate impact doctrine exists precisely because a neutral-looking process can still produce discriminatory outcomes. An AI tool trained on historical hiring data does not need malicious intent to encode the biases of those who made prior decisions. It just needs the historical data.
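To make disparate impact concrete, here is a minimal sketch of the EEOC’s four-fifths (80%) rule, the conventional first-pass test: if any group’s selection rate falls below 80% of the highest group’s rate, the process warrants a closer look. The groups and numbers below are hypothetical, and a real audit would add statistical significance testing; treat this as an illustration, not legal advice.

```python
# Illustrative four-fifths (80%) rule check; hypothetical data, not legal advice.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total screened). Returns each group's
    selection rate divided by the highest group's selection rate."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical pass-through data from a screening tool: group -> (advanced, screened)
data = {"Group A": (50, 100), "Group B": (30, 100)}

for group, ratio in impact_ratios(data).items():
    flag = "potential adverse impact" if ratio < 0.8 else "within the 80% threshold"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")

# Group B advances at 30% vs. Group A's 50%: a ratio of 0.60, well below 0.8,
# even though nobody intended to discriminate. The tool just needed the data.
```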

Additionally, there is the visibility problem. In most other functions, the people affected by a bad AI decision can see it and respond to it. A customer gets a bad recommendation and clicks past it. A team member gets a suspicious expense flagged and disputes it. But a candidate filtered out by an AI screening tool has no recourse, no visibility, and often no idea it happened. The harm is invisible to the person experiencing it, which means the feedback loop that would normally surface the problem is simply absent.

Who owns the space?

The honest answer: right now, from what I have seen, outside of a few fringe cases, nobody in most organizations owns it cleanly yet. This presents a risk that will accumulate quietly until it doesn’t.

The ownership question gets murky fast. IT is involved because tools need to clear their security and data governance requirements. Legal is involved because employment law and vendor contracts intersect in ways that can sting. HR is involved because it is their function, their data, and ultimately their problem when something goes wrong. Yet when something does go wrong, the finger-pointing reveals what was true all along: nobody had a clear mandate, and everyone assumed someone else was asking the hard questions.

What may fill that vacuum is whoever wants the mandate most, which could itself be a problem. The person who raises their hand highest for ownership of something new and shiny is not always the person best positioned to own it. Power vacuums in emerging domains attract the ambitious and the curious in equal measure. Ambition without context is its own category of risk. Meanwhile, the people who do have the context, who understand the legal exposure, the data sensitivity, and the human stakes, may be the ones hanging back. Not because they don’t care, but because they are already underwater, politically cautious, or genuinely uncertain whether this is their battle to pick.

After much reflection, this is what I think the right ownership model may look like:

  • IT owns the security perimeter: procurement, data governance, SOC compliance, vendor security review. They are well positioned to determine whether a tool is safe. They are not positioned to ask whether it should be used for the intended purpose at all. That is a different question requiring different expertise.
  • Legal owns blind spot identification. While they do not hold veto power over People strategy, their value in this context is pattern recognition: surfacing the exposure that isn’t obvious, stress-testing assumptions, flagging what the team cannot see because they are too close to it. They are a review function, not an ownership function. If Legal becomes the de facto owner of AI governance in HR, you may end up with compliance theater that detracts from the intended strategy and desired outcomes.
  • The CHRO owns everything in between: use case judgment and ethical guardrails, including the question of whether a given AI application is appropriate for a people decision at all. They should also own AI readiness as an organizational capability, sequencing what gets built and bought, and when. And they serve as the translation layer between IT’s security requirements, Legal’s risk flags, and the business’s desire to move fast.

That last part requires the CHRO to hold a position under pressure: to walk into the room with a CEO who wants to announce an AI-powered people strategy and say, “Here is what we are ready to do now, here is what we need more time on, and here is why that distinction matters.”

The CHRO who cannot hold that position will default to one of two failure modes: rubber-stamping everything in the name of innovation and hoping nothing goes wrong, or slow-rolling everything in the name of caution and losing credibility with the business. Leadership is the third option: a clear point of view, a defensible framework, and the ability to confidently explain both to a skeptical CEO. The third option requires something most CHROs are not currently being given enough credit for needing: cross-functional authority and enough technical fluency to lead this conversation, not just participate in it. Coding fluency is not required, but they need enough technical, systems-thinking, HR domain, and business fluency to ask the right questions. They need to know what disparate impact means in the context of a vendor demo. They need to understand what it means when a tool is trained on proprietary data. They need to recognize whether a vendor’s answer to a governance question is a real answer or an optimistically misleading sales promise.

This is a new job requirement, and the CHROs who get ahead of it will be the ones who define what great looks like for everyone who comes after them. This means making AI readiness a core organizational capability by continually building fluency across the People function. We will come back to what that looks like in practice. But it starts with the CHRO claiming it as theirs to own.

The arc that should be happening.

The good news is that the framework may not be as complicated as we think. Here is how I am currently thinking about this:

Step one: assess before you build or buy. Before a single tool gets built or a vendor demo gets scheduled, the CHRO needs to ask a set of questions that most organizations skip entirely because they feel like they slow things down. They do not slow things down; they prevent the kind of slowdown that comes from deploying something consequential without understanding what is being deployed.

What data does this tool touch? Who does it affect, and how directly? What is the legal exposure if it produces a biased or erroneous output? What happens to the person on the receiving end if it gets it wrong? And do we have the internal capability to evaluate whether it is getting it right, and to mitigate quickly when it is not?
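One way to make those questions operational is to capture each proposed tool as a structured intake record with a crude triage rule attached. The sketch below is a hypothetical starting point; the field names and tiering logic are my assumptions, not a standard.

```python
# A sketch of an assess-before-you-build-or-buy intake record.
# Field names and the triage rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    name: str
    data_touched: list[str]            # e.g. ["performance history", "demographics"]
    affects_employment_decision: bool  # screening, compensation, promotion, exit
    output_visible_to_affected: bool   # can the person see and contest the output?
    can_evaluate_internally: bool      # do we have the skills to audit it ourselves?

    def risk_tier(self) -> str:
        """Crude triage: anything touching an employment decision, or invisible
        to the person it affects, goes to full cross-functional review first."""
        if self.affects_employment_decision or not self.output_visible_to_affected:
            return "high: full IT / Legal / CHRO review before any demo"
        if not self.can_evaluate_internally:
            return "medium: review plus an evaluation plan"
        return "low: sandbox-eligible"

screener = AIToolAssessment(
    name="resume screener",
    data_touched=["application history", "demographics"],
    affects_employment_decision=True,
    output_visible_to_affected=False,
    can_evaluate_internally=False,
)
print(screener.risk_tier())  # -> high: full IT / Legal / CHRO review before any demo
```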

Vendor sales reps are skilled at presenting confidence. Every experienced HR tech buyer knows that a polished demo with clean outputs is not evidence that the tool will perform equitably across populations, handle edge cases correctly, or produce outcomes that would survive legal scrutiny. The assessment step exists to make sure someone on the People team is asking those questions before the contract gets signed, not after the first complaint arrives.

Step two: make cross-functional review a real checkpoint, not an FYI. IT and Legal need to be in the room before a decision is made, not after. This sounds obvious yet rarely happens in practice.

The default pattern is that HR identifies a tool, builds enthusiasm internally, gets budget approval, and then loops in IT for security review and Legal for contract redlines. By that point, the decision is functionally made. IT and Legal are ratifying the decision, not evaluating it.

Real cross-functional review means IT is asking security and data governance questions before the demo, not after the Letter of Intent (LOI). It means Legal is reviewing the use cases, not only the vendor contract, and surfacing the employment law exposure that the vendor’s sales team did not mention. The review must have actual power to reshape or pause a decision, not just document that a review happened. This is where Legal earns its place in the model: not as an owner or a committee member, but as the blind-spot-finding function that asks the questions the team is too close to ask itself.

Step three: build and buy in the low-stakes operational space first.

This is where the unbridled enthusiasm finds its home. Scheduling optimization. Policy lookup tools. Onboarding logistics. Benefits navigation. Start with the administrative workflows that eat time and resources yet add little strategic value when done manually. These are the right places to begin, not just because the risk ceiling is low, but because this is where the People function builds the collective fluency that enables the harder things that come later.

The IC who builds a tool to automate interview scheduling is not just saving time. They are learning how these tools behave, where they break, what assumptions they make, and what happens when the data is messy. That hands-on experience and organizational knowledge makes someone a better evaluator of a vendor’s more consequential tool six months from now. It is the difference between a team that is learning the basics on a high-stakes problem and a team that already knows the basics because it learned them somewhere safer.

This is the CHRO’s opportunity to channel the raw excitement and energy rather than suppress it. Create a team sandbox and define the boundaries of what belongs in it. Give practitioners the explicit permission and the lightweight governance to experiment, build, and learn safely within a defined space. Celebrate what they figure out and document what goes wrong. Build the institutional knowledge intentionally rather than hoping it emerges on its own.
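What might those sandbox boundaries look like once written down? A hedged sketch follows; the data classes, use cases, and review cadence are assumptions to adapt, not a prescription.

```python
# An illustrative sandbox policy; every category and limit here is an assumption.

SANDBOX_POLICY = {
    "allowed_data": ["scheduling", "published policy docs", "benefits FAQs"],
    "prohibited_data": ["compensation", "performance history",
                        "demographics", "medical accommodations"],
    "allowed_use_cases": ["interview scheduling", "policy lookup",
                          "onboarding logistics", "benefits navigation"],
    "requires_escalation": ["anything touching an employment decision"],
    "review_cadence_days": 30,  # lightweight check-in, not a gate
}

def in_bounds(use_case: str, data_classes: list[str]) -> bool:
    """A build stays in the sandbox only if its use case is pre-approved
    and it touches no prohibited data class."""
    return (use_case in SANDBOX_POLICY["allowed_use_cases"]
            and not set(data_classes) & set(SANDBOX_POLICY["prohibited_data"]))

print(in_bounds("interview scheduling", ["scheduling"]))  # True: build freely
print(in_bounds("resume screening", ["demographics"]))    # False: escalate first
```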

The low-stakes operational phase is doing three things at once: delivering near-term value the business can see; building individual and team fluency through real hands-on experience; and creating the evidence that HR can move both thoughtfully and effectively. This gives the CHRO credibility when they say “not yet” on the consequential stuff. That “not yet” lands very differently coming from a function that has earned the right to say it than from one that reaches for policy as a reflex.

Step four: pilot narrowly before scaling anything consequential.

When the time comes to bring in tools that touch higher-stakes decisions, such as screening, performance, compensation, and workforce planning, pilot small and deliberately first.

Start with a small population and defined parameters. Run a parallel process that lets you compare AI-assisted outcomes against the existing process, so you can see what the tool is actually doing. Establish criteria in advance that define what success looks like and what failure looks like, so there is no temptation to rationalize outcomes after the fact.
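For illustration, here is a minimal sketch of what that pre-registered comparison could look like, assuming the AI tool runs in parallel with the existing process on the same candidate pool. The thresholds, names, and data are hypothetical; the design point is that the pause conditions are agreed before the pilot starts, not after the results arrive.

```python
# Sketch of a pre-registered parallel-pilot check; all values are hypothetical.

AGREEMENT_FLOOR = 0.85  # pre-registered: minimum AI/existing-process agreement
IMPACT_FLOOR = 0.80     # pre-registered: four-fifths impact-ratio floor

def agreement_rate(ai: list[bool], human: list[bool]) -> float:
    """Share of candidates where the AI-assisted and existing process agree."""
    return sum(a == h for a, h in zip(ai, human)) / len(ai)

def pilot_verdict(ai: list[bool], human: list[bool],
                  impact_ratios: dict[str, float]) -> str:
    agree = agreement_rate(ai, human)
    worst = min(impact_ratios.values())
    if worst < IMPACT_FLOOR:
        return f"pause: impact ratio {worst:.2f} is below the pre-set floor"
    if agree < AGREEMENT_FLOOR:
        return f"pause: only {agree:.0%} agreement with the existing process"
    return f"proceed to review: {agree:.0%} agreement, min impact ratio {worst:.2f}"

# Toy data for six candidates (a real pilot needs a meaningfully sized population),
# plus group-level impact ratios computed as in the four-fifths sketch above.
print(pilot_verdict(
    ai=[True, False, True, True, False, True],
    human=[True, False, True, False, False, True],
    impact_ratios={"Group A": 1.0, "Group B": 0.74},
))  # -> pause: impact ratio 0.74 is below the pre-set floor
```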

There must be a person in the room whose job is to ask the uncomfortable questions throughout: someone with enough context to notice when an output pattern looks off, enough authority to pause the pilot if it does, and enough credibility with leadership to explain why pausing is the right call.

Don’t treat the pilot as a formality; treat it as the last line of defense between a promising tool and an organization-wide deployment that cannot be easily reversed.

What AI readiness truly means.

There is a version of readiness and change management to which many organizations default: a lunch-and-learn session or series, a curated list of approved tools, an FAQ, a policy added to the employee handbook. The boxes get checked and the win is celebrated via a slide at the all-hands. Yet everyone goes back to doing exactly what they were doing before, except now there is a policy document nobody reads sitting on the intranet.

True AI readiness is a capability, not an event or an artifact put on a shelf to collect dust. It is the ongoing, deliberate work of building fluency across the People function, and ideally across the organization, that allows teams to engage with these tools thoughtfully rather than reactively. It means practitioners who understand not just how to use a tool but how to evaluate one, and who can see through a best-case vendor demo carefully constructed to hide edge cases. It means someone in the room has enough context about bias, data sensitivity, and legal exposure to know when something feels off, even if they cannot immediately articulate why.

Think about what a People team looks like after twelve months of building and experimenting in the low-stakes space with intention and good governance. They have handled messy data. They have seen tools behave unexpectedly. They have made small mistakes in contexts where mistakes are recoverable, and they have learned from them. They have developed instincts that can only be earned through experience. That team is fundamentally better equipped to evaluate a consequential vendor tool than a team that has only ever sat through demos.

This is the investment argument the CHRO needs to make to leadership, and it is a strong one. You are not asking for permission to go slow; you are asking for the space to build something enduring. The organizations that treat AI readiness as a capability and invest in it the way they invest in other critical competencies will be the ones that can move fast on the consequential stuff without blowing themselves up. The organizations that skip this step will be the ones managing the fallout.

There is one more dimension of AI readiness that does not get enough attention: the CHRO’s own AI fluency. The fluency to hold an informed conversation with a vendor’s product team. To ask what data the tool was trained on and know whether the answer is satisfying. To know that “our model has been tested for bias” is not an answer; it is the beginning of a conversation. To push back on a capability claim with a specific, grounded question rather than a vague concern.

This will become a baseline job requirement for senior People leaders, and the gap between where most CHROs are and where they need to be is real.

This moment is ours.

There is something worth saying that I don’t think gets said enough in conversations about AI and the future of HR leadership: this is a level playing field. Nobody has twenty years of experience governing AI in HR. Nobody has a battle-tested playbook they have run three times at three different companies. The CHRO who has been in the function for two decades and the one who is two years into their first senior role are standing at roughly the same starting line on this.

That matters more than it might seem. HR leadership, and People functions broadly, have long operated in environments where confidence is mistaken for competence. Where the person who speaks first and most definitively in the room shapes the decision, regardless of whether their certainty is warranted. Where the careful, show-your-work approach gets read as hesitation rather than rigor. In a domain where everyone is figuring it out in real time, the leaders who ask better questions have a genuine advantage over the leaders who project false certainty. The leaders who build real frameworks outlast the ones who perform innovation for its own sake. The leaders who protect their organizations from foreseeable harm build more durable credibility than the ones who move fast and hope for the best.

The CHRO who walks into the CEO’s office with a clear point of view on where to move now versus where to move carefully, along with the framework that allows for both, is not the cautious one in the room. They are the most prepared one in the room. That is a different identity, and it is available right now to any People leader willing to do the thinking.

The enthusiastic IC posting their AI tool build on LinkedIn is not the problem. They are an exciting signal. They are telling you they are ready to engage with these tools, that their curiosity is real, and that their energy to step into something new exists. The question is whether there is a leader upstream who has built the infrastructure to make that energy productive rather than just fast. That leader is the CHRO. And this is our moment to step into it, not by slowing the function down, but by being the person who thought it through first.
