Novisto AI Enablement
A proposal · May 2026

AI Enablement at Novisto.

An embedded AI Enablement Lead role, designed around the one ratio the board flagged (revenue versus OPEX), and built so the capability lives with Novisto, not with me.

To: Charles, Cyrille
From: Akshay Konjier
Date: May 2026
01
What this is

A concrete proposal, shaped as partnership.

This picks up from our intro call: a concrete proposal for how I'd come in alongside the work you're already doing on AI at Novisto and help accelerate it. The shape I'm proposing is an embedded AI Enablement Lead role, Monday through Thursday, for a full year. The point of this document is to lay out the thinking behind that shape and what the partnership would look like in practice.

AI enablement is not a tool rollout. It is a business transformation that touches workflows, culture, governance, and how people spend their day, and it works only when it is resourced like one.

Most companies underfund the human side of this work and conclude AI didn't deliver. The reason this proposal looks the way it does, and costs what it costs, is that the work that actually moves the OPEX-versus-revenue ratio the board flagged is cultural, operational, and technical all at once. You cannot get there with a webinar budget.

Cyrille, you already own this work at Novisto. Nothing here is meant to suggest otherwise. What I'm offering is execution capacity, cross-functional reach into the non-engineering departments, and a partner for the parts of the program that benefit from someone whose full attention is on it four days a week. The frame is partnership, not replacement.

02
The landscape, briefly

Most enterprise AI rollouts fail the same way.

A company buys 200 seats of something, runs a 45-minute webinar, sends out a one-pager, and waits for the productivity to show up. Most people never log in. Three months later, the engagement numbers get buried in an appendix and the spend has nothing to show for it.

This isn't a tool problem. It's a rollout problem. Nobody owns the outcome, no specific workflow was targeted, training was passive, and adoption was measured by license activation rather than by anything that connects to the business.

The companies that get this right look different in five specific ways:

  • Work department by department on actual workflows, not company-wide on abstractions. One team, one workflow at a time, with someone whose job is to make those teams successful.
  • Stay readily available to support new AI initiatives and don't put technology blockers in the way. The answer to a new idea is "let's get you what you need today," not "submit a request, we'll review it in six weeks." That means quick integrations and real token budgets: mindful of cost, never gated on cost when the experiment is cheap and the upside is real.
  • Say the quiet part out loud: this isn't an initiative to replace anyone's work. It's an initiative to help people upskill, do better work, and improve the overall health of the company so everyone benefits, including through bonuses and growth opportunities. Most AI skepticism inside companies is rooted in a real fear, and pretending the fear isn't there is the fastest way to make adoption stall.
  • Measure business outcomes, not adoption rates. Cycle time on a real process. Hours redirected from internal overhead to customer-facing work. Support ticket volume per support person. Metrics the board recognizes, not metrics the AI vendor invented.
  • Budget for intensity, not seat count. A heavy user of an agentic tool can cost many times what a casual user costs. Companies that budget AI like SaaS will be wrong, sometimes by a lot.

The worst version of the next 12 months at Novisto is the one where AI shows up as another tooling rollout, generates an adoption number that means nothing, and changes nothing about the OPEX gap the board flagged. Everything else in this proposal is built backwards from avoiding exactly that outcome.

03
What the board actually cares about

Revenue grew. OPEX grew faster.

That framing is the right one to anchor this work to, because it's the framing that makes AI a strategic decision rather than a procurement one. There are exactly two ways AI changes that ratio.

Goal 01 · Revenue

Accelerate revenue per employee.

Every workflow that consumes time but doesn't directly produce revenue is a candidate for AI absorption. The goal here is straightforward and people-positive: give the team their time back so they spend it on the work that actually moves the company forward. A salesperson preparing for a discovery call gets a 5-minute structured brief instead of an hour of LinkedIn digging. A CSM drafting renewal updates spends fewer evenings on Excel exports. A finance analyst pulling a quarterly variance report cuts the prep time in half. A developer triaging a bug spends less time clicking through environments and more time solving the problem. A marketer writing a campaign brief turns a two-week cycle into three days. None of those jobs disappear; they get cleaner, faster, and more focused on the work people are actually good at.

For a company in Novisto's space, selling into regulated, ESG-conscious enterprises, there's a second-order revenue effect: your customers' AI maturity is becoming a question they ask of their vendors. Being able to credibly speak about how Novisto governs, deploys, and benefits from AI internally is going to start showing up in procurement questionnaires. That's a sales asset, not just an internal program.

Goal 02 · OPEX

Rationalize OPEX without destroying culture.

This conversation has to be handled with care because the failure modes are obvious. Done badly, it's the "AI replaces people" story that wrecks morale and exits your best talent. Done well: where natural attrition happens, we evaluate whether AI absorbs the load before backfilling, and the freed budget redirects to revenue-generating hires. Where workflows are manual and frustrating, we automate them. People don't leave when their boring work disappears; they leave when their interesting work is buried under it.

For engineering specifically, there's a thesis worth exploring: with the right scaffolding, AI is changing the math on distributed teams (onboarding compresses, async friction drops, work shifts toward planning and review), opening an option to expand capacity at a sustainable cost structure. That's a thesis, not a prescription. How it actually applies to Novisto's engineering org is something to think through with Cyrille, not something to assume from the outside. The honest version is that AI doesn't shrink headcount overnight, and trying to make it do so is a good way to break the company. It changes the math on attrition, backfills, and capacity expansion quietly, over a year, in ways that compound.

04
How my thinking is evolving

Five operating bets behind this proposal.

Charles, you asked on the call to hear what I think you might not already know. The point of this section isn't to be right about everything. It's to make the operating bets behind this proposal explicit so we can disagree about them openly.

01

AI enablement is a change-management problem the industry is pricing like a tool rollout.

Most change-management programs assume a defined tool with known workflows. Pick a CRM, train people on it, they use it. AI is the opposite on both axes: the tool itself is ambiguous, and the public marketing has been poor enough that most workers arrive either over-sold on it, worried about their jobs, or convinced it's a fad. None of those is a stable starting point for adoption.

The cultural work is bigger than the tooling work, and most companies have the budget split backwards. They spend most of the budget on licenses and most of what's left on a training webinar. The companies that get it right invest disproportionately in the things that are hard to put on an invoice: weekly demos of real wins, time spent with each team, individual coaching for skeptics, and a steady drumbeat of "here's what a colleague just did, here's how they did it." It's slow, deliberate culture work, and it's the whole game.

02

Switching costs are the hidden OPEX line, and almost nobody is measuring them.

Most knowledge work isn't slowed down by lack of skill; it's slowed down by context switching. A developer fixing a bug: read the ticket, load the environment, start the server, reproduce, fix, test, push, write the PR description, respond to reviews, redeploy, verify. Maybe ten minutes is the actual fix. The rest is movement between tools and mental modes. The same shape repeats everywhere. A salesperson moving between CRM, email, deck, transcript, and Slack to send one follow-up. A CSM toggling between five tabs to write one customer update.

The companies that win at AI over the next two years won't be the ones who use AI to do the work faster. They'll be the ones who use AI to eliminate the switching costs around the work. Every hour of switching cost eliminated is an hour redirected to customer-facing work or judgment work or things that compound. That's where the OPEX-versus-revenue ratio actually moves, and it's something I'd want us to measure explicitly.

03

Anyone in the company can now improve the product.

This is a pattern I've seen work in my current company, and I'd want to explore with Cyrille whether and how it fits Novisto. The traditional chain for a small product issue (confusing label, missing field, copy bug) is: customer-facing employee notices, translates to Slack, PM scopes, dev builds, QA tests, and it ships somewhere between three weeks and never. With the right scaffolding, that chain isn't necessary for a meaningful percentage of small issues. The customer-facing employee describes it in plain language, gets a candidate fix, pushes it to a branch with a preview environment, and hands the package to a developer for review. The developer's role on these compresses to reviewer, not builder.

The pushback from engineering on this is real and legitimate: code quality, security, consistency, and the risk of non-developers shipping things that break. All of those are answerable, but only with the right scaffolding (easy preview environments, clear conventions, real CI guardrails, explicit checkpoints, genuine review). What that looks like at Novisto is Cyrille's call. When it works, the people closest to the customer become the people fixing the customer's problems, and the developers spend their time on the harder problems.

04

The ROI is recoverable if you treat this as a transformation, not a tool swap.

A lot of companies are going to roll this out like a standard tool enablement: pick a vendor, run some webinars, send a one-pager, call it done. That's the problem. AI isn't a new ticketing system. It's a business transformation, and it needs serious focus and serious support, more than a normal software rollout, not less. When companies treat it that way and resource it appropriately, the ROI is recoverable over a year or two through revenue growth and cost reduction. When companies treat it like a tooling rollout, the spend is real and the result is usually nothing.

The discipline that makes the ROI math work is pretty mechanical: pick specific workflows, fund them generously enough to actually work, measure them per-workflow rather than aggregating spend at the org level, and kill the ones that don't pay off. A workflow that costs $400 a month in tokens and eliminates 30 hours of skilled work pays for itself many times over. A workflow that costs $200 a month and saves nobody any time is a tax we're paying for the appearance of progress. Most enterprise AI budgets today are a mix of both, and almost nobody is rigorously separating them.
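As a rough illustration of that per-workflow math, using an assumed fully loaded cost of about $75 per hour of skilled work (an illustrative figure, not a Novisto number), the comparison looks like:

\[
30\ \text{hours/month} \times \$75/\text{hour} \approx \$2{,}250/\text{month recovered} \quad \text{vs.} \quad \$400/\text{month in tokens} \;\approx\; 5.6\times \text{ return}
\]

The exact multiple matters less than the fact that the comparison can only be made at all when cost and time saved are tracked per workflow rather than pooled across the org.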

The upfront cost of doing this properly is real, but the per-workflow returns tend to land quickly once a workflow is shipped.

05

Task decomposition is the bottleneck skill, and verification is what we should be teaching.

The framing that's caught on lately is "treat AI like a smart intern." A smart intern can do almost anything you can clearly describe to them, but they can't read your mind or watch you work for a week and guess what to take over. The biggest single predictor of whether someone gets value out of AI is whether they can break their own work down into discrete, atomic tasks and hand each piece off cleanly. Developers do this naturally; most other functions don't. A lot of what an Enablement Lead does day to day is sit with people in non-engineering roles and help them learn to see their own work as decomposable.

The companion skill is verification. Almost every AI training program teaches prompting; very few teach verification. That's backwards. Prompting is the easy part. Knowing when to trust the output, what to look for, and how to articulate what "good" looks like is the hard part, and it's what separates AI usage that creates value from AI usage that quietly creates risk. It's also the foundation of the agentic workflows that get genuinely high ROI: if you can describe verification rigorously, you can build it into the loop itself.

05
The role

What an AI Enablement Lead does at Novisto.

Five areas of work, in roughly the order they create value. The frame throughout is partnership with Cyrille on the technical and governance surfaces, and direct support to Charles and the rest of the exec team on the business outcomes.

01

Workforce enablement.

Adoption isn't a training problem. It's a peer-visibility problem. People learn to use AI on real work by watching the colleague at the next desk get value out of it, not by sitting through a 45-minute webinar. The work is to make that visibility happen on purpose, through an AI Ambassador cohort of 5 to 10 people across functions, mixed by seniority and skepticism, with elevated tool access, real token budgets, a public channel to share wins, and explicit permission to experiment without delivery pressure.

The leading indicator I'd watch isn't license activation; it's peer-to-peer help volume, meaning how many people are asking colleagues "how did you do that?" each week. Underneath the visibility motion, I'd run quarterly workflow audits function by function, sequenced so we're always shipping something new for one department while another is being assessed. Each audit produces a prioritized list of the 3 to 5 highest-leverage workflows per function with rough effort estimates and an honest read on whether the team is ready to absorb the change. The output isn't a deck; it's a backlog we work through together.

02

Hands-on building.

Most enterprise AI value sits in the gaps between SaaS tools, not inside them. The off-the-shelf vendors will eventually catch up to the workflows that matter, but "eventually" is 12 to 24 months, and that's a long time to wait when the workflow is consuming hours of someone's week today. Building those integrations in-house, when the ROI is clear, is one of the highest-leverage things this role does.

The shape isn't a six-month engineering project. It's the kind of thing where, on a Tuesday, a CSM tells me they spend two hours a week reformatting customer data, and by Friday there's a working tool that does it for them, plus a Loom of how it works, plus a follow-up the next week to check whether it actually saved them time. Most of the AI work that pays for itself looks like that, not like a platform. The other part of this work is the build vs. buy vs. wait call: knowing which workflows are about to ship in a tool you already pay for, which need to be bought, and which need to be built because the gap won't get filled. Being willing to be wrong publicly about it is part of the job.

03

Evangelism and culture.

The public reputation of AI is poor. People are tired of the hype, suspicious of the threat-to-jobs framing, and quietly worried they're being asked to adopt a tool that doesn't actually work. None of those reactions is unreasonable. The evangelism work is being the person inside the company who tells the truth about AI weekly: what's working, what isn't, what we tried that failed. Demos in all-hands. A standing slot in the company Slack. Lunch-and-learns that are genuinely useful, not vendor-led. The goal is to replace "AI is a thing leadership is making us do" with "AI is the thing my coworker just used to save four hours."

The single biggest cultural variable is whether leadership is visibly using AI on real work. A CEO drafting board updates or pressure-testing thinking with AI moves the company faster than any training program can. If it would be useful, I'd love to spend some of our time together on your own workflows. The third piece is engaging with skeptics directly, because their concerns are often legitimate, and the loudest skeptic on day 30 is often the strongest advocate on day 120 if their concerns were taken seriously.

04

Governance and tooling.

Novisto sells into regulated, audit-sensitive enterprises, and the governance work has to reflect that. The specifics of what governance looks like at Novisto are Cyrille's call. What I'd bring is execution capacity and a clear month-one discovery to surface where we are, where the gaps are, and where priorities should sit. The work I'd expect to share in that effort:

  • A shadow-AI audit to find out what's already happening. There are almost certainly people already paying for ChatGPT, Claude, Cursor, or specialty tools out of pocket. They're using them on real work, often well, often with no security review. The goal isn't to crack down; it's to stop pretending it isn't happening and bring it under proper budgets and policy.
  • A written AI policy that's clear about what to use AI for, what not to, how to handle customer data, what gets logged, and where the lines are. Gated through the LMS so every user acknowledges it before getting tool access. The policy becomes the moment people are introduced to AI at the company on the right footing, not a tax bolted on after the fact.
  • A data permissions audit before deploying any AI tool that surfaces internal documents. The failure mode worth preventing is the one where the AI tool starts returning files that "everyone" had nominal access to but nobody realized they did.
  • A tooling stance that's deliberate rather than reactive. Which models, which seats, which agents, which budgets. Reviewed quarterly because the market moves quarterly.

05

Executive alignment and metrics.

The work has to ladder up to the board's read on the company, which today is "revenue grew, OPEX grew faster." Every workflow we ship needs a clear story for which side of that equation it moves and by how much. The cadence I'd run: a bi-weekly working session with Charles and Cyrille, a monthly written update to the exec team, and a quarterly board-readiness brief. The updates are short and honest: what shipped, what's working, what we killed, where I was wrong.

The metrics I'd want to displace are the easy-to-fake ones (license activation, seats deployed, percentage of code "AI-generated"). In their place: outcome metrics tied to specific workflows. Examples of the shape are cycle time on a named process, support ticket volume per support person, time-to-onboard, proposal turnaround, customer-facing time per revenue role, dev cycle time, and the number of workflows actually in active use. The exact metrics per department are something we'd nail down with each department lead in the first weeks. The dashboard those metrics live in should be honest about its own failures. If a workflow we shipped didn't move its metric, the dashboard says so, and that's the conversation we have on the next bi-weekly. The fastest way to lose leadership trust in an AI program is to dress up bad outcomes; the fastest way to keep it is to tell the truth about them first.

06
Engagement options

Two shapes. Both are real.

Option A · Advisory

Lighter touch.

Useful if Novisto's primary need right now is occasional senior input rather than someone partnering on the program.

Rate: $250 / hour
Time: 2–3 hrs / week
  • Scheduled time with you or your team leads on AI trends, workflow ideas, and tooling questions.
  • No embedded presence and no shared execution.

Option B · Embedded

AI Enablement Lead, four days a week.

The shape the rest of this proposal describes: an embedded AI Enablement Lead role, Monday through Thursday, for a full year, covering the five areas of work in section 05, with a notice period that gives both sides a fair off-ramp and a month-six check-in to revisit the program shape.
07
Why I want to do this

Watching people get their time back.

What gets me out of bed for this kind of work is watching people get their time back. A CSM who used to spend Tuesdays formatting reports goes back to talking to customers. A salesperson who used to walk into discovery calls underprepared now gets a five-minute structured brief beforehand. An engineer who used to spend their time clicking through their dev environment goes back to actual problem-solving. The work people are actually good at, and that companies actually need from them, is mostly buried under the work AI is going to absorb. The job I want is the one that does the unburying, deliberately, function by function, person by person, alongside the team already doing the work.

For background: roughly fifteen years in tech leadership across product, consulting, and engineering, currently leading an AI-native implementation function where I've automated most of my team's recurring work with agentic systems I built end-to-end. I also taught programming during the transition into the AI era at the post-secondary level, so I'm familiar with the reluctance, the challenges, and the patterns of how adults absorb genuinely new ways of working.

08
What success looks like at month 12

The honest test isn't an adoption metric.

It's whether the company would push back hard if you tried to take AI away. By month 12, the answer should be yes.

Across departments

AI is woven into daily work.

The departmental metrics we agreed on at month one have moved. Sales spends more of its week in front of prospects. Customer success spends more of its week with customers and less time reformatting reports. Marketing turns campaigns around in days that used to take weeks. Finance and ops have absorbed the work that used to spill into people's evenings. Each function has its own specific metric, and each metric has its own honest before-and-after story.

In engineering

Running leaner and faster.

Engineering is running leaner and faster. Bug fixes ship at a higher rate. The team has the option to be more globally distributed in a way that wasn't viable two years ago. Information flows through the org more quickly. Senior engineers spend a higher share of their time on architecture, review, and product judgment, and a lower share on the kind of work AI can absorb. How exactly that looks is a conversation with Cyrille, not a prescription.

In the culture

The conversation has shifted.

The starting point a year earlier was something like "AI is overhyped, AI is scary, AI is coming for my job." A year in, the dominant register is "AI is the thing that makes my day better." That shift is not free. It is the result of a year of deliberate culture work, peer-led adoption, and leadership visibly using AI on real work themselves.

For the board

A clear, defensible story.

Charles can show the board a clear story about how AI has moved the revenue-versus-OPEX ratio. Specific workflows. Real before-and-after numbers. A program that can be pointed at and trusted. And critically, the capability lives with Novisto, not with me. The ambassadors are who the company turns to. The dashboard is something Cyrille's team runs. The playbook is documented. Whether the role becomes a permanent hire, continues as fractional, or evolves into something else, what was built stays.

09
Risks, honestly

A few things worth being honest about up front.

01

The technology will move faster than the plan.

It will. The unit of work is "ship one improvement," not "execute a 12-month Gantt." If the best tool in October is one nobody has heard of in May, that's expected. The discipline is the constant; the toolchain isn't.

02

AI spend can get away from a company quickly.

Real risk, and the public horror stories aren't theoretical. The way to keep it from happening is per-workflow budgets, weekly cost reviews rather than monthly, and a standing rule that anything exceeding 2× projected cost gets a real review rather than an automatic top-up. Generous funding for the workflows that work, hard discipline on the ones that don't.

03

Faster work isn't always better work.

This is the risk most operators worry about and rarely name. The way to address it is to instrument quality, not just throughput. Every workflow we ship has a before-and-after sample we can compare on real outputs, and the ambassador cohort is partly responsible for catching quality drift inside their own functions.

04

Some team resistance is real and reasonable.

Some skeptics have legitimate concerns. Adoption that's earned compounds; adoption that's mandated decays. The ambassador program and the visible peer-led wins are how skepticism gets engaged with evidence rather than steamrolled.

05

The premise might shift.

A foundation-model jump or a market shift could meaningfully change what AI is good for inside a SaaS company. If that happens, we'd revisit the program shape openly rather than carry on by inertia. That's part of what the month-six check-in is for.

06

Continuity.

Everything we build is built with the idea that ownership will eventually transfer to someone internal. This engagement is responsible for setting up the infrastructure for AI inside Novisto, and the whole way through we'll be thinking about how to make sure it keeps running properly after I'm gone. The notice period gives both sides a fair off-ramp if the engagement isn't working, and the capability stays with Novisto regardless of how my involvement evolves.

Next steps

Happy to keep going by email, or grab time on Calendly.

If you have any questions, I'm happy to answer them by email. If you'd like to talk it through, you can grab time on my Calendly. Thanks for the call, and looking forward to the next conversation.

Akshay Konjier · Proposal prepared for Novisto · May 2026