Real results. Measured, not estimated.
Every number on this page comes from a shipped product, a post-launch measurement, or a board-level business outcome. No projections. No "estimated" figures. No design output described as impact.
The question this page answers: What real impact has this person delivered, at what scale, across which domains — and how do you know it's true?
Aggregate outcomes across all engagements
Revenue enabled, cost eliminated, decision cycles compressed
Every engagement I led was scoped around a business outcome, not a design deliverable. The UX work was the mechanism — the business result was the measure. Below is what that looks like in practice across four enterprise contexts.
Protecting $1.8M ARR at risk
At an AI startup, two of three pilot enterprise customers were at risk of not converting. The failure wasn't the ML model — it was the absence of explainability. Redesigning the confidence layer and evidence panel turned skepticism into trust. Both customers converted to annual contracts. That single UX intervention directly protected approximately $1.8M in recurring revenue.
Oil & Gas AI Platform
Eliminating $2M+ in annual procurement waste
A Gulf petrochemical manufacturer ran procurement through email chains and Excel files. No approval visibility. No audit trail. $2M+ in annual operational waste from delays and rework. I led the 0→1 UX design of an AI-enabled procurement intelligence platform — taking cycle time from 11 to 4 days and making every transaction auditable for the first time.
Digital Procurement · Gulf Petrochemical Enterprise
Closing the contract visibility gap
A government enterprise was managing procurement contracts through disconnected spreadsheets with no expiry alerts. The visualization platform I designed surfaced committed spend, contract health, and renewal risk in a single live view. In the first reporting period: zero contract lapses. Manual reporting cycles compressed from 3 days to live.
Procurement Visualization · Government Enterprise
Platforms that scale. Interfaces that hold up under real-world pressure.
Product impact isn't measured at launch. It's measured when the platform is running at scale, with thousands of users, across three client brands, under enterprise-grade operational pressure. These are the numbers from that environment.
One design system. Six products. Three brands.
An enterprise energy platform had six distinct products with divergent component sets and no shared architecture. I designed a 4-layer token system — global primitives → semantic roles → component tokens → brand overrides — that unified all six under one source of truth. Three client brands now draw from the same component library. No forks. No drift. Feature delivery time dropped 40% in the first quarter after adoption.
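The layering is easiest to see as data. A minimal sketch in TypeScript, where every name and value is illustrative rather than the production token set:

```typescript
// Layer 1: global primitives. Raw values, no meaning attached.
const primitives = {
  blue600: "#1d4ed8",
  gray900: "#111827",
  space4: "16px",
} as const;

// Layer 2: semantic roles. Intent-level names mapped onto primitives.
const semantic = {
  colorActionPrimary: primitives.blue600,
  colorTextDefault: primitives.gray900,
  spaceInsetDefault: primitives.space4,
};

// Layer 3: component tokens. Per-component decisions expressed via roles.
const buttonTokens = {
  background: semantic.colorActionPrimary,
  label: semantic.colorTextDefault,
  paddingX: semantic.spaceInsetDefault,
};

// Layer 4: brand overrides. A brand re-points roles; component code never changes.
const brandTwoSemantic: typeof semantic = {
  ...semantic,
  colorActionPrimary: "#0e7490", // hypothetical second-brand accent
};
```

The point of the ordering: brands override roles, not components, so one component library can serve all three brands without forking.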
Enterprise Energy Platform · Design Systems
94% task completion in a banking app users were abandoning
Corporate banking customers were visiting branches for tasks the app should have handled. The issue wasn't aesthetics — it was 9 unstructured task paths, inconsistent navigation logic, and zero confidence in card management. I rebuilt the information architecture into 4 primary journeys, redesigned card controls for direct access, and reduced the card freeze interaction from 4 screens to 2 taps. Task completion in testing: 94%. Branch visits dropped 23% within 90 days.
Corporate Mobile Banking · Leading Regional Bank
−41% support call volume from 5 UX fixes
Five distinct failure modes were driving support call volume for a regional credit card product: navigation confusion, poor feature discoverability, statement legibility, panic around card loss, and forgetfulness around payment dates. I mapped each failure mode separately and designed a targeted fix for each. The combined intervention reduced credit-card-related support calls by 41% within 90 days of launch.
Credit Card Journey · Regional Financial Institution
AI systems that domain experts trust, adopt, and build their work around.
Most enterprise AI fails at adoption, not accuracy. The model works. The interface doesn't. I've shipped four production AI systems where the design work was specifically about closing that gap — building the trust layer, the evidence trail, and the feedback loop that turns a model into a tool experts defend to their managers.
From 12 minutes to 47 — turning skepticism into deep engagement
The AI startup's beta product had expert users leaving after 12 minutes. The model surfaced raw probability maps with no context, no confidence framing, and no way to interrogate the reasoning. I redesigned around three principles: show confidence intervals not certainties, make every AI suggestion interrogatable, and convert every human override into a training signal. Session depth went from 12 minutes to 47 minutes. Two at-risk pilot customers converted.
AI Platform · Decision Engine Redesign
Building a proprietary data moat through UX design
The most counterintuitive outcome: the override workflow I designed became the product's primary competitive asset. When experts disagreed with the AI, they selected a structured reason from a taxonomy I co-designed with the ML team. Those structured overrides re-entered the retraining pipeline. Over 18 months, the dataset of expert corrections became something no competitor could replicate. The board cited it in the Series C pitch as a proprietary moat.
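What that looks like as data, roughly; the field names and reason codes below are my illustration of the idea, not the production schema:

```typescript
// An expert's disagreement captured as structured data rather than free text.
// Because the reason comes from a fixed taxonomy, each override can re-enter
// the retraining pipeline without a manual labeling pass.
interface OverrideRecord {
  suggestionId: string;   // which AI suggestion was overridden
  modelVersion: string;   // the model that produced it
  reasonCode:             // fixed taxonomy, co-designed with the ML team
    | "insufficient_evidence"
    | "conflicts_with_local_data"
    | "known_edge_case"
    | "domain_convention";
  correctedValue: string; // what the expert asserted instead
  recordedAt: string;     // ISO 8601 timestamp
}
```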
AI Platform · Feedback Architecture
Making AI recommendations defensible, not just accurate
Accuracy wasn't the adoption problem. Defensibility was. A geoscientist won't act on an AI recommendation they can't explain to their asset manager. I designed an evidence panel alongside every AI suggestion — showing which data inputs drove the recommendation, what confidence level the model assigned, and what comparable cases existed. The rubric score for decision confidence increased 41% post-redesign.
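One way to picture the panel's contract: every recommendation ships with the material an expert needs to defend it. The shape below is an assumption for illustration, not the actual interface:

```typescript
// What accompanies each AI recommendation in the evidence panel.
interface EvidencePayload {
  drivingInputs: { source: string; weight: number }[];        // which inputs drove it
  confidence: { low: number; high: number };                  // an interval, not a point
  comparableCases: { caseId: string; similarity: number }[];  // precedents to cite
}
```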
AI Platform · Explainability Layer
Design maturity built. Teams aligned. Capability that outlasts the project.
The most durable impact I deliver isn't measured in a single product launch. It's measured in the design capability, governance structure, and cross-functional alignment I leave behind. These are the organizational outcomes that compound over time.
Building a UX function from zero as a solo hire
I joined an AI startup as their first and only UX hire. There was no design infrastructure, no research methodology, no component library, and no stakeholder alignment rituals. I built all of it: domain research protocols, an AI-specific UX pattern library, a component system, and a design review process that made ML engineers and domain experts genuine collaborators in product decisions. The function outlasted my engagement.
AI Platform · Design Function 0→1
Governance that lets 4 teams ship without diverging
On the enterprise energy platform, four independent engineering teams were shipping features across six products with no design coordination. I created the governance model: a token architecture that encoded design decisions as data, a contribution workflow with exception tracking, and monthly cross-team design reviews. Teams could ship independently at speed — without fragmenting the product experience across brands or products.
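The exception-tracking idea in miniature; this shape is my sketch of the concept, not the team's actual tooling:

```typescript
// A tracked exception: a deliberate, time-boxed deviation from the system,
// recorded so it gets reviewed instead of silently becoming a fork.
interface DesignException {
  component: string;  // which component deviates from the shared library
  team: string;       // which of the four teams owns the deviation
  rationale: string;  // why the system token or pattern didn't fit
  expiresAt: string;  // review date; exceptions are never permanent
}
```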
Enterprise Energy Platform · Design Governance
Structuring national innovation from ambition to architecture
A national energy company had innovation ambition but no submission mechanism, no evaluation pipeline, and no way to connect an idea to resources or decision-makers. I led the research and UX strategy: conducted discovery across 4 behavioral archetypes (Innovators, Evaluators, Sponsors, Champions), mapped the full idea lifecycle, and designed the platform architecture — submission, triage, development, and commercialisation. Twelve innovation domains launched from a platform that didn't exist 6 months earlier.
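The lifecycle reads naturally as a small state machine. The stage names come from the platform; the transition rules, including a rejection path, are my assumption:

```typescript
// The idea lifecycle encoded as allowed transitions between stages.
type IdeaStage =
  | "submission"
  | "triage"
  | "development"
  | "commercialisation"
  | "rejected";

const transitions: Record<IdeaStage, IdeaStage[]> = {
  submission: ["triage"],
  triage: ["development", "rejected"],            // assumed rejection path
  development: ["commercialisation", "rejected"],
  commercialisation: [],                          // terminal: idea is shipped
  rejected: [],                                   // terminal: idea is closed out
};
```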
Innovation Hub · National Energy Company
What changed, and by how much
State of each system before I joined versus post-launch reality. Not design intent — shipped outcomes.
AI Platform · Decision Engine
Before:
- 6.5hr interpretation cycles per session
- 5+ disconnected legacy tools
- Raw ML outputs with no context
- 12 min average session depth
- 2/3 pilot customers at churn risk
After:
- 1.8hr cycles — 72% reduction
- Single interpretation canvas
- Confidence ribbons + evidence panels
- 47 min average session depth
- 2/3 pilots converted to annual contracts

Enterprise Energy Platform · Design System
Before:
- 6 products with no shared components
- 3 client brands built as separate forks
- No token system or naming conventions
- 12+ day average feature delivery
- Design debt compounding with every sprint
After:
- 87 shared components from 1 system
- 3 brand themes via token overrides
- 4-layer token architecture
- 7-day average feature delivery
- Zero component divergence across teams

Digital Procurement · Gulf Petrochemical & Government Enterprise
Before:
- 11-day approval cycles via email
- Excel-based tracking, no audit trail
- Zero supply chain visibility
- $2M+ annual operational waste
- Manual reporting — 3 days per cycle
After:
- 4-day cycles — 63% reduction
- Real-time approval tracking
- Full supply chain intelligence layer
- 100% audit compliance from day 1
- Live reporting — 3 days → real-time

Corporate Mobile Banking · Leading Regional Bank
Before:
- 9 unstructured task paths
- Card freeze required 4 screens
- High branch dependency for digital tasks
- High support call volume
- Users describing app as 'confusing'
After:
- 4 clear primary journeys
- Card freeze in 2 taps
- 94% task completion in testing
- −41% support call volume
- −23% branch visits within 90 days
Want to see the decisions behind these numbers?
Each of these outcomes is documented in full — problem framing, options considered, trade-offs made, and measurement approach. Seven case studies, no vague claims.