The Rubix Way · the operating system

Four phases. Four pillars.
Three principles.

The operating system behind every Rubix engagement. Phases run sequentially. Pillars run in parallel through every phase. Three principles govern the work. A library of frameworks does the lifting.

The phases sequence the work. Each phase has a deliverable. Each deliverable funds the next.

Section I · The phases

Sequenced. Each phase has a deliverable.

Engagements start where the client is. Clients with a strong internal diagnostic skip Frame and enter at Strategize. Clients ready to ship enter at Build. The phases sequence the work; they do not tax it.

Phase 00

Frame. Diagnose. Align. Decide.

We diagnose what the client actually has, align senior stakeholders on the question worth answering, and produce an honest call on whether the work is worth doing. If it isn't, we say so. That is part of the value.

Duration
2–4 weeks
Our approach
  1. AI Maturity diagnosis. Five-level scoring on data, governance, talent, technology, and process. We see where the client really is, not where they hope to be.

  2. Senior stakeholder alignment. Working sessions with the executive sponsor, function heads, and IT/data leadership. Surface the real question, not the framed one.

  3. Value-chain quick-scan. First-pass map of where AI is plausible and where it is not. We exit Frame with a candidate list, not a fantasy.

  4. Engage / pause / decline call. An honest recommendation: ready for Phase 01, not yet, or never. Most consultancies cannot say the third option. We can.
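As a concrete illustration, the five-dimension maturity baseline could be tabulated along these lines. The dimension names and the levels Experimenting, Operating, and Optimizing come from this page; the intermediate level names ("Piloting", "Scaling") and the weakest-dimension aggregation are our assumptions, not the published Rubix scorecard.

```python
# Hypothetical sketch of the Phase 00 maturity scorecard: five dimensions,
# each scored 1 (Experimenting) to 5 (Optimizing). Level names other than
# Experimenting / Operating / Optimizing are illustrative assumptions.
from dataclasses import dataclass

LEVELS = {1: "Experimenting", 2: "Piloting", 3: "Operating",
          4: "Scaling", 5: "Optimizing"}

@dataclass
class MaturityBaseline:
    data: int
    governance: int
    talent: int
    technology: int
    process: int

    def scores(self) -> dict:
        return {"data": self.data, "governance": self.governance,
                "talent": self.talent, "technology": self.technology,
                "process": self.process}

    def overall(self) -> str:
        # Assumption: the portfolio moves only as fast as its weakest dimension.
        return LEVELS[min(self.scores().values())]

baseline = MaturityBaseline(data=2, governance=1, talent=3,
                            technology=3, process=2)
print(baseline.overall())  # prints "Experimenting": governance gates the rest
```

The "minimum, not average" choice is deliberate in this sketch: a strong technology score cannot compensate for absent governance when deciding what is deployable.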

What you receive
  • Frame Document: the diagnostic on a page
  • AI Maturity baseline: the 5-level scorecard
  • Engage / pause / decline recommendation
  • Optional: scoping for Phase 01
What to expect
  • 2–4 weeks, mostly on your premises. We need access to people, not data. The data work begins in Phase 01.

  • Complimentary. The Frame deliverable is yours regardless of what happens next.

  • Phase 00 is how we earn the right to Phase 01. If our diagnosis isn't compelling, you should not engage us further.

  • Decision gate at the end: proceed to Strategize? That is the only commitment we ask for at this point.

Phases run in sequence. Pillars run in parallel, through every phase, without exception.

Section II · The pillars

Parallel. Every phase stresses all four.

The pillars are how we know the engagement is sound at any phase. A Build that ignores Governance & Risk produces software that cannot go live. A Strategy that ignores Change & Adoption produces a roadmap that cannot ship. Each pillar is the audit dimension of the work.

Pillar 01

Governance & Risk

Risk and governance stitched in from Phase 00, not appended at Phase 03. International standards because Saudi enterprises are increasingly audited against them.

  • NIST AI RMF mapping per use case
  • ISO/IEC 42001 governance baseline
  • Eval-driven release discipline
  • Bias and fairness as release blocker
  • Audit trail with provenance
Pillar 02

Data & Knowledge

AI is only as good as the corpus it stands on. Data work is architecture, not preparation; institutional memory is a first-class asset.

  • Data architecture diagnosis
  • Retrieval-layer design
  • Institutional memory layer
  • Lineage, provenance, access control
  • Curated bilingual AR/EN corpora
Pillar 03

Technology & Platform

Platforms designed to survive a model swap, a vendor change, a regulatory shift. Sovereignty is a default architecture decision.

  • Multi-model orchestration
  • Sovereign / KSA-resident inference
  • Existing-system integration
  • Edge / cloud / hybrid deployment
  • LLMOps lifecycle discipline
Pillar 04

Change & Adoption

The hardest part of an AI engagement is not the model. It is the operating model. Built for client capability and ownership from day one.

  • Hub-and-spoke CoE design
  • 70-20-10 capability building
  • Operating-model before software
  • Adoption metrics tracked
  • Hand-the-keys exit posture

Phases describe how we work. Pillars describe what we audit. Principles describe what we will not do.

Section III · Three principles

Three principles. Non-negotiable.

They hold whether we are running a 2-hour workshop or a multi-year transformation. They are the discipline that keeps every Rubix engagement honest. If we cannot apply them, we decline the work.

Principle I

Process before platform.

No tool conversation until the underlying process is mapped. Tools are chosen by the process they serve, never the reverse. This protects clients from buying technology that solves the wrong problem.

What this rules out

Vendor demos before diagnosis. Tool selection before value chain mapping. "AI-first" thinking.

Principle II

Evidence before enthusiasm.

Every use case must be tied to a measurable business KPI: time, cost, error rate, compliance, leakage, revenue. If it cannot be measured, it does not get built. Conviction is not a substitute for a business case.

What this rules out

Pilots without baselines. ROI claims without numbers. "Innovation theatre."

Principle III

Readiness before roadmap.

Data, governance, talent, and change capacity decide what is deployable inside twelve months. We say no to what will fail, so the portfolio delivers what will win. Ambition is disciplined by readiness.

What this rules out

Roadmaps the organization cannot absorb. Bold plans on broken foundations. Strategy as wishful thinking.

In practice

When these three principles meet a client engagement, some ideas survive intact, others are reshaped, and a few are set aside for later. That filtering is the value we add, before any AI is deployed.

The frameworks below are not decorative. Each one gets named where it is applied.

Section IV · Frameworks library

The consulting library behind the work.

Every Rubix engagement draws from a defined set of frameworks. Some are international standards we apply. Others are Rubix-developed and refined across engagements.

AI Strategy Canvas

Rubix

Seven-component canvas for every Phase 01 engagement.

Rubix-developed canvas covering ambition, use-case portfolio, target operating model, governance, data, technology, and capability path.

Feasibility × Impact Matrix

Rubix

Use-case prioritization on four dimensions.

Scoring on technical feasibility, data readiness, organizational readiness, and impact magnitude. Filters a long list to the 5–8 candidates worth building.
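A minimal sketch of how such a filter might work, assuming 1–5 scores per dimension. The four dimensions come from the matrix as described here; the gating rule, weights, and use-case names are hypothetical, not Rubix's published calibration.

```python
# Illustrative Feasibility × Impact scoring: each use case scored 1-5 on
# the four dimensions named in the text. The aggregation is an assumption:
# feasibility is gated by the weakest readiness dimension, then multiplied
# by impact magnitude.
def score(uc: dict) -> int:
    feasibility = min(uc["technical"], uc["data"], uc["org"])
    return feasibility * uc["impact"]

candidates = [  # hypothetical long-list entries
    {"name": "Contract review copilot", "technical": 4, "data": 3, "org": 4, "impact": 5},
    {"name": "Demand forecasting",      "technical": 3, "data": 2, "org": 3, "impact": 4},
    {"name": "HSE incident triage",     "technical": 5, "data": 4, "org": 3, "impact": 3},
]

# Keep at most the top 8, per the 5-8-candidate target.
shortlist = sorted(candidates, key=score, reverse=True)[:8]
for uc in shortlist:
    print(uc["name"], score(uc))
```

Using `min` over the readiness dimensions encodes the same logic as Principle III: one unready dimension caps the whole use case, however attractive the impact.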

Build / Buy / Partner

METHOD

The honest call per use case.

Decision framework for whether to build, license, or partner. Saves engagements from rebuilding what already exists, and prevents buying what cannot meet domain or sovereignty constraints.

Value Chain Decomposition

METHOD

Map, find, score, sequence.

Map the operational value chain, find AI-amenable steps, score them, sequence them. The discipline that prevents AI from getting stuck in interesting-but-marginal use cases.

Hub-and-Spoke CoE

Rubix

Target operating model that compounds, not centralizes.

Central platform and governance hub, with embedded function-specialist spokes. Designed so the client owns it by month 12.

LLMOps Lifecycle

METHOD

Operational discipline for LLM-based systems.

Data, evaluation, deployment, monitoring, retraining. The Phase 02 and Phase 03 spine of every generation- and augmentation-pattern engagement.

Eval-Driven Development

Rubix

Eval harness before the model.

Domain-specific evals are release-blocking gates. Faithfulness, citation accuracy, bilingual equivalence, false-positive thresholds. Calibrated per use case.
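A release-blocking gate of this kind can be stated in a few lines. The metric names come from this page; the threshold numbers and the shape of the gate function are hypothetical, calibrated per use case in practice.

```python
# Minimal sketch of a release-blocking eval gate. Metric names are from the
# text; threshold values are illustrative assumptions only.
THRESHOLDS = {
    "faithfulness": 0.95,
    "citation_accuracy": 0.98,
    "bilingual_equivalence": 0.90,
    "false_positive_rate": 0.02,  # upper bound: lower is better
}

def release_gate(results: dict) -> tuple[bool, list]:
    """Return (passed, failures). Any single miss blocks the release."""
    failures = []
    for metric, threshold in THRESHOLDS.items():
        observed = results[metric]
        ok = (observed <= threshold if metric == "false_positive_rate"
              else observed >= threshold)
        if not ok:
            failures.append(f"{metric}: {observed} vs required {threshold}")
    return (not failures, failures)

passed, failures = release_gate({
    "faithfulness": 0.97, "citation_accuracy": 0.96,
    "bilingual_equivalence": 0.93, "false_positive_rate": 0.01,
})
print(passed, failures)  # one failed metric is enough to block
```

Note the asymmetry in the sketch: three metrics are lower bounds, the false-positive rate is an upper bound, and passing three of four still blocks the release.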

70-20-10 Adoption Model

METHOD

Capability-building pattern.

70% on-the-job application, 20% peer learning and mentoring, 10% formal training. The discipline that produces a client team running its own platform by month 12.

Bias & Fairness Review

Rubix

Release blocker, not an audit afterthought.

Performance reviewed across cohort attributes per deployment. In sensitive domains such as HSE, hiring, and healthcare, this gate is enforced absolutely.

AI Maturity Model

Rubix

Five-level diagnostic from Experimenting to Optimizing.

Used in Phase 00 and Phase 01 to set the engagement's calibration. A client at Experimenting cannot deploy what a client at Operating can.

NIST AI RMF

STD

U.S. federal AI risk standard.

We apply the four-function model (Govern, Map, Measure, Manage) per use case. The risk register fills before architecture finalizes.
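A per-use-case risk register keyed to the four RMF functions might look like the sketch below. The four function names are from NIST AI RMF itself; the register structure, use-case name, and entries are illustrative, not a compliance artifact.

```python
# Hedged sketch of a per-use-case risk register keyed to the four
# NIST AI RMF functions. Entries are hypothetical examples.
from collections import defaultdict

FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

register = defaultdict(list)

def log_risk(use_case: str, function: str, note: str) -> None:
    assert function in FUNCTIONS, f"unknown RMF function: {function}"
    register[use_case].append((function, note))

log_risk("HSE incident triage", "Map", "Misclassification could delay escalation")
log_risk("HSE incident triage", "Measure", "Track false-negative rate per incident class")
log_risk("HSE incident triage", "Manage", "Human review required above severity 3")
log_risk("HSE incident triage", "Govern", "Named owner for model change approvals")

# Gate: architecture does not finalize until every function has an entry.
covered = {f for f, _ in register["HSE incident triage"]}
print(covered == set(FUNCTIONS))  # prints True
```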

ISO/IEC 42001

STD

International standard for AI management systems.

Used as the governance baseline for enterprise-scale deployments. Mapped to control points in every Phase 03 design.

The methodology is the asset

Every engagement walks the same path.

Eight ventures in our portfolio walk this path. Every engagement in our archive walks this path. The case is the receipt.