The library behind the work.
Every Rubix engagement draws from a defined set of frameworks. Some are international standards we apply. Some are established methodologies we use where they fit. Others are Rubix-developed and refined across engagements. Each one is named where it is applied, and each one is explained here.
- Rubix-developed. Frameworks built and refined inside the firm.
- International standards. Externally audited, increasingly required.
- Methodologies. Established methods we apply where they fit.
Rubix-developed frameworks.
Frameworks Rubix has developed and refined across engagements. The institutional method.
AI Strategy Canvas
Seven-component canvas for every Phase 01 engagement.
Rubix-developed canvas covering ambition, use-case portfolio, target operating model, governance, data, technology, and capability path.
Feasibility × Impact Matrix
Use-case prioritization on four dimensions.
Scoring on technical feasibility, data readiness, organizational readiness, and impact magnitude. Filters a long list to the 5–8 candidates worth building.
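The four-dimension scoring can be sketched in a few lines. This is a minimal illustration, not the firm's actual tooling: the 1–5 scale, equal weighting, and the example use cases are all assumptions, since the matrix as described does not prescribe weights or scales.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    feasibility: int      # technical feasibility, 1-5 (assumed scale)
    data_readiness: int   # 1-5
    org_readiness: int    # organizational readiness, 1-5
    impact: int           # impact magnitude, 1-5

    def score(self) -> float:
        # Equal weighting across the four dimensions -- an assumption;
        # the matrix itself does not prescribe weights.
        return (self.feasibility + self.data_readiness
                + self.org_readiness + self.impact) / 4

def shortlist(candidates: list[UseCase], keep: int = 8) -> list[UseCase]:
    """Rank a long list and keep the top candidates worth building."""
    return sorted(candidates, key=lambda u: u.score(), reverse=True)[:keep]

# Hypothetical candidates, scored and filtered.
cases = [
    UseCase("invoice triage", 5, 4, 3, 4),
    UseCase("contract review", 3, 2, 4, 5),
    UseCase("chat summarization", 4, 5, 4, 2),
]
top = shortlist(cases, keep=2)
print([u.name for u in top])  # highest composite scores first
```

The filtering step is the point: the long list is cheap to assemble, and the composite score is what cuts it down to the handful of candidates worth building.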
Hub-and-Spoke CoE
Target operating model that compounds, not centralizes.
Central platform and governance hub, with embedded function-specialist spokes. Designed so the client owns it by month 12.
Eval-Driven Development
Eval harness before the model.
Domain-specific evals are release-blocking gates. Faithfulness, citation accuracy, bilingual equivalence, false-positive thresholds. Calibrated per use case.
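A release-blocking gate reduces to a simple check: every metric must clear its calibrated threshold, or the release stops. The sketch below is illustrative only; the metric names come from the list above, but the threshold values, the rate-vs-score convention, and the function shape are all assumptions.

```python
# Illustrative thresholds -- in practice these are calibrated per use case.
THRESHOLDS = {
    "faithfulness": 0.95,        # min share of claims grounded in sources
    "citation_accuracy": 0.98,   # min share of citations that resolve
    "false_positive_rate": 0.02, # max tolerated false positives
}

def gate(results: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (release_ok, failures). Any failed metric blocks release."""
    failures = []
    for metric, threshold in THRESHOLDS.items():
        value = results.get(metric)
        if value is None:
            failures.append(f"{metric}: missing")
        elif metric.endswith("_rate"):
            if value > threshold:    # rates are upper bounds
                failures.append(f"{metric}: {value} > {threshold}")
        elif value < threshold:      # scores are lower bounds
            failures.append(f"{metric}: {value} < {threshold}")
    return (not failures, failures)

ok, why = gate({"faithfulness": 0.97,
                "citation_accuracy": 0.96,
                "false_positive_rate": 0.01})
print(ok, why)  # one metric below threshold blocks the release
```

A missing metric fails the gate just like a bad score does, which is what makes the harness a gate rather than a report.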
Bias & Fairness Review
Release blocker, not an audit afterthought.
Performance reviewed across cohort attributes per deployment. In sensitive domains such as HSE, hiring, and healthcare, this gate is enforced absolutely.
AI Maturity Model
Five-level diagnostic from Experimenting to Optimizing.
Used in Phase 00 and Phase 01 to calibrate the engagement. A client at Experimenting cannot deploy what a client at Operating can.
International standards.
External standards Rubix applies. Audited, recognized, increasingly required.
NIST AI RMF
U.S. federal AI risk standard.
We apply the four-function model (Govern, Map, Measure, Manage) per use case. The risk register fills before architecture finalizes.
ISO/IEC 42001
International standard for AI management systems.
Used as the governance baseline for enterprise-scale deployments. Mapped to control points in every Phase 03 design.
Established methodologies.
Established consulting methods Rubix uses where they fit. Battle-tested before AI was the conversation.
Build / Buy / Partner
The honest call per use case.
Decision framework for whether to build, license, or partner. Saves engagements from rebuilding what already exists, and prevents buying what cannot meet domain or sovereignty constraints.
Value Chain Decomposition
Map, find, score, sequence.
Map the operational value chain, find AI-amenable steps, score them, sequence them. The discipline that prevents AI from getting stuck in interesting-but-marginal use cases.
LLMOps Lifecycle
Operational discipline for LLM-based systems.
Data, evaluation, deployment, monitoring, retraining. The Phase 02 and Phase 03 spine of every generation- and augmentation-pattern engagement.
70-20-10 Adoption Model
Capability-building pattern.
70% on-the-job application, 20% peer learning and mentoring, 10% formal training. The discipline that produces a client team running its own platform by month 12.
These frameworks are not the work itself. They are the discipline behind it.
Each one is applied where it fits, named in every engagement, and reviewed against the outcomes it produced. The methodology is the asset. The library makes it explicit.