What Data Architects Get Wrong About Reusability
Introduction
Ask any data architect what they’re optimizing for, and you’ll hear the same word:
Reusability.
Reusable entities.
Reusable dimensions.
Reusable canonical models.
And yet—many of the most reusable models are the least usable.
This isn’t because reuse is bad.
It’s because reuse is often misunderstood.
The Reuse Trap
Reusability is usually interpreted as:
- One model for many use cases
- One definition for all contexts
- One structure to rule them all
In practice, this leads to:
- Overloaded entities
- Vague attributes
- Endless conditional logic
- Confusing metrics
A model that tries to serve everyone usually serves no one well.
Reuse Without Purpose Creates Abstraction Debt
Abstract models feel elegant early on.
But abstraction debt accumulates when:
- Business rules differ slightly across domains
- Timing semantics aren’t the same
- Aggregation logic changes by audience
- Regulatory requirements diverge
Each “small exception” chips away at trust.
Eventually, the reusable model becomes a liability.
Reusability Is Not the Same as Standardization
This is the most common mistake.
Standardization means:
- Shared definitions
- Consistent naming
- Agreed semantics
Reusability means:
- Shared structures
- Shared tables
- Shared logic
You can (and often should) standardize without reusing physical structures.
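This distinction can be sketched in code. In the minimal Python sketch below, a shared glossary carries the agreed definitions while each domain keeps its own physically separate table; the glossary terms and table layouts are invented for illustration.

```python
# Hypothetical sketch: standardization as a shared vocabulary, not shared tables.
# Terms and column names below are illustrative, not a real schema.
GLOSSARY = {
    "customer_id": "Stable surrogate key for a customer, assigned at onboarding.",
    "order_ts": "UTC timestamp at which the order was accepted.",
    "invoice_ts": "UTC timestamp at which the invoice was issued.",
    "amount_due": "Invoice balance in the billing currency, before payments.",
}

def undefined_terms(columns):
    """Columns in a domain-owned table that lack an agreed definition."""
    return [c for c in columns if c not in GLOSSARY]

# Two physically separate tables, one shared vocabulary.
sales_orders = ["customer_id", "order_ts", "channel"]
billing_invoices = ["customer_id", "invoice_ts", "amount_due"]
```

The teams never share a table, yet a simple check like `undefined_terms(sales_orders)` surfaces any column that escaped the shared vocabulary.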
Context Is Not Reusable
Business context is almost never reusable.
Examples:
- “Customer” means different things in sales vs billing
- “Active” varies by reporting purpose
- “Revenue” changes by accounting treatment
- “Member” differs across healthcare lines
Trying to reuse context collapses meaning.
Instead, context must be explicit and scoped.
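Making context explicit and scoped can be as simple as recording one rule per context instead of one global flag. The rules below are hypothetical examples for "active", not definitions from any real system.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)
class ActiveRule:
    scope: str
    description: str
    window_days: int

# Hypothetical rules: "active" is defined per context, never globally.
ACTIVE_RULES = {
    "sales": ActiveRule("sales", "placed an order in the last 90 days", 90),
    "billing": ActiveRule("billing", "charged within the last 30 days", 30),
}

def is_active(scope: str, last_event: date, as_of: date) -> bool:
    """Evaluate 'active' under the rule owned by the given scope."""
    rule = ACTIVE_RULES[scope]
    return (as_of - last_event) <= timedelta(days=rule.window_days)
```

The same customer can be active for sales and inactive for billing, and that is correct behavior: the meaning is scoped, not collapsed.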
The Cost of Over-Generalized Entities
Over-generalized entities usually contain:
- Dozens of nullable columns
- Polymorphic identifiers
- Type flags to explain meaning
- Conditional joins everywhere
These models:
- Slow down queries
- Confuse analysts
- Increase error rates
- Hide business rules in SQL
They look reusable but behave unpredictably.
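A toy sketch of the anti-pattern makes the cost visible. The "party" record below is invented for illustration: a type flag plus nullable padding forces every consumer to re-implement the same conditional, whereas one explicit type per meaning needs no flags at all.

```python
from dataclasses import dataclass

# Anti-pattern sketch: one record for every kind of party,
# with a type flag and nullable columns to explain itself.
party = {"party_id": 1, "party_type": "ORG", "first_name": None,
         "last_name": None, "legal_name": "Acme Corp", "tax_id": "12-345"}

def display_name(p):
    # The business rule hides in conditional logic; every consumer repeats it.
    if p["party_type"] == "PERSON":
        return f'{p["first_name"]} {p["last_name"]}'
    return p["legal_name"]

# Explicit alternative: one type per meaning, no flags, no nullable padding.
@dataclass
class Person:
    person_id: int
    first_name: str
    last_name: str

@dataclass
class Organization:
    org_id: int
    legal_name: str
    tax_id: str
```

With explicit types, "what fields does an organization have?" is answered by the structure itself, not by SQL scattered across consumers.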
Reuse at the Wrong Layer
The most successful architectures reuse at higher layers, not lower ones.
Good reuse targets:
- Naming conventions
- Definitions
- Calculation logic
- Reference vocabularies
- Metric specifications
Poor reuse targets:
- Transaction tables
- Event semantics
- Grain definitions
- Temporal rules
Reuse meaning, not mechanics.
Versioning Is a Reusability Requirement
If something is truly reusable, it must be versioned.
Without versioning:
- Changes break downstream consumers
- Metrics shift silently
- Historical reports change
- Trust erodes
Most “reusable” models fail because they assume stability that never exists.
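A minimal sketch of versioned reuse, assuming a registry keyed by metric name and version: consumers pin a version explicitly, so redefining a metric cannot silently rewrite their history. The metric and formulas below are illustrative.

```python
# Hypothetical versioned metric registry. A new definition gets a new
# version; old consumers keep computing exactly what they pinned.
METRICS = {
    ("net_revenue", 1): lambda r: r["gross"] - r["discounts"],
    ("net_revenue", 2): lambda r: r["gross"] - r["discounts"] - r["refunds"],
}

def compute(name: str, version: int, row: dict) -> float:
    """Evaluate a pinned metric version; a missing pin is an error, not a default."""
    return METRICS[(name, version)](row)
```

When version 2 ships, version 1 consumers are unchanged, and the difference between the two is documented in code rather than discovered in a dashboard.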
Reusability That Actually Works
High-performing teams design reuse intentionally:
- Reuse definitions, not raw tables
- Reuse patterns, not full schemas
- Reuse logic via views or semantic layers
- Reuse governance artifacts
- Allow divergence where context demands it
They optimize for clarity first, reuse second.
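Reusing logic via views or a semantic layer can be sketched as a definition stored once and compiled into a per-domain view. The expression, view names, and tables below are hypothetical; the point is that teams share the logic while each keeps its own physical object.

```python
# Sketch of semantic-layer-style reuse: one metric expression, many views.
METRIC_SQL = {"order_count": "COUNT(DISTINCT order_id)"}

def build_view(view_name: str, source_table: str,
               metric: str, group_by: str) -> str:
    """Render a CREATE VIEW statement from a shared metric definition."""
    expr = METRIC_SQL[metric]
    return (f"CREATE VIEW {view_name} AS "
            f"SELECT {group_by}, {expr} AS {metric} "
            f"FROM {source_table} GROUP BY {group_by}")

# Each domain gets its own view over its own table; the definition is shared.
sales_ddl = build_view("v_sales_orders", "sales.orders",
                       "order_count", "customer_id")
```

If the billing domain needs a genuinely different count, it diverges with its own definition instead of bending the shared one, which is exactly the divergence the list above allows.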
Reuse Is an Outcome, Not a Goal
The best reusable models weren’t designed to be reused.
They were designed to:
- Be explicit
- Be understandable
- Be verifiable
- Be aligned with the business
Reuse followed naturally.
Final Thoughts
Reusability isn’t about fewer tables.
It’s about fewer misunderstandings.
If reuse makes your model harder to explain, you’re reusing the wrong thing.
Design for meaning. Standardize definitions. Let reuse emerge—not dominate.