A startup had one engineering team, two apps to ship, and a deadline that didn't care about either of those constraints. The solution wasn't to work twice as hard — it was to build once and use it everywhere.
The situation
The startup operated a service that had two distinct sides. On one side, there were customers — members of the public using the product. On the other, there were employees — internal staff managing operations, handling requests, and monitoring activity.
Both sides needed a native mobile app. They were different products with different interfaces and different user needs, but they shared the same backend, the same business logic, and large portions of the same data model.
The instinct in this situation is often to build two separate codebases and staff two separate teams. That approach is clean on paper. In practice, it doubles the maintenance burden, splits institutional knowledge, and means that every shared bug gets fixed twice — or more likely, once in one codebase and never in the other.
The startup chose a different approach: a single monorepo, two shipping apps, and a shared package layer that both applications drew from.
Structuring the monorepo
The repository was divided into three layers.
Shared packages sat at the base. These were modules with no knowledge of which app was consuming them — networking, authentication, data models, analytics, local storage, and a component library of generic UI elements. Any change here was immediately available to both apps. Any bug fixed here was fixed everywhere.
App-specific packages sat in the middle. The user-facing app and the employee app each had their own set of feature modules — flows, screens, and business logic that only made sense in the context of one product. These modules could import from the shared layer but never from each other.
App targets sat at the top. Two thin entry points — one per app — that assembled the relevant modules, applied the correct configuration, and produced the final build. As little logic as possible lived here. The targets were assemblers, not implementers.
This structure meant the boundary between shared and app-specific was explicit and enforced. It was not a convention that engineers were trusted to follow — it was a dependency rule the build system could verify.
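The dependency rule can be made machine-checkable. A minimal sketch in TypeScript (the article never names the platform or build system, and the package names here are invented for illustration) of the rule as a pure function a build-time check could call for every edge in the dependency graph:

```typescript
// Which layer a package belongs to; the prefixes are hypothetical naming conventions.
type Layer = "shared" | "user-app" | "employee-app" | "target";

function layerOf(pkg: string): Layer {
  if (pkg.startsWith("shared/")) return "shared";
  if (pkg.startsWith("user-app/")) return "user-app";
  if (pkg.startsWith("employee-app/")) return "employee-app";
  return "target";
}

// The rule the article describes: shared packages depend only on shared packages;
// each app layer depends on itself and on shared, never on the other app;
// the thin app targets at the top may assemble anything.
function dependencyAllowed(from: string, to: string): boolean {
  const f = layerOf(from);
  const t = layerOf(to);
  if (f === "shared") return t === "shared";
  if (f === "user-app") return t === "shared" || t === "user-app";
  if (f === "employee-app") return t === "shared" || t === "employee-app";
  return true; // targets are assemblers
}
```

A check like this, run in CI over the import graph, turns the layering from a convention into a verified invariant.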
Two teams, one repository
The engineering team was split into two squads: one focused on the user app, one on the employee app. Both squads worked in the same repository, on the same trunk, with the same tooling.
The shared package layer was owned jointly. Changes to shared packages required review from both squads — not as a bureaucratic gate, but as a practical check. An engineer working on the employee app was the right person to catch an assumption baked into a shared networking module that would break the user app.
In practice, most of the day-to-day work happened in squad-specific areas, and the shared layer was stable. The joint ownership model added very little overhead and prevented several significant issues that would otherwise have surfaced only in production.
Avoiding duplication without creating coupling
The hardest ongoing decision in a shared codebase is what belongs in the shared layer and what belongs in an app-specific layer.
The test used throughout the project was simple: if removing this from the shared layer would require two separate teams to implement it independently, it belongs in the shared layer. If it exists because one team wants it and the other doesn't care, it belongs in the app layer.
This rule eliminated two failure modes. The first is under-sharing: each team reimplements the same thing slightly differently, and the two versions gradually diverge. The second is over-sharing: one team's specific requirement gets generalised into a shared abstraction that neither team actually needs, creating complexity for the sake of principle.
What was shared
By the time both apps shipped, the shared layer contained:
Networking and API contracts. A single networking module handled authentication, request serialisation, error handling, and retry logic. Both apps made API calls through this module. A bug fixed in error handling was fixed for everyone. A new authentication flow was implemented once.
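The retry portion of such a module might look like the following sketch (TypeScript is assumed throughout; the function name and parameters are invented). The request function is injected, which is also what makes the logic testable without a network:

```typescript
// Retries a failing request with exponential backoff.
// Assumes the request function signals failure by throwing.
async function withRetry<T>(
  request: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await request();
    } catch (err) {
      lastError = err;
      // Back off before the next attempt; no delay after the final failure.
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Because both apps route every call through one module like this, a change to the backoff policy or error classification lands in both products at once.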
Data models. The core business entities — the objects that represented the things the startup's service actually dealt with — were defined once. The user app and the employee app had different views of these entities, but the entities themselves were shared. This prevented the subtle model drift that makes cross-app debugging painful.
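One way to express "shared entities, different views" in TypeScript (a sketch; the entity and its fields are invented, since the article does not describe the actual domain) is to define the entity once and derive each app's projection from it:

```typescript
// The canonical entity, defined once in the shared layer.
interface ServiceRequest {
  id: string;
  customerId: string;
  status: "open" | "in_progress" | "resolved";
  createdAt: string;       // ISO 8601 timestamp
  internalNotes: string[]; // operational detail, not for customers
}

// Each app derives its own view; the entity itself never forks.
type CustomerView = Omit<ServiceRequest, "internalNotes">;
type EmployeeView = ServiceRequest & { assignee?: string };

// Projection used by the customer-facing app.
function toCustomerView(request: ServiceRequest): CustomerView {
  const { internalNotes, ...visible } = request;
  return visible;
}
```

Because the views are derived types rather than parallel definitions, adding a field to the shared entity updates both apps' type checking in the same commit.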
Authentication and session management. Both apps authenticated against the same backend service. The logic for managing tokens, handling session expiry, and recovering from authentication failures lived in one place. This was one of the highest-risk areas of both products, and having a single implementation that was tested exhaustively was a significant advantage over maintaining two.
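The expiry-handling core of such a module could be sketched like this (names and shape are invented; the refresh call and clock are injected so both apps, and tests, can supply their own):

```typescript
// A bearer token with an absolute expiry, in epoch milliseconds.
interface Token {
  value: string;
  expiresAt: number;
}

class SessionManager {
  private token: Token | null = null;

  constructor(
    private refresh: () => Promise<Token>, // e.g. hits the backend's token endpoint
    private now: () => number = Date.now,
  ) {}

  // Returns a valid token, refreshing transparently when the current one
  // is missing or expired. Both apps call this instead of touching tokens directly.
  async getToken(): Promise<string> {
    if (!this.token || this.token.expiresAt <= this.now()) {
      this.token = await this.refresh();
    }
    return this.token.value;
  }
}
```

Injecting the clock is the detail that makes expiry behaviour exhaustively testable, which is exactly the property the article credits the single implementation with.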
A component library. Generic UI building blocks — buttons, form inputs, loading states, error displays, typography — were implemented once and used across both apps. The user app and employee app had different visual identities, so the component library was themeable: the same components rendered differently depending on which app was consuming them. This meant design consistency within each app without sacrificing the distinct identity of each product.
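The theming mechanism might be as simple as a theme contract that each app fills in, with generic components resolving their appearance from whichever theme they are given. A sketch (the theme fields, colours, and a plain function standing in for a real view component are all invented):

```typescript
// The contract every theme must satisfy, defined in the shared layer.
interface Theme {
  primaryColor: string;
  fontFamily: string;
}

// Each app supplies its own values; visual identity stays per-product.
const userTheme: Theme = { primaryColor: "#0066ff", fontFamily: "Inter" };
const employeeTheme: Theme = { primaryColor: "#1a1a2e", fontFamily: "Roboto" };

// A generic component reads only from the contract, never from a
// specific app's values, so one implementation serves both products.
function buttonStyle(theme: Theme, disabled = false) {
  return {
    background: disabled ? "#cccccc" : theme.primaryColor,
    font: theme.fontFamily,
  };
}
```

The component knows nothing about which app is rendering it; the app decides the identity by choosing the theme.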
Analytics and logging. Event tracking and structured logging were centralised. Both apps emitted events through the same interface, which meant the data pipeline received consistent, well-formed data regardless of which app generated it.
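The "same interface, consistent shape" idea can be sketched as a shared event type plus a small tracker factory (names and fields are invented; the sink stands in for whatever transport ships events to the pipeline):

```typescript
// The one event shape the data pipeline ever sees, regardless of source app.
interface AnalyticsEvent {
  name: string;
  app: "user" | "employee";
  properties: Record<string, string | number | boolean>;
  timestamp: number; // epoch ms
}

type Sink = (event: AnalyticsEvent) => void;

// Each app builds a tracker bound to its own identity; the tracker
// stamps every event consistently before forwarding it to the sink.
function makeTracker(app: AnalyticsEvent["app"], sink: Sink, now = Date.now) {
  return (name: string, properties: AnalyticsEvent["properties"] = {}) =>
    sink({ name, app, properties, timestamp: now() });
}
```

Because the stamping lives in one place, a malformed event from either app is a bug fixed once, not a schema negotiation between two pipelines.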
CI as the safety net
With two squads working in the same repository, the risk of one team's change breaking the other team's app was real and needed to be managed actively. The CI pipeline was the primary mechanism for doing this.
Every pull request triggered a full build of both applications. Tests ran across all three layers — shared packages, app-specific modules, and the assembled app targets. A change to a shared package that broke the employee app would be caught on the pull request that introduced it, not three days later when the employee team tried to merge their own work.
The test suite was structured in layers to match the codebase:
- Unit tests ran on every commit, covering individual functions and components in isolation. These were fast — the full suite ran in under two minutes — so they ran constantly.
- Integration tests ran on every pull request, testing the interaction between modules and against a staging backend. These took longer but caught a different class of problem.
- End-to-end tests ran nightly, driving both apps through their most critical user journeys against a full staging environment. These were the slowest and the most realistic.
Merges to the main branch required a passing suite at all three levels. This was occasionally inconvenient — a flaky end-to-end test could block a merge for an hour — but the discipline ensured that the main branch was always in a releasable state for both apps simultaneously.
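The three-tier pipeline described above could be expressed in a GitHub Actions-style workflow along these lines (a sketch only; the article does not name its CI system, and the job names and script paths are invented):

```yaml
# Sketch of the layered pipeline: fast unit tests always, integration
# tests on pull requests, end-to-end tests nightly.
name: ci
on:
  push: { branches: [main] }
  pull_request:
  schedule:
    - cron: "0 2 * * *"   # nightly end-to-end run
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/test-unit.sh          # all three layers, under two minutes
  integration:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/test-integration.sh   # cross-module, against staging
  e2e:
    if: github.event_name == 'schedule'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/test-e2e.sh           # both apps, critical journeys
```

The key property is that every pull request builds and tests both apps, so a shared-layer change that breaks only the other squad's product fails immediately, on the change that caused it.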
Release coordination
Because both apps built from the same repository, releases could be coordinated. When a backend change required both apps to be updated, the change was made once, tested once, and released together. There was no risk of the user app being on an old version of a shared module while the employee app was on a new one — both apps always consumed the same version of every shared package.
This also simplified versioning. Shared packages had a single version number. When a new version was cut, both apps adopted it in the same release cycle. There was no matrix of compatible versions to manage.
Outcomes
Both apps shipped within two weeks of each other, eight months after the project began.
- Shared code accounted for roughly 40% of the total codebase. Networking, authentication, data models, and the component library together represented a significant portion of work that was written once and used twice.
- No divergence in shared business logic. In a parallel-codebase scenario, subtle differences in how two teams implement the same logic are inevitable. With a shared layer, such divergence was structurally impossible.
- Both apps benefited from every shared improvement. Performance optimisations, security patches, and bug fixes in the shared layer applied to both products simultaneously.
- The CI pipeline caught every cross-app regression before it reached production. No change that broke the other squad's app survived code review — the build told the author before a human reviewer needed to.
- Onboarding new engineers was faster. A new engineer could contribute to either app after understanding the shared layer. There was no need to learn two separate codebases with two separate conventions.
What to consider before choosing a monorepo
A monorepo is not the right choice in every situation. It works best when the products sharing the repository have genuine common ground — shared backend, shared domain, shared team culture. When the products are truly independent, a monorepo adds coordination overhead without the benefit of code sharing.
In this case, the conditions were right. The two apps served different users but the same service. The same data, the same business rules, and the same engineering team connected them. A monorepo didn't force them together — it reflected a connection that already existed and made it easier to work with.
The structure paid for itself within the first month. By the time both apps shipped, neither squad could easily imagine having built their product in isolation.