Design A Stock Exchange System
Every millisecond matters when money is on the line. In most distributed systems, a brief delay or temporary inconsistency is a minor inconvenience that users barely notice. In a stock exchange, that same delay can mean millions of dollars changing hands unfairly, regulatory violations, or a complete erosion of market trust. This is precisely why interviewers reach for this problem when they want to separate candidates who truly understand systems from those who only know patterns.
Designing a stock exchange system forces you to confront constraints that most applications never encounter. Correctness is not a goal but a hard requirement. Ordering must be deterministic down to the microsecond. Latency directly affects fairness, and fairness has legal implications.
You cannot hand-wave your way through eventual consistency or hope that a cache will solve your problems. The system either works correctly under extreme pressure, or it fails catastrophically.
This guide walks you through exactly how to approach this problem in an interview setting. You will learn how to scope the problem effectively, design a matching engine that guarantees correctness, handle the scale of peak trading loads, and reason about fault tolerance when there is zero margin for error. By the end, you will have a structured framework for tackling one of the most demanding System Design questions you will ever face.
What interviewers are really testing
When interviewers ask you to design a stock exchange system, they are not testing your knowledge of financial markets or expecting you to build the next NASDAQ. They are evaluating whether you can design mission-critical, low-latency, stateful systems under extreme constraints. This problem serves as a litmus test for several core competencies that matter deeply in senior engineering roles.
The first thing interviewers look for is your ability to clarify requirements before diving into solutions. Strong candidates immediately recognize that this prompt is intentionally broad and begin asking targeted questions to narrow the scope. They want to see you demonstrate judgment about what matters most, rather than attempting to boil the ocean with an impossibly comprehensive design.
Beyond scoping, interviewers assess your intuition for correctness and ordering guarantees. In a stock exchange, “mostly correct” is not acceptable. Every trade must execute in a globally agreed-upon order, and every participant must see a logically consistent market.
Candidates who casually suggest eventual consistency or optimistic locking without understanding the implications are quickly exposed. Similarly, your awareness of latency-sensitive versus non-latency-sensitive paths reveals whether you understand how to architect systems where performance and correctness are tightly coupled.
Watch out: Many candidates treat this like a typical web application and suggest patterns like caching hot data or processing orders asynchronously. This immediately signals that they do not understand the fundamental constraints of financial systems, where even small timing differences can create unfair advantages.
The ability to reason about concurrency and isolation under pressure is another critical evaluation criterion. Stock exchanges handle tens of thousands of orders per second, often with multiple orders arriving for the same instrument simultaneously.
Interviewers probe whether you understand how to maintain correctness without introducing bottlenecks, and whether you can explain tradeoffs when constraints conflict. Strong candidates signal early that this is not a typical CRUD system and explain how that fundamental difference shapes every architectural choice that follows.
Why financial systems raise the bar on correctness and latency
In many distributed systems, eventual consistency is not just acceptable but preferred because it enables better availability and partition tolerance. Users tolerate seeing stale data for a few seconds, and the system eventually converges to the correct state. Financial systems operate under fundamentally different constraints where this tolerance simply does not exist.
Every participant in a stock exchange must see a logically consistent view of the market at all times. If two traders submit orders at nearly the same time, the system must determine a definitive order and execute accordingly. There is no room for ambiguity or eventual reconciliation. A trade that executes out of order is not just a bug but a violation of market fairness that could trigger regulatory scrutiny and legal liability.
Latency compounds these challenges because timing differences have ethical and legal implications beyond mere user experience. In high-frequency trading environments, even microsecond advantages can translate into significant profits. This means the system must not only be fast but predictably fast, with minimal variance in response times. Tail latency at the 99th percentile matters as much as average latency because inconsistent performance creates unfair advantages for participants who happen to get faster responses.
Real-world context: Major exchanges like NASDAQ target order acknowledgment times under 250 microseconds. High-frequency trading firms invest millions in co-location services and specialized hardware to shave additional microseconds off their latency, demonstrating just how much timing matters in these systems.
Understanding this context helps you make better design decisions throughout the interview. When you recognize that performance and correctness are inseparable in financial systems, you naturally gravitate toward architectures that prioritize determinism over raw throughput. This mindset shift from typical web application thinking is exactly what interviewers want to see before you begin discussing specific components.
Clarifying requirements and defining scope
The prompt “design a stock exchange system” is deliberately open-ended, and how you handle this ambiguity reveals a great deal about your engineering maturity. Without careful scoping, this problem can balloon into market data distribution, regulatory compliance, clearing and settlement, real-time analytics, and dozens of other concerns that would take months to design properly. Strong candidates recognize this trap and immediately work to narrow the focus.
Core functional requirements
A reasonable interview scope centers on the trading and matching system, which is the heart of any exchange. You should explicitly confirm with your interviewer that you are focusing on order placement, matching, and execution rather than downstream systems.
The core capabilities typically include allowing users or brokers to submit buy and sell orders for financial instruments, validating these orders against basic constraints and risk limits, matching compatible orders according to deterministic price-time priority rules, and persisting the results so participants can see their executed trades.
Equally important is stating what falls outside your scope. Features like clearing and settlement, regulatory reporting, advanced derivatives pricing, and market data analytics dashboards should all be explicitly marked as out of scope unless your interviewer specifically asks to include them. Saying this out loud demonstrates control over complexity and prevents your design from becoming unfocused.
Pro tip: When scoping, phrase your boundaries as questions rather than statements. Ask “Should I include settlement and clearing, or focus on the core matching system?” This shows collaborative thinking while still demonstrating that you understand the problem’s natural boundaries.
Non-functional requirements that drive the design
Non-functional requirements are the real drivers of this system’s architecture, far more than functional requirements. For a stock exchange, these typically include extremely low and predictable latency measured in microseconds or low milliseconds, very high throughput during peak trading periods like market open, strong consistency with deterministic ordering guarantees, and high availability with safe recovery mechanisms.
You do not need to provide exact numbers, but you should reason about scale qualitatively and demonstrate awareness of realistic targets. For example, you might estimate that a mid-sized exchange handles roughly one billion orders per day across a hundred actively traded symbols.
This translates to approximately 10,000 to 50,000 orders per second during normal operation, with peak loads during market open potentially reaching 100,000 orders per second or higher. Framing these constraints early helps justify every architectural decision that follows.
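The estimate above can be sanity-checked with quick arithmetic. This sketch assumes a 6.5-hour trading session and the hypothetical one-billion-orders-per-day figure; the numbers are illustrative, not drawn from any real exchange:

```python
# Back-of-envelope throughput estimate (illustrative assumptions only).
orders_per_day = 1_000_000_000        # hypothetical mid-sized exchange
session_seconds = 6.5 * 3600          # assumed 6.5-hour trading session

avg_rate = orders_per_day / session_seconds
print(f"session average: ~{avg_rate:,.0f} orders/sec")  # roughly 43,000/sec

# Market open concentrates load well above the session average;
# the 2.5x multiplier here is an arbitrary illustrative choice.
peak_rate = avg_rate * 2.5
print(f"peak estimate: ~{peak_rate:,.0f} orders/sec")
```

Averaging over the trading session rather than the full 24-hour day is what pushes the estimate toward the upper end of the 10,000 to 50,000 range.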
Before diving into specific components, it helps to establish a shared understanding of the core concepts that will appear throughout the design. This mental model prevents confusion later when discussing matching logic, consistency requirements, and state transitions.
Core concepts and domain model
A clear domain model serves as the foundation for all subsequent discussions about architecture and implementation. Interviewers do not expect deep financial expertise, but they do expect precise definitions and a coherent understanding of how the core entities relate to each other. Taking a few minutes to establish this shared vocabulary pays dividends throughout the rest of the interview.
Orders represent a user’s intent to buy or sell a specific quantity of a financial instrument at a given price or under certain conditions. Once submitted, orders are essentially immutable except through explicit cancellation or modification requests. This immutability is crucial for auditability and deterministic replay.
Trades represent the result of successfully matching a buy order with a sell order. Unlike orders, trades are permanent records that must never be lost, duplicated, or modified after creation.
The order book is the central data structure that holds outstanding buy and sell orders for a specific instrument. Buy orders are organized with the highest prices first, then by earliest submission time within each price level. Sell orders follow the opposite pattern, with lowest prices first.
This structure ensures that the best available prices are always matched first and that ties are broken fairly based on arrival time. Critically, each instrument has its own independent order book, a fact that becomes important for scalability discussions later.
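One minimal way to sketch this structure is a pair of heaps keyed by price and a system-assigned arrival sequence, so that the best price surfaces first and ties fall to the earliest order. The class and field names here are hypothetical, not from any real exchange:

```python
import heapq
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    side: str      # "buy" or "sell"
    price: float
    quantity: int
    seq: int       # system-assigned arrival sequence for time priority

class OrderBook:
    """Per-instrument book: best price first, earliest arrival breaks ties."""

    def __init__(self):
        self._buys = []   # max-heap via negated price: (-price, seq, order)
        self._sells = []  # min-heap: (price, seq, order)

    def add(self, order: Order) -> None:
        if order.side == "buy":
            heapq.heappush(self._buys, (-order.price, order.seq, order))
        else:
            heapq.heappush(self._sells, (order.price, order.seq, order))

    def best_buy(self):
        return self._buys[0][2] if self._buys else None

    def best_sell(self):
        return self._sells[0][2] if self._sells else None
```

Because `seq` is unique per order, the heap never has to compare two `Order` objects directly, and price-time priority falls out of the tuple ordering.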
Order types and state transitions
At minimum, most interview designs support market orders that execute immediately at the best available price and limit orders that execute only at a specified price or better. You do not need to support advanced order types unless prompted, but awareness of them demonstrates domain knowledge.
Stop-loss orders trigger when prices reach a threshold. Fill-or-kill orders must execute completely or not at all. Time-in-force specifications like good-til-canceled or day-only determine how long orders remain active.
Orders move through a clear lifecycle that you should be able to articulate precisely. An order begins as submitted, passes through validation to become accepted into the order book, may be partially or fully executed through matching, and eventually reaches a terminal state of completed, canceled, or expired. Strong candidates emphasize that state transitions are atomic and irreversible once a trade executes. This clarity makes later discussions about fault tolerance and recovery much more straightforward.
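The lifecycle above can be made explicit as a small state machine that rejects illegal transitions outright. This is a sketch with state names taken from the description in this section; a production system would persist every transition:

```python
# Allowed order-state transitions; anything not listed is rejected.
TRANSITIONS = {
    "submitted": {"accepted", "rejected"},
    "accepted": {"partially_filled", "completed", "canceled", "expired"},
    "partially_filled": {"partially_filled", "completed", "canceled", "expired"},
    # Terminal states admit no further transitions.
    "completed": set(),
    "canceled": set(),
    "expired": set(),
    "rejected": set(),
}

def transition(current: str, nxt: str) -> str:
    """Apply a state transition, raising on anything illegal."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt
```

Encoding the rules as data rather than scattered `if` statements makes the irreversibility of terminal states visible at a glance and gives recovery code one table to validate replayed events against.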
Historical note: The price-time priority algorithm used in most modern exchanges dates back to early electronic trading systems in the 1970s and 1980s. It was designed to replace the chaotic open-outcry trading floors where execution order was effectively random, creating a fairer and more transparent market structure.
With the domain model established, we can now examine how these concepts translate into a system architecture that meets our stringent requirements.
High-level system architecture
The first architectural insight that interviewers look for is recognizing that the matching engine is the heart of the entire system. Everything else exists either to feed orders into it safely or to consume its output. Unlike typical web architectures built around request-response patterns and stateless services, a stock exchange is built around a tightly controlled, stateful component that must operate under strict guarantees.
Strong candidates explain early that the system is organized around a small number of latency-critical paths, with everything else deliberately kept out of those paths. This separation of concerns is fundamental to achieving both the performance and correctness requirements. Components on the critical path receive the most careful design attention and the most stringent performance budgets, while supporting components are free to use simpler approaches.
Core architectural components
The architecture divides naturally into several logical layers with distinct responsibilities. Client-facing components receive orders from users or brokers through trading gateways that perform authentication, authorization, and basic validation before forwarding orders deeper into the system. These gateways can be stateless and horizontally scaled because they do not maintain critical market state.
The matching engine sits at the core, processing validated orders, updating order books, and producing trades. This component is both extremely latency-sensitive and correctness-critical. It maintains in-memory state for order books to achieve ultra-low latency, with this state carefully managed, replicated, and checkpointed rather than treated like ordinary application data.
Persistence and replication layers store orders and trades durably so the system can recover from failures and provide the auditability required by regulators. Downstream consumers including market data publishers, reporting systems, and analytics tools receive trade events asynchronously and must never interfere with matching performance. This explicit separation of the latency-critical path from supporting components is the architectural insight that distinguishes strong candidates.
Read paths versus write paths
In a stock exchange, the write path is unambiguously the most important path. Submitting an order and matching it correctly must happen deterministically and quickly. This is where your latency budget is spent and where correctness guarantees must be absolute. Read paths for querying order status or viewing market data are important but secondary, and can often be served from replicas or derived data stores without the same stringent requirements.
Most components in the system should be stateless so they can scale horizontally and fail independently without affecting the overall system. The matching engine is the critical exception. Its statefulness is justified by the need for ultra-low latency access to order book data, but this state must be managed with extreme care.
Interviewers want to see that you understand when statefulness is justified and when it introduces unnecessary risk, and that you can explain the mechanisms used to make stateful components safe.
Watch out: A common mistake is suggesting that order books be stored in a distributed cache like Redis for “performance.” This introduces network latency on the critical path and creates consistency challenges that undermine the determinism guarantees the system requires.
With the high-level architecture established, we need to examine how orders enter the system and what safeguards ensure only valid orders reach the matching engine.
Order ingestion and validation pipeline
Order ingestion is far more than simply receiving HTTP requests. It is a carefully designed pipeline that ensures only valid, authorized, and safe orders ever reach the matching engine. Failures at this stage can corrupt the market by allowing invalid trades, creating regulatory liability, or introducing inconsistencies that propagate through the entire system. Strong candidates explain that ingestion is designed as a series of deterministic checks rather than best-effort processing.
The order submission flow
When a client submits an order, it first enters through a trading gateway that performs authentication to verify identity and authorization to confirm trading permissions for the requested instrument. The order then undergoes format validation including supported order types, instrument validity, and basic constraints like positive quantities and valid prices. These checks happen synchronously and must complete quickly since they are on the latency-critical path.
Risk checks follow validation and are often underestimated by candidates unfamiliar with financial systems. Before an order reaches the matching engine, the system must verify that executing it would not violate account constraints. A buy order must not exceed available funds, and a sell order must not exceed available holdings.
These pre-trade checks prevent the system from entering invalid states that would be difficult or impossible to unwind after execution. Strong candidates emphasize that risk checks are synchronous and blocking. If a check fails, the order is rejected immediately with a clear error response.
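A minimal sketch of such a synchronous pre-trade check might look like the following. The account model and rejection messages are hypothetical simplifications; real systems check margin, position limits, and per-instrument permissions as well:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    available_funds: float
    holdings: dict = field(default_factory=dict)  # symbol -> sellable shares

def pre_trade_check(account: Account, side: str, symbol: str,
                    price: float, quantity: int):
    """Synchronous, blocking risk check; any failure rejects the order."""
    if quantity <= 0 or price <= 0:
        return False, "invalid quantity or price"
    if side == "buy" and price * quantity > account.available_funds:
        return False, "insufficient funds"
    if side == "sell" and quantity > account.holdings.get(symbol, 0):
        return False, "insufficient holdings"
    return True, "ok"
```

The key property is that the check runs to completion before the order is forwarded: a rejected order never touches the matching engine or the order book.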
Real-world context: The 2012 Knight Capital incident, where a software deployment error caused the firm to lose $440 million in 45 minutes, demonstrates the catastrophic consequences of inadequate pre-trade risk checks. Modern exchanges implement multiple layers of circuit breakers and position limits to prevent similar runaway scenarios.
Ordering and sequencing guarantees
A critical question that interviewers often ask is how the system handles concurrent orders and preserves fairness. Strong answers explain that once an order passes validation, it receives a deterministic sequence number or timestamp that establishes its position relative to other orders for the same instrument. This sequencing is typically handled by a dedicated sequencer component that serializes all incoming orders for a given instrument.
The sequencer ensures that the matching engine processes orders in a well-defined order, preserving fairness and enabling deterministic replay. When two orders arrive at nearly the same time, “same time” is resolved by system-assigned sequencing rather than client timestamps, which could be spoofed or inconsistent. Invalid or rejected orders never affect system state, and rejection is treated as a first-class outcome with clear response messages that allow clients to retry or adjust their orders.
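A bare-bones sequencer can be sketched as a monotonic counter per instrument; the class name and single-process locking here are illustrative assumptions, since a production sequencer is a dedicated, replicated component:

```python
import itertools
import threading

class Sequencer:
    """Assigns a strictly increasing sequence number per instrument.

    'Same time' is resolved by arrival order at the sequencer, never by
    client-supplied timestamps, which could be spoofed or inconsistent.
    """

    def __init__(self):
        self._counters = {}
        self._lock = threading.Lock()

    def next_seq(self, symbol: str) -> int:
        with self._lock:
            counter = self._counters.setdefault(symbol, itertools.count(1))
            return next(counter)
```

Because numbering is per instrument, sequencing never becomes a global bottleneck, and the resulting sequence is exactly what deterministic replay consumes after a failure.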
Now that we understand how orders safely enter the system, we can examine the most critical component.
Order matching engine design
The matching engine is the most critical and sensitive component in the entire system, and interviewers expect you to slow down here and reason carefully. This component must enforce strict price-time priority, guarantee deterministic execution regardless of load or timing, and operate under extremely low latency constraints. Strong candidates explicitly state that correctness always takes precedence over throughput because incorrect trades cannot be undone.
Order book structure and matching logic
Each instrument maintains its own order book containing outstanding buy and sell orders. The buy side is typically implemented as a max-heap or sorted structure with highest prices first, then by earliest submission time within each price level. The sell side uses a min-heap or sorted structure with lowest prices first, following the same time priority for ties. This organization ensures that the best available prices are always matched first and that fairness is maintained when multiple orders compete at the same price.
When a new order arrives, the matching engine attempts to match it against existing orders on the opposite side of the book. A buy order matches against the lowest available sell orders whose price is at or below the buy limit. Matching continues until the incoming order is fully filled or no compatible orders remain. Each match produces an immutable trade record, and partial fills are handled explicitly with remaining quantities retained in the order book if applicable.
| Order type | Matching behavior | Unfilled quantity handling |
|---|---|---|
| Market order | Executes immediately at best available prices | Typically canceled if book is empty |
| Limit order | Executes only at specified price or better | Remains in book until filled or canceled |
| Fill-or-kill | Must execute completely in a single match | Entire order canceled if not fully fillable |
| Stop-loss | Converts to market order when trigger price reached | Follows market order behavior after trigger |
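The matching loop described above can be sketched for an incoming limit buy against a min-heap of resting sells. This is an illustrative fragment, not a production engine; each resting entry is a mutable `[price, seq, qty]` list so partial fills can shrink the top of book in place:

```python
import heapq

def match_limit_buy(sells: list, buy_price: float, buy_qty: int):
    """Match an incoming limit buy against resting sells in price-time order.

    sells: min-heap of [price, seq, qty] entries (best ask on top).
    Returns (trades, unfilled_qty). Trades execute at the resting price.
    """
    trades = []
    while buy_qty > 0 and sells and sells[0][0] <= buy_price:
        price, seq, qty = sells[0]
        fill = min(buy_qty, qty)
        trades.append((price, fill))
        buy_qty -= fill
        if fill == qty:
            heapq.heappop(sells)   # resting order fully consumed
        else:
            sells[0][2] -= fill    # partial fill; remainder stays in book
    return trades, buy_qty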
Determinism and single-threaded processing
Determinism is non-negotiable. Given the same sequence of orders, the system must always produce exactly the same trades. This property is essential for auditing, regulatory compliance, and disaster recovery through replay. The matching engine cannot rely on nondeterministic factors such as thread scheduling, wall-clock time, or network timing to make decisions. All randomness must be eliminated from the matching logic.
Many production matching engines process orders for a given instrument in a single thread or event loop. This design choice might seem counterintuitive given the throughput requirements, but it dramatically simplifies correctness reasoning and eliminates entire categories of race conditions and deadlocks.
Scalability is achieved by partitioning work across instruments rather than parallelizing matching within a single order book. Each instrument’s matching can run on a dedicated core, and the system scales linearly as more instruments are added.
Pro tip: When discussing the single-threaded matching approach, explicitly mention that this is a deliberate tradeoff. You are accepting potentially lower peak throughput for a single instrument in exchange for dramatically simpler correctness guarantees and more predictable latency.
Once a trade executes, it must be persisted durably before the system acknowledges completion. Trade persistence and order book updates must be part of a single atomic operation to ensure trades are never lost and order books never enter inconsistent states. This atomicity guarantee is what enables deterministic recovery after failures. With the matching engine design established, we can now address how to scale the system for production workloads.
Scalability, performance, and low-latency design
Scaling a stock exchange requires different thinking than scaling typical web applications. The key insight is that order books are independent per instrument, which creates natural partition boundaries. Instead of trying to scale a single matching engine horizontally through sharding or distributed consensus, the system scales by assigning each instrument to a dedicated matching engine instance. This approach avoids shared state, eliminates cross-partition coordination, and allows linear scaling as trading volume grows.
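Assigning instruments to engines can be as simple as a stable hash of the symbol, so that every order for a given instrument deterministically reaches the same partition. The engine count and function names below are illustrative assumptions:

```python
import zlib

NUM_ENGINES = 4  # illustrative; sized to cores/hosts in a real deployment

def engine_for(symbol: str) -> int:
    """Route a symbol to a matching-engine partition.

    CRC32 is stable across processes and restarts, so the same symbol
    always lands on the same engine and all its orders stay serialized.
    """
    return zlib.crc32(symbol.encode()) % NUM_ENGINES
```

In practice exchanges often use an explicit symbol-to-engine assignment table rather than a hash, so hot instruments can be rebalanced deliberately, but the invariant is the same: one instrument, one engine, no cross-partition coordination on the critical path.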
Horizontal scaling patterns
While the matching engine is stateful and carefully controlled, most supporting components can scale horizontally without special coordination. Trading gateways, validation services, risk check services, and downstream consumers can all be stateless and replicated freely behind load balancers. Failures in one instance do not affect overall throughput because requests can be routed to healthy instances. The key principle is keeping the latency-critical path as minimal as possible while allowing non-critical components to absorb scale and handle failures gracefully.
Stock exchanges experience dramatic traffic bursts during specific events like market open, market close, and breaking news. A system designed for average load will fail catastrophically during these peaks.
Strong designs anticipate these spikes through pre-warming matching engines before market open, implementing load shedding policies that prioritize order submission over analytics queries, and maintaining capacity headroom that can absorb sudden demand increases without degradation.
Real-world context: Major exchanges typically provision for 10x their normal peak load to handle extreme events. During the March 2020 market volatility, some exchanges processed order volumes that were 5-6x their historical peaks, validating the importance of this capacity planning approach.
Hardware and system-level optimizations
At the latency targets required for competitive exchanges, software optimizations alone are insufficient. Production systems employ hardware-level techniques including CPU core pinning to eliminate context switching overhead, kernel bypass networking using technologies like DPDK to reduce network stack latency, memory-mapped files for persistence to minimize I/O overhead, and lock-free data structures to avoid contention in high-throughput scenarios. You do not need to design these in detail during an interview, but mentioning awareness of them demonstrates depth.
Shared resources introduce latency variance and cascading failure risk. Strong designs avoid global locks, centralized queues, or shared ID generators anywhere on the critical path. Sequencing and matching are localized per instrument, keeping the system predictable under load. Even logging and metrics collection are typically buffered and processed asynchronously to avoid introducing latency variance on the trading path.
With scalability addressed, we must now consider what happens when components fail and how the system maintains its correctness guarantees during recovery.
Correctness, consistency, and fault tolerance
Correctness in a stock exchange is absolute with no acceptable margin for error. Strong candidates define correctness explicitly before discussing mechanisms. Orders must match in strict price-time priority. Trades must execute exactly once without duplication or loss. System state must be reconstructable deterministically from durable records. This definition establishes the bar that all fault tolerance mechanisms must meet.
Strong consistency requirements
Unlike many distributed systems that can tolerate eventual consistency for improved availability, a stock exchange requires strong consistency for its core state. Order books, trade records, and account balances must reflect a single, globally consistent view at all times. This typically means synchronous writes and carefully controlled replication on the critical path, accepting the latency cost in exchange for correctness guarantees.
Strong candidates explain where different consistency levels are appropriate. The matching engine and trade persistence require strong consistency with synchronous replication. Market data distribution to external consumers can tolerate slight delays and use asynchronous replication. Analytics and reporting systems can work with eventually consistent replicas. This nuanced understanding of where to apply different consistency models demonstrates architectural maturity.
Watch out: Do not suggest using a distributed database with eventual consistency for the core order book state. Even brief inconsistencies between replicas could result in the same order being matched twice on different nodes, creating trades that cannot be reconciled.
Failure handling and recovery
Failures are inevitable in any distributed system, but incorrect recovery is completely unacceptable in a financial context. A strong design incorporates write-ahead logging to persist every state change before it takes effect, periodic snapshots of order book state to enable faster recovery, and deterministic replay of logs from the last snapshot to rebuild current state. This combination of event sourcing and checkpointing enables recovery without data loss.
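The write-ahead-log-plus-snapshot pattern can be sketched in a few lines. The in-memory log and flat key-value state here stand in for durable storage and a real order book; names are hypothetical:

```python
import json

class EventLog:
    """Append-only write-ahead log: every state change is persisted
    (here, just appended) before it is applied to in-memory state."""

    def __init__(self):
        self.entries = []  # stand-in for durable, fsynced storage

    def append(self, event: dict) -> None:
        self.entries.append(json.dumps(event))

def recover(snapshot_state: dict, log: EventLog, snapshot_seq: int) -> dict:
    """Rebuild current state: start from the last snapshot, then replay
    every logged event with a sequence number after the snapshot."""
    state = dict(snapshot_state)
    for raw in log.entries:
        event = json.loads(raw)
        if event["seq"] > snapshot_seq:
            state[event["key"]] = event["value"]  # deterministic apply
    return state
```

Because the apply step is deterministic and the log is totally ordered by sequence number, replaying from any snapshot always reconstructs exactly the same state, which is what makes recovery auditable.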
Replication for high availability introduces additional complexity. For the matching engine specifically, many designs use a leader-follower pattern where a single leader processes all orders while followers maintain synchronized state through log replication.
Consensus protocols like Raft can coordinate failover when the leader becomes unavailable, though the latency overhead of distributed consensus must be weighed against availability requirements. Some exchanges accept brief trading halts during failover rather than introducing consensus latency on every operation.
Network retries and partial failures can cause duplicate messages even in well-designed systems. A strong design ensures that replaying an order or processing a duplicate message does not corrupt system state. This idempotency is typically enforced through unique identifiers on every operation and strict state machine transitions that reject operations that have already been applied. Mentioning idempotency and replay safety demonstrates awareness of the failure modes that occur in real distributed systems.
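The idempotency guard described above can be sketched by tracking already-applied operation identifiers; a real system would persist this set alongside the state it protects:

```python
class IdempotentApplier:
    """Rejects duplicate messages by remembering applied operation IDs,
    so a replay after a network retry cannot corrupt state."""

    def __init__(self):
        self.applied = set()
        self.state = {}

    def apply(self, op_id: str, key: str, value) -> bool:
        if op_id in self.applied:
            return False  # duplicate delivery: ignored, state unchanged
        self.applied.add(op_id)
        self.state[key] = value
        return True
```

Combined with the state-machine transitions sketched earlier, this makes "apply the same message twice" a harmless no-op rather than a double-executed trade.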
Beyond technical correctness, stock exchanges must also address security and compliance requirements that affect system design.
Security, compliance, and auditability
Security in a stock exchange is foundational rather than an afterthought because every order has direct financial consequences. Interviewers expect acknowledgment that the system operates in a hostile environment where participants have strong incentives to find exploits. Basic security principles include strong authentication of all users and brokers, authorization checks on every action verifying permission to trade specific instruments, encryption of all data in transit and at rest, and tamper-proof audit logs that capture every state transition.
While detailed market manipulation detection is beyond interview scope, acknowledging its existence demonstrates domain awareness. Strong candidates mention basic safeguards including rate limiting to prevent denial of service, anomaly detection for unusual order patterns, position limits and circuit breakers to prevent runaway losses, and validation that rejects malformed or potentially abusive orders. These controls exist at multiple layers throughout the system rather than as a single checkpoint.
Historical note: The Flash Crash of 2010, where the Dow Jones dropped nearly 1,000 points in minutes before recovering, led to significant regulatory changes including circuit breakers that automatically halt trading during extreme volatility. Modern exchange designs must accommodate these regulatory mechanisms.
Auditability deserves special emphasis because it affects system design throughout. Every order submission, modification, cancellation, match, and trade must be traceable for regulatory compliance and dispute resolution. This requirement drives the decision to use immutable, append-only logs for all state transitions rather than mutable database records.
When an interviewer asks how you would investigate a disputed trade, the answer should involve querying comprehensive audit logs that capture the complete history of every relevant order and the exact state of the order book at the time of execution.
With all the technical components covered, the final piece is understanding how to present this complex design effectively in an interview setting.
How to present a stock exchange system in an interview
Strong candidates establish the right framing from the very beginning of the discussion. Opening with explicit acknowledgment that this is a high-correctness, low-latency system signals maturity and frames every subsequent decision appropriately. A clear narrative typically progresses from scope clarification through domain concepts, then architecture centered on the matching engine, detailed discussion of ingestion and matching, and finally scaling and fault tolerance considerations.
Managing time and depth
This problem can easily consume an entire interview if not managed carefully. Strong candidates stay high-level on finance-specific details unless asked to elaborate, go deep on correctness guarantees and matching logic where it matters most, and avoid lengthy discussions of specific technologies or tooling unless directly relevant. Interviewers will guide you toward areas where they want more depth, so trust the process and respond to their cues.
Follow-up questions often target failure scenarios or extreme edge cases designed to test your reasoning under pressure. Strong candidates stay calm, restate the scenario to confirm understanding, and reason through the implications step by step. They do not contradict earlier statements or panic when encountering scenarios they had not anticipated. Demonstrating adaptability and showing that your design composes well under unexpected conditions are strong positive signals.
Pro tip: If asked about a scenario you had not considered, explicitly acknowledge it as an interesting edge case before reasoning through it. Saying “That’s a great edge case I hadn’t explicitly designed for, but let me think through how the system would handle it” shows intellectual honesty while giving you time to reason carefully.
Common mistakes to avoid
Several pitfalls consistently distinguish average candidates from excellent ones. Treating the system like a typical web application with caching layers and eventual consistency immediately raises red flags. Ignoring the importance of deterministic ordering or hand-waving the sequencing problem suggests lack of understanding of core requirements.
Assuming horizontal scaling patterns that work for stateless services will work for the matching engine shows insufficient appreciation for the constraints. Overcomplicating the design with unnecessary components or premature optimization without clear justification wastes time and obscures your thinking. Keeping the design as simple as possible while meeting all requirements demonstrates engineering judgment.
To deepen your preparation for this and similar problems, structured learning resources provide significant value. Grokking the System Design Interview on Educative offers curated patterns and step-by-step practice problems for building repeatable System Design intuition. Additional resources tailored to different experience levels include guides on the best System Design certifications, comprehensive courses, and practice platforms.
Conclusion
Designing a stock exchange system stands among the most demanding System Design interview questions because it eliminates the comfortable assumptions that make most distributed systems tractable. You cannot rely on eventual consistency when trades must execute in deterministic order. You cannot tolerate latency variance when microseconds affect fairness. You cannot accept data loss when every transaction has regulatory implications. This unforgiving environment forces you to reason precisely about correctness, isolation, and failure handling with no margin for error.
The future of exchange architecture points toward even lower latencies through hardware acceleration and FPGA-based matching engines, broader integration of event sourcing patterns for auditability, and increasing regulatory requirements for transparency and circuit breakers. Machine learning models are beginning to appear in fraud detection and anomaly monitoring, though the core matching logic remains deterministic and auditable by design. Understanding these trends demonstrates that you see System Design as an evolving discipline rather than a static set of patterns.
If you center your design around deterministic matching, strong consistency where it matters, and careful separation between latency-critical and supporting components, you will distinguish yourself from candidates who treat every problem the same way. The strongest answers are disciplined about scope, explicit about tradeoffs, and demonstrate that correctness always comes before cleverness.