The optimal policy in 1994 is to bind a high-bandwidth device (e.g., an FDDI or UltraSCSI controller) to a dedicated CPU. That CPU runs the interrupt handler, the device driver's bottom half, and the user process that consumes the data. This "pipeline" design, seen in Sequent's DYNIX/ptx, can achieve 85% linear scaling for network I/O.
Consider the traditional sleep() / wakeup() mechanism. On a single-CPU UNIX, this was elegant. On an SMP, a wakeup() may require a "rendezvous" interrupt to all CPUs, flushing TLBs and invalidating cache lines. A 1994 benchmark on an SGI Challenge (12x MIPS R4400) showed that a simple select() loop on 1000 file descriptors caused 40% of kernel time to be spent in cross-CPU TLB shootdowns.
The danger is interrupt saturation. A misbehaving network card at 100 Mbps can generate 150,000 interrupts per second. If all interrupts go to one CPU, that CPU is dead. The solution is interrupt coalescing (already in some Ethernet chips) plus "kernel threads" for bottom halves, so the interrupt dispatcher merely wakes a thread that can run on any CPU.
Modern RISC CPUs are clocked at 66-200 MHz, while DRAM access times hover at 60-80 ns. The performance gap (the "memory wall") means a cache miss now costs tens of instruction-issue slots on a superscalar CPU. Consequently, the UNIX kernel's data structures (process table, buffer cache, vnode/inode tables) must be arranged for L1/L2 cache locality.
The traditional BSD scheduler (O(N) priority recalculation every second) is fatal on a 16-CPU system. The 4.4BSD-Lite scheduler, while improved, still requires a global lock on the run queue.
UNIX for Modern Architectures: Scalability, SMP, and the Post-RISC Era (1994)
The traditional UNIX buffer cache—a pool of memory pages used to cache disk blocks—is obsolete on modern architectures for two reasons. First, the virtual memory system can now page directly from the filesystem (using mmap() and clustered pageins). Second, on SMP systems, the buffer cache lock becomes a global bottleneck.
UNIX in 1994 is like a 1960s muscle car with a new fuel-injected engine: powerful but dangerously unstable. The transition to fine-grained locking, 64-bit cleanliness, and interrupt affinity is painful. Many vendors will fail (NeXT, Apollo, perhaps even SVR4 itself). The survivors will be those who treat the kernel not as a monolithic program but as a concurrent data structure problem.