Abstract - After decades of continuous scaling, further advancement of silicon microelectronics across the entire spectrum of computing applications is today limited by power dissipation. While the tradeoff between power and performance is well recognized, most recent studies focus on the extreme ends of this balance. By concentrating instead on an intermediate range, an ~8x improvement in power efficiency can be attained without system performance loss in parallelizable applications – those in which such efficiency is most critical. It is argued that power-efficient hardware is fundamentally limited by voltage scaling, which can be achieved only by blurring the boundaries between devices, circuits, and systems; it cannot be realized by addressing any one area alone. By simultaneously considering all three perspectives, the major issues involved in improving power efficiency in light of performance and area constraints are identified. Solutions for the critical elements of a practical computing system are discussed, including the underlying logic device, associated cache memory, off-chip interconnect, and power delivery system. Going forward, further power reduction may demand radical changes in device technologies and computer architecture; hence, a few such promising approaches are briefly considered.
Bio - Leland Chang received the B.S., M.S., and Ph.D. degrees in electrical engineering and computer sciences from the University of California, Berkeley, where his doctoral work focused on the FinFET and related thin-body device structures. He joined the IBM T. J. Watson Research Center in 2003, where he realized that embedded memory was just as important as (if not more so than) logic computation. These days, he splits his time between managing a technology group that dabbles all too often in circuits and systems and pursuing the never-ending quest for power efficiency in high-performance systems.