In the 1960s, Gordon Moore, one of the co-founders of Intel, noticed that integrated circuits were becoming more complex at an exponential rate. He predicted that this growth would continue - that the number of transistors on a circuit would double every two years. This prediction came to be called Moore's Law. Although Moore was focused on the number of transistors on a chip, others have expanded his idea into a prediction for overall performance increases. The combined benefits of increased complexity and speed in chips have resulted in processing power doubling roughly every 18 months. Thus the term "Moore's Law" is also used to refer to this idea that overall processing power doubles every year and a half.

Based on this idea that processing power doubles every 1.5 years, a processor that comes out today is roughly 4x more powerful than one from 3 years ago and 16x more powerful than one from six years ago. (Which helps explain why computers need to be upgraded so frequently.)

Years           1.5   3     4.5   6     7.5   ...   15
Doublings       1     2     3     4     5     ...   10
Relative Power  x2    x4    x8    x16   x32   ...   x1024

This continuous doubling and redoubling of computing power in chips has held steady for the past 40+ years. To put this kind of growth into perspective: if Moore's Law applied to the air travel industry, a flight from New York to Paris that took 7 hours and cost $900 in 1978 would now require about 1/10th of a second and cost under a penny.

Transistor count of various processors over the past 40 years. Note that the y-axis is a logarithmic scale - the straight line represents exponential growth.

Unfortunately, the "free ride" of increased power that computer programmers and users have enjoyed has hit some speed bumps. Part of the problem is the difficulty of squeezing transistors into ever-smaller spaces. Features on current chips occupy ~20 nanometers, a span of fewer than 100 atoms; before long, the size of atoms themselves will become a barrier to making chips smaller. Equally importantly, making chips run faster requires more power. Power running through a chip produces waste heat that must be dissipated - only so much power can be used (and thus heat generated) before a chip becomes uncontrollably hot. The image below shows the path chip designers were on in the early 2000s... an unsustainable path in terms of how much power draw was being packed into ever-smaller spaces.
Projected power density growth through the early 2000s. Red dots show the predicted path.
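The doubling arithmetic in the table above is easy to reproduce. Here is a minimal sketch in Python (the function name is ours; the 18-month doubling period is the rule of thumb from the discussion above):

    # A minimal sketch of the doubling rule of thumb described above.
    # Assumes one doubling of processing power every 1.5 years (18 months).
    def relative_power(years, doubling_period=1.5):
        """Power multiple after `years`, under the 18-month doubling rule."""
        return 2 ** (years / doubling_period)

    for years in (1.5, 3, 4.5, 6, 7.5, 15):
        print(f"{years:>4} years -> x{relative_power(years):.0f}")
    # prints x2, x4, x8, x16, x32, x1024 -- matching the table above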
The slowing of Moore's law has prompted many to ask, "Is Moore's law dead?" It is not. Moore's law is still delivering exponential improvements, but at a slower pace. The pace of technology innovation is NOT slowing down, however. Rather, the explosion of hyperconnectivity, big data, and artificial intelligence applications has increased the pace of innovation and the need for "Moore's law-style" improvements in delivered technology. For many years, scale complexity drove Moore's law and the semiconductor industry's exponential technology growth. As the ability to scale a single chip slows, the industry is finding other methods of innovation to maintain exponential growth. This new design trend is driven by systemic complexity. Some aspects of this new approach to design have been dubbed "more than Moore." The term refers primarily to 2.5D and 3D integration techniques, but the complete landscape is far bigger and presents the opportunity for higher impact. At the 2021 SNUG World conference of worldwide Synopsys Users Group members, the chairman and co-CEO of Synopsys, Aart de Geus, presented a keynote address in which he observed that Moore's law is now blending with new innovations that leverage systemic complexity. He coined the term SysMoore as shorthand for this new design paradigm. These trends and the resulting terminology are summarized below. The SysMoore era will fuel semiconductor innovation for the foreseeable future, and with it comes a wide range of design challenges that must be addressed.

A semi-log plot of transistor counts for microprocessors against dates of introduction, nearly doubling every two years.
Moore's law is the observation that the number of transistors in a dense integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend: rather than a law of physics, it is an empirical relationship linked to gains from experience in production.

The observation is named after Gordon Moore, the co-founder of Fairchild Semiconductor and Intel (and former CEO of the latter), who in 1965 posited a doubling every year in the number of components per integrated circuit,[a] and projected this rate of growth would continue for at least another decade. In 1975, looking forward to the next decade, he revised the forecast to doubling every two years, a compound annual growth rate (CAGR) of 41%. While Moore did not use empirical evidence in forecasting that the historical trend would continue, his prediction has held since 1975 and has since become known as a "law".

Moore's prediction has been used in the semiconductor industry to guide long-term planning and to set targets for research and development, thus functioning to some extent as a self-fulfilling prophecy. Advancements in digital electronics, such as the reduction in quality-adjusted microprocessor prices, the increase in memory capacity (RAM and flash), the improvement of sensors, and even the number and size of pixels in digital cameras, are strongly linked to Moore's law. These ongoing changes in digital electronics have been a driving force of technological and social change, productivity, and economic growth.

Industry experts have not reached a consensus on exactly when Moore's law will cease to apply. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, slightly below the pace predicted by Moore's law.

History

Gordon Moore in 2004

In 1959, Douglas Engelbart studied the projected downscaling of integrated circuit (IC) size, publishing his results in the article "Microelectronics, and the Art of Similitude".[2][3][4] Engelbart presented his findings at the 1960 International Solid-State Circuits Conference, where Moore was present in the audience.[5] In 1965, Gordon Moore, who at the time was working as the director of research and development at Fairchild Semiconductor, was asked to contribute to the thirty-fifth anniversary issue of Electronics magazine with a prediction on the future of the semiconductor components industry over the next ten years. His response was a brief article entitled "Cramming more components onto integrated circuits".[1][6][b] Within his editorial, he speculated that by 1975 it would be possible to contain as many as 65,000 components on a single quarter-square-inch (~1.6 square-centimeter) semiconductor.
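Returning to the 41% CAGR quoted above: the figure follows directly from the doubling period. As a quick worked check (our own arithmetic, not from the source), a growth rate r that doubles a quantity in two years satisfies

    \[
      (1 + r)^2 = 2
      \quad\Longrightarrow\quad
      r = 2^{1/2} - 1 = \sqrt{2} - 1 \approx 0.414 \approx 41\%.
    \]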
Moore posited a log-linear relationship between device complexity (higher circuit density at reduced cost) and time.[9][10] In a 2015 interview, Moore noted of the 1965 article: "...I just did a wild extrapolation saying it's going to continue to double every year for the next 10 years."[11] One historian of the law cites Stigler's law of eponymy, noting that the regular doubling of components was already known to many working in the field.[10]

In 1974, Robert H. Dennard at IBM recognized the rapid MOSFET scaling technology and formulated what became known as Dennard scaling, which holds that as MOS transistors get smaller, their power density stays constant, so that power use remains in proportion with area.[12][13] Evidence from the semiconductor industry shows that this relationship broke down in the mid-2000s.[14]

At the 1975 IEEE International Electron Devices Meeting, Moore revised his forecast rate,[15][16] predicting semiconductor complexity would continue to double annually until about 1980, after which it would decrease to a rate of doubling approximately every two years.[16][17][18] He outlined several contributing factors for this exponential behavior.[9][10]
Shortly after 1975, Caltech professor Carver Mead popularized the term "Moore's law".[19][20] Moore's law eventually came to be widely accepted as a goal for the semiconductor industry, and it was cited by competitive semiconductor manufacturers as they strove to increase processing power. Moore viewed his eponymous law as surprising and optimistic: "Moore's law is a violation of Murphy's law. Everything gets better and better."[21] The observation was even seen as a self-fulfilling prophecy.[22][23]

The doubling period is often misquoted as 18 months because of a prediction by Moore's colleague, Intel executive David House. In 1975, House noted that Moore's revised law of doubling transistor count every 2 years in turn implied that computer chip performance would roughly double every 18 months[24] (with no increase in power consumption).[25] Mathematically, Moore's law predicted that transistor count would double every 2 years due to shrinking transistor dimensions and other improvements. As a consequence of shrinking dimensions, Dennard scaling predicted that power consumption per unit area would remain constant. Combining these effects, David House deduced that computer chip performance would roughly double every 18 months. Also due to Dennard scaling, this increased performance would not be accompanied by increased power; in other words, the energy efficiency of silicon-based computer chips roughly doubles every 18 months. Dennard scaling ended in the 2000s.[14] Koomey later showed that a similar rate of efficiency improvement predated silicon chips and Moore's law, for technologies such as vacuum tubes.

An Osborne Executive portable computer, from 1982, with a Zilog Z80 4 MHz CPU, and a 2007 Apple iPhone with a 412 MHz ARM11 CPU; the Executive has 100 times the weight, almost 500 times the volume, approximately 10 times the inflation-adjusted cost, and 1/103rd the clock frequency of the smartphone.

Microprocessor architects report that since around 2010, semiconductor advancement has slowed industry-wide below the pace predicted by Moore's law.[14] Brian Krzanich, the former CEO of Intel, cited Moore's 1975 revision as a precedent for the current deceleration, which results from technical challenges and is "a natural part of the history of Moore's law".[26][27][28] The rate of improvement in physical dimensions known as Dennard scaling also ended in the mid-2000s. As a result, much of the semiconductor industry has shifted its focus to the needs of major computing applications rather than semiconductor scaling.[22][29][14] Nevertheless, leading semiconductor manufacturers TSMC and Samsung Electronics have claimed to keep pace with Moore's law[30][31][32][33][34][35] with 10 nm and 7 nm nodes in mass production[30][31] and 5 nm nodes in risk production as of 2019.[36][37]

Moore's second law

As the cost of computer power to the consumer falls, the cost for producers to fulfill Moore's law follows an opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips.
Rising manufacturing costs are an important consideration for the sustaining of Moore's law.[38] This led to the formulation of Moore's second law, also called Rock's law, which is that the capital cost of a semiconductor fabrication plant also increases exponentially over time.[39][40]

Major enabling factors

The trend of MOSFET scaling for NAND flash memory allows the doubling of floating-gate MOSFET components manufactured in the same wafer area in less than 18 months.

Numerous innovations by scientists and engineers have sustained Moore's law since the beginning of the IC era. Some of the key innovations are listed below, as examples of breakthroughs that have advanced integrated circuit and semiconductor device fabrication technology, allowing transistor counts to grow by more than seven orders of magnitude in less than five decades.
Computer industry technology road maps predicted in 2001 that Moore's law would continue for several generations of semiconductor chips.[64]

Recent trends

A simulation of electron density as gate voltage (Vg) varies in a nanowire MOSFET. The threshold voltage is around 0.45 V. Nanowire MOSFETs lie toward the end of the ITRS road map for scaling devices below 10 nm gate lengths.

One of the key challenges of engineering future nanoscale transistors is the design of gates. As device dimensions shrink, controlling the current flow in the thin channel becomes more difficult. Modern nanoscale transistors typically take the form of multi-gate MOSFETs, with the FinFET being the most common nanoscale transistor. The FinFET has gate dielectric on three sides of the channel. In comparison, the gate-all-around MOSFET (GAAFET) structure has even better gate control.
Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, below the pace predicted by Moore's law.[14] Brian Krzanich, the former CEO of Intel, announced, "Our cadence today is closer to two and a half years than two."[96] Intel stated in 2015 that improvements in MOSFET devices have slowed, starting at the 22 nm feature width around 2012, and continuing at 14 nm.[97]

The physical limits to transistor scaling have been reached due to source-to-drain leakage, limited gate metals, and limited options for channel material. Other approaches are being investigated that do not rely on physical scaling. These include the spin state of the electron (spintronics), tunnel junctions, and advanced confinement of channel materials via nanowire geometry.[98] Spin-based logic and memory options are being developed actively in labs.[99][100]

Alternative materials research

The vast majority of current transistors on ICs are composed principally of doped silicon and its alloys. As silicon is fabricated into single-nanometer transistors, short-channel effects adversely change the desired material properties of silicon as a functional transistor. Below are several non-silicon substitutes for the fabrication of small-nanometer transistors.

One proposed material is indium gallium arsenide, or InGaAs. Compared to their silicon and germanium counterparts, InGaAs transistors are more promising for future high-speed, low-power logic applications. Because of the intrinsic characteristics of III-V compound semiconductors, quantum well and tunnel effect transistors based on InGaAs have been proposed as alternatives to more traditional MOSFET designs.
Biological computing research shows that biological material has superior information density and energy efficiency compared to silicon-based computing.[108] Various forms of graphene are also being studied for graphene electronics; for example, graphene nanoribbon transistors have shown great promise since their appearance in publications in 2008. (Bulk graphene has a band gap of zero and thus cannot be used in transistors because of its constant conductivity, an inability to turn off. The zigzag edges of the nanoribbons introduce localized energy states in the conduction and valence bands and thus a bandgap that enables switching when fabricated as a transistor. As an example, a typical GNR of width 10 nm has a desirable bandgap energy of 0.4 eV.[109][110]) More research will need to be performed, however, on sub-50 nm graphene layers, as their resistivity increases and electron mobility therefore decreases.[109]

Forecasts and roadmaps

In April 2005, Gordon Moore stated in an interview that the projection cannot be sustained indefinitely: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens." He also noted that transistors would eventually reach the limits of miniaturization at atomic levels.
A 2015 survey[112] of fundamental limits projected that Moore's law would not continue past 2055. Nvidia CEO Jensen Huang declared Moore's law dead in 2022,[113] although this statement was made in the context of justifying increasing chip prices. Several days later, Intel CEO Pat Gelsinger declared that Moore's law is not dead.[114]

Consequences

Digital electronics have contributed to world economic growth in the late twentieth and early twenty-first centuries.[115] The primary driving force of economic growth is the growth of productivity,[116] and Moore's law factors into productivity. Moore (1995) expected that "the rate of technological progress is going to be controlled from financial realities".[117] The reverse could and did occur around the late 1990s, however, with economists reporting that "Productivity growth is the key economic indicator of innovation."[118] Moore's law describes a driving force of technological and social change, productivity, and economic growth.[119][120][116]

An acceleration in the rate of semiconductor progress contributed to a surge in U.S. productivity growth,[121][122][123] which reached 3.4% per year in 1997–2004, outpacing the 1.6% per year during both 1972–1996 and 2005–2013.[124] As economist Richard G. Anderson notes, "Numerous studies have traced the cause of the productivity acceleration to technological innovations in the production of semiconductors that sharply reduced the prices of such components and of the products that contain them (as well as expanding the capabilities of such products)."[125]

The primary negative implication of Moore's law is that obsolescence pushes society up against the Limits to Growth. As technologies continue to rapidly "improve", they render predecessor technologies obsolete. In situations in which security and survivability of hardware or data are paramount, or in which resources are limited, rapid obsolescence often poses obstacles to smooth or continued operations.[126] Because of the intensive resource footprint and toxic materials used in the production of computers, obsolescence leads to serious harmful environmental impacts. Americans throw out 400,000 cell phones every day,[127] and this high level of obsolescence appears to companies as an opportunity to generate regular sales of expensive new equipment rather than encouraging customers to retain one device for a longer period, leading the industry to use planned obsolescence as a profit centre.[128]

Intel transistor gate length trend – transistor scaling has slowed down significantly at advanced (smaller) nodes.

An alternative source of improved performance is in microarchitecture techniques exploiting the growth of available transistor count. Out-of-order execution and on-chip caching and prefetching reduce the memory latency bottleneck at the expense of using more transistors and increasing processor complexity. These increases are described empirically by Pollack's Rule, which states that performance increases due to microarchitecture techniques approximate the square root of the complexity (number of transistors or the area) of a processor.[129] For years, processor makers delivered increases in clock rates and instruction-level parallelism, so that single-threaded code executed faster on newer processors with no modification.[130] Now, to manage CPU power dissipation, processor makers favor multi-core chip designs, and software has to be written in a multi-threaded manner to take full advantage of the hardware.
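A minimal numeric sketch of Pollack's Rule as stated above; the 1.45 example anticipates the transistor-to-performance figures quoted in the next paragraph (the function name is ours):

    import math

    def pollack_speedup(transistor_ratio):
        """Pollack's Rule: performance grows roughly with the square root
        of processor complexity (transistor count or area)."""
        return math.sqrt(transistor_ratio)

    # A 45% increase in transistors buys only ~20% more performance,
    # consistent with the 10-20% range cited below.
    print(f"x{pollack_speedup(1.45):.2f}")  # x1.20
    # Doubling the transistor budget buys ~41% more performance.
    print(f"x{pollack_speedup(2.0):.2f}")   # x1.41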
Many multi-threaded development paradigms introduce overhead and do not see a linear increase in speed with the number of processors. This is particularly true while accessing shared or dependent resources, due to lock contention. This effect becomes more noticeable as the number of processors increases. There are cases where a roughly 45% increase in processor transistors has translated to roughly a 10–20% increase in processing power.[131] On the other hand, manufacturers are adding specialized processing units to deal with features such as graphics, video, and cryptography. For one example, Intel's Parallel JavaScript extension not only adds support for multiple cores, but also for the other non-general processing features of their chips, as part of the migration in client-side scripting toward HTML5.[132]

Moore's law has affected the performance of other technologies significantly: Michael S. Malone wrote of a Moore's War following the apparent success of shock and awe in the early days of the Iraq War. Progress in the development of guided weapons depends on electronic technology.[133] Improvements in circuit density and low-power operation associated with Moore's law also have contributed to the development of technologies including mobile telephones[134] and 3-D printing.[135]

Other formulations and similar observations

Several measures of digital technology are improving at exponential rates related to Moore's law, including the size, cost, density, and speed of components. Moore wrote only about the density of components, "a component being a transistor, resistor, diode or capacitor",[117] at minimum cost.

Transistors per integrated circuit – The most popular formulation is of the doubling of the number of transistors on ICs every two years. At the end of the 1970s, Moore's law became known as the limit for the number of transistors on the most complex chips. The graph at the top shows this trend holds true today. As of 2017, the commercially available processor possessing the highest number of transistors is the 48-core Centriq with over 18 billion transistors.[136]

Density at minimum cost per transistor – This is the formulation given in Moore's 1965 paper.[1] It is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is the lowest.[137] As more transistors are put on a chip, the cost to make each transistor decreases, but the chance that the chip will not work due to a defect increases. In 1965, Moore examined the density of transistors at which cost is minimized, and observed that, as transistors were made smaller through advances in photolithography, this number would increase at "a rate of roughly a factor of two per year".[1]

Dennard scaling – This posits that power usage would decrease in proportion to the area of transistors (both voltage and current being proportional to length). Combined with Moore's law, performance per watt would grow at roughly the same rate as transistor density, doubling every 1–2 years. According to Dennard scaling, transistor dimensions would be scaled by 30% (0.7x) every technology generation, thus reducing their area by 50%. This would reduce the delay by 30% (0.7x) and therefore increase operating frequency by about 40% (1.4x).
Finally, to keep the electric field constant, voltage would be reduced by 30%, reducing energy by 65% and power (at 1.4x frequency) by 50%.[c] Therefore, in every technology generation transistor density would double, the circuit becomes 40% faster, while power consumption (with twice the number of transistors) stays the same.[138] (A worked version of this arithmetic appears after this list.) Dennard scaling came to an end in 2005–2010, due to leakage currents.[14]

The exponential processor transistor growth predicted by Moore does not always translate into exponentially greater practical CPU performance. Since around 2005–2007, Dennard scaling has ended, so even though Moore's law continued for several years after that, it has not yielded dividends in improved performance.[12][139] The primary reason cited for the breakdown is that at small sizes, current leakage poses greater challenges and also causes the chip to heat up, which creates a threat of thermal runaway and therefore further increases energy costs.[12][139][14] The breakdown of Dennard scaling prompted a greater focus on multicore processors, but the gains offered by switching to more cores are lower than the gains that would be achieved had Dennard scaling continued.[140][141] In another departure from Dennard scaling, Intel microprocessors adopted a non-planar tri-gate FinFET at 22 nm in 2012 that is faster and consumes less power than a conventional planar transistor.[142] The rate of performance improvement for single-core microprocessors has slowed significantly.[143] Single-core performance was improving by 52% per year in 1986–2003 and 23% per year in 2003–2011, but slowed to just seven percent per year in 2011–2018.[143]

Quality-adjusted price of IT equipment – The price of information technology (IT), computers and peripheral equipment, adjusted for quality and inflation, declined 16% per year on average over the five decades from 1959 to 2009.[144][145] The pace accelerated, however, to 23% per year in 1995–1999, triggered by faster IT innovation,[118] and later slowed to 2% per year in 2010–2013.[144][146] While quality-adjusted microprocessor price improvement continues,[147] the rate of improvement likewise varies, and is not linear on a log scale. Microprocessor price improvement accelerated during the late 1990s, reaching 60% per year (halving every nine months) versus the typical 30% improvement rate (halving every two years) during the years earlier and later.[148][149] Laptop microprocessors in particular improved 25–35% per year in 2004–2010, and slowed to 15–25% per year in 2010–2013.[150] The number of transistors per chip cannot explain quality-adjusted microprocessor prices fully.[148][151][152] Moore's 1995 paper does not limit Moore's law to strict linearity or to transistor count: "The definition of 'Moore's Law' has come to refer to almost anything related to the semiconductor industry that on a semi-log plot approximates a straight line. I hesitate to review its origins and by doing so restrict its definition."[117]

Hard disk drive areal density – A similar prediction (sometimes called Kryder's law) was made in 2005 for hard disk drive areal density.[153] The prediction was later viewed as over-optimistic.
Several decades of rapid progress in areal density slowed around 2010, from 30–100% per year to 10–15% per year, because of noise related to smaller grain size of the disk media, thermal stability, and writability using available magnetic fields.[154][155]

Fiber-optic capacity – The number of bits per second that can be sent down an optical fiber increases exponentially, faster than Moore's law; this is known as Keck's law, in honor of Donald Keck.[156]

Network capacity – According to Gerald Butters,[157][158] the former head of Lucent's Optical Networking Group at Bell Labs, there is another version, called Butters' Law of Photonics,[159] a formulation that deliberately parallels Moore's law. Butters' law says that the amount of data coming out of an optical fiber is doubling every nine months.[160] Thus, the cost of transmitting a bit over an optical network decreases by half every nine months. The availability of wavelength-division multiplexing (sometimes called WDM) increased the capacity that could be placed on a single fiber by as much as a factor of 100. Optical networking and dense wavelength-division multiplexing (DWDM) are rapidly bringing down the cost of networking, and further progress seems assured. As a result, the wholesale price of data traffic collapsed in the dot-com bubble. Nielsen's Law says that the bandwidth available to users increases by 50% annually.[161]

Pixels per dollar – Similarly, Barry Hendy of Kodak Australia has plotted pixels per dollar as a basic measure of value for a digital camera, demonstrating the historical linearity (on a log scale) of this market and the opportunity to predict the future trend of digital camera price, LCD and LED screens, and resolution.[162][163][164][165]

The great Moore's law compensator (TGMLC), also known as Wirth's law – Generally referred to as software bloat, this is the principle that successive generations of computer software increase in size and complexity, thereby offsetting the performance gains predicted by Moore's law. In a 2008 article in InfoWorld, Randall C. Kennedy,[166] formerly of Intel, introduced this term using successive versions of Microsoft Office between the year 2000 and 2007 as his premise. Despite the gains in computational performance during this time period according to Moore's law, Office 2007 performed the same task at half the speed on a prototypical year 2007 computer as compared to Office 2000 on a year 2000 computer.

Library expansion – Calculated in 1945 by Fremont Rider to double in capacity every 16 years, if sufficient space were made available.[167] He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on demand for library patrons or other institutions. He did not foresee the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media. Automated, potentially lossless digital technologies allowed vast increases in the rapidity of information growth in an era that now sometimes is called the Information Age.
Carlson curve – A term coined by The Economist[168] to describe the biotechnological equivalent of Moore's law, named after author Rob Carlson.[169] Carlson accurately predicted that the doubling time of DNA sequencing technologies (measured by cost and performance) would be at least as fast as Moore's law.[170] Carlson curves illustrate the rapid (in some cases hyperexponential) decreases in cost, and increases in performance, of a variety of technologies, including DNA sequencing, DNA synthesis, and a range of physical and computational tools used in protein expression and in determining protein structures.

Eroom's law – A pharmaceutical drug development observation whose name is "Moore" spelled backwards, chosen deliberately to contrast it with the exponential advancement of other technologies (such as transistors) over time. It states that the cost of developing a new drug roughly doubles every nine years.

Experience curve effects – Each doubling of the cumulative production of virtually any product or service is accompanied by an approximately constant percentage reduction in the unit cost. The acknowledged first documented qualitative description of this dates from 1885.[171][172] A power curve was used to describe this phenomenon in a 1936 discussion of the cost of airplanes.[173]

Edholm's law – Phil Edholm observed that the bandwidth of telecommunication networks (including the Internet) is doubling every 18 months.[174] The bandwidth of online communication networks has risen from bits per second to terabits per second. The rapid rise in online bandwidth is largely due to the same MOSFET scaling that enables Moore's law, as telecommunications networks are built from MOSFETs.[175]

Haitz's law – Predicts that the brightness of LEDs increases as their manufacturing cost goes down.

Swanson's law – The observation that the price of solar photovoltaic modules tends to drop 20 percent for every doubling of cumulative shipped volume. At present rates, costs go down 75% about every 10 years.
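As promised in the Dennard scaling entry above, here is that arithmetic worked through explicitly. This is our own restatement of the scaling factors quoted in the list (linear dimensions and voltage both scaled by k = 0.7 per generation), not an additional source:

    % Dennard scaling over one technology generation, with k = 0.7.
    \begin{align*}
      \text{area per transistor} &\propto k^2 = 0.49
        && \text{(density doubles)}\\
      \text{gate delay} &\propto k = 0.7
        && \Rightarrow\ f \propto 1/k \approx 1.4\\
      \text{switching energy} &\propto C V^2 \propto k \cdot k^2 \approx 0.34
        && \text{(a 65\% reduction)}\\
      \text{power per transistor} &\propto \text{energy} \times f
        \approx 0.34 \times 1.4 \approx 0.5
        && \text{(a 50\% reduction)}
    \end{align*}
    % With twice the transistors per chip, total chip power is unchanged,
    % while per-chip throughput grows by 2 x 1.4 = 2.8x per two-year
    % generation, i.e. a doubling every 2 ln(2)/ln(2.8) ~ 1.35 years --
    % the basis of House's roughly-18-month performance-doubling figure.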
7 nm process
In semiconductor manufacturing, the International Technology Roadmap for Semiconductors defines the 7 nm process as the MOSFET technology node following the 10 nm node. It is based on FinFET (fin field-effect transistor) technology, a type of multi-gate MOSFET technology.

Taiwan Semiconductor Manufacturing Company (TSMC) began production of 256 Mbit SRAM memory chips using a 7 nm process called N7 in June 2016,[1] before Samsung began mass production of their 7 nm process, called 7LPP, in 2018.[2] The first mainstream 7 nm mobile processor intended for mass-market use, the Apple A12 Bionic, was released at Apple's September 2018 event.[3] Although Huawei announced its own 7 nm processor, the Kirin 980, on August 31, 2018, before the Apple A12 Bionic, the A12 Bionic reached public, mass-market consumers first. Both chips are manufactured by TSMC.[4]

AMD has released their "Rome" (EPYC 2) processors for servers and datacenters, which are based on TSMC's N7 node[5] and feature up to 64 cores and 128 threads. They have also released their "Matisse" consumer desktop processors with up to 16 cores and 32 threads. However, the I/O die on the Rome multi-chip module (MCM) is fabricated with GlobalFoundries' 14 nm (14HP) process, while Matisse's I/O die uses GlobalFoundries' 12 nm (12LP+) process. The Radeon RX 5000 series is also based on TSMC's N7 process.

Since 2009, however, "node" has become a commercial name for marketing purposes[6] that indicates new generations of process technologies, without any relation to gate length, metal pitch, or gate pitch.[7][8][9] TSMC and Samsung's 10 nm (10LPE) processes are somewhere between Intel's 14 nm and 10 nm processes in transistor density.

History
Technology demos

7 nm scale MOSFETs were first demonstrated by researchers in the early 2000s. In 2002, an IBM research team including Bruce Doris, Omer Dokumaci, Meikei Ieong and Anda Mocuta fabricated a 6 nm silicon-on-insulator (SOI) MOSFET.[10][11] In 2003, NEC's research team led by Hitoshi Wakabayashi and Shigeharu Yamagami fabricated a 5 nm MOSFET.[12][13]

In July 2015, IBM announced that they had built the first functional transistors with 7 nm technology, using a silicon-germanium process.[14][15][16][17] In June 2016, TSMC had produced 256 Mbit SRAM memory cells at their 7 nm process,[1] with a cell area of 0.027 square micrometers (≈550 F²; see the check at the end of this section), with reasonable risk-production yields.[18]

Expected commercialization and technologies

In April 2016, TSMC announced that 7 nm trial production would begin in the first half of 2017.[19] In April 2017, TSMC began risk production of 256 Mbit SRAM memory chips using a 7 nm (N7FF+) process,[1] with extreme ultraviolet lithography (EUV).[20] TSMC's 7 nm production plans, as of early 2017, were to use deep ultraviolet (DUV) immersion lithography initially on this process node (N7FF), and to transition from risk to commercial volume manufacturing from Q2 2017 to Q2 2018. Their later-generation 7 nm (N7FF+) production was planned to use EUV multiple patterning, with an estimated transition from risk to volume manufacturing between 2018 and 2019.[21]

In September 2016, GlobalFoundries announced trial production in the second half of 2017 and risk production in early 2018, with test chips already running.[22]

In February 2017, Intel announced that Fab 42 in Chandler, Arizona, would produce microprocessors using a 7 nm (Intel 4[23]) manufacturing process.[24] The company has not published any expected values for feature lengths at this process node.

In April 2018, TSMC announced volume production of 7 nm (CLN7FF, N7) chips. In June 2018, the company announced mass-production ramp-up.[2]

In May 2018, Samsung announced production of 7 nm (7LPP) chips that year. ASML Holding NV is their main supplier of EUV lithography machines.[25]

In August 2018, GlobalFoundries announced it was stopping development of 7 nm chips, citing cost.[26]

On October 28, 2018, Samsung announced that their second-generation 7 nm process (7LPP) had entered risk production and should enter mass production in 2019. On January 17, 2019, on the Q4 2018 earnings call, TSMC mentioned that different customers would have "different flavors" of second-generation 7 nm.[27]

On April 16, 2019, TSMC announced their 6 nm process, called N6 (CLN6FF), which was expected to appear in mass-market products from 2021.[28] N6 uses EUVL in up to 5 layers, compared to up to 4 layers in their N7+ process.[29]

On July 28, 2019, TSMC announced their second-generation 7 nm process, called N7P, which is DUV-based like their N7 process.[30] Since N7P is fully IP-compatible with the original 7 nm, while N7+ (which uses EUV) is not, N7+ (announced earlier as '7 nm+') is a separate process from '7 nm'. N6 ('6 nm'), another EUV-based process, was planned to be released later than even TSMC's 5 nm (N5) process, with IP compatibility with N7.
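Returning to the cell-area figure quoted in the technology-demos paragraph above: as a quick check (our own arithmetic), normalizing the 0.027 µm² SRAM cell by the square of the nominal 7 nm feature size F gives

    \[
      \frac{0.027\,\mu\mathrm{m}^2}{(7\,\mathrm{nm})^2}
      = \frac{27\,000\,\mathrm{nm}^2}{49\,\mathrm{nm}^2}
      \approx 551 \approx 550\,F^2 .
    \]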
At their Q1 2019 earnings call, TSMC reiterated their Q4 2018 statement[27] that N7+ would generate less than $1 billion TWD in revenue in 2019.[31]

On October 5, 2019, AMD announced their EPYC roadmap, featuring Milan chips built using TSMC's N7+ process.[32] On October 7, 2019, TSMC announced they had started delivering N7+ products to market in high volume.[33]

On July 26, 2021, Intel announced their new manufacturing roadmap, renaming all of their future process nodes.[23] Intel's 10 nm Enhanced SuperFin (10ESF), which is roughly equivalent to TSMC's N7 process, would now be known as Intel 7, while their earlier 7 nm process would now be called Intel 4.[23][34] This means that their first processors based on the new 7 nm would start shipping by the second half of 2022. Intel had earlier announced that it would launch 7 nm processors in 2023.[35]

Technology commercialization

In June 2018, AMD announced 7 nm Radeon Instinct GPUs launching in the second half of 2018.[36] In August 2018, the company confirmed the release of the GPUs.[37]

On August 21, 2018, Huawei announced their HiSilicon Kirin 980 SoC, to be used in their Huawei Mate 20 and Mate 20 Pro, built using TSMC's 7 nm (N7) process.

On September 12, 2018, Apple announced their A12 Bionic chip, used in the iPhone XS and iPhone XR, built using TSMC's 7 nm (N7) process. The A12 processor became the first 7 nm chip for mass-market use, as it released before the Huawei Mate 20.[38][39]

On October 30, 2018, Apple announced their A12X Bionic chip, used in the iPad Pro, built using TSMC's 7 nm (N7) process.[40]

On December 4, 2018, Qualcomm announced their Snapdragon 855 and 8cx, built using TSMC's 7 nm (N7) process.[41] The first mass product featuring the Snapdragon 855 was the Lenovo Z5 Pro GT, announced on December 18, 2018.[42]

On May 29, 2019, MediaTek announced their 5G SoC, built using a TSMC 7 nm process.[43]

On July 7, 2019, AMD officially launched their Ryzen 3000 series of central processing units, based on the TSMC 7 nm process and the Zen 2 microarchitecture.

On August 6, 2019, Samsung announced their Exynos 9825 SoC, the first chip built using their 7LPP process. The Exynos 9825 is the first mass-market chip built featuring EUVL.[44]

On September 6, 2019, Huawei announced their HiSilicon Kirin 990 4G and 990 5G SoCs, built using TSMC's N7 and N7+ processes.[45]

On September 10, 2019, Apple announced their A13 Bionic chip, used in the iPhone 11 and iPhone 11 Pro, built using TSMC's second-generation N7P process.[46]

7 nm (N7 nodes) manufacturing made up 36% of TSMC's revenue in the second quarter of 2020.[47] On August 17, 2020, IBM announced their Power10 processor.[46]

On July 26, 2021, Intel announced that their Alder Lake processors would be manufactured using their newly rebranded Intel 7 process, previously known as 10 nm Enhanced SuperFin.[23] These processors were to be released in the second half of 2021. The company earlier confirmed a 7 nm, now called Intel 4,[23] microprocessor family called Meteor Lake, to be released in 2023.[48][49]

7 nm patterning difficulties

Pitch splitting issues. Successive litho-etch patterning is subject to overlay errors as well as the CD errors from different exposures.

Spacer patterning issues. Spacer patterning has excellent CD control for features directly patterned by the spacer, but the spaces between spacers may be split into core and gap populations.

Overlay error impact on line cut. An overlay error on a cut hole exposure could distort the line ends (top) or infringe on an adjacent line (bottom).
Two-bar EUV patterning issues. In EUV lithography, a pair of features may not have both features in focus at the same time; one will have a different size from the other, and both will shift differently through focus as well.

7 nm EUV stochastic failure probability. 7 nm features are expected to approach ~20 nm width. The probability of EUV stochastic failure is measurably high for the commonly applied dose of 30 mJ/cm².

The 7 nm foundry node is expected to utilize any of, or a combination of, the following patterning technologies: pitch splitting, self-aligned patterning, and EUV lithography. Each of these technologies carries significant challenges in critical dimension (CD) control as well as pattern placement, all involving neighboring features.

Pitch splitting

Pitch splitting involves splitting features that are too close together onto different masks, which are exposed successively, followed by litho-etch processing. Due to the use of different exposures, there is always the risk of overlay error between the two exposures, as well as different CDs resulting from the different exposures.

Spacer patterning

Spacer patterning involves depositing a layer onto pre-patterned features, then etching back to form spacers on the sidewalls of those features, referred to as core features. After removing the core features, the spacers are used as an etch mask to define trenches in the underlying layer. While the spacer CD control is generally excellent, the trench CD may fall into one of two populations, due to the two possibilities of being located where a core feature was located or in the remaining gap. This is known as 'pitch walking'.[50] Generally pitch = core CD + gap CD + 2 × spacer CD, but this does not guarantee core CD = gap CD. For FEOL features like gate or active area isolation (e.g., fins), the trench CD is not as critical as the spacer-defined CD, in which case spacer patterning is actually the preferred patterning approach. (A numeric sketch of this pitch bookkeeping appears after the process-offerings discussion below.)

When self-aligned quadruple patterning (SAQP) is used, a second spacer is utilized, replacing the first one. In this case, the core CD is replaced by core CD − 2 × (2nd spacer CD), and the gap CD is replaced by gap CD − 2 × (2nd spacer CD). Thus, some feature dimensions are strictly defined by the second spacer CD, while the remaining feature dimensions are defined by the core CD, core pitch, and the first and second spacer CDs. The core CD and core pitch are defined by conventional lithography, while the spacer CDs are independent of lithography. This is actually expected to have less variation than pitch splitting, where an additional exposure defines its own CD, both directly and through overlay.

Spacer-defined lines also require cutting. The cut spots may shift at exposure, resulting in distorted line ends or intrusions into adjacent lines. Self-aligned litho-etch-litho-etch (SALELE) has been implemented for 7 nm BEOL patterning.[51]

EUV lithography

Extreme ultraviolet lithography (also known as EUV or EUVL) is capable of resolving features below 20 nm in conventional lithography style. However, the 3D reflective nature of the EUV mask results in new anomalies in the imaging. One particular nuisance is the two-bar effect, where a pair of identical bar-shaped features do not focus identically. One feature is essentially in the 'shadow' of the other.
Consequently, the two features generally have different CDs which change through focus, and these features also shift position through focus.[52][53][54] This effect may be similar to what may be encountered with pitch splitting. A related issue is the difference of best focus among features of different pitches.[55]

EUV also has issues with reliably printing all features in a large population; some contacts may be completely missing or lines bridged. These are known as stochastic printing failures.[56][57] The defect level is on the order of 1,000/mm².[58]

The tip-to-tip gap is hard to control for EUV, largely due to the illumination constraint.[59] A separate exposure (or exposures) for cutting lines is preferred.

Attenuated phase-shift masks were used in production for the 90 nm node to provide adequate focus windows for arbitrarily pitched contacts at the ArF laser wavelength (193 nm),[60][61] whereas this resolution enhancement is not available for EUV.[62][63]

At the 2021 SPIE EUV Lithography conference, a TSMC customer reported that EUV contact yield was comparable to immersion multipatterning yield.[64]

Comparison with previous nodes

Due to these challenges, 7 nm poses unprecedented patterning difficulty in the back end of line (BEOL). The previous high-volume, long-lived foundry node (Samsung 10 nm, TSMC 16 nm) used pitch splitting for the tighter-pitch metal layers.[65][66][67]

Cycle time: immersion vs. EUV
Because immersion tools are presently faster, multipatterning is still used on most layers. On layers requiring immersion quad-patterning, layer-completion throughput with EUV is comparable; on the other layers, immersion is more productive at completing the layer even with multipatterning.

7 nm design rule management in volume production

The 7 nm metal patterning currently practiced by TSMC involves self-aligned double patterning (SADP) lines, with cuts inserted within a cell on a separate mask as needed to reduce cell height.[70] However, self-aligned quadruple patterning (SAQP) is used to form the fin, the most important factor in performance.[71] Design rule checks also allow via multi-patterning to be avoided, and provide enough clearance for cuts that only one cut mask is needed.[71]

7 nm process nodes and process offerings

The naming of process nodes by four different manufacturers (TSMC, Samsung, SMIC, Intel) is partially marketing-driven and not directly related to any measurable distance on a chip – for example, TSMC's 7 nm node was previously similar in some key dimensions to Intel's planned first-iteration 10 nm node, before Intel released further iterations culminating in "10 nm Enhanced SuperFin", which was later renamed "Intel 7" for marketing reasons.[72][73] Since EUV implementation at 7 nm is still limited, multipatterning still plays an important part in cost and yield; EUV adds extra considerations. The resolution for most critical layers is still determined by multiple patterning. For example, for Samsung's 7 nm, even with EUV single-patterned 36 nm pitch layers, 44 nm pitch layers would still be quadruple patterned.[74]
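As promised in the spacer-patterning section, here is a minimal sketch of the SADP/SAQP pitch bookkeeping. The functions restate the relations quoted there; the dimensions are purely illustrative and do not describe any real process:

    # Spacer-patterning pitch bookkeeping, restating the relations given in
    # the patterning section above. Dimensions in nm; values illustrative only.

    def sadp_pitch(core_cd, gap_cd, spacer_cd):
        """SADP: pitch = core CD + gap CD + 2 * spacer CD.
        Nothing guarantees core_cd == gap_cd; a mismatch is 'pitch walking'."""
        return core_cd + gap_cd + 2 * spacer_cd

    def saqp_trench_cds(core_cd, gap_cd, spacer2_cd):
        """SAQP: a second spacer replaces the first, so the core- and
        gap-derived trench CDs each shrink by twice the 2nd spacer CD."""
        return core_cd - 2 * spacer2_cd, gap_cd - 2 * spacer2_cd

    # Hypothetical SADP layer: 24 + 20 + 2*18 = 80 nm pitch.
    print(sadp_pitch(core_cd=24, gap_cd=20, spacer_cd=18))  # 80

    # Hypothetical SAQP step: an 8 nm second spacer splits the trenches
    # into 8 nm and 4 nm populations -- the two-population effect above.
    print(saqp_trench_cds(core_cd=24, gap_cd=20, spacer2_cd=8))  # (8, 4)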
GlobalFoundries' 7 nm 7LP (Leading Performance) process would have offered 40% higher performance, or 60%+ lower power, with a 2x scaling in density and a 30–45+% lower cost per die compared with its 14 nm process. The contacted poly pitch (CPP) would have been 56 nm and the minimum metal pitch (MMP) would have been 40 nm, produced with self-aligned double patterning (SADP). A 6T SRAM cell would have been 0.269 square microns in size. GlobalFoundries planned to eventually use EUV lithography in an improved process called 7LP+.[92] GlobalFoundries later stopped all 7 nm and beyond process development.[93]

Intel's new "Intel 7" process, previously known as 10 nm Enhanced SuperFin (10ESF), is based on its previous 10 nm node. The node features a 10–15% increase in performance per watt. Meanwhile, their old 7 nm process, now called "Intel 4", is expected to be released in 2023.[94] Few details about the Intel 4 node have been made public, although its transistor density has been estimated to be at least 202 million transistors per square millimeter.[23][95] As of 2020, Intel was experiencing problems with its Intel 4 process, to the point of outsourcing production of its Ponte Vecchio GPUs.[96][97]