Most of the time, when we talk about the potential impact of next-generation technologies on future computers, we’re talking about transistor performance. This makes sense — transistor scaling is what Moore’s law covers, and improving transistor density and design is what drove the “better, faster, cheaper” mantra for nearly 40 years. But transistors aren’t the only area of CPU design that could benefit from dramatic improvements to underlying technology — and a team of researchers at Stanford believes it can address another critical problem that’s holding modern chips back, by building interconnects from a combination of copper and graphene rather than copper alone.
Every modern CPU is wired together via an extensive network of copper wires, dubbed “interconnects.” These tiny copper wires carry data across the processor and throughout the entire SoC. IBM and Motorola introduced copper interconnects in 1997, followed by Intel in 2000. At that point, Intel processors contained roughly 1km of copper interconnects per square centimeter. By 2017, 14nm chips contained roughly 10km of wiring in the same space.
The problem with relying solely on copper wiring is that you can’t scale interconnect wires the way you can scale transistor size. As wires shrink, the amount of current per unit of wire area (its current density) increases. Increased current density means increased resistance, and increased resistance means increased heat. The exact impact of this varies according to the processor’s design, its layout, and how long the wires are. Local interconnects don’t suffer much, because the distances are so short, but so-called global interconnects that connect different regions of the chip together can be substantially affected.
Making a wire smaller means you’re reducing the amount of metal that’s available for electrons to flow through. Imagine two pipes — one with a one foot diameter, and one with a 10 foot diameter. At any given flow rate (measured in gallons per minute), you have to move water through the smaller pipe at a higher velocity than through the larger one. This increases both friction within the pipe and the turbulence of the water flowing through it. In a wire, pushing the same amount of current through a small wire increases the resistance (and the excess heat) compared to a larger wire.
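The relationship above is the classical resistance formula, R = ρL/A. A minimal sketch (with hypothetical wire dimensions, not figures from the article) shows how shrinking a wire’s cross-section drives resistance up:

```python
# Illustrative sketch: classical wire resistance, R = rho * L / A.
# Dimensions below are hypothetical. Real nanoscale interconnects are even
# worse off, because grain-boundary and surface scattering raise copper's
# effective resistivity as wires shrink.

RHO_CU = 1.68e-8  # resistivity of bulk copper, ohm*m (room temperature)

def wire_resistance(length_m, width_m, height_m, rho=RHO_CU):
    """Resistance in ohms of a rectangular wire: R = rho * L / (w * h)."""
    return rho * length_m / (width_m * height_m)

# A 100 um global interconnect at two widths (square cross-section assumed):
r_wide = wire_resistance(100e-6, 50e-9, 50e-9)    # 50 nm-wide wire
r_narrow = wire_resistance(100e-6, 25e-9, 25e-9)  # 25 nm-wide wire

# Halving both width and height quarters the area, quadrupling resistance.
print(r_narrow / r_wide)  # roughly 4x
```

The same length of wire at half the width and height dissipates about four times the power for a given current, which is why global interconnects become a thermal and delay bottleneck as nodes shrink.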
Graphene-sheathed copper can also prevent copper from penetrating through the dielectric (insulating layer) and causing a breakdown. One of the major findings of Ling Li, the lead author on this latest paper, is that graphene can be directly integrated onto patterned Cu layers at temperatures below 400°C. This is a significant step forward, because previous methods of applying graphene to Cu wires were incompatible with traditional foundry BEOL (Back End Of Line) processing. The additional graphene bond helps prevent electromigration by creating a “pristine” interface between graphene and Cu, allowing current to flow through the graphene in addition to the copper wire.
Graphene is just 0.3nm thick per layer, compared to the industry-standard 2nm barrier walls of tantalum nitride. Composite wires have been shown to have just half the resistance of their non-graphene counterparts, which means we could see significant performance increases and power consumption decreases should this technology take off.
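The barrier-thickness difference matters because the diffusion barrier eats into the trench that would otherwise hold copper. A rough sketch (the 20 nm trench width and simple two-sidewall geometry are assumptions for illustration, not from the article):

```python
# Illustrative sketch: how much of a narrow trench is left for copper after
# the diffusion barrier lines both sidewalls. Barrier thicknesses come from
# the article (2 nm tantalum nitride vs ~0.3 nm graphene); the 20 nm trench
# width and flat two-sidewall geometry are simplifying assumptions.

def copper_fraction(trench_width_nm, barrier_nm):
    """Fraction of the trench width left for copper, barrier on both sides."""
    copper_width = trench_width_nm - 2 * barrier_nm
    return copper_width / trench_width_nm

tan_fraction = copper_fraction(20, 2.0)        # tantalum nitride barrier
graphene_fraction = copper_fraction(20, 0.3)   # graphene barrier

print(f"TaN: {tan_fraction:.0%}, graphene: {graphene_fraction:.0%}")
# TaN leaves 80% of the width for copper; graphene leaves 97%.
```

In a 20 nm trench under these assumptions, a 2 nm barrier on each sidewall consumes a fifth of the width, while a graphene liner consumes almost none of it — and the narrower the wire, the bigger that advantage becomes.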
Needless to say, graphene still faces some headwinds. It remains extremely difficult to produce in bulk, and is generally hard to work with. As node sizes fall and the RC delay becomes a larger and larger component of the problem, the industry will need to have solutions ready to go at the 5nm node or below. Technology like this could provide an effective path forward and allow chip manufacturers to open up the throttle slightly, thanks to reduced power consumption and improved SoC performance. We’re a long way from commercial availability, but discoveries like this put us on the proper path.
Written by: Joel Hruska
Published by http://bit.ly/2liw3x1
This story has also been covered in the following article: “Graphene Could Buttress Next-Gen Computer Chip Wiring”