What happened to DNA computing?
For more than five decades, engineers have shrunk silicon-based transistors over and over again, creating progressively smaller, faster, and more energy-efficient computers in the process. But the long technological winning streak—and the miniaturization that has enabled it—can’t last forever. “There is a need for technology to beat silicon, because we are reaching tremendous limitations on it,” says Nicholas Malaya, a computational scientist at AMD in California.
What could this successor technology be? There has been no shortage of alternative computing approaches proposed over the last 50 years. Here are five of the more memorable ones. All had plenty of hype, only to be trounced by silicon. But perhaps there is hope for them yet.
Spintronics
Computer chips are built around strategies to control the flow of electrons—more specifically, their charge. In addition to charge, however, electrons have an intrinsic angular momentum, known as spin, which can be manipulated with magnetic fields. Spintronics emerged in the 1980s with the idea that spin can be used to represent bits: one direction could represent 1 and the other 0.
In theory, spintronic transistors can be made small, allowing for densely packed chips. But in practice it has been tough to find the right substances to construct them. Researchers say that a lot of basic materials science still needs to be worked out.
Nevertheless, spintronic technologies have been commercialized in a few very specific areas, says Gregory Fuchs, an applied physicist at Cornell University in Ithaca, New York. So far, the biggest success for spintronics has been nonvolatile memory, the sort that prevents data loss in the case of power failure. STT-RAM (for “spin transfer torque random access memory”) has been in production since 2012 and can be found in cloud storage facilities.
Memristors
Classic electronics is built on three fundamental components: the resistor, the capacitor, and the inductor. In 1971, the electrical engineer Leon Chua observed that these three account for only three of the possible pairings among the basic circuit quantities, and theorized a fourth component to complete the set, which he called the memristor, for “memory resistor.” In 2008, researchers at Hewlett-Packard developed the first practical memristor, using titanium dioxide.
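Chua’s reasoning was essentially an argument from symmetry, and it is worth spelling out. Circuit theory has four basic quantities: voltage, current, charge, and magnetic flux. Each classic element ties together one pair of them, which leaves exactly one pairing unclaimed:

```latex
% The three classic elements each relate one pair of circuit variables;
% Chua's memristor fills the one remaining pairing: flux and charge.
\begin{aligned}
\text{resistor:}  \quad dv       &= R\,di \\
\text{capacitor:} \quad dq       &= C\,dv \\
\text{inductor:}  \quad d\varphi &= L\,di \\
\text{memristor:} \quad d\varphi &= M(q)\,dq
\end{aligned}
```

Because voltage is the rate of change of flux and current is the rate of change of charge, the last relation is equivalent to Ohm’s law with a resistance M(q) that depends on how much charge has already flowed through the device.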
It was exciting because memristors can in theory be used for both memory and logic. The devices “remember” the last applied voltage, so they hold onto information even when powered down. They also differ from ordinary resistors in that their resistance changes depending on the history of the voltage applied to them. Such modulation can be used to perform logic operations. If done within a computer’s memory, those operations can cut down on how much data needs to be shuttled between memory and processor.
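To make that behavior concrete, here is a minimal simulation sketch. It assumes the simplified linear ion-drift model that the Hewlett-Packard team used to describe its titanium dioxide device; the parameter values and class name are illustrative, not taken from any real part:

```python
class ToyMemristor:
    """Simplified linear ion-drift memristor model (after Strukov et al., 2008).
    All parameter values are illustrative, not from a real device."""
    R_ON, R_OFF = 100.0, 16_000.0  # resistance bounds, ohms
    MU, D = 1e-14, 1e-8            # ion mobility (m^2/(V*s)), film thickness (m)

    def __init__(self):
        self.w = 0.5 * self.D      # state: width of the low-resistance region

    @property
    def resistance(self):
        x = self.w / self.D        # resistance interpolates between the bounds
        return self.R_ON * x + self.R_OFF * (1 - x)

    def pulse(self, voltage, dt=1e-3):
        """Apply `voltage` for `dt` seconds; the state, and hence the
        resistance, shifts in proportion to the charge that flows."""
        current = voltage / self.resistance
        self.w += self.MU * (self.R_ON / self.D) * current * dt  # ion drift
        self.w = min(max(self.w, 0.0), self.D)                   # clamp to film

m = ToyMemristor()
print(f"fresh device:     {m.resistance:8.0f} ohms")
for _ in range(2000):
    m.pulse(+1.0)   # "write": positive pulses lower the resistance
print(f"after +1V writes: {m.resistance:8.0f} ohms")  # near R_ON, a stored 1
for _ in range(2000):
    m.pulse(-1.0)   # "erase": negative pulses raise it again
print(f"after -1V erases: {m.resistance:8.0f} ohms")  # near R_OFF, back to 0
```

With the power off, nothing moves the state variable, so whichever resistance the pulses left behind is what a later read-out sees; that is the nonvolatility described above.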
Memristors made their commercial debut as nonvolatile storage, called RRAM or ReRAM, for “resistive random access memory.” But the field is still moving forward. In 2019, researchers developed a 5,832-memristor chip that can be used for artificial intelligence.
Carbon nanotubes
Carbon isn’t an ideal semiconductor. But under the right conditions it can be made to form nanotubes that are excellent ones. Carbon nanotubes were first crafted into transistors in the late 1990s, and studies showed they could be 10 times more energy efficient than silicon.
In fact, of the five alternative technologies discussed here, carbon nanotubes may be the farthest along. In 2013, Stanford researchers built the world’s first functional computer powered entirely by carbon nanotube transistors, albeit a simple one.
But carbon nanotubes tend to clump together into tangled bundles, like spaghetti. What’s more, most conventional synthesis methods produce a messy mix of semiconducting and metallic nanotubes, and only the semiconducting ones make useful transistors. Materials scientists and engineers have been researching ways to correct and work around these imperfections. In 2019, MIT researchers used improved techniques to make a 16-bit microprocessor with more than 14,000 carbon nanotube transistors. That’s still far from a silicon chip with millions or billions of transistors, but it’s progress nonetheless.
DNA computing
In 1994, Leonard Adleman, a computer scientist at the University of Southern California in Los Angeles, made a computer out of a soup of DNA. He showed that DNA strands could self-assemble in a test tube to explore all possible paths through a small graph, solving an instance of the Hamiltonian path problem, a close cousin of the famous “traveling salesman” problem. Experts predicted that DNA computing, with trillions of strands reacting at once, would beat silicon-based technology at massively parallel computation. Researchers later concluded that it isn’t fast enough to do that: however many strands react in parallel, each round of mixing and filtering takes hours of benchwork.
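The logic of the experiment translates readily into conventional code. Here is a sketch of the generate-and-filter idea on a seven-node graph invented for illustration (it is not Adleman’s actual instance), with comments noting which lab step each stage stands in for:

```python
# The generate-and-filter logic of Adleman's 1994 experiment, in silicon
# instead of biochemistry. The graph below is made up for illustration.
from itertools import permutations

EDGES = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6),   # a known path...
         (0, 2), (0, 3), (1, 4), (2, 5), (3, 6), (4, 6)}   # ...plus decoys
N, START, END = 7, 0, 6

def hamiltonian_paths():
    # Lab step 1 (ligation): DNA strands randomly concatenate into candidate
    # paths, trillions at a time; here we simply enumerate every ordering.
    # Using permutations also enforces "each vertex exactly once".
    for path in permutations(range(N)):
        # Lab steps 2-4 (PCR and separation): discard candidates that don't
        # start at START, end at END, or follow edges that actually exist.
        if path[0] != START or path[-1] != END:
            continue
        if all((a, b) in EDGES for a, b in zip(path, path[1:])):
            yield path  # any survivor answers the problem

print(list(hamiltonian_paths()))  # -> [(0, 1, 2, 3, 4, 5, 6)]
```

The test tube’s advantage is that step 1 happens for every candidate simultaneously; its weakness, as it turned out, is that the filtering steps are slow, error-prone lab procedures, and the amount of DNA required grows explosively with the size of the graph.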
But DNA holds some advantages. Researchers have shown that it’s possible to encode poetry, GIFs, and digital movies into the molecules. The potential density is staggering. All of the world’s digital data could be stored in a coffee mug full of DNA, biological engineers at MIT estimated in a paper earlier this year. The catch is cost: one coauthor later said that DNA synthesis would need to be six orders of magnitude cheaper to compete with magnetic tape.
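The coffee-mug claim is easy to sanity-check from first principles. Here is a back-of-envelope sketch; the constants are rough textbook figures and assumptions, not the MIT paper’s own numbers:

```python
# Back-of-envelope check on DNA storage density. All constants are rough
# illustrative figures, not the numbers from the MIT estimate.
AVOGADRO = 6.022e23
BITS_PER_BASE = 2                    # A, C, G, T encode two bits per base
BASE_MASS_G = 330 / AVOGADRO         # a nucleotide weighs roughly 330 daltons
DNA_DENSITY_G_PER_ML = 1.7           # approximate density of dry DNA
MUG_ML = 350                         # a large coffee mug
WORLD_DATA_BYTES = 64e21             # global datasphere, very roughly 64 ZB

bytes_per_gram = BITS_PER_BASE / BASE_MASS_G / 8
mug_bytes = bytes_per_gram * DNA_DENSITY_G_PER_ML * MUG_ML

print(f"theoretical density: {bytes_per_gram / 1e18:.0f} exabytes per gram")
print(f"one mug of DNA:      {mug_bytes / 1e21:.0f} zettabytes")
print(f"world's data fits:   {mug_bytes > WORLD_DATA_BYTES}")  # True
```

Even with generous rounding, the mug comes out at a couple of hundred zettabytes, several times the world’s estimated stock of digital data, which is why storage, not computation, is where DNA’s prospects now look strongest.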
Unless researchers can cut the cost of DNA storage, the stuff of life will stay stuck in cells.
Molecular electronics
It’s a compelling vision: transistors keep getting smaller and smaller, so why not jump ahead and make them out of individual molecules? Nanometer-scale switches would make for a supremely cost-effective, densely packed chip. The chips might even be able to assemble themselves thanks to interactions between molecules.
In the early 2000s, groups at Hewlett-Packard and elsewhere raced to make the chemistry and the electronics work together.
But after decades of work, the dream of molecular electronics is still just that. Researchers have found that single molecules can be finicky, working as transistors under only very narrow conditions. “No one has shown how single-molecule devices can be reliably integrated into massively parallel microelectronics,” says Richard McCreery, a chemist at the University of Alberta.
That dream has not completely died, but these days it is largely relegated to chemistry and physics labs, where researchers continue struggling to make endlessly fickle molecules behave.
What comes next?
Silicon still reigns supreme, but time is running out for everyone’s favorite semiconductor. The latest International Roadmap for Devices and Systems (IRDS) projects that transistors will stop shrinking after 2028 and that integrated circuits will have to be stacked in three dimensions to keep delivering faster, more efficient chips.
This might be the moment when other computing devices find an opening, though likely only in conjunction with silicon technology. Researchers are exploring hybrid approaches to making chips. In 2017, researchers who had made progress with carbon nanotube transistors integrated them with layers of nonvolatile memristors and silicon devices—a prototype of an architecture meant to improve speed and energy consumption by breaking with the traditional separation of memory and processing.
Classic silicon-based chips will still make some progress, says AMD’s Malaya. But, he adds, “I think the future will be heterogeneous, in which all the technologies are used probably in a complementary way to traditional computing.”
In other words, the future will still be silicon. But it will be other things as well.
Lakshmi Chandrasekaran is a freelance science writer based in Chicago.