AMD’s Lisa Su Breaks Through the Silicon Ceiling

The CEO is the first woman to receive IEEE’s highest semiconductor award
When Lisa Su became CEO of Advanced Micro Devices in 2014, the company was on the brink of bankruptcy. Since then, AMD’s stock has soared—from less than US $2 per share to more than $110. The company is now a leader in high-performance computing.
Su received accolades for spearheading AMD’s turnaround, appearing on Barron’s list of the Top CEOs of 2021, Fortune’s 2020 Most Powerful Women list, and CNN’s Risk Takers list.
She recently added another honor: the IEEE Robert N. Noyce Medal. Su is the first woman to receive the award, which recognizes her “leadership in groundbreaking semiconductor products and successful business strategies that contributed to the strength of the microelectronics industry.” Sponsored by Intel, the Noyce Medal is considered to be one of the semiconductor industry’s most prestigious honors.

“To be honest, I would have never imagined that I would receive the Noyce award,” the IEEE Fellow says. “It’s an honor of a lifetime. To have that recognition from my peers in the technical community is a humbling experience. But I love what I do and being able to contribute to the semiconductor industry.”
CLIMBING THE LEADERSHIP LADDER
Su has long had a practical bent. She decided to study electrical engineering, she says, because she was drawn to the prospect of building hardware.
“I felt like I was actually building and making things,” she says. She attended MIT, where she earned bachelor’s, master’s, and doctoral degrees, all in EE, in 1990, 1991, and 1994.
“It might surprise people that my parents would have preferred that I became a medical doctor,” she says, laughing. “That was the most well-respected profession when I was growing up. But I never really liked the sight of blood. I ended up getting a Ph.D., which I guess was the next best thing.”
Her interest in semiconductors was sparked at MIT. As a doctoral candidate, Su was one of the first researchers to look into silicon-on-insulator (SOI) technology, according to an MIT Technology Review article about her. The then-unproven technique increased transistors’ efficiency by building them atop layers of an insulating material. Today SOI is used either to boost the performance of microchips or to reduce their power requirements.
Su has spent most of her career working on semiconductor projects for large companies. Along the way, she evolved from researcher to manager to top executive. Looking back, Su divides her career path into two parts. The first 20 or so years she was involved in research and development; for the past 15 years, she has worked on the business side.
Her first job was with Texas Instruments, in Dallas, where she was a member of the technical staff at the company’s semiconductor process and device center. She joined in 1994, but after a year, she left for IBM, in New York. There, she was a staff member researching device physics. In 2000 she was assigned to be the technical assistant for IBM’s chief executive. She later was promoted to director of emerging projects.
She made the switch to management in 2006, when she was appointed vice president of IBM’s semiconductor research and development center in New York.
To better learn how to manage people, she took several leadership courses offered by the company.
“I remember thinking after every class that I had learned something that I could apply going forward,” she says.
Su says she doesn’t agree with the notion that leadership is an innate ability.
“I really do believe that you can be trained to be a good leader,” she says. “A lot of leadership isn’t all that intuitive, but over time you develop an intuition for things to look for. Experience helps.
“As engineers transition into business or management, you have to think about a different set of challenges that are not necessarily ‘How do you make your transistor go faster?’ but [instead] ‘How do you motivate teams?’ or ‘How do you understand more about what customers want?’ I’ve made my share of mistakes in those transitions, but I’ve also learned a lot.
“I’ve also learned something from every boss I’ve ever worked for.”
“Great leaders can actually have their teams do 120 percent more than what they thought was possible.”
One of the first places she got a chance to put her training into action was at Freescale Semiconductor, in Austin, Texas. In 2007 she took over as chief technology officer and oversaw the company’s research and development efforts. She was promoted to senior vice president and general manager of Freescale’s networking and multimedia group. In that role, she was responsible for global strategy, marketing, and engineering for the embedded communications and applications processor business.
She left in 2012 to join AMD, also in Austin, as senior vice president, overseeing the company’s global business units. Two years later she was appointed president and CEO, the first woman to run a Fortune 500 semiconductor company.
It took more than leadership skills to get to the top, she says.
“It’s a little bit of you have to be good [at what you do], but you also have to be lucky and be in the right place at the right time,” she says. “I was fortunate in that I had a lot of opportunities throughout my career.”
As CEO, she fosters a supportive and diverse culture at AMD.
“What I try to do is ensure that we’re giving people a lot of opportunities,” she says. “We have some very strong technical leaders at AMD who are women, so we’re making progress. But of course it’s nowhere near enough and it’s nowhere near fast enough. There’s always much more that can be done.”
Motivating employees is part of her job, she says.
“One of the things I believe is that great leaders can actually have their teams do 120 percent more than what they thought was possible,” she says. “What we try to do is to really inspire phenomenal and exceptional results.”
AMD’s business is booming, and Su is credited with expanding the market for the company’s chips beyond PCs to game consoles and embedded devices. In 2017 AMD released its Ryzen desktop processors and Epyc server processors for data centers. Both are based on the company’s Zen microarchitecture, which enabled the chips to process instructions faster than competing products. The Radeon line of graphics processors, which debuted in 2000, now also powers gaming consoles.
The company’s net income for 2020 was nearly $2.5 billion, according to Investor’s Business Daily.
WHAT’S AHEAD
Today AMD is focused on building the next generation of supercomputers—which Su says will be “important in many aspects of research going forward.”
In 2020 the company announced that its advanced CPUs, GPUs, and software would power Lawrence Livermore National Laboratory’s El Capitan exascale-class supercomputer. Predicted to be the world’s fastest when it goes into service in 2023, El Capitan is expected to expand the use of artificial intelligence and machine learning.
The semiconductor supply chain is currently tight, Su acknowledges, but she doesn’t think the shortage will fundamentally change the company’s approach to technology or product development.
“The way to think about semiconductor technology and road maps,” she says, “is that the decisions about the products that we’re building today were really decisions that were made three to five years ago. And the products or technical decisions that we’re making today will affect our products three to five years down the road.”
The semiconductor industry has never been more interesting, she says, even with Moore’s Law slowing down. Moore’s Law, she says, “requires all of us to think differently about how we get to that next level of innovation. And it’s not just about silicon innovation. It’s also about packaging innovation, system software, and bringing together all those disciplines. There’s a whole aspect to our work about just how to make our tools and our technologies easier to adopt.”
The COVID-19 pandemic has brought technology into the center of how people work, live, learn, and play, she notes.
“Our goal,” she says, “is to continue to make technology that touches more people’s lives.”
Su was recently appointed to serve on the President’s Council of Advisors on Science and Technology, a group of external advisers tasked with making science, technology, and innovation policy recommendations to the White House and President Biden.
IMPORTANT ASSOCIATION
Su joined IEEE while a student so she could access its technical content.
“IEEE publications were just the most important,” she says. “As a student, you wanted to publish in an IEEE journal or present at an IEEE conference. We all believed it was where people wanted to share their research.
“I think IEEE is still the foremost organization for bringing researchers together to share their findings, to network, and to develop and build relationships,” she says. “I’ve met many people through my IEEE connections, and they continue to be close colleagues. It’s just a great organization to move the industry forward.”
Su donated the $20,000 cash prize that accompanies the Noyce Medal to the IEEE Women in Engineering Fund, which is managed by the IEEE Foundation.

This article was updated from an earlier version.
Kathy Pretz is editor in chief for The Institute, which covers all aspects of IEEE, its members, and the technology they’re involved in. She has a bachelor’s degree in applied communication from Rider University, in Lawrenceville, N.J., and holds a master’s degree in corporate and public communication from Monmouth University, in West Long Branch, N.J.
Is there a way out of AI’s boom-and-bust cycle?
The 1958 perceptron was billed as “the first device to think as the human brain.” It didn’t quite live up to the hype.
In the summer of 1956, a group of mathematicians and computer scientists took over the top floor of the building that housed the math department of Dartmouth College. For about eight weeks, they imagined the possibilities of a new field of research. John McCarthy, then a young professor at Dartmouth, had coined the term “artificial intelligence” when he wrote his proposal for the workshop, which he said would explore the hypothesis that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

The researchers at that legendary meeting sketched out, in broad strokes, AI as we know it today. It gave rise to the first camp of investigators: the “symbolists,” whose expert systems reached a zenith in the 1980s. The years after the meeting also saw the emergence of the “connectionists,” who toiled for decades on the artificial neural networks that took off only recently. These two approaches were long seen as mutually exclusive, and competition for funding among researchers created animosity. Each side thought it was on the path to artificial general intelligence.

This article is part of our special report on AI, “The Great AI Reckoning.”

A look back at the decades since that meeting shows how often AI researchers’ hopes have been crushed—and how little those setbacks have deterred them. Today, even as AI is revolutionizing industries and threatening to upend the global labor market, many experts are wondering if today’s AI is reaching its limits. As Charles Choi delineates in “Seven Revealing Ways AIs Fail,” the weaknesses of today’s deep-learning systems are becoming more and more apparent. Yet there’s little sense of doom among researchers. Yes, it’s possible that we’re in for yet another AI winter in the not-so-distant future. But this might just be the time when inspired engineers finally usher us into an eternal summer of the machine mind.
Researchers developing symbolic AI set out to explicitly teach computers about the world. Their founding tenet held that knowledge can be represented by a set of rules, and computer programs can use logic to manipulate that knowledge. Leading symbolists Allen Newell and Herbert Simon argued that if a symbolic system had enough structured facts and premises, the aggregation would eventually produce broad intelligence.
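To make the symbolists’ tenet concrete, here is a minimal sketch of knowledge stored as explicit facts and rules, with a simple forward-chaining loop deriving new facts. The facts, the single rule, and the code itself are illustrative inventions, not drawn from Newell and Simon’s systems.

```python
# A minimal illustration of the symbolist idea: knowledge stored as explicit
# facts and rules, manipulated by logic. The facts and the single rule here
# are invented for illustration.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def apply_grandparent_rule(known):
    """Rule: if X is a parent of Y and Y is a parent of Z, then X is a grandparent of Z."""
    derived = set()
    for (rel1, x, y) in known:
        for (rel2, y2, z) in known:
            if rel1 == "parent" and rel2 == "parent" and y == y2:
                derived.add(("grandparent", x, z))
    return derived

# Forward chaining: keep applying the rule until no new facts appear.
while True:
    new_facts = apply_grandparent_rule(facts) - facts
    if not new_facts:
        break
    facts |= new_facts

print(("grandparent", "alice", "carol") in facts)  # True
```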
The connectionists, on the other hand, inspired by biology, worked on “artificial neural networks” that would take in information and make sense of it themselves. The pioneering example was the perceptron, an experimental machine built by the Cornell psychologist Frank Rosenblatt with funding from the U.S. Navy. It had 400 light sensors that together acted as a retina, feeding information to about 1,000 “neurons” that did the processing and produced a single output. In 1958, a New York Times article quoted Rosenblatt as saying that “the machine would be the first device to think as the human brain.”
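As a rough sketch of what such a machine computes, the toy perceptron below, written in NumPy, learns the logical AND function with the classic perceptron update rule. The data, sizes, and learning rate are invented for illustration and are far smaller than Rosenblatt’s 400-sensor hardware.

```python
import numpy as np

# Toy perceptron: a weighted sum of inputs passed through a step function,
# trained with the perceptron learning rule. Data and sizes are invented
# for illustration; Rosenblatt's machine used 400 light sensors.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1])                                   # AND targets

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)   # step activation
        err = target - pred
        w += lr * err * xi                  # nudge weights toward the target
        b += lr * err

print([int(np.dot(w, xi) + b > 0) for xi in X])  # [0, 0, 0, 1]
```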
Frank Rosenblatt invented the perceptron, the first artificial neural network. Photo: Cornell University Division of Rare and Manuscript Collections
Unbridled optimism encouraged government agencies in the United States and United Kingdom to pour money into speculative research. In 1967, MIT professor Marvin Minsky wrote: “Within a generation…the problem of creating ‘artificial intelligence’ will be substantially solved.” Yet soon thereafter, government funding started drying up, driven by a sense that AI research wasn’t living up to its own hype. The 1970s saw the first AI winter.
True believers soldiered on, however. And by the early 1980s renewed enthusiasm brought a heyday for researchers in symbolic AI, who received acclaim and funding for “expert systems” that encoded the knowledge of a particular discipline, such as law or medicine. Investors hoped these systems would quickly find commercial applications. The most famous symbolic AI venture began in 1984, when the researcher Douglas Lenat began work on a project he named Cyc that aimed to encode common sense in a machine. To this very day, Lenat and his team continue to add terms (facts and concepts) to Cyc’s ontology and explain the relationships between them via rules. By 2017, the team had 1.5 million terms and 24.5 million rules. Yet Cyc is still nowhere near achieving general intelligence.
In the late 1980s, the cold winds of commerce brought on the second AI winter. The market for expert systems crashed because they required specialized hardware and couldn’t compete with the cheaper desktop computers that were becoming common. By the 1990s, it was no longer academically fashionable to be working on either symbolic AI or neural networks, because both strategies seemed to have flopped.
The field of AI began at a 1956 workshop [top] attended by, from left, Oliver Selfridge, Nathaniel Rochester, Ray Solomonoff, Marvin Minsky, an unidentified person, workshop organizer John McCarthy, and Claude Shannon. Symbolists such as Herbert Simon [middle] and Allen Newell [bottom] wanted to teach AI rules about the world. Photos: The Minsky Family; Carnegie Mellon University (2)
But the cheap computers that supplanted expert systems turned out to be a boon for the connectionists, who suddenly had access to enough computer power to run neural networks with many layers of artificial neurons. Such systems became known as deep neural networks, and the approach they enabled was called deep learning. Geoffrey Hinton, at the University of Toronto, applied a principle called back-propagation to make neural nets learn from their mistakes (see “How Deep Learning Works”).
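As an informal sketch of back-propagation (not Hinton’s original setup), the snippet below trains a tiny two-layer network on the XOR problem in NumPy, pushing the output error backward through the hidden layer to update every weight. The architecture, data, and learning rate are arbitrary illustrative choices.

```python
import numpy as np

# Tiny two-layer network trained with back-propagation on XOR.
# All sizes and hyperparameters are arbitrary choices for illustration.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # hidden -> output
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the squared-error gradient toward the inputs.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates for every weight and bias.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0]
```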
One of Hinton’s postdocs, Yann LeCun, went on to AT&T Bell Laboratories in 1988, where he and a postdoc named Yoshua Bengio used neural nets for optical character recognition; U.S. banks soon adopted the technique for processing checks. Hinton, LeCun, and Bengio eventually won the 2019 Turing Award and are sometimes called the godfathers of deep learning.
But the neural-net advocates still had one big problem: They had a theoretical framework and growing computer power, but there wasn’t enough digital data in the world to train their systems, at least not for most applications. Spring had not yet arrived.
Over the last two decades, everything has changed. In particular, the World Wide Web blossomed, and suddenly, there was data everywhere. Digital cameras and then smartphones filled the Internet with images, websites such as Wikipedia and Reddit were full of freely accessible digital text, and YouTube had plenty of videos. Finally, there was enough data to train neural networks for a wide range of applications.
The other big development came courtesy of the gaming industry. Companies such as Nvidia had developed chips called graphics processing units (GPUs) for the heavy processing required to render images in video games. Game developers used GPUs to do sophisticated kinds of shading and geometric transformations. Computer scientists in need of serious compute power realized that they could essentially trick a GPU into doing other tasks—such as training neural networks. Nvidia noticed the trend and created CUDA, a platform that enabled researchers to use GPUs for general-purpose processing. Among these researchers was a Ph.D. student in Hinton’s lab named Alex Krizhevsky, who used CUDA to write the code for a neural network that blew everyone away in 2012.
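The sketch below uses PyTorch, whose GPU backend is built on CUDA, to show the idea in miniature: the same tensor arithmetic at the heart of neural-net training runs on a CPU or, when a CUDA device is available, on a GPU. PyTorch and the tensor sizes are assumptions made for the example, not details from the article.

```python
import torch

# The same matrix arithmetic used in neural-net training runs on the CPU or,
# when CUDA hardware and drivers are present, on the GPU. PyTorch (which uses
# CUDA for its GPU backend) is an assumed toolchain for this illustration.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A batch of activations and a weight matrix, as in one neural-net layer.
x = torch.randn(4096, 1024, device=device)
w = torch.randn(1024, 1024, device=device)

y = torch.relu(x @ w)   # one layer's forward pass, on whichever device is available
print(device, y.shape)
```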
MIT professor Marvin Minsky predicted in 1967 that true artificial intelligence would be created within a generation. Photo: The MIT Museum
He wrote it for the ImageNet competition, which challenged AI researchers to build computer-vision systems that could sort more than 1 million images into 1,000 categories of objects. While Krizhevsky’s AlexNet wasn’t the first neural net to be used for image recognition, its performance in the 2012 contest caught the world’s attention. AlexNet’s error rate was 15 percent, compared with the 26 percent error rate of the second-best entry. The neural net owed its runaway victory to GPU power and a “deep” structure of multiple layers containing 650,000 neurons in all. In the next year’s ImageNet competition, almost everyone used neural networks. By 2017, many of the contenders’ error rates had fallen to 5 percent, and the organizers ended the contest.
Deep learning took off. With the compute power of GPUs and plenty of digital data to train deep-learning systems, self-driving cars could navigate roads, voice assistants could recognize users’ speech, and Web browsers could translate between dozens of languages. AIs also trounced human champions at several games that were previously thought to be unwinnable by machines, including the ancient board game Go and the video game StarCraft II. The current boom in AI has touched every industry, offering new ways to recognize patterns and make complex decisions.
A look back across the decades shows how often AI researchers’ hopes have been crushed—and how little those setbacks have deterred them.
But the widening array of triumphs in deep learning has relied on increasing the number of layers in neural nets and increasing the GPU time dedicated to training them. One analysis from the AI research company OpenAI showed that the amount of computational power required to train the biggest AI systems doubled every two years until 2012—and after that it doubled every 3.4 months. As Neil C. Thompson and his colleagues write in “Deep Learning’s Diminishing Returns,” many researchers worry that AI’s computational needs are on an unsustainable trajectory. To avoid busting the planet’s energy budget, researchers need to bust out of the established ways of constructing these systems.
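A quick back-of-the-envelope calculation shows why that change in doubling period alarms researchers; the script below simply compounds the two rates quoted above over a single year.

```python
# Compounding the doubling periods quoted above over one year.
months_per_year = 12

pre_2012 = 2 ** (months_per_year / 24)    # doubling every 2 years (24 months)
post_2012 = 2 ** (months_per_year / 3.4)  # doubling every 3.4 months

print(f"pre-2012:  ~{pre_2012:.1f}x more compute per year")   # about 1.4x
print(f"post-2012: ~{post_2012:.1f}x more compute per year")  # about 11.5x
```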
While it might seem as though the neural-net camp has definitively tromped the symbolists, in truth the battle’s outcome is not that simple. Take, for example, the robotic hand from OpenAI that made headlines for manipulating and solving a Rubik’s cube. The robot used neural nets and symbolic AI. It’s one of many new neuro-symbolic systems that use neural nets for perception and symbolic AI for reasoning, a hybrid approach that may offer gains in both efficiency and explainability.
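The pattern, in caricature: a learned model handles perception and explicit rules handle reasoning. The sketch below is an invented placeholder pipeline, not OpenAI’s Rubik’s Cube system; the perception function merely stands in for a trained neural network.

```python
# Sketch of the neuro-symbolic pattern: a learned model for perception,
# explicit symbolic rules for reasoning. The perception stub and the rules
# are invented placeholders, not OpenAI's Rubik's Cube system.

def perceive(image):
    """Stand-in for a neural network that maps raw pixels to symbols."""
    # A real system would run a trained vision model here.
    return {"object": "red_block", "position": "on_table"}

RULES = {
    ("red_block", "on_table"): "pick_up(red_block)",
    ("red_block", "in_gripper"): "place(red_block, target_zone)",
}

def plan(symbols):
    """Symbolic reasoning: select the action whose rule matches the perceived state."""
    key = (symbols["object"], symbols["position"])
    return RULES.get(key, "wait()")

state = perceive(image=None)   # perception (neural in a real system)
print(plan(state))             # reasoning -> pick_up(red_block)
```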
Neither symbolic AI projects such as Cyc from Douglas Lenat [top] nor the deep-learning advances pioneered by [from top] Geoffrey Hinton, Yann LeCun, and Yoshua Bengio have yet produced human-level intelligence. Photos, from top: Bob E. Daemmrich/Sygma/Getty Images; Christopher Wahl/The New York Times/Redux; Bruno Levy/REA/Redux; Cole Burston/Bloomberg/Getty Images
Although deep-learning systems tend to be black boxes that make inferences in opaque and mystifying ways, neuro-symbolic systems enable users to look under the hood and understand how the AI reached its conclusions. The U.S. Army is particularly wary of relying on black-box systems, as Evan Ackerman describes in “How the U.S. Army Is Turning Robots Into Team Players,” so Army researchers are investigating a variety of hybrid approaches to drive their robots and autonomous vehicles.
Imagine if you could take one of the U.S. Army’s road-clearing robots and ask it to make you a cup of coffee. That’s a laughable proposition today, because deep-learning systems are built for narrow purposes and can’t generalize their abilities from one task to another. What’s more, learning a new task usually requires an AI to erase everything it knows about how to solve its prior task, a conundrum called catastrophic forgetting. At DeepMind, Google’s London-based AI lab, the renowned roboticist Raia Hadsell is tackling this problem with a variety of sophisticated techniques. In “How DeepMind Is Reinventing the Robot,” Tom Chivers explains why this issue is so important for robots acting in the unpredictable real world. Other researchers are investigating new types of meta-learning in hopes of creating AI systems that learn how to learn and then apply that skill to any domain or task.
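Catastrophic forgetting is easy to reproduce in miniature. In the invented example below, a one-parameter model is trained on one task and then, with no rehearsal, on a conflicting one; its error on the first task climbs right back up. This is only an illustration of the failure mode, not DeepMind’s experiments or Hadsell’s techniques.

```python
import numpy as np

# A one-parameter model trained sequentially on two conflicting tasks.
# The tasks and model are invented to illustrate catastrophic forgetting.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)

task_a = 2.0 * x    # task A: predict y = 2x
task_b = -2.0 * x   # task B: predict y = -2x

def train(w, targets, lr=0.5, epochs=200):
    """Gradient descent on squared error for the single weight w."""
    for _ in range(epochs):
        grad = np.mean((w * x - targets) * x)
        w -= lr * grad
    return w

def error(w, targets):
    return np.mean((w * x - targets) ** 2)

w = train(0.0, task_a)
print("after task A, error on A:", round(error(w, task_a), 3))  # near 0

w = train(w, task_b)  # no rehearsal of task A while learning task B
print("after task B, error on A:", round(error(w, task_a), 3))  # large again
```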
All these strategies may aid researchers’ attempts to meet their loftiest goal: building AI with the kind of fluid intelligence that we watch our children develop. Toddlers don’t need a massive amount of data to draw conclusions. They simply observe the world, create a mental model of how it works, take action, and use the results of their action to adjust that mental model. They iterate until they understand. This process is tremendously efficient and effective, and it’s well beyond the capabilities of even the most advanced AI today.
Although the current level of enthusiasm has earned AI its own Gartner hype cycle, and although the funding for AI has reached an all-time high, there’s scant evidence that there’s a fizzle in our future. Companies around the world are adopting AI systems because they see immediate improvements to their bottom lines, and they’ll never go back. It just remains to be seen whether researchers will find ways to adapt deep learning to make it more flexible and robust, or devise new approaches that haven’t yet been dreamed of in the 65-year-old quest to make machines more like us.
This article appears in the October 2021 print issue as “The Turbulent Past and Uncertain Future of AI.”
