NVIDIA Corporation
Q4 2020 Earnings Call Transcript

  • Operator:
    Good afternoon. My name is Christina and I'm your conference operator today. Welcome to NVIDIA's financial results conference call. All lines have been placed on mute. After the speakers' remarks, there will be a question-and-answer period. [Operator Instructions] Thank you. I'll now turn the call over to Simona Jankowski, Vice President of Investor Relations, to begin the conference.
  • Simona Jankowski:
    Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the fourth quarter of fiscal 2020. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the first quarter of fiscal 2021. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, February 13, 2020, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.
  • Colette Kress:
    Thanks, Simona. Q4 revenue was $3.11 billion, up 41% year-on-year and up 3% sequentially, well above our outlook, reflecting upside in our data center and gaming businesses. Full year revenue was $10.9 billion, down 7%. We recovered from the excess channel inventory in gaming and an earlier pause in hyperscale spending, and exited the year with great momentum. Starting with gaming. Revenue of $1.49 billion was up 56% year-on-year and down 10% sequentially. Full year gaming revenue was $5.52 billion, down 12% from the prior year. We enjoyed strong end demand for our desktop and notebook GPUs. Let me give you some more details. Our gaming lineup was exceptionally well positioned for the holidays, with the unique ray tracing capabilities of our RTX GPUs and incredible performance at every price point. From the Singles Day shopping event in China, through the Christmas season in the West, channel demand was strong for our entire stack. Fueling this were new blockbuster games like Call of Duty
  • Operator:
    [Operator Instructions] And our first question comes from the line of Toshiya Hari with Goldman Sachs.
  • Toshiya Hari:
    Hi guys. Thanks very much for the question. I guess on data center, Colette or Jensen, can you speak to some of the areas that drove the upside in the quarter? You talked about inference and both the T4 and the V100 having record quarters, but relative to your internal expectations, what were some of the businesses that drove the upside? And if you can also speak to the breadth of your customer profile today relative to a couple of years ago and how that's expanded, that would be helpful as well. Thank you.
  • Jensen Huang:
    Yes. Toshiya, thanks a lot for your question. The primary driver for growth is AI. There are four fundamental dynamics. The first is that the AI models that are being created are achieving breakthroughs, and quite amazing breakthroughs in fact, in natural language understanding, in conversational AI and recommendation systems. And you know this, but for the others in the audience, recommendation systems are essentially the engine of the Internet today. And the reason for that is because there are so many items in the world, whether it's a store or whether it's content or websites or information you are querying; there are hundreds of billions, trillions, and depending on how you count it, hundreds of trillions of items in the world. And there are billions of people, each with their own characteristics, and there are countless contexts. And between the items, the people, the users, and the various contexts that we're in -- location, what you're looking for, the weather, what's happening in the environment -- those kinds of contexts affect the search query and the answer it provides you. The recommendation system is just foundational now to search. And some people have said this is the end of the era of search and the beginning of the era of recommendation systems. Work is being done everywhere around the world in advancing recommendation systems. And for the very first time, over the last year, it's been able to be done with deep learning. And so the first thing is just the breakthroughs in AI. The second is production AI, which means that whereas we had significant, and continue to have significant, opportunities in training, because the models are getting larger and there are more of them, we're seeing a lot of these models going into production, and that business is called inference. Inference, as Colette mentioned, grew 4 times year-over-year. It's a substantial part of our business now. 
But one of the interesting statistics is TensorRT 7 -- the entire TensorRT download this year was about 500,000, a doubling over a year ago. What most people don't understand about inference is that it's not only an incredibly complex computational problem, but it's an enormously complex software problem. And so the second dynamic is moving from training -- or growing from training, with models going into production -- called inference. The third is the growth not just in hyperscale anymore, but in public cloud and in vertical industries. Public cloud because of the thousands of AI start-ups that are now developing AI software in the cloud. And the OpEx model works much better for them as they're younger. When they become larger, they could decide to build their own data center infrastructure on-prem, but the thousands of start-ups start their lives in the cloud. We're also seeing really great success in verticals. One of the most exciting verticals is logistics -- logistics, retail, warehousing. We announced, I think this quarter or at the end of last quarter, USPS, American Express, Walmart -- just large companies who have enormous amounts of data that they're trying to do data analytics on and do predictive analytics on. And so the third dynamic is the growth beyond hyperscale into public cloud as well as vertical industries. And then the last dynamic is being talked about a lot, and this is really, really exciting, and it's called Edge AI. We used to call it industries and AI, where the action is. But the industry now calls it Edge AI. We're seeing a lot of excitement there. And the reason for that is you need to have low-latency inference. You might not be able to stream data all the way to the cloud for cost reasons or data sovereignty reasons, and you need the response time. And so those four dynamics around AI really drove our growth.
  • Operator:
    Your next question comes from the line of Joe Moore with Morgan Stanley.
  • Joe Moore:
    Great. Just following up on that. As you look back at the last 12 months and the deceleration that you saw in your HPC cloud business, now that you have the perspective of seeing what's driving the rebound, any thoughts on what drove it to slow down in the first place? Was it just digestion? Was it sort of a handoff from image recognition to these newer applications that you just talked about? Just help us understand what happened there. And I guess, as it pertains to the future, do we think of this as a business that will have that kind of lumpiness to it?
  • Jensen Huang:
    Yes. That's a really good question. In fact, if you look backwards, we only have the benefit of history. The deep recommendation systems, the natural language understanding breakthroughs, the conversational AI breakthroughs all happened in this last year. And the velocity by which the industry captured the benefits here and continued to evolve and advance from these so-called transformer models was really quite incredible. And so all of a sudden, the number of breakthroughs in AI has just grown tremendously, and these models have grown tremendously. Just this last week, Microsoft announced that they're training a neural net model, in collaboration with work that we did called Megatron, that increased the size of the model from 7.5 billion parameters to 17.5 billion parameters. And the accuracy of their natural language understanding has really been boosted. And so AI is finding really fantastic breakthroughs, and models are getting bigger and there are more of them. And when you look back at when these breakthroughs happened, it was essentially this last year. The second, we've been working on inference for some time. And until this last year, very few of those inference models went into production. And now we have deep learning models across all of the hyperscalers in production. And this last year we saw really great growth in inference. The third dynamic is public clouds. All these AI startups that are being started all over the world -- there's about 6,000 of them -- they're starting to develop and be able to put their models into production. And with the scale-out of AWS, we now have T4s in every single geography. So the combination of the availability of our GPUs in the cloud, and the startups and vertical industries deploying their AI models into production -- the combination of all that just kind of came together. And all of that happened this last year. And as a result, we had record sales of V100s and T4s. 
And so we're quite excited about these developments, and it's all really powered by AI.
  • Operator:
    Your next question comes from the line of Vivek Arya with Bank of America Securities.
  • Vivek Arya:
    Thanks for taking my question, and congratulations on returning the business back to strong growth. Jensen, I wanted to ask about how you are positioned from a supply perspective for this coming year. Your main foundry is running pretty tight. How will you be able to support the 20% or so growth here that many investors are looking for? If you could just give us some commentary on how you're positioned from a supply perspective, that would be very helpful.
  • Jensen Huang:
    Well, I think we're in pretty good shape on supply. We surely won't have ample supply. It is true that the industry is tight, and we have the combination of supporting multiple processes and multiple fabs across our partner TSMC. We've got a lot of different factories and several different nodes of process qualified. I think we're in good shape. And so we just have to watch it closely. And we're working very closely with all of our customers on forecasting, and of course that gives us better visibility as well. But all of us have to do a better job forecasting, and we're working very closely between our customers and our foundry partner TSMC.
  • Operator:
    Your next question comes from the line of Timothy Arcuri with UBS.
  • Timothy Arcuri:
    Hi. Thanks. Colette, I'm wondering, in data center, if you can give us a little idea of what the mix was between vertical industries and hyperscale? I think last quarter hyperscale was a little bit less than 50%. Can you give us maybe the mix, or how much it was up, something like that? Thanks.
  • Colette Kress:
    Yes. Tim, thanks for the question. Similar to what we had seen last quarter, with all things growing as we moved into this quarter, growth in the hyperscalers continued to expand, as did the vertical industries and even the cloud instances. We're still looking at around the same split of 50-50 between our hyperscalers and our vertical industries, and maybe a tad below 50 in terms of our total overall hyperscalers.
  • Operator:
    Your next question comes from the line of Aaron Rakers with Wells Fargo.
  • Aaron Rakers:
    Yeah. Thanks for taking the question, and congratulations on the results. When I look at the numbers, the growth on an absolute basis sequentially in data center was almost 2x, or north of 2x, what we've seen in the past as far as the absolute sequential change. Through the course of this quarter, you were pretty clear that you would expect to see an acceleration of growth in the December quarter. I'm just curious how you think about that going into the April quarter, and how we should think about that growth rate through the course of this year, if you can give us any kind of framework. And Jensen, just curious, as you think about the bigger picture, where do you think we stand from an industry perspective today in terms of the attach rate of GPUs for acceleration in the server market? And where do you think that might be looking out over the next three years or so? Thank you.
  • Jensen Huang:
    Thanks, Aaron. Colette, do you want to go first?
  • Colette Kress:
    Sure. When we think about going into Q1 and our data center overall growth, we do expect to see continued growth going into Q1. We believe our visibility remains quite good, and we're expecting that as we move into the quarter and go forward.
  • Jensen Huang:
    Yes. Aaron, I believe that every query on the Internet will be accelerated someday. And at the very core of it, almost all queries will have some natural language understanding component to them. Almost all queries will have to sort through and make a recommendation from the trillions of possibilities, filter it down, and recommend a handful of answers to your queries. Whether it's shopping or movies or just asking locations or even asking a question, the possibilities of all the answers versus what is the best answer need to be filtered down. And that filtering process is called recommendation. That recommendation system is really complex, and deep learning is going to be involved in all of that. That's the first thing: I believe that every query will be accelerated. The second is, as you know, CPU scaling has really slowed, and there's just no two ways about it. It's not a marketing thing. It's a physics thing. The ability for CPUs to continue to scale without increasing cost or increasing power has ended. It's called the end of Dennard scaling. And so there has to be another approach. The combination of the emergence of deep learning and the use of artificial intelligence, the amount of computation that's necessary for every single query but the benefit that comes along with it, and the end of Dennard scaling suggests that there needs to be another approach, and we believe that approach is acceleration. Now, our approach for acceleration is fundamentally different than an accelerator. Notice we never say accelerator; we say accelerated computing. And the reason for that is because we believe that a software-defined data center will have all kinds of different AIs. The AIs will continue to evolve, the models will continue to evolve and get larger, and a software-defined data center needs to be programmable. 
It is one of the reasons why we've been so successful. And if you go back and think about all the questions that have been asked of me over the last three or four years around this area, the consistency of the answer has to do with the programmability of the architecture, the richness of the software, the difficulties of the compilers, the ever-growing size of the models, the diversity of the models, and the advances that these models are creating. And so we're seeing the beginning of a new computing era. A fixed-function accelerator is simply not the right answer. And so we believe that the future is going to be accelerated. It's going to require an accelerated computing platform, and software richness is really vital so that these data centers can be software-defined. And so I think that we're in the early innings -- the early innings, very, very early innings -- of this new future. And I think that accelerated computing is going to become more and more important.
  • Operator:
    Your next question comes from the line of Matt Ramsay with Cowen.
  • Matt Ramsay:
    Thank you very much. Good afternoon, and obviously congratulations on the data center success. Colette, I wanted to ask a little bit about the $100 million you took out of the guidance for coronavirus, and how you got to that number. Really two pieces. One, if you could remind us, maybe in terms of units or revenue, what percentage of your gaming business is within China? And as you look at that $100 million that you took out of the guidance, are you thinking about that from a demand disruption perspective? Or are you thinking about it from something in the supply chain that might limit your sales? Thank you.
  • Colette Kress:
    Sure. Thanks for the question, Matt. So it's really still quite early in terms of trying to figure out what the impact from the overall coronavirus may be. So we're not necessarily precise in terms of our estimate. Yes, our estimates are split between a possible impact on gaming and on data center, split pretty much equally. The $100 million also reflects what may be supply challenges or maybe overall demand. But we're still looking at those to get a better understanding of where we think that might be. In terms of our business makeup, yes, our overall China business for gaming is an important piece. China is about 30% of our overall gaming business. For data center, it moves quite a bit. They are a very important market for us, but it moves from quarter to quarter just based on the overall end customer mix as well as the system builders that they may choose. So it's a little harder to determine.
  • Operator:
    Your next question comes from the line of Harlan Sur with JPMorgan.
  • Harlan Sur:
    Good afternoon, and congratulations on the strong results and guidance. On gaming -- yeah, no problem. Good to see the recent launch of your GeForce NOW service. But on the partnership with Tencent on cloud gaming, it seems like Tencent should have a smoother transition to the cloud model. They are the largest gaming company in the world, so they own many of the games. They also have their own data center infrastructure already in place. But how is the NVIDIA team going to be supporting this partnership? Is it going to be your GeForce NOW hardware framework? Or will you just be supporting them with your standalone GPU products? And when do you expect the service to go mainstream?
  • Jensen Huang:
    Let's see. Tencent is the world's largest publisher. China represents about a third of the world's gaming, and transitioning to the cloud is going to be a long-term journey. And the reason for that is because Internet connection is not consistent throughout the entire market, and a lot of applications still need to be onboarded. We're working very closely with them. We're super enthusiastic about it. If we're successful long term, we're talking about an extra one billion gamers that we might be able to reach. And so I think that this is an exciting opportunity, just a long-term journey. Now here in the West, we've had a lot more opportunity to refine the connections around the world, working through the data centers, the local hubs, as well as people's WiFi routers at home. And so we've been in beta for quite some time, as you know. And here in the West our platform is open. We have several hundred games now, and we're in the process of onboarding another 1,500 games. We're the only cloud platform that's based on Windows, which allows us to bring PC games to the cloud. And so we've had more experience here in the West with reach, and we obviously have a lot more games that we can onboard. But I'm super enthusiastic about the partnership we have with Tencent. Overall, for GeForce NOW, you guys saw the launch -- the reception has been fantastic. The reviews have been fantastic. Our strategy has three components. There's the GeForce NOW service that we provide ourselves. We also have GeForce NOW alliances with telcos around the world, to reach the regions around the world that we don't have a presence in. And that is going super well, and I'm excited about that. And then lastly, partnerships with large publishers, for example like Tencent. We offer them our platform, of course, and a great deal of software, and there's just a lot of engineering that has to be done in collaboration to refine the service.
  • Operator:
    Your next question comes from the line of C.J. Muse with Evercore.
  • C.J. Muse:
    Yeah. Good afternoon. Thank you for taking my question. I guess a question on the gaming side. If I look at your overall revenue guide, it would seem to suggest that you're looking for better-than-typical seasonal trends into April. I guess, can you speak to that? And then, how are you seeing desktop gaming demand with ray tracing content becoming more available? How should we think about the growth trajectory through 2020? And then just really a modeling question as part of gaming: with notebook now a third of the revenues, how should we think about the seasonality going into April and July for that part of your business? Thank you.
  • Jensen Huang:
    Yes. So C.J., I'm going to go first and then Colette is going to take it home here. So the first part of it is this -- I'm sorry, okay -- our gaming business, the end market demand is really terrific. It's really healthy. It's been healthy throughout the whole year. And it's pretty clear that RTX is doing fantastic. It's super clear now that ray tracing is the most important new feature of next-generation graphics. We have over 30 games that have been announced, 11 games or so that have shipped. The pipeline of ray tracing games that are going to be coming out is just really, really exciting. And one more thing about RTX: we finally have taken RTX down to $299. So it's now at the sweet spot of gaming. And so RTX is doing fantastic. The sell-through is fantastic all over the world. The second part of our business that is changing in gaming is the amount of notebook sales. The success of our notebooks and of the Nintendo Switch has really changed the profile of our overall gaming business. Our notebook business, as Colette mentioned earlier, has seen double-digit growth for eight consecutive quarters, and this is unquestionably a new gaming category -- like it's a new game console. This is going to be the largest game console in the world, I believe. And the reason for that is because there are more people with laptops than with any other device. And so the fact that we've been able to get RTX into a thin and light notebook is really a breakthrough. And it's one of the reasons why we're seeing such great success in notebook. Between the notebook business and our Nintendo Switch business, the profile of gaming overall has changed and has become more seasonal. It's more seasonal because systems like notebooks and Switch are built largely in two quarters, Q2 and Q3. 
And they build largely in Q2 and Q3 because it takes a while to build them, ship them, and put them into the hubs around the world, and they tend to be built ahead of the holiday season. And so that's one of the reasons why Q3 will tend to be larger, and Q4 and Q1 will tend to be more seasonal than in the past. But the end demand is fantastic. RTX is doing great. And part of it is just a result of the success of our notebooks. I'm going to hand it over to Colette.
  • Colette Kress:
    Yeah. So with that as background, when you think about all those different components that are within gaming -- the notebooks, the overall Switch, and of course all of the ray tracing that we have in terms of desktop -- our normal seasonality as we look at Q1 for gaming, with all those three pieces, is usually sequentially down from Q4 to Q1. This year the outlook assumes it will probably be a little bit more pronounced due to the coronavirus. So, in total, we're probably looking at Q1 to be a low double-digit sequential decline in gaming.
  • Operator:
    Your next question comes from the line of Atif Malik with Citi.
  • Atif Malik:
    Hi. Thank you for taking my question, and good job on the results and guide. On the same topic, the coronavirus: Colette, I'm a bit surprised that the range on the guidance is not wider versus historical. Can you just talk about why not widen the range? And what went into that $100 million hit from the coronavirus?
  • Colette Kress:
    So, Atif, thanks for the question. Again, it's still very early regarding the coronavirus. Our thoughts are with the employees, the families, and others that are in China. Our discussions with our supply chain, which is very prominent in the overall Asia region, as well as with our AIC makers and our customers, are about as timely as they can be. And that went into our discussion and our thoughts on the overall guidance that we gave and into our $100 million. We'll just have to see how the quarter comes through, and we'll discuss more when we get to it. But that was our best estimate at this time.
  • Operator:
    Your next question comes from the line of William Stein with SunTrust.
  • William Stein:
    Great. Thanks for taking my question. Jensen, I'd love to hear your thoughts as to how you anticipate the inference market playing out. Historically, NVIDIA's had essentially all of the training market and little of the inference market. In the last 1.5 years or so, I think that's changed, where you've done much better in inference. Now you have the T4 in the cloud, you have EGX at the edge, and you have Jetson -- I think is what it's called -- at the endpoint device. How do you anticipate the market for inference developing across those various positions? And how are you aligning your portfolio for that growth?
  • Jensen Huang:
    Yeah. Thanks a lot, Will. Let's see. I think, historically, inference has been a small part of our business because AI was still being developed. Historical AI -- classical machine learning -- wasn't particularly suited for GPUs and wasn't particularly suited for acceleration. It wasn't until deep learning came along that the amount of computation necessary became just extraordinary. And the second factor is the type of AI models that were developed. Eventually, the types of models related to natural language understanding and conversational AI and recommendation systems -- these require instantaneous response. The faster the answer, the more likely someone is going to click on it. And so you know that latency matters a great deal, and it's measurable. The effect on the business is directly measurable. And so for conversational AI, for example, we've been able to reduce the latency of the entire pipeline, from speech recognition, to the language processing -- for example, fixing the errors and such, coming up with a recommendation -- to text-to-speech, the voice synthesis. That entire pipeline could take several seconds. We run it so fast that it's possible now for us to process the entire pipeline within a couple hundred milliseconds, 200 to 300 milliseconds. That is in the realm of interactive conversation; beyond that, it's just simply too slow. And so the combination of AI models that are large and complex moving to inference, moving to production, and then, secondarily, conversational AI and latency-sensitive models and applications where our GPUs are essential -- moving forward, I think you're going to see a lot more opportunities for us in inference. The way to think about that long term is that acceleration is essential because of the end of Dennard scaling. Process technology is going to demand that we compute in a different way. 
And the way that AI has evolved with deep learning suggests that acceleration on GPUs is just a really phenomenal approach. Data centers are going to have to be software-defined. And as I mentioned earlier in answer to another question, I believe that in the future the data center will all be accelerated. It will all be running AI models, it will be software-defined, and it will be programmable, and having an accelerated computing platform is essential. As you move out to the edge, it really depends on whether your platform is software-defined -- whether it has to be programmable -- or whether it's fixed-function. There are many, many devices where the inference work is very specific. It could be something as simple as detecting changes in temperature or changes in sound, or detecting motion. Those types of inference models could still be based on deep learning, but the work is function-specific. You don't have to change it very often, and you're running one or two models at any given point in time. And so those devices are going to be incredibly cost-effective. I believe you're going to have AI chips that are $0.50, $1, and you're just going to put one into something and it's going to be doing magical detections. The type of platforms that we're in, such as self-driving cars and robotics -- the software is so complicated, and there's so much evolution to come yet, and it's going to constantly get better. Those software-defined platforms are really the ideal targets for us. And so we call it AI at the edge: edge computing devices. One of the edge computing devices I'm very excited about is what people call mobile edge, or basically 5G telco edge. That data center will be programmable. We recently announced that we partnered with Ericsson and we're going to be accelerating the 5G stack. And so that needs to be a software-defined data center. It runs all kinds of applications, including 5G. 
And those applications are going to be -- those opportunities are fantastic for us.
  • Operator:
    Your next question comes from the line of Mark Lipacis with Jefferies.
  • Mark Lipacis:
    Thanks for taking my question. Jensen, I guess I had a question about how you think about the sustainability of your market position in the data center. In my simplistic view, about 12 years ago you made an out-of-consensus call to invest in CUDA software and distribute it to universities. Neural networking took off, and you were the de facto standard, and here we are right now. And for me, what's interesting to hear is that the demand that you're seeing today for your products is from markets that have just developed within the last year. And my question is, how do you think about your R&D investment strategy to make sure that you are staying way ahead of the market, of the competition, and even of your customers, who are investing in these markets too? Thank you.
  • Jensen Huang:
    Yeah. Thanks, Mark. Our company has to live 10 years ahead of the market. And so we have to imagine where the world is going to be in 10 years' time, in five years' time, and work our way backwards. Now, our company is focused on one singular thing. The simplicity of it is incredible. And that one singular thing is accelerated computing. Accelerated computing is all about the architecture, of course. It's about the complicated systems that we're in, because throughput is high. When we can compute 10, 20, 50, 100 times faster than the CPU, all of a sudden everything becomes a bottleneck. Memory's a bottleneck, networking's a bottleneck, storage is a bottleneck -- everything is a bottleneck. And so NVIDIA has to be a supremely good system designer. But the complexity of our stack -- the software stack above it -- is really where the investments over the course of the last some 29 years now have really paid off. NVIDIA, frankly, has been an accelerated computing company since the day it was born. And so our company is constantly trying to expand the number of applications that we can accelerate. Of course, computer graphics was the original one, and we're reinventing it with real-time ray tracing. We have rendering, which is a brand-new application that we're making great progress in. I just mentioned 5G acceleration. Recently, we announced genomics computing. And so those are new applications that are really important to the future of computing. In the area of artificial intelligence, from image recognition to natural language understanding, to conversation, to recommendation systems, to robotics and animation, the number of applications that we're going to accelerate in the field of AI is really, really broad. And each one of them is making tremendous progress and getting more and more complex. 
    And so the question about the sustainability of our company really comes down to two dimensions. Let's assume for now that accelerated computing is the path forward, and we surely believe so. There's a lot of evidence, from the laws of physics to the laws of computer science, that suggests accelerated computing is the right path forward. But this really comes down to two dimensions. One dimension is: are we continuing to expand the number of applications that we can accelerate, whether it's AI or computer graphics or genomics or 5G, for example? And the second is: are those applications getting more impactful and adopted by the ecosystem, the industry? And are they continuing to get more complex?

    Those dimensions, the number of applications, the impact of those applications, and the growth of the complexity of those applications: if those dynamics continue to grow, then I think we're going to do a good job. We're going to sustain. And I think when I spell it out that way, it's basically the equation of growth of our company. I think it's fairly clear that the opportunities ahead are fairly exciting.
  • Operator:
    Your next question comes from the line of Blayne Curtis with Barclays.
  • Blayne Curtis:
    Thanks for squeezing me in. Jensen, I just wanted to ask you on the auto side. I think at least one of your customers might have slowed down their program. Just kind of curious, as you look out the next couple of years, what the challenges are if the OEMs are moving slower? And then any perspective on the regulatory side, whether anything has changed there, would be helpful. Thanks.
  • Jensen Huang:
    I think that the automotive industry is struggling, but for all of the reasons that everybody knows. However, the enthusiasm to redefine and reinvent their business model has never been greater. Every single one of them knows now, and they've surely known for some time, that autonomous capability is really the vehicle to do that. Every car company wants to be a tech company. They need to be a tech company. Every car company needs to be software-defined. And the platform by which to do so is an electric vehicle with autonomous autopilot capability. That car has to be software-defined. This is their future, and they're racing to get there.

    And so although the automotive industry is struggling in the near term, their opportunity has never been better, in my opinion. The future of AV is more important than ever. The opportunity is very real. The benefits of autonomy, whether it's safety, utility, cost reduction, or productivity, have never been more clear. And so I'm as enthusiastic as ever about autonomous vehicles, and the projects that we're working on are moving ahead. As for the near-term challenges of the automotive industry, or whatever sales slowdown in China they're experiencing, I feel badly about that. But the industry is as clearheaded about the importance of AV as ever.
  • Operator:
    I will now turn the call back over to Jensen for any closing remarks.
  • Jensen Huang:
    We had an excellent quarter with strong demand for NVIDIA RTX graphics and NVIDIA AI platforms, and record data center revenue. NVIDIA RTX is reinventing computer graphics, and the market's response is excellent, driving a powerful upgrade cycle in both gaming and professional graphics, while opening whole new opportunities for us to serve the huge community of independent creative workers and social content creators, and new markets in rendering and cloud gaming.

    Our data center business is enjoying a new wave of growth powered by three key trends in AI: natural language understanding, conversational AI, and deep recommenders are changing the way people interact with the Internet. Public cloud demand for AI is growing rapidly. And as AI shifts from development to production, our inference business is gaining momentum. We'll be talking a lot more about these key trends and much more at next month's GTC conference in San Jose. Come join me. You won't be disappointed. Thanks, everyone.
  • Operator:
    Ladies and gentlemen, this concludes today's conference call. Thank you for participating. You may now disconnect.