NVIDIA Corporation
Q3 Fiscal 2017 Earnings Call Transcript

  • Operator:
    Good afternoon. My name is Victoria, and I'll be your conference operator today. I'd like to welcome you to the NVIDIA Financial Results Conference Call. All lines have been placed on mute. After the speakers' remarks, there will be a question-and-answer period. I will now turn the call over to Arnab Chanda, Vice President of Investor Relations at NVIDIA. You may begin your conference.
  • Arnab K. Chanda:
    Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the third quarter of fiscal 2017. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. It's also being recorded. You can hear a replay by telephone until November 17, 2016. The webcast will be available for replay until next quarter's conference call to discuss Q4 financial results. The content of today's call is NVIDIA's property. It cannot be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These forward-looking statements are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Form 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All of our statements are made as of today, November 10, 2016, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.
  • Colette M. Kress:
    Thanks, Arnab. Revenue reached a record in the third quarter, exceeding $2 billion for the first time. Driving this was success in our Pascal-based gaming platform and growth in our datacenter platform, reflecting the role of NVIDIA's GPU as the engine of AI computing. Q3 revenue increased 54% from a year earlier to $2 billion and was up 40% from the previous quarter. Strong year-over-year gains were achieved across all four of our platforms.
  • Operator:
    Certainly. Your first question comes from Mark Lipacis from Jefferies.
  • Mark Lipacis:
    Thanks for taking my questions, and congratulations on a great quarter. To start out, Jen-Hsun, maybe you could help us understand: the datacenter business tripled year-over-year. What's going on in that business that's enabling that to happen? If you could maybe talk about – I don't know if it's on the technology side or the end-market side. And maybe as part of that, you can help us deconstruct the revenues and what's really driving that growth. And I had a follow-up too. Thanks.
  • Jen-Hsun Huang:
    Sure. A couple things. First of all, GPU computing is more important than ever. There are so many different types of applications that require GPU computing today, and it's permeating all over enterprise. There are several applications that we're really driving. One of them is graphics virtualization, application virtualization. Partnering with VMware and Citrix, we've essentially taken very compute-intensive, very graphics-intensive applications, virtualized them and put them into the datacenter. The second is computational sciences: using our GPU for general purpose scientific computing. And scientific computing, as you know, is not just for scientists; running equations and using numerics is a tool that is important to a large number of industries. And then, third, one of the most exciting things that we're doing because of deep learning: we've really ignited a wave of AI innovation all over the world. These several applications – graphics virtualization, computational science and data science – have really driven our opportunity in the datacenter. The thing that made it possible, though, was really the transformation of our company from a graphics processor to a general purpose processor. And on top of that, probably the more important part, is transforming from a chip company to a platform company. What made application and graphics virtualization possible is a complicated stack of software we call GRID, and you guys have heard me talk about it for several years now. Second, in the area of numerics and computational sciences, there's CUDA: our rich set of numerical libraries on top of CUDA, all the tools that we have invested in, the ecosystem we've worked with, and all the developers around the world who now know how to use CUDA to develop applications make that part of our business possible. And then third, our deep learning toolkit: the NVIDIA GPU deep learning toolkit has made it possible for all frameworks in the world to get GPU acceleration. And with GPU acceleration, the benefit is incredible. It's not 20%, it's not 50%; it's 20 times, 50 times. And that translates, most importantly for researchers, to the ability to gain access to insight much, much faster. Instead of months, it could be days. It's essentially like having a time machine. And secondarily, for IT managers, it translates to lower energy consumption and, most importantly, a substantial reduction in datacenter cost, where a rack of servers with GPUs replaces an entire basketball court of off-the-shelf server clusters. And so it's a pretty big deal. Great value proposition.
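
To make the programming model behind those 20x and 50x claims concrete, here is a minimal CUDA sketch – an illustrative example, not NVIDIA's code – of the pattern the CUDA libraries parallelize: one GPU thread per data element, here computing SAXPY (y = a·x + y):

```cuda
// saxpy.cu – build with: nvcc saxpy.cu -o saxpy
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread updates one element: y[i] = a*x[i] + y[i].
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                 // ~1M elements; size is illustrative
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);         // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The speedups Huang cites come from running many thousands of such threads concurrently; the numerical libraries he mentions package this pattern behind standard APIs, which is how frameworks get acceleration without writing kernels themselves.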
  • Operator:
    Your next question comes from the line of Vivek Arya with Bank of America Merrill Lynch.
  • Vivek Arya:
    Thanks for taking my question and congratulations on the consistent growth and execution. Jen-Hsun, one more on the datacenter business. It has obviously grown very strongly this year. But, in the past, it has been lumpy. So, for example, when I go back to your fiscal 2015, it grew 60% to 70% year-on-year. Last year, it grew about 7%. This year it's growing over 100%. How should we think about the diversity of customers and the diversity of applications to help us forecast how the business can grow over the next one or two years?
  • Jen-Hsun Huang:
    Yeah, I think embedded in your question, in fact, are many of the variables that influence our business. Especially in the beginning, several years ago when we started working on GPU computing and bringing this capability into datacenters, we relied on supercomputing centers, and then we relied on remote workstations – datacenter workstations, if you will; virtualized workstations. And then increasingly, we started seeing demand from hyperscale datacenters as they used our GPUs for deep learning and to develop their networks. And now, we're starting to see datacenters take advantage of our new GPUs, P40 and P4, to use those networks for inferencing in a large scale way. And so, I think we're moving, if you will, our datacenter business along multiple trajectories. The first trajectory is the number of applications we can run. Our GPUs now have the ability, with one architecture, to run all of those applications that I mentioned, from graphics virtualization to scientific computing to AI. Second, we used to be in datacenters, but now we're in datacenters, supercomputing centers as well as hyperscale datacenters. And then third, the number of applications, industries that we affect is growing. It used to start with supercomputing. Now, we have supercomputing, we have automotive, we have oil and gas, we have energy discovery, we have the financial services industry, we have, of course, one of the largest industries in the world, consumer Internet cloud services. And so we're starting to see applications in all of those different dimensions. And I think that the combination of those three things – the number of applications, the number of platforms and locations by which we have success, and, of course, the number of industries that we affect – should give us a more upward trajectory in a consistent way. But I think the mega point, though, is really the size of the industries we're now able to engage. At no time in the history of our company have we ever been able to engage industries of this magnitude. And so that's the exciting part, I think, in the final analysis.
  • Operator:
    Your next question comes from the line of Toshiya Hari with Goldman Sachs.
  • Toshiya Hari:
    Great. Thanks for taking my question and congratulations on a very strong quarter. Jen-Hsun, you've been on the road quite a bit over the past few months, and I'm sure you've had the opportunity to connect with many of your important customers and partners. Can you maybe share with us what you learned from the multiple trips and how your view on the company's long-term growth trajectory changed, if at all?
  • Jen-Hsun Huang:
    Yeah. Thanks a lot, Toshiya. First of all, the reason why I've been on the road for almost two months solid is because of the request and the demand, if you will, from developers all over the world for a better understanding of GPU computing, getting access to our platform, and learning about all of the various applications that GPUs can now accelerate. The demand is just really great. And we could no longer do GTC – which is essentially our developer conference – just here in Silicon Valley. And so we, this year, decided to take it on the road, and we went to China, went to Taiwan, went to Japan, went to Korea. We had one in Australia and also one in India and Washington D.C., and Amsterdam for Europe. And so we pretty much covered the world with our first global developer conference. I would say, probably, the two themes that came out of it are, first, that the GPU has really reached a tipping point. It is so available everywhere: it's available in PCs, it's available from every computer company in the world, it's in the cloud, it's in the datacenter, it's in laptops. The GPU is no longer a niche component. It is a large scale, massively available, general purpose computing platform. And so I think people realize now the benefits of the GPU: the incredible speedup or cost reduction – basically opposite sides of the same coin – that you can get with GPUs, and so GPU computing. Number two is AI; just the incredible enthusiasm around AI. And the reason for that – for everybody who knows already about AI, what I'm going to say is pretty clear – is that there's a large number of applications, problems, challenges where a numerical approach is not available, where a laws-of-physics-based, equation-based approach is not available. And these problems are very complex. Oftentimes, the information is incomplete and there's no laws of physics around it. For example, what's the laws of physics of what I look like? What's the laws of physics for recommending tonight's movie? There's no laws of physics involved. And so the question is, how do you solve those kinds of incomplete problems? There's no laws-of-physics equation that you can program into a car that causes the car to drive and drive properly. These are artificial intelligence problems. Search is an artificial intelligence problem. Recommendation is an artificial intelligence problem. And so now that GPU deep learning has made it possible for machines to learn from a large amount of data and to determine the features by itself – to compute the features to recognize – GPU deep learning has really ignited this wave of AI revolution. And so I would say the second thing is just the incredible enthusiasm around the world for learning how to use GPU deep learning, how to use it to solve AI-type problems, and to do so in all of the industries that we know, from healthcare to transportation to entertainment to enterprise to you name it.
  • Operator:
    Your next question comes from the line of Atif Malik with Citigroup.
  • Atif Malik:
    Hi. Thanks for taking my question, and congratulations. You mentioned that a Maxwell upgrade was about 30% of your (inaudible).
  • Jen-Hsun Huang:
    Atif, first of all, there were several places where you cut out, and this is one of those artificial intelligence problems. Because I heard incomplete information, I'm going to infer from some of the important words that I did hear, and I'm going to apply artificial – in this case, human – intelligence to see if I can predict what it is that you were trying to ask. I think the basis of your question was that in the past, during the Maxwell generation, we saw an upgrade cycle about every two or three years, and we had an installed base of some 60 million to 80 million gamers during that time, and several years have now gone by. And the question is, what would the upgrade cycle for Pascal look like? There are several things that have changed that I think are important to note, and that could affect a Pascal upgrade. First of all, adoption has increased: the number of units has grown and the ASP has grown. And I think the reasons for that are severalfold. One, the number of gamers in the world is growing. Effectively everybody born in the last 10, 15 years is likely to be a gamer, so long as they have access to electricity and the Internet. The quality of games has grown significantly. And one of the factors that has made this production value possible is that the PC and the two game consoles, Xbox and PlayStation – and in the near future, the Nintendo Switch – all of these architectures are common in the sense that they all use modern GPUs, they all use programmable shading and they all have basically similar features. They have very different design points, they have different capabilities, but they have very similar architectural features. As a result, game developers can target a much larger installed base with one common code base and, as a result, they can increase the production quality, the production value, of the games. The second – and one of the things that you might have noticed is that recently PlayStation and Xbox both announced 4K versions, basically the Pro versions of their game consoles. That's really exciting for the game industry. It's really exciting for us, because what's going to happen is the production value of games will amp up and, as a result, it will increase the adoption of higher-end GPUs. So, I think that's a very important positive. That's probably the second one – the first one being that the number of gamers is growing; the second is that game production value continues to grow. And then the third is that gaming is no longer just about gaming. Gaming is part gaming, part sports and part social. There are a lot of people who play games just so they can hang out with their other friends who are playing games. And so it's a social phenomenon. And then, because the quality of games, the complexity of games – in some, such as League of Legends, such as StarCraft, the real-time strategy component of it, the hand-eye coordination part of it, the incredible teamwork part of it – is so great, it has become sport. And because there are so many people in gaming, because it's a fun thing to do and hard to master, and the size of the industry is large, it's become a real sporting event.
And one of the things that I'll predict is that one of these days, I believe, gaming will likely be the world's largest sport industry. And the reason for that is that it's the largest industry: there are more people who play games, enjoy games and watch other people play games than there are people who play football, for example. And so I think it stands to reason that eSports will be the largest sporting industry in the world. It's just a matter of time before it happens. And so I think all of these factors have been driving both the increase in the size of the market for us as well as the ASP of the GPUs for us.
  • Operator:
    Your next question comes from the line of Stephen Chin with UBS.
  • Stephen Chin:
    Hi. Thanks for taking my questions. Jen-Hsun, first question, if I could, on your comments regarding the GRID systems: you mentioned some accelerating demand in the manufacturing and automotive verticals. Just kind of wondering if you had any thoughts on what inning you're currently in, in terms of seeing a strong ramp-up towards a full run rate for those areas, and especially for the broader corporate enterprise end-market vertical as well. As a quick follow-up on the gaming side, I was wondering if you had any thoughts on whether or not there's still a big gap between the ramp-up of Pascal supply and the pent-up demand for those new products. Thank you.
  • Jen-Hsun Huang:
    Sure. So, I would say that we're probably in the first at-bat of the first inning of GRID, and the reason for that is this. We've prepared ourselves. We went to spring training camp. We came up through the – they call it the farm league or something like that. I'm not really a baseball player, but I heard some people talk about it. And so I think we're probably at the first at-bat of the first inning. The reason why I'm excited about it is because I believe that in the future, applications will be virtualized in the datacenter or in the cloud. On first principles – on first principles – I believe that applications will be virtualized and that you'll be able to enjoy these applications irrespective of whether you're using a PC, a Chrome notebook, a Mac or a Linux workstation. It simply won't matter. And, on the other hand, I believe that in the future, applications will become increasingly GPU accelerated. And so, how do you put something in the cloud that has no GPUs, and how do you accelerate these applications that are increasingly GPU accelerated? The answer, of course, is putting GPUs in the cloud and putting GPUs in datacenters. And that's what GRID is all about. It's about virtualization; it's about putting GPUs in large scale datacenters and being able to virtualize the applications so that we can enjoy them on any computer, on any device, and putting computing closer to the data. So, I think we're just in the beginning of that. And that could explain why GRID – finally, after a long period of time of building the ecosystem, building the infrastructure, developing all the software, getting the quality of service to be just really exquisite, working with the ecosystem partners – has really taken off. And I could surely expect it to continue to grow at the rate that we're seeing for some time. In terms of Pascal, we are still ramping. Production is fully ramped in the sense that all of our products are fully qualified, they're on the market, and they have been certified and qualified with OEMs. However, demand is still fairly high. And so we're going to continue to work hard, and our manufacturing partner TSMC is doing a great job for us. The yields are fantastic for 16nm FinFET, and they're just doing a fantastic job supporting us. And so we're just going to keep running at it.
  • Operator:
    Your next question comes from the line of Joe Moore with Morgan Stanley.
  • Joseph Moore:
    Yeah. Thank you very much. Great quarter by the way; I'm still amazed how good this is. Can you talk a little bit about the size of the inference opportunity? Obviously, you guys have done really well in training. I assume penetrating inference is reasonably early on. But can you talk about how you see GPUs competitively versus FPGAs on that side of it and how big you think that opportunity could become? Thank you.
  • Jen-Hsun Huang:
    Sure. I'll start backwards and answer the FPGA question first. FPGAs are good at a lot of things, but for anything that you could do in an FPGA, if the market opportunity is large, it's always better to develop an ASIC. An FPGA is what you use when the volume is not large. An FPGA is what you use when you're not certain about the functionality you want to put into something. FPGAs are largely useful when the volume's not large, because otherwise you can build an ASIC, a full custom chip, that can deliver not 20% more performance but 10 times better performance and better energy efficiency than you could get using FPGAs. And so I think that's a well-known fact. Our strategy is very different from any of that. Our strategy is really about building a computing platform. Our GPU is not a specific-function thing anymore. It's a general purpose parallel processor. CUDA can do molecular dynamics, it could do fluid dynamics, it could do partial differential equations, it could do linear algebra, it could do artificial intelligence, it could be used for seismic analysis, it could be used for computer graphics – even computer graphics – and so our GPU is incredibly flexible. And it's designed specifically for parallel throughput computing. By combining it with the CPU, we've created a computing platform that is good at both sequential instruction processing and very high throughput data processing. The reason why we believe that's important is because of several things. We want to build a computing platform that is useful to a large industry. You could use it for AI, you could use it for search, you could use it for video transcoding, you could use it for energy discovery, you could use it for health, you could use it for finance, you could use it for robotics – you could use it for all these different things. So, on first principles, we're trying to build a computing platform. It's a computing architecture and not a dedicated application thingy. And most of the customers that we're calling on, most of the markets that we're addressing, and the areas that we've highlighted are all computer users. They need to use and deploy a computing platform and have the benefit of being able to rapidly improve their AI networks. AI is still in the early days – the early days of early days. And so GPU deep learning is going through innovations at a very fast clip. Our GPU allows people to develop new networks and deploy new networks as quickly as possible. And so I think the way to think about it is to think of our GPU as a computing platform. In terms of the market opportunity, the way I would look at it is this. There are something along the lines of 5 million to 10 million hyperscale datacenter nodes. And – you guys have heard me say this before – I think that there is a new set of HPC clusters that has been added into these datacenters. And then the next thing that's going to happen is that you're going to see GPUs being added to a lot of these 5 million to 10 million nodes, so that you could accelerate every single query; every query that comes into the datacenter will likely be an AI query in the future. And so, I think GPUs have an opportunity to see a fairly large hyperscale installed base. But, beyond that, there's the enterprise market.
Although a lot of computing is done in the cloud, a great deal of computing – especially the type of computing that we're talking about here, which requires a lot of data, and we're a data throughput machine – the type of computers that we're talking about tends to be one (inaudible).
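
As a hedged sketch of the "general purpose parallel processor" point – the same platform handling the linear algebra workloads Huang lists, through a standard library rather than a fixed-function chip – here is a small cuBLAS example; the matrix sizes and values are illustrative only:

```cuda
// gemm_example.cu – build with: nvcc gemm_example.cu -lcublas
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 4;                          // tiny illustrative matrices
    float hA[n * n], hB[n * n], hC[n * n];
    for (int i = 0; i < n * n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    float *dA, *dB, *dC;
    cudaMalloc(&dA, sizeof(hA));
    cudaMalloc(&dB, sizeof(hB));
    cudaMalloc(&dC, sizeof(hC));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha*A*B + beta*C, column-major per the cuBLAS convention.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC, dC, sizeof(hC), cudaMemcpyDeviceToHost);
    printf("C[0] = %.1f\n", hC[0]);           // expect 8.0 for these inputs
    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

The same device, driver and toolkit run this GEMM, a fluid dynamics kernel, or a neural network layer, which is the flexibility being contrasted with FPGAs and ASICs above.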
  • Operator:
    Your next question comes from the line of Craig Ellis with B. Riley & Company.
  • Craig A. Ellis:
    Thanks for taking the question, and congratulations on the stellar execution. Jen-Hsun, I wanted to go back to the automotive business. In the past, the company has mentioned that revenues consist of display and then, on the autopilot side, both consulting and product revenues, though I think weighted much more to the consulting side for now. But as we look ahead to Xavier and the announcement you made intra-quarter that it's coming late next year, how should we expect the revenue mix to evolve, not just from consulting to product but from Parker towards Xavier?
  • Jen-Hsun Huang:
    Yeah – I don't know that I have a really granular breakdown for you, Craig, partly because I'm just not sure. But I think the dynamics are that self-driving cars are probably the single most disruptive dynamic that's happening in the automotive industry. It's almost impossible for me to imagine that in five years' time, a reasonably capable car will not have autonomous capability at some level – and a very significant level at that. And I think what Tesla has done, by launching and having on the road in the very near future a full autonomous driving capability using AI, has sent a shock wave through the automotive industry. It's basically five years ahead. Anybody who's talking about 2021 – that's just a non-starter anymore. And I think that's probably the most significant bit in the automotive industry. Anybody who is talking about autonomous capabilities in 2020 and 2021 is at the moment re-evaluating in a very significant way. And so I think that, of course, will change how our business profile ultimately looks. It depends on those factors. Our autonomous vehicle strategy is relatively clear, but let me explain it anyways. Number one, we believe that autonomous driving is not a detection problem; it's an AI computing problem. It's not just about detecting objects, it's about perception of the environment around you, it's about reasoning – reasoning about what is happening and what to do, taking action based on that reasoning, and continuously learning. And so AI computing requires a fair amount of computation. Anybody who thought that it would take only 1 watt or 2 watts – basically one-third the energy of a cell phone – I think that's just unfortunate, and it's not going to happen any time soon. And so I think people now recognize that AI computing is a very software-rich problem, a supremely exciting AI problem, and that deep learning and GPUs could add a lot of value. And it's going to happen in 2017; it's not going to happen in 2021. So that's number one. Number two, our strategy is to deploy a one-architecture platform that is open, that car companies could work on to leverage our software stack and create their network, their artificial intelligence network. And then we would address everything from excellent highway cruising all the way to full autonomy, to trucks, to shuttles. And using one computing architecture, we could apply it to radar-based systems, or radar plus cameras, or radar plus cameras plus lidars – we could use it for all kinds of sensor fusion environments. And so our strategy, I think, is really resonating well with the industry, as people now realize that we need the computation capability five years earlier, that it's not a detection problem but an AI computing problem, and that the software is really intensive. These three observations, I think, have put us in a really good position.
  • Operator:
    And your next question comes from Mitch Steves with RBC Capital Markets.
  • Mitch Steves:
    Hey, guys. Thanks for taking my question. Great quarter across the board. I did want to return to the automotive segment, because the datacenter segment has been talked about at length. With the new DRIVE PX platform potentially increasing ASPs, how do we think about ASPs for automotive going forward? And if I recall, you guys had about $30 million in backlog in terms of cars – I'm not sure if it's possible, but maybe you can give an update there as well.
  • Jen-Hsun Huang:
    Let's see. Our architecture for DRIVE PX, Mitch, is scalable. You could start from one Parker SoC, and that allows you to have surround camera; it allows you to use AI for highway cruising. And if you would like to have even more cameras, so that your functionality could be used more frequently in more conditions, you could always add more processors. And so we go from one to four processors. And if it's a fully autonomous driverless car – a driverless taxi, for example – you might need more than even four of our processors. You might need eight processors. You might need 12 processors. And the reason for that is because you need to reduce the circumstances in which autopilot doesn't engage, because you don't have a driver in the car at all. And so, depending on the application that you have, we'll have a different configuration, and it's scalable. It ranges from a few hundred dollars to a few thousand dollars. And so I think it just depends on what configuration people are trying to deploy. Now, for a few thousand dollars, the productivity of that vehicle is incredible, as you can simply do the math. It's much more available, the cost of operations is reduced, and a few thousand dollars is surely almost nothing in the context of that use case.
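
As an illustration of what "scalable from one to many processors" can look like at the software level, here is a hedged CUDA sketch that runs the same code path on however many devices are present; the kernel is a hypothetical stand-in for per-camera work, not NVIDIA's DriveWorks code:

```cuda
// scale.cu – same binary, 1 or N devices
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical stand-in for per-sensor frame processing.
__global__ void processFrame(float *frame, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) frame[i] = frame[i] * 0.5f + 1.0f;
}

int main() {
    int devices = 0;
    cudaGetDeviceCount(&devices);          // 1, 4, 8, 12... same code path
    const int n = 1 << 16;
    float *frames[16] = {0};

    for (int d = 0; d < devices && d < 16; ++d) {
        cudaSetDevice(d);                  // spread sensor streams across GPUs
        cudaMalloc(&frames[d], n * sizeof(float));
        processFrame<<<(n + 255) / 256, 256>>>(frames[d], n);
    }
    for (int d = 0; d < devices && d < 16; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();           // wait for this device's work
        cudaFree(frames[d]);
    }
    printf("Processed frames on %d device(s)\n", devices);
    return 0;
}
```

The design point is that adding processors adds capacity and redundancy without changing the program, which is the one-architecture argument Huang makes for configurations from a single Parker SoC up to a driverless taxi.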
  • Operator:
    Your next question comes from the line of Harlan Sur with JPMorgan.
  • Harlan Sur:
    Good afternoon. Congratulations on the solid execution and growth. Looking at some of your cloud customers' new service offerings – you mentioned AWS with the EC2 P2 platform, and you have the Microsoft Azure cloud services platforms – it's interesting because they're ramping new instances primarily using your K80 accelerator platform, which means that the Maxwell-based and the recently introduced Pascal-based adoption curves are still ahead of them, which obviously is a great setup as it relates to the continued strong growth going forward. Can you just help us understand why the design-in cycle times for these accelerators are so long? And when do you expect the adoption curve for the Maxwell-based accelerators to start to kick in with some of your cloud customers?
  • Jen-Hsun Huang:
    Yeah, Harlan, good question. And it's exactly the reason why having started almost five years ago in working with all of these large scale datacenters is what it takes. Several things have to happen. Applications have to be developed. Their hyperscale software – their enterprise, their datacenter-level software – has to accommodate this new computing platform. The neural networks have to be developed and trained and ready for deployment. The GPUs have to be tested against every single datacenter and every single server configuration that they have. And it takes that type of time to deploy at the scales that we're talking about. So I think that's number one. The good news is that between Kepler and Maxwell and Pascal, the architecture is identical: even though the underlying implementation has been improved dramatically and the performance increases dramatically, the software layer is the same. And so the adoption rate of our future generations is going to be much, much faster, and you'll see that. But it takes that long to integrate our software and our architecture and our GPUs into all of the datacenters around the world. It takes a lot of work. It takes a long time.
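
A hedged sketch of what "the software layer is the same" means in practice: the same CUDA source can be compiled once for several GPU generations, and a deployment check can report which generation each node carries. The compiler flags and device properties below are standard CUDA; the specific compute capabilities chosen for Kepler, Maxwell and Pascal are illustrative:

```cuda
// One source, several architectures (Kepler sm_35, Maxwell sm_52, Pascal sm_61):
//   nvcc -gencode arch=compute_35,code=sm_35 \
//        -gencode arch=compute_52,code=sm_52 \
//        -gencode arch=compute_61,code=sm_61 app.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        // major.minor encodes the generation: 3.x Kepler, 5.x Maxwell, 6.x Pascal.
        printf("GPU %d: %s (compute capability %d.%d)\n",
               d, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```

This is why software qualified on K80 (Kepler) can move to Maxwell- and Pascal-based accelerators without rewriting the stack, which is the faster-adoption point being made here.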
  • Operator:
    Your next question comes from the line of Romit Shah with Nomura.
  • Romit J. Shah:
    Yes. Thank you, Jen-Hsun. I just wanted to ask regarding the AutoPilot win, we know that you guys displaced Mobileye, and I was just curious if you could talk about why Tesla chose your GPU and what you can sort of give us in terms of the ramp and timing. And how does this – how would a ramp like this affect automotive gross margin?
  • Jen-Hsun Huang:
    I think there are three things that we offer today. The first is that it's not a detection problem; it's an AI computing problem. A computer has processors, the architecture is coherent, and you can program it – you can write software, you can compile to it. It's an AI computing problem. And our GPU computing architecture has the benefit of 10 years of refinement. In fact, this year is the 10-year anniversary of our first CUDA GPU, called G80. We've been working on this for 10 years. So number one: autonomous driving is an AI computing problem, not a detection problem. Second, car companies realize that they ultimately need to deliver a service – that the service is a network of cars which they continuously improve. It's like phones. It's like set-top boxes. You have to maintain and serve that customer, because they're interested in the service of autonomous driving, not a functionality. Autonomous driving is always being improved, with better maps and better driving behavior and better perception capability and better AI. And so the software component of it, and the ability for car companies to own their own software once they develop it on our platform – and to be able to continue to do OTA updates on it – is a real positive, to the point where it's enabling, it's essential, for the future of the driving fleet. And third is simply the performance and energy level. I don't believe it's actually possible at this moment in time to deliver an AI computing platform of the performance level that is required to do autonomous driving, at an energy efficiency level that is possible in a car, and to put all that functionality together in a reasonable way – I believe DRIVE PX 2 is the only viable solution on the planet today. And because Tesla had a great intention to deliver this level of capability to the world five years ahead of anybody else, we were a great partner for them. Okay? So those are probably the three reasons.
  • Operator:
    And your next question comes from the line of Matt Ramsay with Canaccord Genuity.
  • Matthew D. Ramsay:
    Thank you very much. Good afternoon. Jen-Hsun, I'd make an interesting observation about your commentary that your company has gone from sort of a graphics accelerator company to a computing platform company, and I think that's fantastic. One of the things I wonder, as maybe AI and deep learning acceleration sort of standardize on your platform, is what you're seeing and hearing in the Valley about startup activity – folks that are trying to innovate around the platform that you're bringing up, both complementary to what you're doing and potentially, really long-term, competitive to what you're doing. I'd just love to hear your perspectives on that. Thanks.
  • Jen-Hsun Huang:
    Yeah, Matthew, I really appreciate that. We see just a large number of AI startups around the world. There's a very large number here in the United States, of course. There's quite a significant number in China. There is a very large number in Europe. There's a large number in Canada. It's pretty much a global event. The number of software companies that have now jumped onto using GPU deep learning, taking advantage of the computing platform that we've taken almost seven years to build, is really quite amazing. We're tracking about 1,500. We have a program called Inception. Inception is our startup support program, if you will. They can get access to our early technology, they can get access to our expertise and our computing platform, and all that we've learned about deep learning we can share with many of these startups. They're trying to use deep learning in industries from cyber security to genomics to consumer applications, computational finance to IoT, robotics, self-driving cars – the number of startups out there is really quite amazing. And so our deep learning platform is a real unique advantage for them, because it's available in a PC. Almost anybody with even a couple hundred dollars of spending money can get a startup going with an NVIDIA GPU that can do deep learning. It's available from system builders and server OEMs all over the world – HP, Dell, Cisco, IBM, small system builders, local system builders all over the world. And very importantly, it's available in cloud datacenters all over the world. Amazon AWS and Microsoft's Azure cloud have really fantastic implementations ready to scale out. You've got the IBM Cloud, you've got Alibaba Cloud. So, if you have a few dollars an hour for computing, you pretty much can get a company started and use the NVIDIA platform in all of these different places. And so it's an incredibly productive platform because of its performance. It works with every framework in the world. It's available basically everywhere. As a result, we've given artificial intelligence startups anywhere on the planet the ability to jump on and create something. And so the availability – if you will, the democratization – of NVIDIA GPU deep learning is really quite enabling for startups.
  • Operator:
    And your last question comes from the line of David Wong with Wells Fargo.
  • David M. Wong:
    Thanks very much. That 60% growth in your gaming revenues was really impressive. Does this imply that there was a 60% jump in cards being sold by online retailers and retail stores, or does the growth reflect new channels through which NVIDIA gaming products are getting to customers?
  • Jen-Hsun Huang:
    It's largely the same channels. Our channel has been pretty stable for some time. And we have a large network – I appreciate the question; it's one of our great strengths, if you will. We've cultivated over two decades a network of partners who take the GeForce platform out to the world. And you can access our GPUs, you can access GeForce and be part of the GeForce PC gaming platform, from literally anywhere on the planet. And so that's a real advantage, and we're really proud of them. I guess you could also say that Nintendo contributed a fair amount to that growth. And, as you know, Nintendo tends to stick with an architecture for a very long time. We've worked with them now for almost two years. Several hundred engineering years have gone into the development of this incredible game console. I really believe when everybody sees it and enjoys it, they're going to be amazed by it. It's really like nothing they've ever played with before. And of course, the brand, their franchise and their game content are incredible. And so I think this is a relationship that will likely last two decades, and I'm super excited about it.
  • Operator:
    We have no more time for questions.
  • Jen-Hsun Huang:
    Well, thank you very much for joining us today. I would leave you with several thoughts. First, we're seeing growth across all of our platforms, from gaming to pro graphics to cars to datacenters. The transformation of our company from a chip company to a computing platform company is really gaining traction. You can see the results of our work in things like GameWorks and GFE and DriveWorks, all of the AI that goes on top of that, our graphics virtualization remoting platform called GRID, and the NVIDIA GPU deep learning toolkit – all really examples of how we've transformed the company from a chip company to a computing platform company. At no time in the history of our company have we enjoyed and addressed markets as exciting and large as we have today, whether it's artificial intelligence, self-driving cars, the gaming market as it continues to grow and evolve, and virtual reality. And of course, we all know now very well that GPU deep learning has ignited a wave of AI innovation all over the world. Our strategy, and the thing that we've been working on for the last seven years, is building an end-to-end AI computing platform: starting from GPUs that we have optimized and evolved and enhanced for deep learning, to system architectures, to algorithms for deep learning, to tools necessary for developers, to frameworks and the work that we do with all of the framework developers and AI researchers around the world, to servers, to cloud datacenters, to ecosystems and working with ISVs and startups, and all the way to evangelizing and teaching people how to use deep learning to revolutionize the software that they build. We call that the Deep Learning Institute, the NVIDIA DLI. And so these are some of the high-level points that I hope you got, and I look forward to talking to you again next quarter.
  • Operator:
    This concludes today's conference call. You may now disconnect. We thank you for your participation.