Pacific Biosciences of California, Inc.
Q2 2012 Earnings Call Transcript

  • Operator:
    Good day, ladies and gentlemen, and welcome to the Pacific Biosciences of California, Inc. Second Quarter 2012 Earnings Conference Call. [Operator Instructions] As a reminder, this call is being recorded. I'd now like to turn the conference over to your host for today, Ms. Trevin Rard. Ma'am, you may begin.
  • Trevin Rard:
    Thank you. Good afternoon, and welcome to the Pacific Biosciences Second Quarter 2012 Conference Call. With me today are Mike Hunkapiller, our Chairman and CEO; Susan Barnes, our Chief Financial Officer; and Ben Gong, our Vice President of Finance and Treasurer. Before we begin, I would like to inform you that comments mentioned on today's call may be deemed to contain forward-looking statements. Forward-looking statements may contain words such as believe, may, estimate, anticipate, continue, intend, expect, plan, the negatives of these terms or other similar expressions and include the assumptions that underlie such statements. Such statements may include, but are not limited to, revenue, margin, cost and earnings forecasts, future revenue implied by the company's backlog, expectations of future cash usage and other statements regarding future events and results. Actual results may differ materially from those expressed or implied as a result of certain risks and uncertainties. These risks and uncertainties are described in detail in the company's Securities and Exchange Commission filings, including the company's most recently filed quarterly report on Form 10-Q. The company undertakes no obligation to update, and prospective investors are cautioned not to place undue reliance, on such forward-looking statements. Please note that today's press release announcing our financial results for the second quarter 2012 is available on the Investors section of the company's website at www.pacb.com and has been included on a Form 8-K, which is available on the Securities and Exchange Commission's website at www.sec.gov. In addition, please note that today's call is being recorded and will be available for audio replay on the Investors section of the company's website shortly after the call. 
Investors electing to use the audio replay are cautioned that forward-looking statements made on today's call may differ or change materially after the completion of the live call and that Pacific Biosciences undertakes no obligation to update such forward-looking statements. At this time, I'd like to turn the call over to Mike.
  • Michael W. Hunkapiller:
    Thanks, Trevin. Good afternoon, and thank you for joining us today. We are continuing to make progress against the plans we laid out earlier this year. Highlights of our second quarter achievements are as follows. We installed 7 new PacBio RS systems, bringing our installed base up to 66 systems in total. We recorded over $7 million in total revenue with recurring revenue continuing to grow sequentially. We launched our latest software product release on time, which provides an automated way for our customers to detect and analyze base modifications. And finally, we had 3 important scientific papers, 2 published in Nature Biotechnology and 1 in BioTechniques, which clearly highlighted the value of the PacBio RS in de novo genome assembly. I'll provide more details on this later in the call. As a reminder, at the beginning of the year, we established a small number of priorities that included
  • Susan K. Barnes:
    Thank you, Mike, and good afternoon, everyone. I will begin my remarks today with the financial overview of our second quarter that ended June 30, 2012. I will then provide details on our operating results for the quarter with a comparison to the first quarter of 2012 as well as a comparison to the second quarter of 2011. I will not be providing year-to-date comparisons, as it was not until the second quarter of 2011 that we transitioned from a development organization to a commercial operating company. Given the difference in how we were operating during the first half of 2011 compared to the first half of 2012, a detailed year-to-date comparison would provide few if any additional insights into the progress of our operations. Finally, I will conclude my remarks with a brief discussion of our balance sheet. Starting with our second quarter financial highlights. During the second quarter, we recognized revenue of $7.3 million and incurred a net loss of $22.5 million, while cash used during the quarter totaled $24.2 million. Breaking down our revenue. Total revenue for the quarter was $7.3 million, a decrease of $2.7 million from the $10 million of revenue realized in Q1 and a decrease of $3.3 million from the $10.6 million recognized in Q2 of 2011. During the quarter, we recognized instrument revenue of $4.7 million, reflecting the installation of 7 PacBio RS instruments, compared to $7.9 million realized on 11 instruments in Q1. Our consumable revenue increased to $1.2 million in Q2, up from $900,000 in Q1. The increase reflects the higher installed base quarter-over-quarter and our customers' increased utilization of their instruments. Service revenue increased to $1.3 million in Q2, up from $1.1 million in Q1. This is also consistent with our growing installed base, from 59 installed systems in Q1 to 66 in Q2. Finally, research grant income in the quarter was $200,000, compared with $300,000 realized last quarter. 
Overall, second quarter 2012 revenue decreased $3.3 million from the same quarter last year. In Q2 2011, we recognized revenue for our first 16 RS systems, 9 more than in Q2 2012. The correspondingly lower instrument revenue in 2012 was partially offset by higher consumable and service revenue associated with an installed base that is now 50 units higher than it was at the end of Q2 2011. Gross profit in the quarter was $300,000, representing gross margin of 4%. This is up $500,000 from a negative $200,000, or negative 2% gross margin, for Q1. Year-over-year, gross margin was down from the 74% margin realized in Q2 of 2011. The gross margin decrease compared to last year reflects the margin benefit in our Q2 2011 results from product inventory that had been expensed to R&D, in accordance with GAAP, during the period prior to our commercial launch. Moving to operating expenses. Operating expenses in the second quarter totaled $22.8 million, including a $2.5 million noncash stock-based compensation expense. This is a $4.5 million, or 17%, decrease from the $27.4 million recorded in Q1. R&D expenses decreased $800,000 during the quarter to $11.3 million. R&D expense for the quarter included $1.1 million of noncash stock compensation expense. Sales, general and administrative expenses for the quarter decreased $3.7 million to $11.6 million from the $15.3 million in Q1. Our Q2 SG&A costs were lower mainly due to the fact that first quarter SG&A expenses included litigation and settlement costs for the 2 patent disputes we mentioned in our last earnings call. SG&A expense for the current quarter includes $1.4 million of noncash stock-based compensation expense. Year-over-year, operating expenses decreased $7.7 million from the $30.6 million in Q2 2011. R&D expenses over that time decreased $8.3 million, reflecting the large investment in materials and resources for the launch of the RS product in 2011. 
Lower R&D expenses in the second quarter of 2012 were partially offset by the higher cost of a growing commercial team that is selling, installing and supporting our products. Now turning to our balance sheet. Cash and investments totaled $137.1 million at the end of the second quarter, down $24.2 million from the previous quarter and down approximately $40 million from the end of last year. Cash used during the quarter reflects our second quarter net loss of $22.5 million, less $4.3 million in noncash expenses, comprised of $2.6 million of stock compensation expense and $1.7 million in depreciation. Cash used also reflects changes in working capital of $5.4 million, stemming primarily from early payments received from customers in Q1 instead of Q2 and the payment of litigation expenses in Q2 that were incurred in Q1. Accounts receivable increased from $1.5 million at the end of Q1 to $3.4 million at the end of Q2. Average days sales outstanding remains well below the industry average of approximately 90 days. And lastly, inventory balances declined this quarter by $1.5 million to $10.3 million as of June 30, 2012, compared to $11.8 million at the end of the first quarter. This concludes my remarks on the financial results for the quarter, and I'd like to turn the call over to Ben.
  • Ben Gong:
    Thank you, Susan. As in previous calls, I will be providing guidance on our near-term financial performance. In a brief review of our previous guidance on revenue, we have essentially forecasted the instrument backlog plus the rollout of service revenue and consumables. As of the end of the second quarter, we have worked down all of the instrument backlog from previous quarters. Therefore, for our third quarter, we expect our revenues to be primarily comprised of recurring service and consumable revenues. As we're entering the seasonally slower summer months, we expect our recurring revenues to be fairly flat compared with the second quarter. With little revenue contribution from instruments and relatively flat recurring revenues, we expect our third quarter revenues to decrease sequentially from Q2. Moving now to gross margin. Last quarter, we recorded approximately $7 million in revenue, and we were slightly above breakeven gross margin. With revenue decreasing in the third quarter, our gross margin will likely be negative in Q3. Our operating expenses came down during the second quarter as our litigation activities with Helicos and Life came to an end. We expect our third quarter operating expenses to be modestly higher than our second quarter expenses. Please keep in mind that our operating expenses may fluctuate quarter-to-quarter due to the timing of when certain activities, such as prototype expenses, occur. Finally, with regard to cash usage, as I mentioned in our last call, the timing of early cash collections, which led to lower cash usage in the first quarter, naturally evened out during the second quarter, so that for the first 6 months of the year, we consumed approximately $40 million in cash. Although our revenues are expected to be lower in the second half of the year, we continue to target approximately $20 million of cash used per quarter and $80 million of cash used for the year. And with that, we will open the call to your questions.
  • Operator:
    [Operator Instructions] And our first question comes from the line of Amanda Murphy from William Blair.
  • Amanda Murphy:
    I just had a question on some of the base modifications upgrades that you've rolled out. I'm curious what the reaction has been to that. And also, is that -- and as you have progressed, is that something that you've seen drive additional interest in instrument purchases? I'm just trying to get an understanding of your comfort that the instrument sales will ultimately pick up over time.
  • Michael W. Hunkapiller:
    Let me take the questions in order there. So we began the rollout in the middle of last month. And although the software is available to everyone, we haven't got it installed at all the sites yet. But in those we have, we've certainly seen great interest in it. We have been working with several of the sites on giving them access to the early versions of the software that were run here, or else we would just process data for people. And we're seeing interest in almost all of the bacterial and small genome samples in particular that we've been running pilot projects on, and interest as well in the eukaryotic world. So I would say people recognize it clearly as one of the distinguishing and really unique features of our technology, and it fits in well with the kind of studies that one does in a lot of different fields. They're extremely interested in it. So based on the number of pilot projects that we're running as part of our early sales process, I would say that there's very high interest in that.
  • Amanda Murphy:
    Got it. And in terms of thinking about methylation or base modification as a killer app for the RS, I mean, is there -- what else -- or is there anything else that you need to do in terms of whether it's software upgrade or enhancement to the platform?
  • Michael W. Hunkapiller:
    Well, so the software that we rolled out basically takes the sequence data, processes the kinetic information that's associated with that and flags individual positions that have an anomalous kinetic signature, which basically means that base additions wind up being much slower than what one would've expected for that particular sequence otherwise. What we've given people is a set of early tools, separate from the core analysis package that we provide, that they can get access to on our DevNet site, that allows them to go in and say, "Okay, this base is modified, and it's modified by this type of epigenetic mark." And then it goes through and processes the motif, or the sequence, around that position that would relate to a particular enzyme doing the modification. What we will do later in the year is have a fully automated version of that analysis, particularly for bacterial work, which basically goes down, analyzes every modified base there and tells you it comes from this particular flavor of modification. We're also working on a set of techniques that allow you to go after base modification in larger genomes. Right now, given the throughput of the RS, it's hard for us to tackle, say, a whole human genome and do a whole methylome analysis for that. But we've got enough capacity clearly to do targeted stretches of base modification areas. And we're working on sample prep techniques that are compatible with pulling those stretches out, without doing amplification, and feeding them into our sequencing workflow. So that will be one of the other things that we will be developing, along with the generic base modification kinetic analysis that we can do for essentially any sample at this point.
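[Editor's illustration] The kinetic flagging Mike describes can be pictured with a small sketch: a modification slows the polymerase, so the observed interpulse duration (IPD) at that position is anomalously long compared with what's expected for an unmodified template. The function below is a hypothetical simplification, not PacBio's shipping software; the threshold value and all names are assumptions, and real tools use statistical tests rather than a fixed ratio.

```python
# Hypothetical sketch of kinetic-signature flagging: a base modification
# slows base addition, so the observed interpulse duration (IPD) at that
# position is much larger than expected for the same sequence context.

def flag_modified_positions(observed_ipd, expected_ipd, min_ratio=2.0):
    """Return (position, ratio) pairs where observed/expected IPD
    exceeds min_ratio.

    observed_ipd: mean IPDs measured at each reference position
    expected_ipd: IPDs predicted for an unmodified template
    min_ratio:    illustrative threshold; real analyses use statistics
    """
    flagged = []
    for pos, (obs, exp) in enumerate(zip(observed_ipd, expected_ipd)):
        if exp > 0 and obs / exp >= min_ratio:
            flagged.append((pos, obs / exp))
    return flagged

# Toy example: position 2 carries a modification that triples the IPD.
observed = [0.9, 1.1, 3.2, 1.0]
expected = [1.0, 1.0, 1.0, 1.0]
print(flag_modified_positions(observed, expected))  # [(2, 3.2)]
```

The point of the sketch is only the shape of the analysis: a per-position comparison of measured kinetics against a sequence-context expectation.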
  • Amanda Murphy:
    Are you able to give any perspective on timing? Or is it too early at this point for some of the -- second half.
  • Michael W. Hunkapiller:
    Well, it will be a gradual rollout. I mean, there are almost 2 dozen different kinds of base modifications that have been identified in various organisms. And we'll roll them out a few at a time, just to make sure that we've got the right analytical capability to identify them specifically. But now that people have the raw kinetic data in a nice tabulated form directly out of the sequence work, we would also expect to see a lot of individual customers beginning to develop the analysis tools on their own for specific kinds of modifications that are important in their systems. It's going to be an evolving sort of thing. But in the bacterial world, people have already realized that right now, you can get not only a complete bacterial genome sequence of As, Gs, Cs and Ts, but you can get essentially the complete epigenetic pattern that's there as well. So that's today. That's not in the future. That's already there. It's a little more cumbersome to get the full annotation out of it right now for people that aren't familiar with using the kind of scripting informatics tools that you can get off of DevNet. But we already have a lot of people who are starting to use those as well. And our timing on the full release of the automated form of that is a little bit later in the year.
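[Editor's illustration] The kind of downstream tool Mike expects customers to build from the tabulated kinetic data, recovering the sequence motif shared by modified positions, can be sketched as follows. This is an illustrative example, not any shipping PacBio software; the window size, consensus rule and all names are assumptions.

```python
# Hypothetical sketch: given a genome sequence and positions already flagged
# as modified, extract the surrounding context and report a simple consensus,
# which often corresponds to the recognition site of the modifying enzyme.

def consensus_motif(genome, positions, flank=2):
    """Take `flank` bases on each side of every modified position and
    compute a per-column majority consensus across the windows ('N' when
    no base wins a majority)."""
    windows = []
    for p in positions:
        if p - flank >= 0 and p + flank + 1 <= len(genome):
            windows.append(genome[p - flank : p + flank + 1])
    if not windows:
        return ""
    consensus = []
    for column in zip(*windows):
        best = max(set(column), key=column.count)
        consensus.append(best if column.count(best) > len(windows) // 2 else "N")
    return "".join(consensus)

# Toy genome with modified adenines inside GATC sites (a Dam-like pattern).
genome = "TTGATCAAAGATCTT"
modified = [3, 10]  # indices of the modified A in each GATC
print(consensus_motif(genome, modified))  # prints "NGATC"
```

The shared GATC context emerges from the windows even though the flanking bases differ, which is the essence of relating modified positions back to a specific enzyme.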
  • Operator:
    Our next question comes from the line of Bill Quirk from Piper Jaffray.
  • William R. Quirk:
    Just a couple of questions for me. I guess first off, Mike, just building off of Amanda's questions, can you talk a little bit about the accuracy of the base modification software relative to some of the more traditional methods of detecting this?
  • Michael W. Hunkapiller:
    Well, in the bacterial world, the kind of modifications that dominate are methylations of adenine residues, and there is no methodology out there, other than very, very laborious ones for looking at individual sites, that allows you to get at those. But the accuracy is very high. We can see essentially all of the residues that are modified across the organism, at the 99% level in most cases, and you can certainly get enough to detail the specificity, from a sequence perspective, of the enzymes that are doing the modification. The thing we can do is do it base by base, very accurately. And that's been highlighted in some of the papers. If you go through the paper that Dr. Schadt and Dr. Waldor and Dr. Roberts have done, they pretty clearly demonstrate that.
  • William R. Quirk:
    All right. And then, Mike, understanding Ben's comments around expectations for relatively flat consumables, how should we be thinking about overall utilization as you're expanding your toolkit of applications here? I guess I'd be naturally inclined to think we should start to see a little bit of a bump there, but would love to have your feedback.
  • Michael W. Hunkapiller:
    Well, we have seen the bump, a pretty substantial one, particularly since we got the C2 upgrades done, starting around the middle of Q1 and finishing in the early part of Q2. And our expectation is that's going to continue. I think we're just being a little cautious about what happens in the summer months, which, from my experience, tend to be a slowdown in general in this space, particularly relative to consumables, just because you have so many people who are off in Europe a good part of the quarter. But it's true in other places as well. So, with that said, I think we see enough indication that our customers are planning more and larger projects to run on the instruments that we would expect that uptick in consumables usage to continue.
  • William R. Quirk:
    Got it. And then just last question from me. Just thinking about the new automated system for some of the front end, when should we expect to see, I guess, kind of customer experience with that, Mike? Is it something that we could see as soon as ESHG [ph]? Or is it something that may be more likely, say, in February next year?
  • Michael W. Hunkapiller:
    Well, we've had it in beta at 2 sites for all of this quarter. I think we mentioned that even in the last conference call. And we anticipate launching the product in fully automated commercial form fairly shortly. It takes a while to get through the entire installed base and get them all upgraded, but I would expect that you would, certainly by next February, hear a lot about it, and even begin to get experience from the customers way before then.
  • Susan K. Barnes:
    Right. And the question would be will they have anything to present at ESHG [ph] given their cutoff dates on presentation. We're not sure right now how much they'd have prepped by then but...
  • William R. Quirk:
    But certainly, by February, we should have a fairly wide range of experiences?
  • Michael W. Hunkapiller:
    Yes. Well before then, we would expect that all of our customers would have had it and been using it.
  • Operator:
    Our next question comes from the line of Daniel Brennan from Morgan Stanley.
  • Daniel Brennan:
    Mike, can you comment on the dichotomy between management's kind of bullishness about the really unique features of the RS -- granted, since you've been there, the focus has been on fixing and restoring the reliability of the installed base -- but now that it's out, in terms of the uniqueness, it just seems fairly dichotomous between how unique some of the aspects of the technology are and why you expect to see really 0 new customer growth. So I'm just wondering what you're hearing back from customers and why there is no traction yet.
  • Michael W. Hunkapiller:
    Well, I don't think that's quite what we said. The guidance for new customer orders, or new customer sales, in this quarter has to do with the fact that even if we had a sale today, getting an order today, getting the install and the sign-off and all that kind of stuff is a fairly involved process for something that's as complicated as this. There's a lot of site prep and all kinds of stuff that makes that long. And so we're just cautious relative to sales revenue in particular in the really short term, not that we don't anticipate orders. I've done this before. I guess it's probably worth repeating again. So the process that I'm familiar with in this space for a new technology is that you go out there, and initially everybody will believe everything about the technology that they want to believe. And so if it's a really exciting potential technology, there's excitement about it, and you can get a lot of early orders. And we got a lot of those. And then you have to go through the reality of getting the technology out there, getting it installed, getting the people happy with it. And you're always going to have glitches in a technology with this level of complexity going out there. So you have to deal with those. We only released the product for commercial sale in May of last year. And we started off the beginning of this year with the primary goal of making sure that we got through that first wave of things that we learned from the early installs that we needed to fix. And so we really put a big focus on that. And the whole C2 upgrade was fairly extensive in what it changed, from chemistry to software and firmware to some hardware issues, including a new version of the chip manufacturing process that made the chips much more consistent. We spent most of Q1 doing that. 
And so you only start getting the impact of that really in Q2, from the perspective of people starting to generate reasonable [ph] amounts of data from it. So then you go through the process where these guys who have the machine, the early adopters, are your reference points for the next wave of customers, who are not so ambitious and aggressive about taking on new technologies without their colleagues having done the proof on it themselves. And the process that involves is publications. But before that, you start getting at least verbal presentations at scientific meetings. And we've highlighted over the last couple of conference calls that, that really started to pick up dramatically, even with the early form of the chemistry and all the glitches it had. And with C2, it's gotten substantially better than that. But then you go through the process of the publications in peer-reviewed journals, which is really the gold standard for how you get the proof statements out there that you can hand to somebody who doesn't necessarily go to all the meetings that these guys go to and give talks at. And we're just starting to see the wave of that. So we mentioned 3 of them, from 3 different labs, that were all related in particular to the concept of how to take the informatics tools that have been out there for the last several years to deal with the short read technologies and alter them sufficiently so that they can take advantage of the long read sequences that you can get with PacBio. And all 3 come to the conclusion that, because you can combine the long reads with the short read technologies, whether they come from other suppliers or from some of the ways that we can do short read consensus sequencing on the PacBio system, you get analysis of genomes that you just couldn't get with the short read approaches alone. 
And now, just by happenstance, late-breaking news: the group at the Broad published a paper online yesterday in the journal Genome Research with exactly the same conclusion, using the assembly techniques that they have, where they can say for the first time that they can actually get finished bacterial genomes from shotgun sequence data by combining short read data with long read data. And on the issue of accuracy, not only can they pick out the complex genetic structures that you can't see without the long read stuff, but the accuracy of the finished sequence they were getting was notable. They tested against 10 unknowns and 3 well-characterized reference genomes, and in one case they were able to identify 200 errors in the gold standard NCBI reference genome with this combined approach. We're a little surprised that the standards out there weren't quite as good as people thought they were, but they touted this as now a fully automated process, one where you can get bacterial genome sequences, for example, at a quality you could never get before. And so once those papers get out there, people begin to realize that the tools that go along with the sequencing technology are evolving pretty fast, and they can get a lot more comfortable, because most people don't have the expertise that the Broad and some of these other big centers have for developing informatics tools on their own. As we start incorporating those things into our product line in a direct, bundled way, it's that much easier for people. And they can go ahead and begin the process, which is occurring at this point, of including the PacBio system in their research proposals that request funding from various and sundry agencies. So it's a process. It doesn't happen overnight. And I think we're encouraged by the fact that we see it steadily tracking with what we expect from past experience of what it takes to get a new technology widely adopted. 
And the fact that we're getting an increasing number of requests to run pilot projects internally on our system, to validate specific small projects that people are going to expand, assuming they go ahead and buy, is another good indication that that message is getting out there. The technology is now ready for broader adoption. That said, we've still got to go through the process with them of dealing with their funding cycles. And that's always the hardest thing to predict, particularly in these days of somewhat unpredictable budget constraints in at least the U.S. and Europe.
  • Daniel Brennan:
    Okay. Great. And maybe just secondly, on the comments about the Los Alamos paper, with the capability to kind of migrate over from CE for genome assembly. I know it just very recently came out, but can you discuss that particular application and whether or not, say over the next 12 months or so, you think there could be some real traction on that? Or is that still kind of in the team [ph] that's going to take a lot of time to educate and maybe refine the process? I'm not sure, because it seems like that could be a real opportunity for the RS.
  • Michael W. Hunkapiller:
    Well, actually, although they're the latest ones to publish on that, we mentioned the same thing, I believe, last quarter relative to the Department of Energy lab in Walnut Creek, the JGI site, who have done the same thing. And it's really the case in all of these papers that trying to do assembly, even of bacteria, with short read data alone had just not worked. Depending on how good a reference you have, you can map a lot of your sequence data onto the reference. But that doesn't, by any means, give you a complete genome. It winds up still being in pieces, because you can't deal with the rapid evolution of bacteria and the complexity of the genetic structures that arises during the process. And so what they've traditionally done in the past, when they needed to get finished assemblies, is go through the process of using Sanger technology, which at least gives you reasonable reads in terms of size. But it's a very slow, very laborious process, on the front end and in the sequencing itself. And it's very expensive because of that. And so what we're seeing with the people who are doing the work in the microbial world, across this whole span, is that they're replacing the medium-length approaches they had, either the 454 technology or the Sanger technology, with the PacBio system because, a, it gives much, much longer reads; and b, it's much less expensive. So it's kind of a double benefit to them. I think that's one of the reasons that we've seen the uptake in the microbial world of usage of the system. And then when you layer on the ability to get the epigenetic data that they could not get at all, then that's, as I said in my comments, becoming pretty much a sweet spot for us. And we're seeing an awful lot of the pilot projects we're doing from all over the world in that space.
  • Operator:
    [Operator Instructions] Our next question comes from the line of Tycho Peterson from JPMorgan.
  • Evan Lodes:
    It's Evan Lodes in for Tycho. First question would be on the new sample preparation and also the base modification products. Are those -- can you help us understand the economics of those?
  • Michael W. Hunkapiller:
    Economics in what sense?
  • Evan Lodes:
    In the sense of -- I guess starting with the sample prep product, is that going to be a separate system that is sold? Or will it simply be added [ph] to existing systems...
  • Michael W. Hunkapiller:
    Well, so it's actually a fairly modest upgrade to the existing system. The parts cost associated with it is actually very small. So we're going to do that as part of the warranty service on the system. It's basically just a change to one of the areas on the work surface where the initial loading of the templates and so forth onto those SMRT Cells is done. And the economic benefit to us is that it makes it easier for people to get good results. So we see it as a potential source of a fairly substantial increase in the usage of the system.
  • Ben Gong:
    Yes, just adding to that, Evan, sometimes it can be pretty tough for people to prepare samples. And what Mike described in his prepared remarks is that the word automation here is actually a really good description of what it does. It automates a lot of the things that happen during sample prep, and it cleans up some impurities that often prevent people from getting good results. So that itself, combined with the fact that it really helps people run experiments with much less input sample, we think, is going to drive a fair amount of utilization on the systems.
  • Evan Lodes:
    Okay. That makes sense.
  • Michael W. Hunkapiller:
    I mean, there's a relatively simple and, I think, understandable technical reason for why it's useful. So when we load samples onto the SMRT Cells now, you do it by basically pipetting a solution that contains your prepared DNA and bound DNA polymerase on top of the SMRT Cell chip. The sample and the polymerase basically diffuse from that solution down to the bottom of the wells, where they bind onto the bottom of a particular zero-mode waveguide, and then you start sequencing eventually. And the way diffusion processes work is that smaller things diffuse faster than big things, so they tend to preferentially get to the hole before the big pieces do. Our technology's advantage is that you can run long stretches of DNA sequence, but if you preferentially load little pieces, you lose some of that advantage. And so the trick that our guys came up with is that if you bind the DNA very lightly onto the surface of a magnetic bead and you sort of roll those beads over the surface of all the zero-mode waveguides in the chip, the little pieces can't reach far enough off the bead to get into the hole. The big pieces can. So you actually preferentially load big pieces relative to short pieces. And it's the short pieces that you tend to get as contaminants in your sample prep, either because the sample fragments too much when you're doing the shearing of the DNA to get it ready to do the chemistry on, or whatever. And anything else that's left over can kind of muck up the reaction as well. So you use the beads both as a purification scheme and as a clever way of loading big pieces versus little pieces. And as a benefit, it takes a lot less DNA to do that than it did before. So you get many benefits out of the approach, but the cost to us is a relatively simple modification to the hardware.
  • Susan K. Barnes:
    And then the other piece, on your question -- the calling of methylated bases is just a different type of bioinformatic software tool set. But there's no extra cost in terms of out-of-pocket expense to the customer. Again, it's driving more and more useful information.
  • Michael W. Hunkapiller:
    What we will have there as we go along are some specific sample preparation kits that go along with target selection, for going after modifications in specific regions. And so that's a source of reagent revenue that we would expect to begin at some point. The customers would obviously be paying for that, but there's no upfront cost to the customer just to get the software.
  • Evan Lodes:
    Okay. And you've talked -- can you talk about the goals of the Imec partnership and what milestones we might be looking for? I think Ben mentioned something about prototyping expenses. Is that related?
  • Michael W. Hunkapiller:
    Well, I'll take the first crack at it, and then Ben can chime in as well. So the goal of it is that, overall, you want to continually increase the capacity of the platform from a DNA output perspective. And in order to do that, you need effectively more sites that you're monitoring at any one point in time, right? So in the current system, the chips have 150,000 zero-mode waveguides in them. We actually read those sequentially, in batches of 75,000. And we have some runway with the current manufacturing process to be able to increase that number. But realistically, we're not going to get up into the millions of ZMWs with the current fabrication technologies. And so the goal of the Imec collaboration is to use basically a different process to generate the chips, one that will allow us to go up to a much, much higher density. And chip development takes both the effort to do the design and, frequently, upfront costs associated with that.
  • Ben Gong:
    Yes, just a comment on the timing of when expenses happen. Two of the things that we talked about that we're working on this quarter are the mag bead station and this Imec collaboration. So both of those things happening, starting at the same time, could drive some expenses in the near term.
  • Michael W. Hunkapiller:
    I mean, the obvious goal in the long run is to increase the ability to get more and more ZMWs in play. But what that does is translate to higher throughput per cell. And that opens up more and more opportunities for us, because it gets us into very cost-effective ways of doing larger and larger genomes.
  • Operator:
    And with no additional questions in queue, I'd like to turn the conference back over to Mike Hunkapiller for any closing remarks.
  • Michael W. Hunkapiller:
    Okay. So in closing, we remain steadfast in our commitment to bringing the unique set of advantages of our SMRT technology and products to our customers and the scientific community in general. While our instrument bookings have been light in the first half of 2012, as we predicted in our last conference call, we're excited to see the progress our customers have made since we introduced our C2 upgrade. With a series of new product releases scheduled to go out during the second half of the year, we feel that we are well positioned to drive business from them. Thank you for listening in, and we will talk again in 3 months' time.
  • Operator:
    Ladies and gentlemen, thank you for your participation in today's conference. This does conclude the program, and you may all disconnect. Have a great rest of the day.