Arcturus Therapeutics Holdings Inc.
Q2 2016 Earnings Call Transcript

  • Operator:
    Good day ladies and gentlemen, and welcome to Alcobra's Second Quarter 2016 Earnings Results Conference Call. At this time, all participants are in a listen-only mode. Later, we will conduct a question-and-answer session, and instructions will follow at that time. [Operator Instructions]. I will now turn the call over to your host, Debbie Kaye. Please go ahead.
  • Debbie Kaye:
    Good morning and thank you, Stephanie. Before the market open this morning, Alcobra announced financial results for the second quarter and six months ended June 30, 2016. If you have not yet received this news release or if you would like to be added to the company's distribution list, please contact us at 646-597-6979. Before we begin, let me remind you that this conference call will contain forward-looking statements within the meaning of the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995 and other federal securities laws. Because such statements deal with future events and are based on Alcobra's current expectations, they are subject to various risks and uncertainties. Actual results, performance or achievements of Alcobra could differ materially from those described and/or implied by the statements on this conference call. For example, forward-looking statements include statements concerning, among other things, the development of Alcobra's lead product candidate and its various indications, including the [indiscernible] design and outcome of clinical studies, the timing of reporting results of such trials, Alcobra's ability to better design future clinical trials and reduce high placebo response, the benefits of utilizing certain methods and sites in Alcobra's clinical trials, the expected cost of clinical trials and levels of future expenses, reaching the milestones required for U.S. Food and Drug Administration approval, the potential of MDX to treat adult and pediatric attention deficit hyperactivity disorder and Fragile X syndrome, future uses of cash, the sufficiency of the company's financial resources to meet certain milestones, and whether such milestones may be achieved at all. In addition, historical and interim results or conclusions from scientific research and clinical studies do not guarantee that future or final results would not suggest different conclusions, or that historical and interim results referred to on this call would not be interpreted differently in light of additional research or otherwise. Further, although Alcobra has received fast track designation for MDX for the treatment of Fragile X syndrome, Alcobra cannot guarantee that it will be able to maintain such designation due to reasons within or outside of its control. Also, while the FDA has indicated to Alcobra that positive efficacy results from certain clinical studies may be sufficient to demonstrate efficacy for approval of MDX, the FDA is not bound by these communications and, accordingly, may change its position in the future due to reasons within or outside the control of Alcobra. The forward-looking statements contained or implied on this call are subject to other risks and uncertainties, including those described in the Risk Factors section of Alcobra Limited's Annual Report on Form 20-F for the fiscal year ended December 31, 2015, filed with the Securities and Exchange Commission. Except as otherwise required by law, Alcobra disclaims any intention or obligation to update or revise any forward-looking statements, which speak only as of the date of this call, whether as a result of new information, future events or circumstances, or otherwise. Hosting today's call from Alcobra's senior management are Dr. Yaron Daniely, President and Chief Executive Officer; and Dr. Tomer Berkovitz, Chief Operating Officer and Chief Financial Officer. It is now my pleasure to turn the call over to Yaron. Yaron, please go ahead.
  • Yaron Daniely:
    Thank you, Debbie. Good morning everyone. Today, I will provide an update on our development programs for our lead drug candidate, Metadoxine Extended Release, or MDX. I will then hand the call over to Tomer to review Alcobra's Q2 financials. During the past quarter, we have made significant progress with our pivotal Phase III adult ADHD study, named MEASURE. As of yesterday, 718 subjects have already been screened for the study since its initiation, with 474 subjects already enrolled or post-screening. After a slow start in enrollment, we are now screening close to 150 subjects per month, while maintaining our close monitoring of all sites and raters. With a target enrollment of up to 750 subjects, we are confident in our ability to complete enrollment by the end of the year, and expect data release in the first quarter of 2017. As Tomer will explain shortly, our measured cash burn through the trial period provides us with sufficient capital into 2018 to continue executing the development program without the need to secure additional capital shortly after data readout. Many of you have heard me describe before the design and operational changes we made to the MEASURE study before its initiation, in order to mitigate both the magnitude of the placebo effect and the overall variability in treatment response we observed in our first Phase III study. We have instituted multiple measures directed at mitigating the placebo response in the study, but the magnitude of the placebo response or the MDX response will only be known when we unblind the study, after all subjects have completed treatment. However, the overall treatment response variability in the MEASURE study, that is, the pooled standard deviation of the change in CAARS score from baseline, is independent of treatment randomization and is therefore an outcome that we can and are monitoring closely, using the real-time e-source system that's being used in this trial. You may also recall that the MEASURE study is powered to detect the same [indiscernible] effect size observed in the ITT analysis of our first Phase III study. As a reminder, an effect size is calculated by dividing the difference between the two treatment group responses by the pooled standard deviation of the response in the entire population. The level of response variability, or pooled standard deviation, which we observed in our first Phase III study was approximately 12 points, significantly higher than in our previous controlled trials with MDX, and also higher than levels reported in other Phase III adult ADHD trials. Now, one of the benefits of having the clinical data in the MEASURE study captured in real time by a tablet-based instrument is that the data needed to calculate the pooled standard deviation of all randomized subjects in MEASURE is visible, and the pooled standard deviation can be calculated and tracked while we remain blinded to the actual randomization assignments for each individual subject. We have been pleased to observe that the pooled standard deviation in the MEASURE study has been appropriately and effectively controlled to date, and has been stable at about 9 to 9.5. This level of variability is in line with our Phase IIB MDX study, as well as other successful ADHD drug trials.
Moreover, if the observed pooled standard deviation remains lower than what we originally assumed for the purposes of powering the MEASURE study, a separation between the drug and placebo responses similar to our first Phase III study would mean an increased probability of achieving a statistically significant P-value in the ongoing study. The success we are seeing to date in mitigating the treatment response variability, together with additional features, such as a longer treatment duration; careful selection, training and real-time monitoring of clinical sites; FDA-approved enrichment methods, which exclude extreme placebo responders from the ITT population; as well as centralized audio tape review of subject interviews, gives us confidence about the quality of the study. If MDX is approved by FDA for the treatment of ADHD, we believe that the consistent signal of efficacy seen in trials of MDX in ADHD subjects to date may position it as the drug of choice for ADHD patients seeking a non-abusable therapy, because of its tolerability, safety and rapid [indiscernible] effect profile. On other fronts, we have progressed preparations to launch the first of two pediatric registration studies this fall. We are awaiting the agency's feedback on the submitted revised pediatric study plan, and have a scheduled meeting with the agency in the fall, when we anticipate obtaining the go-ahead to proceed with the first trial. If we receive the go-ahead, we would be prepared to enroll subjects in Q4 of this year. This first pediatric efficacy study targets enrollment of approximately 200 subjects and shares many design features with the adult MEASURE study. We are also awaiting FDA confirmation on the design elements of a pivotal study in adolescents and adults with Fragile X syndrome. As a reminder, the FDA has awarded MDX fast-track and orphan drug designations for this indication, and more recently, the European Commission also granted MDX orphan drug designation for Fragile X syndrome. MDX has demonstrated statistically significant improvements on a validated scale of daily living skills in a placebo-controlled, multi-center study of 62 adolescents and adults with Fragile X syndrome. This concludes my operational update. I will now turn the call over to Tomer to discuss the financials. Tomer, please go ahead.
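For reference, the effect size discussed in these remarks is the standardized mean difference between the treatment groups. Using only the approximate figures quoted on this call (a delta of a little over 2 points, and a pooled standard deviation of about 12 in the first Phase III study versus roughly 9 to 9.5 observed blinded in MEASURE), the arithmetic, as an illustrative sketch with rounded values rather than study statistics, is:

    \[
      d = \frac{\bar{x}_{\mathrm{drug}} - \bar{x}_{\mathrm{placebo}}}{s_{\mathrm{pooled}}},
      \qquad
      d \approx \frac{2}{12} \approx 0.17
      \quad \mathrm{vs.} \quad
      d \approx \frac{2}{9.5} \approx 0.21 .
    \]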
  • Tomer Berkovitz:
    Thank you, Yaron. Earlier this morning, we reported our second quarter 2016 results. We reported total operating expenses of $5.9 million for the second quarter of 2016, compared to $5.2 million in the second quarter of 2015, and in line with our average expense rates over the last few quarters. Total operating expenses included non-cash charges for stock-based compensation of $0.6 million this quarter, and $0.4 million in the same quarter last year. Most of our operating expenses are driven by research and development activities. In the second quarter of 2016, R&D expenses were $4.2 million compared to $3.7 million in the same quarter last year. R&D expenses this quarter also increased relative to our previous quarter, and consisted primarily of costs associated with the ongoing MEASURE study. As we stated previously, our R&D expenses for the rest of fiscal year 2016 are expected to further increase, as we complete enrollment in the MEASURE study and launch the first pediatric ADHD registration study. G&A expenses for the second quarter of 2016 were $1.4 million compared to $1.2 million in the same quarter last year. Pre-commercialization expenses were $0.3 million this quarter, in line with the same quarter last year. Finally, we ended the quarter with $61.1 million in cash, bank deposits and marketable securities, compared to $65.2 million at the end of the previous quarter. We still expect the total cost of the MEASURE study to be approximately $18 million and believe that our cash balance supports our currently planned activities through at least early 2018. I will now turn the call over to the operator for the Q&A session.
  • Operator:
    [Operator Instructions]. Our first question comes from Biren Amin with Jefferies. Your line is open.
  • Biren Amin:
    Yeah. Thanks guys for taking my question. Maybe Yaron, can you talk a little bit about the dropout rate in the trial and what you are seeing?
  • Yaron Daniely:
    Yeah. So we have seen in this trial, very similarly to all our previous trials, a relatively low dropout rate, actually very, very low double digits overall. We continue to see kind of minor tolerability and other standard issues resulting in folks prematurely terminating from the study, in very small numbers. The statistical model in the study, the MMRM model, does not require subjects to stay for the entire duration of treatment in order for their post-randomization data to be included in the analysis.
  • Biren Amin:
    Okay. And then, when you gave us the number on the standard deviation -- as I recall in all the previous Phase III, you had some site issues with a few sites. Have you detected any site issues in this trial, as you track the standard deviation [indiscernible] variability?
  • Yaron Daniely:
    So we utilize a state-of-the-art dashboard that looks at over 70 quality indicators in real time and maps, in real time, any kind of potential quality issues that may require a deeper dive. We are very fortunate and happy that we have not had the need to intervene significantly, or for any real extended period, with any of our sites. We are not seeing any kind of major data outliers that have caused us to become concerned with the data flowing in. I have mentioned before on calls that I think one major fringe benefit of using this e-source system is that some of the poor performing sites actually self-select out of participation in these kinds of trials. When you know that your evaluations are being recorded and listened to, and that someone knows the completeness and the quality of data that you enter into the tablet station, you tend to have better raters, better sites, better data quality than your average paper-and-pencil CRF trial.
  • Biren Amin:
    And then, with regards to timelines for the Phase III, you have pushed it out by about a quarter. Any reason for this, compared to your previous guidance for end of year data?
  • Yaron Daniely:
    Only that we have been tracking the summer months. We are wrapping up August, and we just have a lot more clarity on the timelines. I think I said, if you go back to previous comments that I have made, that as we get closer, we are going to get more confident about timelines. We are going to see the curves and what's left for enrollment versus what we have already done. I think it is at this stage that we are able to really narrow it down and understand that, although enrollment is very or extremely likely to be completed this year, we feel more comfortable representing data release in the first quarter, instead of having to rush and have everyone frantically working over the very end of December to put data out. So we wanted to be transparent, provide the numbers to show the significant progress that has been made with screening numbers and enrollment, but also provide a realistic framework for data release.
  • Biren Amin:
    Got it. Thank you.
  • Operator:
    Our next question comes from Charles Duncan with Piper Jaffray. Your line is open.
  • Charles Duncan:
    Good morning guys. Thanks for taking my question, and thank you for the update in the prepared remarks regarding the response variability you are seeing so far on a blinded basis; that's helpful. Yaron, I wondered if you could just remind us of the effect size assumptions that you made in sizing the trial and projecting the roughly 750 targeted patients?
  • Yaron Daniely:
    Sure Charles. Thanks for the question. So the MEASURE trial was powered to demonstrate an effect size that's similar to the effect size that we observed in the first Phase III study, which did not have sufficient patient numbers to yield a statistically significant outcome. So if you go back to the first Phase III study, we saw a little over a two-point delta between drug and placebo over a pooled standard deviation that was a little over 12. Given the effect size there, we calculated the number of randomized -- truly randomized -- subjects that would be required to yield a statistically significant outcome, given the same results, the same delta between drug and placebo and pooled standard deviation, and targeted enrollment to that number, which is approximately 600. We then added a 20% margin to account for the variable placebo period enrichment tool that we are using in the study, based on published information as well as modeling, and that gave us the up-to-750 number that is publicly being used. So that's the underlying assumption. Now, as I said previously, when you have a trial where you have certain assumptions for the pooled standard deviation and your observed standard deviation is actually somewhat lower, then of course the powering and the probability of hitting the critical P-value are greater, given the same delta; we are not anticipating a new delta between drug and placebo.
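As a rough illustration of the powering point made here, the sketch below uses a simple two-sided normal approximation and only the approximate figures quoted on the call (a delta of about 2 points, pooled standard deviations of 12 versus 9.5, and roughly 300 truly randomized subjects per arm). The trial's actual prespecified power target and analysis are not spelled out on the call, so these numbers are directional only.

    # Illustrative only: how a lower pooled standard deviation raises power for the
    # same drug-placebo delta. The delta (~2 points), the SDs (12 vs. 9.5) and the
    # per-arm n (~300) are approximate figures quoted on the call, not study data.
    from scipy.stats import norm

    def approx_power(delta, pooled_sd, n_per_arm, alpha=0.05):
        """Two-sided, two-sample normal approximation to t-test power."""
        effect_size = delta / pooled_sd                    # standardized mean difference
        z_alpha = norm.ppf(1 - alpha / 2)                  # two-sided critical value
        noncentrality = effect_size * (n_per_arm / 2) ** 0.5
        return norm.cdf(noncentrality - z_alpha)

    for sd in (12.0, 9.5):
        power = approx_power(delta=2.0, pooled_sd=sd, n_per_arm=300)
        print(f"pooled SD {sd:>4}: effect size {2.0 / sd:.2f}, approx. power {power:.0%}")

With the same delta and sample size, shrinking the pooled standard deviation from about 12 to about 9.5 raises the approximate power materially, which is the direction of the argument made in the answer above.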
  • Charles Duncan:
    Yeah. Certainly agree with that, and it seems like it's going well. When you consider that effect size that you have seen in the past and that you are targeting for this trial, do you have any additional feedback from the investigators, beyond just statistical significance, as to the clinically valuable component of that measure? Are those effect sizes going to be clinically meaningful, or is it, beyond efficacy, going to be the other things about Metadoxine and its profile, such as non-abusability etcetera, that really drive home the clinical value of the product profile?
  • Yaron Daniely:
    So that's a very important question, and we discuss that a lot, both with KOLs as well as with our investors. And there are some generic rules for treating effect sizes as mild, moderate or significant. I think that in the world of stimulants, which is the dominating world of ADHD therapies to date, the use of effect size to compare and contrast different stimulant medications is relatively useful, because the vast majority of subjects, upward of 70%, are responsive to stimulant medication, and therefore the delta between the drug and the placebo groups in clinical trials actually does reflect what the clinician may expect to see as a clinical benefit in their practice. For non-scheduled, non-abusable drugs, the only approved one of course for adults is Strattera. That has been more of a challenge, because what Strattera has shown now, in more than 10 years of being on the market, is that the response rates for these non-scheduled drugs are lower; it's not 70% or 80%, it's more like 40%. When you have only 40% of your treatment group responding, the effect size of the overall treatment group is actually not a very useful way of describing the clinical benefit, because out of the 100 folks that get the drug, you are really providing benefit only to 40, and the other 60 essentially dilute the clinical outcome that the clinician may see in the clinic. In this case, I think what we end up discussing with our investigators is what would be the clinical benefit of an approved MDX drug; and I think that we repeatedly hear a consensus that a non-abusable, non-scheduled drug that has distinct profiles for safety and tolerability, is not associated with any of the common or serious side effects of stimulants and approved non-scheduled drugs, has efficacy, meaningful efficacy, even in 40% or 50% of subjects, and actually works quickly, let's say within the first week or first two weeks, so that you can actually try subjects on it to check if they respond or not before putting them on other treatment, would provide dramatic value and fill an important role in the arsenal of pharmacological treatments for ADHD subjects, whether they are kids or adults. So I am sorry for the long answer Charles, but it's kind of my way of explaining that effect size, used in powering statistical tests, is more useful for some settings, less useful for other settings. I agree with the latter part of your question, which is that the value proposition of MDX would not necessarily be in the absolute effect size of the entire group. We certainly have to hit that statistical significance. But the clinical benefit would come from its distinct profile.
  • Charles Duncan:
    That's helpful, and it's consistent with our power [indiscernible] diligence, not only in ADHD, but other indications such as schizophrenia. So appreciate the added color, Yaron. I will hop back in the queue.
  • Yaron Daniely:
    Thanks.
  • Operator:
    Our next question comes from Annabel Samimy with Stifel. Your line is open.
  • Annabel Samimy:
    Hi guys. Thanks for taking my question. I was just noting the number of patients that were enrolled versus the number of patients that were screened. So is the rate of that conversion from screened to enrolled higher than you typically see, and if so, what is it that you are excluding or preventing in terms of enrollment that may make your enrollment slower? And then I have a few follow-ups on that.
  • Yaron Daniely:
    Sure. Thanks Annabel. So the current rate, I mean you can do the calculation, it's I think around 34%, and that is a moving target. So generally in clinical trials, and we have seen it in our trials before, the screening failure rates as sites start their work in clinical trials are relatively high; they start normally around 40%, as they get their bearings on the protocol, on the evaluations, on which patients would work better in this trial and which won't -- as you may recall, there were some competing trials earlier on during the start of our enrollment. The screen failure rate has been trending at about 30% in the last few months, and that is very close to, in fact the same as, our assumption and what we are seeing for the sites that have been active for a little while. We are not doing anything in terms of generating barriers that we feel are unnecessary or should be revised or eliminated midstream. We are very careful with trying to prevent any midstream changes or any lapsing of quality criteria in enrolling the right patients in this trial. We have the cash runway, and we are now talking about a trial that's going to complete enrollment in two to three months, depending on your calculation. So there is really no benefit in making any changes now.
  • Annabel Samimy:
    Okay. And I just wanted to go back to a couple of your other comments; first, I missed the comment around response variability, and then you had mentioned in one of the answers that there were patients that -- I guess there weren't really -- that some of the patients were presumably dropping out versus being accounted for in the MMRM measurement. Can you just explain that? I think I heard it incorrectly, but maybe you can just repeat those comments?
  • Yaron Daniely:
    Yeah. So I am not sure exactly which comment you are referring to with regards to response variability, but let me just briefly explain. Again, the statistical significance of the trial is a function of the powering assumptions, which really look at two parameters: the numerator is the delta between drug and placebo, and the denominator is the pooled standard deviation of everyone in the trial, all theoretically 750 patients. And that denominator in our first Phase III study was around 12, which was extremely high compared to both our previous experience, as well as, for example, the Strattera Phase III trials and other ADHD trials. And what I have updated on is that that measure, the pooled standard deviation on the primary endpoint, is a blinded measure. I mean, it's pooled; you don't need to know who is on drug and who is on placebo, you can calculate it on the entire population together. That is a measure that we are able to see without breaking the blind, and that has been stable and consistent in the trial to date, at or around 9 to 9.5, which is the level that we have seen before in our Phase IIB trial and is the level that other successful ADHD Phase III adult trials have reported. So we are pleased with being able to at least claim a certain victory over mitigating the variability of treatment response that contributed to the lack of significance in the first study.
  • Annabel Samimy:
    Okay, great. That actually answers the question on the response variability that I thought I heard. So then, I guess the last question I had, on the pediatrics design, is there anything in particular that the FDA has requested that differs from what you had expected? And I guess I am just a little bit surprised that the FDA has taken so long, whether it's helping you design or coming to some kind of agreement on the design of the study, given that it seems to be mimicking the adult population?
  • Yaron Daniely:
    Yeah. Let me -- there are things that you are allowed to say, Annabel, that I am not allowed to say. But what I will say is --
  • Annabel Samimy:
    Well, I can help you.
  • Yaron Daniely:
    Well, the FDA -- I do not believe that FDA has any issues with the design of the study. The protocol is a standard protocol; it is like MEASURE, it is like many other pediatric Phase II, Phase III trials that have been done. The issue is not with the design of the study, the issue is with the finalization and confirmation of the entire pediatric study plan. And unfortunately -- since March 2016, when the FDA published its guidance on PSPs, or pediatric study plans, the FDA has required the submission and review of PSPs, which essentially cover all pediatric clinical and non-clinical, pre-approval and post-approval studies, to be submitted at the end of Phase II, or within about 60 days around the end-of-Phase-II meeting. And the review cycles for those back-and-forths on the PSP are only prescribed for the first cycle and unfortunately not specified for future cycles. And as I indicated in my prepared comments, anticipating the need to formalize and finalize and get at least the efficacy studies ongoing, while we sort out potentially other issues, we have requested and scheduled a meeting this fall with FDA, and we fully anticipate that we will get the go-ahead; and once we do, we will start enrolling in at least the first of the two registration studies in Q4.
  • Annabel Samimy:
    Okay. Great. Thank you very much.
  • Yaron Daniely:
    Sure.
  • Operator:
    Our next question comes from Mara Goldstein with Cantor Fitzgerald. Your line is open.
  • Mara Goldstein:
    Great. Thanks so much. I have two questions; the first is just a follow-up on the pediatric study, and as it relates to the therapeutic effect, are you incorporating into the design of that study a similar effect size for the pediatric population versus the adult population, and if not, how is that adjusted? And just on the size of the trial, and what your expectation is for the number of screens versus patients you will actually need to treat?
  • Yaron Daniely:
    Yeah. Thanks Mara. So for AL18, or the mini-MEASURE study, we expect to target enrollment of over 200 -- I think 216 subjects -- for randomization of a little under that number. The effect size that we are assuming for this study is slightly larger than the very moderate effect size that we have assumed for MEASURE, for the adult study. Pediatric ADHD subjects have consistently shown better clinical outcomes for virtually every single ADHD product; it's unclear whether that's driven by biological differences or whether it's simply the very clear clinical benefit versus the virtual lack of placebo effect in pediatric trials. So if you look at pediatric Phase III ADHD trials, you will see minimal to no placebo effect, as compared to the adult trials. This is because you are not asking, you are not interviewing, an adult about their internal feelings of restlessness or whether they have a hard time focusing. You are asking a caregiver, a parent, a teacher whether the kid, the first grader, the fourth grader, is doing their homework, is on time, is remembering things, is sitting on a chair for more than 10 minutes at a time. These things are a lot more binary, in a way, than adult ADHD. So with that in mind, we have taken a somewhat greater effect size assumption in the pediatric study, not dramatic, but we gave ourselves a little bit of a margin there. And I gave you the numbers: it's going to enroll a little over 200, which is just about a third the size of the MEASURE study.
  • Mara Goldstein:
    Okay. And is the dropout rate with that -- should we consider that to be less of a factor in the pediatric trial?
  • Yaron Daniely:
    The dropout rate in kids you're asking?
  • Mara Goldstein:
    Yeah.
  • Yaron Daniely:
    So again, we don't anticipate the dropout rate to be significantly different than in the adult study. In fact, one would argue that pediatric subjects don't decide if they drop out or not, their parents do, and given that we expect and hope that the safety and tolerability profile of MDX holds true in kids, we really don't anticipate much of a difference in terms of dropout rates between adults and children. But maybe I will connect it to the question Annabel asked before, about the statistical model. So the clarification I would make is that the MMRM model, the mixed model repeated measures model, which is the gold standard that FDA uses now in chronic treatment studies, provides you with useful data even if a subject has not completed the entire treatment course. So if someone dropped out after four, or six, or eight weeks, and did not complete the full course of treatment, that is not a subject that needs to be replaced or rejected. They provide data -- every datapoint is used in an MMRM model analysis.
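A minimal sketch of the point about dropouts and MMRM, using simulated data and a basic random-intercept mixed model from statsmodels as a stand-in. The trial's prespecified MMRM, typically fit with an unstructured covariance structure, is not described in detail on the call, so this only illustrates that a subject who stops early still contributes every visit they completed.

    # Simulated data only: subjects who discontinue early still contribute each
    # on-treatment visit they completed to the longitudinal model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    rows = []
    for subj in range(200):
        arm = subj % 2                                    # 0 = placebo, 1 = drug (simulated)
        completed = rng.choice([3, 4, 5, 6], p=[0.05, 0.05, 0.1, 0.8])  # some early dropouts
        for visit in range(1, completed + 1):
            change = -1.5 * visit * (0.6 + 0.4 * arm) + rng.normal(0, 9.5)
            rows.append({"subject": subj, "arm": arm, "visit": visit, "change": change})
    df = pd.DataFrame(rows)

    # Every recorded post-randomization visit enters the fit, including visits from
    # subjects who stopped before the final visit; no one is "replaced or rejected".
    model = smf.mixedlm("change ~ C(visit) * arm", data=df, groups="subject")
    print(model.fit().summary())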
  • Mara Goldstein:
    Okay. And if I could just ask for just one more sort of point of clarification on this pooled analysis on the standard deviation, when you say stable over time, can you speak to how many opportunities you have had to view that data on a pooled basis, pooled blind basis?
  • Yaron Daniely:
    Whenever I want to. So again, just to be clear -- we are talking now about the Conners Adult ADHD Rating Scale. That Conners Adult ADHD Rating Scale is collected every two weeks in the trial, and the primary endpoint is logged -- is collected through a tablet that delivers the data in real time, in a blinded fashion. We have no idea who is taking what, but it is listing the data in real time into the data management system, where it is validated and cleaned, people are listening to the interviews, etcetera. But at any given moment, if you want, you can just take all the data -- for everyone, or some population, or the total population -- and you can calculate what is the pooled standard deviation of the change from baseline to the endpoint, to the last visit of the [indiscernible], in the entire population, because you don't need to know the treatment randomizations; you are looking for the pooled standard deviation of the change of the entire population.
  • Mara Goldstein:
    Okay. So I guess I am just trying to understand, when you say stable, is it every single measure that you've looked at? Because if you are looking at it from the beginning to the end, and you're looking at different measures, are you seeing stability across the different -- on a discrete measure?
  • Yaron Daniely:
    Okay, sorry. Now I get your question. It's late here in Tel Aviv, you guys are fresh on the East Coast. So when I say stable, I mean that, as the number of enrolled patients in the study grows, the change that each incremental additional subject makes to the group's pooled standard deviation becomes smaller and smaller. So beyond the first 100 or 150 subjects in the trial, we have not seen major shifts in the pooled standard deviation of the group as we have completed enrollment of 200, and 250, and 300, and 400. That's what I meant by stable.
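A short sketch of the blinded monitoring described in this exchange, using simulated change-from-baseline scores: the standard deviation is computed on the pooled population with no randomization codes, and the estimate settles down as enrollment grows, which is the sense in which it is "stable".

    # Simulated, blinded change-from-baseline scores; no treatment labels are used.
    import numpy as np

    rng = np.random.default_rng(1)
    changes = rng.normal(loc=-8.0, scale=9.5, size=500)   # blinded change scores

    for n in (100, 200, 300, 400, 500):
        pooled_sd = changes[:n].std(ddof=1)               # overall SD of the blinded pool
        print(f"first {n} randomized subjects: pooled SD approx. {pooled_sd:.1f}")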
  • Mara Goldstein:
    All right. Thank you. I appreciate the clarification on that.
  • Yaron Daniely:
    Sure.
  • Operator:
    Our next question comes from Michael Higgins with Roth Capital Partners. Your line is open.
  • Michael Higgins:
    Thanks operator. Hi guys, how are you?
  • Yaron Daniely:
    Good.
  • Michael Higgins:
    Couple of quick questions here. First off, thanks for the updates on the MEASURE study. As we move to the Fragile X indication, have you met with the FDA recently on what the agency agreed [indiscernible] last fall, are you waiting for any feedback from them, or what's the next datapoint on this program?
  • Yaron Daniely:
    There was some static. You were asking about the Fragile X?
  • Michael Higgins:
    Yeah, if you had any meetings with the FDA recently? Are you waiting for feedback from them? How is the progress looking on getting that trial started?
  • Yaron Daniely:
    Thanks. So the Fragile X program is conducted under a separate IND, per FDA's instructions, and so we have separate meeting schedules for the Fragile X study. We also have the orphan and fast track designations, which allow us somewhat of a different route and different access to the agency. The last interaction we had with them was a submission they requested with regards to clinical meaningfulness and evaluations of the daily living skills domain in the [indiscernible] scale, which is the scale that we agreed, in our fall of 2015 meeting, to use as a primary endpoint in the pivotal study, and we have not heard -- we have not had a response or interaction with them on Fragile X since. We do expect, though -- we got an indication that by the time we meet with them on the ADHD IND in the fall, we should have our response on that as well.
  • Michael Higgins:
    Okay. And then, when would you be looking to start that trial, and is it your estimation that you will have the result in 2018?
  • Yaron Daniely:
    We really expect to start this trial, I would say, early in 2017. In our kind of internal discussions about this, the bandwidth for Alcobra, as well as the plan for utilizing our resources to wrap up the MEASURE study, start the pediatric ADHD registration study, and make sure that all other preparations are on track for completing this -- the ADHD work -- in 2017, really does not allow us sufficient room and breadth to start another major pivotal program already in Q4. This signals that we will likely be able to start this study, to start this project, in 2017, and then I would expect this study to take about a year. And again, that of course depends on how many sites and whether it's going to be U.S. and Israel, as we did previously, or whether we are going to add a few European sites, given the new regulatory discussions that we have initiated in Europe, and the orphan designation there. So I think that the timing to look for at this point is Q1 2017, pretty much in line with when data from MEASURE is going to be reading out.
  • Michael Higgins:
    Okay. Very helpful. Thank you.
  • Yaron Daniely:
    Sure.
  • Operator:
    Our next question comes from Morgan Williams with Barclays. Your line is open.
  • Morgan Williams:
    Hi Yaron and Tomer, thank you for taking the question. So I just wanted to clarify, or I hope that you could provide more color. I know that there was a lot of competition for enrollment into the MEASURE trial, and that in recent months, you kind of shifted focus to certain sites that might be able to help in the improvement [ph]. And I was just wondering, are you starting to see that these certain sites are contributing most of the recent patients that have been screened or enrolled into the study, and maybe could you provide some sort of detail on the geographic regions of those sites? And then, a question for Tomer I guess: how can we think about the magnitude of the R&D step-up over the remainder of the year?
  • Yaron Daniely:
    Okay, let me try and answer the first part of the question. So what we had seen until the early summer months -- I think one study ended in April and the other in June -- was that many of our key and original sites were also participating in at least one, if not more, of the competing trials that were run by other companies. And what we have done is, we have tried -- we didn't want to leave those sites -- we have tried to facilitate the enrollment in those sites, but we also resourced and relied more on our other sites. As those trials wrapped up enrollment by the beginning of the summer, what we have seen is that those sites, which had been either preoccupied or really contributing very little to the MEASURE study, have become more active, and this is why we have now, over the summer months, despite the fact that summer months are not necessarily optimal for trial enrollment, finally reached the intended screening rate of close to 150 per month, which we predict will be sustained for the next couple of months, until enrollment is completed. So Morgan, we are actually seeing both; we are seeing both the sites from the competing trials contributing more, as well as the sites that were less of a point of competition continuing to contribute very nicely to the trial.
  • Tomer Berkovitz:
    Morgan, so I am going to address the other question about the cash, it's Tomer.
  • Yaron Daniely:
    Sorry Tomer, she also asked about geography. The sites are spread all over the U.S., with the addition of two Israeli sites. The [indiscernible] record has the city and state for all of them -- you know, there is a long list there. Okay, sorry Tomer.
  • Tomer Berkovitz:
    Yeah. So Morgan, so if you look at our cash burn since the beginning of 2015 for the last six quarters, you are going to see that it has been pretty stable at around $4 million to $5 million. So what I wanted to say in my prepared comments, is that, in the remainder of the year, as we complete the enrollment in the MEASURE study, and as we launch the pediatric study, you are going to see an increase in the R&D spend. That increase can take the overall cash burn to around $7 million a quarter, with some potential fluctuations depending on the exact timing of the study's enrollment, as well as finalizing the protocol of the studies.
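As a back-of-the-envelope consistency check using only figures from the call ($61.1 million in cash, bank deposits and marketable securities at the end of Q2 2016, and a burn stepping up toward roughly $7 million per quarter), with the quarter-by-quarter ramp below being an assumption for illustration rather than company guidance:

    # Illustrative runway arithmetic; the burn path is an assumed ramp, not guidance.
    cash = 61.1                                   # $M at end of Q2 2016
    burn = [6.0] + [7.0] * 11                     # assumed $M burn per quarter, Q3 2016 onward
    quarters = ["Q3 2016", "Q4 2016", "Q1 2017", "Q2 2017", "Q3 2017", "Q4 2017",
                "Q1 2018", "Q2 2018", "Q3 2018", "Q4 2018", "Q1 2019", "Q2 2019"]

    for label, spend in zip(quarters, burn):
        cash -= spend
        print(f"end of {label}: ${cash:5.1f}M remaining")
        if cash < 0:
            break

Under this assumed ramp, cash lasts into 2018, which is consistent with the "through at least early 2018" guidance given on the call.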
  • Morgan Williams:
    Okay. And then I guess, just Yaron, as a follow-up, I was asking more about, I guess, which sites; but I guess you answered that question. Just thinking about the screening of the patients, do you have any insight? I know you do not know which the patients are randomized to, the drug or the placebo, but do you have insight into which drugs patients tend to be washing out from, in terms of their previous ADHD treatments?
  • Yaron Daniely:
    Sure. So that we know; again, with an e-source system, the log for concomitant medications or recent medications used, diagnoses, etcetera, is all visible. As was the case in our previous trial, and I think would be common to most trials in adult ADHD today, the vast majority of subjects are not coming off, not washing out of, an ADHD med. So around 80% of subjects would not be washing out, or recently washing out, of an approved ADHD med. Some of them come in with a diagnosis and off treatment for many years. Some of them come in with a diagnosis with no treatment. Some of them don't even come with a diagnosis; they undergo a fresh diagnosis in the initial stages of the trial. So we are seeing a relatively low amount of washout, and those folks that are washing out are washing out of every ADHD med that you can think of -- stimulants, Strattera. And in our previous work, and again these are small numbers so it's hard to know, they did not really demonstrate a major difference in response or in patient quality between those patients who have had previous ADHD drug experience and those who have not. And raters in the trial are trained to really evaluate the core 18 ADHD symptoms and try to really not facilitate any discussion about whether this feels like something that you have felt before or not; much of what subjects with ADHD feel when they are on an ADHD drug is an adverse event profile, not the ADHD benefit per se.
  • Morgan Williams:
    Okay. Thank you and thank you for answering all the questions.
  • Operator:
    Our next question comes from Charles Duncan with Piper Jaffray. Your line is open.
  • Charles Duncan:
    Okay. Thanks for taking the follow-up. One question on the pediatric program; I know the PSP has not yet been fully approved, but just -- could you clarify -- do you anticipate this study to be necessarily sufficient for regulatory submission, or would you anticipate a second study to be [indiscernible]?
  • Yaron Daniely:
    No. So as we have communicated, two efficacy studies, two short-term efficacy studies, are required for the pediatric program. This is the first one, which we hope to launch in Q4; the second one we will launch upon MEASURE readout, early in 2017. Both these studies will run during 2017.
  • Charles Duncan:
    And would they be similarly sized in terms of patient population expected to be enrolled, as well as costs?
  • Yaron Daniely:
    The pediatric -- the first study, the mini-MEASURE study, AL18, is the study that we are most clear about, as I have given an indication of both the design and the number of subjects. It's a relatively small study. With regards to the second study, again, given an effect size larger than in the adult study, we certainly hope and expect it would not be the size of the MEASURE study. But one of the benefits of staggering them a little bit is the ability to use the findings from the MEASURE study in Q1 to more intelligently design and more appropriately power the second efficacy study. And so, it is less clear, or I would say, we have proposed a larger study as a second registration study for peds [ph]. But what I would like to say is that, given the timeline issue, we feel it would be best to wait until MEASURE reads out, get a good sense of -- for example, what is the onset of effect, what is the utility of the variable placebo period, and what is the magnitude of the effect size -- and then we can completely finalize the size and the design features of that trial. It could be small, it could be large, it could be medium. I remind you that one of the two registration studies for Vidance [ph] was a classroom study with under 100 children. So I think that there are opportunities that we may avail ourselves of, given that we will know a lot more about the efficacy of the drug in Q1.
  • Charles Duncan:
    That's helpful, Yaron. And just a quick question for Tomer; Tomer, appreciate the information on 2016 spend. But as you go out to 2017, and just considering the assumptions behind your 2018 kind of milestone for cash, have you fully considered the cost of the pediatric program in ADHD, as well as Fragile X, or will that be incremental?
  • Tomer Berkovitz:
    So yes, the projection that our current cash balance is going to be sufficient through early 2018 takes into account the completion of the MEASURE study, the pediatric study that we are about to launch, as well as the Fragile X study. So they are all taken into account.
  • Charles Duncan:
    Okay. Thanks for the added information.
  • Operator:
    Thank you. That concludes the Q&A session. I will now turn the call back over to Yaron Daniely for closing remarks.
  • Yaron Daniely:
    Great. I know this one was a little longer than usual. So I want to thank all of you for participating in this morning's call, and I look forward to continuing to communicate our progress as we move forward with our development programs. Have a great day everyone.
  • Operator:
    Thank you. Ladies and gentlemen, that does conclude today's conference. You may all disconnect and everyone have a great day.