Next Generation Technologist

Next Generation Sequencing, Marketing, and the Genomic Revolution

April 7, 2014
by Dale Yuzuki
0 comments

Elana Simon at the American Association for Cancer Research

Elana Simon receiving the Junior Champion Cancer Research Award, AACR 2014

The reason I enjoy coming to large meetings like the American Association for Cancer Research annual meeting (April 5-9 in San Diego) is the surprising things I learn there. And the story of Elana Simon is one of them.

She did not appear on the program, but this year the AACR initiated a new award, the ‘Junior Champion Cancer Research Award’, and she was its recipient. Her story began when she was 12 years old and was diagnosed with fibrolamellar hepatocellular carcinoma, a rare form of liver cancer. Surgery was her only treatment option. Continue Reading →

March 28, 2014
by Dale Yuzuki
0 comments

Notes from the NCI’s Third Symposium on Translational Genomics

Edison Liu, director of the Jackson Laboratory Center for Personalized Medicine, at the NCI’s Third Symposium on Translational Genomics

Living in the Washington DC area is a privilege. As a native Californian who has now been on the East Coast for about seven years, I have found living in the Mid-Atlantic enjoyable for many professional and personal reasons.

A case in point is proximity to the National Institutes of Health, and last week I had the opportunity to attend the NCI’s Third Symposium on Translational Genomics. With speakers like Edison Liu (the leader of the new Jackson Laboratory personalized medicine center in Connecticut, founded in 2011 with $1.1B in public and private funding), George Church (whom I had not heard in person since the 2012 AGBT meeting), and others with whom I have interacted at the NCI in the past (Snorri Thorgeirsson, Louis Staudt and Jean Claude Zenklusen), I knew that this meeting was going to be worth attending. Continue Reading →

March 19, 2014
by Dale Yuzuki
2 Comments

WaferGen SmartChip TE™ – a PCR-based approach to target enrichment

A WaferGen chip, photo courtesy Dale Yuzuki

WaferGen is a California Bay Area company that originally developed an idea similar to BioTrove’s: a solid substrate with nanoliter-sized wells for high-throughput real-time PCR. WaferGen’s SmartChip™ has 5,184 wells (54 × 96), while BioTrove’s OpenArray™ has 3,072 (32 × 96). The concept is that each well contains a real-time assay master mix and the sample of interest, so that a flexible combination of samples and real-time targets (either gene expression or end-point genotyping) can be run in a single experiment. Continue Reading →
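As a purely hypothetical illustration of that flexibility (these layouts are examples of the arithmetic only, not WaferGen's actual supported formats), the sample-by-assay combinations are bounded simply by the well count:

```python
# Hypothetical illustration of flexible sample x assay layouts on a
# 5,184-well chip. One well per sample/assay pair; the layouts below
# are examples only, not formats WaferGen necessarily supports.
TOTAL_WELLS = 5184  # 54 x 96

def fits(n_samples: int, n_assays: int) -> bool:
    """A layout fits if it needs no more wells than the chip provides."""
    return n_samples * n_assays <= TOTAL_WELLS

for samples, assays in [(384, 12), (96, 54), (48, 96), (12, 384)]:
    wells_used = samples * assays
    status = "fits" if fits(samples, assays) else "too many wells"
    print(f"{samples} samples x {assays} assays -> {wells_used} wells ({status})")
```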

March 13, 2014
by Dale Yuzuki
1 Comment

Ion Chef™ System ships to first customers

Ion Chef shipments going out (photo credit to Michael Aken of Life Technologies)

Way back when I was pouring polyacrylamide gels for 35S-labeled dNTP Sanger sequencing (with fond memories of reagents like degassed acrylamide, TEMED and silane), robotic automation existed only in an industrial or manufacturing context. Now there are many automated liquid-handling companies (Beckman is a popular choice, but Tecan, Hamilton and others share the market), and many options when it comes to setting up reactions in a 96-well format.

But taking a single next-generation sequencing library molecule, affixing it to a bead or particle (or to a flowcell surface, in the case of the 5500xl Wildfire or Illumina systems) and then amplifying it thousands or hundreds of thousands of times is not a trivial task. This is typically termed ‘template preparation’; Illumina calls it ‘cluster generation’. Continue Reading →

March 11, 2014
by Dale Yuzuki
2 Comments

Some clarifications about Ion Torrent PII and NextSeq 500

Yesterday’s Ion Torrent Proton PII™ and Illumina NextSeq 500™ post certainly got a reaction from several quarters, including detailed pricing information on consumables for the 1x75bp high output configuration.

Instead of making edits to the original post, here are some clarifying points, as it is clear that Illumina is making a break from the pricing model it used for the MiSeq.

For those not familiar with MiSeq pricing: to compete with the Ion Torrent PGM™’s relatively inexpensive 314 and 316 chips (about $300 and $550 per run, at current throughputs of >60MB and >600MB respectively), Illumina offers ‘micro’ and ‘nano’ flowcell/reagent kits with comparable read numbers and throughput. However, due to the cost of the reagents and flowcell, these smaller kits are still expensive to produce and sell, with the least expensive ‘micro’ kit still costing on the order of $800 or $900 to run.
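As a rough back-of-the-envelope comparison (a sketch using only the approximate chip prices and throughputs quoted above, not official pricing), the cost per MB works out as follows:

```python
# Back-of-the-envelope cost per MB from the approximate figures cited above.
chips = {
    "Ion PGM 314 chip": (300, 60),    # (price in USD, throughput in MB)
    "Ion PGM 316 chip": (550, 600),
}
for name, (price_usd, throughput_mb) in chips.items():
    print(f"{name}: ${price_usd / throughput_mb:.2f} per MB")
# Ion PGM 314 chip: $5.00 per MB
# Ion PGM 316 chip: $0.92 per MB
```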

Thus I mistakenly assumed that a 1×75 high output run on the NextSeq 500 would be priced relatively high – I used $3,500 per 1x75bp run for the ‘High Output’ mode – which was incorrect. Based upon some pricing information I was given, the reagents are not just 20% less but a full 66% less: roughly one third of the price I had originally surmised, or $1,300 per 1x75bp run.

Also, on the point of 400M reads in high output mode, it is unfair to use the 400M read number when Illumina’s own spec sheet clearly states ‘up to 400M’ reads. Reading more closely, the throughput at 1×75 is 25-30GB in high output mode, and 25GB / 75bp = 330M reads. Using 330M reads at $1,300 per run gives a $393 per 100M reads metric.

Doing a similar calculation for the Mid Output 2x75bp configuration (I was told that pricing is around $1,000 per run) at 16GB of throughput, the calculation comes out to 106M reads, which at $1,000 per run equates to $943 per 100M reads.

So the per-100M-reads pricing of the high output mode in a 1x75bp configuration is aggressive, sitting right on top of where the PII is. Other calculations of throughput (price per GB) appear much more in line with the prior model (i.e. fewer cycles = less efficient from a cost-per-GB perspective).
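For the record, here is the arithmetic behind those two figures as a small sketch; the throughputs, read lengths and per-run prices are the ones quoted above, and the read counts are derived from them rather than taken from any specification:

```python
# Reconstruct the per-100M-reads figures quoted above. Throughputs, read
# lengths, and run prices are the approximate numbers cited in the text;
# read counts are derived from them, not official specifications.

def cost_per_100m_reads(throughput_gb: float, read_length_bp: int, run_price_usd: float):
    """Estimate reads per run from throughput, then scale the run price to 100M reads."""
    reads = throughput_gb * 1e9 / read_length_bp          # total bases / bases per read
    return reads, run_price_usd / reads * 100e6

configs = {
    "NextSeq 500 High Output 1x75": (25, 75, 1300),
    "NextSeq 500 Mid Output 2x75": (16, 150, 1000),   # 2 x 75bp = 150bp per cluster
}
for name, (gb, bp, price) in configs.items():
    reads, cost = cost_per_100m_reads(gb, bp, price)
    print(f"{name}: ~{reads / 1e6:.0f}M reads, about ${cost:.0f} per 100M reads")
# NextSeq 500 High Output 1x75: ~333M reads, about $390 per 100M reads
# NextSeq 500 Mid Output 2x75: ~107M reads, about $938 per 100M reads
```

(The small differences from the $393 and $943 figures above come from rounding the read counts down to 330M and 106M before dividing.)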

And so I present to you an updated chart, having had to add a large number of rows to accommodate all the different pricing permutations. For those of you who have never worked in marketing at a commercial life-science research company, the two most difficult things you will ever work on are naming products and pricing them. I know the meaning of the phrase “No one knows the price of anything” first-hand, because of course the price of anything is what individuals are willing to pay.

Revised version of a PII and NextSeq 500 comparison chart

So the picture is not nearly so clear-cut in terms of pricing – and looking over the ‘cost per 100M reads’ column for the different formats on the right, that $393 figure does appear somewhat incongruent.

It is clear from this, however, that the Ion Proton with the PII will be less expensive to run regardless of mode (on the order of 30%: the difference between a $1,000 PII run and a $1,300 1×75 High Output run), will deliver data in less time (8h vs. 11h), and will be on par in terms of the cost per 100M reads metric.

I learned something yesterday – and every day that I learn something is a good thing. The NextSeq chemistry uses one fluor for the ‘T’ base, a different fluor for the ‘C’ base, and labels the ‘A’ base with two fluors whose excitation/emission spectra are similar, but not identical, to those of the other two bases. Why this is done very likely has a complex explanation, perhaps involving biochemistry (the ability of a modified polymerase to accept a nucleotide decorated with two different fluors must be a wonder to behold at the atomic level) or optics (there is an incredible amount of engineering involved in a system that started out, with the first Solexa instruments, as basically a confocal scanning microscope). Thinking through the chemistry, and how different it is from the existing HiSeq / MiSeq chemistry, it will be interesting to see what kind of GC sequence bias it has, along with its primary modes of error.
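To make that two-channel logic concrete, here is a minimal sketch of how per-cycle intensities in two channels could be decoded into base calls. The channel assignments, the threshold, and the assumption that the fourth base is identified by the absence of signal are my own illustration, not a description of Illumina's actual signal-processing pipeline:

```python
# Minimal, hypothetical two-channel base-calling sketch. Channel
# assignments and the fixed threshold are assumptions for illustration only.

def call_base(red: float, green: float, threshold: float = 0.5) -> str:
    """Map a pair of normalized channel intensities to a base call."""
    red_on = red > threshold
    green_on = green > threshold
    if red_on and green_on:
        return "A"   # labeled with both fluors
    if red_on:
        return "C"   # one fluor (red channel, by assumption)
    if green_on:
        return "T"   # the other fluor (green channel, by assumption)
    return "G"       # dark in both channels (assumed unlabeled)

# Example: decode a short run of per-cycle (red, green) intensity pairs
cycles = [(0.9, 0.1), (0.1, 0.8), (0.7, 0.7), (0.05, 0.05)]
print("".join(call_base(r, g) for r, g in cycles))  # -> CTAG
```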

On that note I’ll have to start searching for a NextSeq dataset and play around with it.

Way back when I first joined Life Technologies in early 2010, someone who had been at the Sanger Institute during the Human Genome Project (HGP) days said during a SOLiD 4 training, regarding technology development intersecting a free market: “when there’s money to be made, miracles happen”. That point rings true again and again. Last year, during a talk he was giving, Eric Green (Director of the National Human Genome Research Institute) wondered whether the cost-per-MB curve that the NHGRI NISC regularly publishes had hit a plateau. It’s clear that market competition will be forcing prices down yet again.
