A computer-based genomic annotation system, including a database configured to store genomic data, non-transitory memory configured to store instructions, and at least one processor coupled with the memory, the processor configured to implement the instructions in order to implement an annotation pipeline and at least one module for filtering or analysis of the genomic data.
6. A system for testing for a genotype to phenotype association, the system comprising:
a database containing information defining physical and/or functional connections between a plurality of genes and/or a plurality of proteins;
at least one processor configured to:
select a genomic region based at least in part on the defined physical and/or functional connections in the database;
collapse at least some variant sites associated with the genomic region for a plurality of subjects classified at least in part by at least one phenotype, wherein, for each of the plurality of subjects, the collapsing comprises assigning a first category to a subject if the subject has at least one minor allele associated with the collapsed variant sites, and assigning a second category to the subject if the subject has no minor allele at any of the collapsed variant sites, wherein a category for each subject depends on one or more of a number of genomic variants associated with a set of genes for each subject, a functional significance of the genomic variants associated with the set of genes for each subject, a location of the genomic variants associated with the set of genes for each subject, and a frequency of the genomic variants associated with the set of genes for each subject; and
perform a statistical test for a genotype to phenotype association in the plurality of subjects.
1. A method of testing for a genotype to phenotype association, the method comprising:
with at least one processor, accessing a database containing information defining physical and/or functional connections between a plurality of genes and/or a plurality of proteins;
with at least one processor, selecting a genomic region based at least in part on the defined physical and/or functional connections in the database;
with at least one processor, collapsing at least some variant sites associated with the genomic region for a plurality of subjects classified at least in part by at least one phenotype, wherein, for each of the plurality of subjects, the collapsing comprises assigning a first category to a subject if the subject has at least one minor allele associated with the collapsed variant sites, and assigning a second category to the subject if the subject has no minor allele at any of the collapsed variant sites, wherein a category for each subject depends on one or more of a number of genomic variants associated with a set of genes for each subject, a functional significance of the genomic variants associated with the set of genes for each subject, a location of the genomic variants associated with the set of genes for each subject, and a frequency of the genomic variants associated with the set of genes for each subject; and
with at least one processor, performing a statistical test for a genotype to phenotype association in the plurality of subjects.
9. A system for testing for a genotype to phenotype association, the system comprising:
a first database containing a plurality of protein sets and/or gene sets and/or genomic regions of interest derived from a second database of information defining physical and/or functional connections between a plurality of genes and/or a plurality of proteins;
at least one processor configured to:
select a genomic region based at least in part on the plurality of protein sets and/or gene sets and/or genomic regions of interest in the first database;
collapse at least some variant sites associated with the genomic region for a plurality of subjects classified at least in part by at least one phenotype, wherein, for each of the plurality of subjects, the collapsing comprises assigning a first category to a subject if the subject has at least one minor allele associated with the collapsed variant sites, and assigning a second category to the subject if the subject has no minor allele at any of the collapsed variant sites, wherein a category for each subject depends on one or more of a number of genomic variants associated with a set of genes for each subject, a functional significance of the genomic variants associated with the set of genes for each subject, a location of the genomic variants associated with the set of genes for each subject, and a frequency of the genomic variants associated with the set of genes for each subject; and
perform a statistical test for a genotype to phenotype association in the plurality of subjects.
5. A method of treating subjects with a pharmaceutical comprising:
administering the pharmaceutical to a plurality of subjects;
associating some of the plurality of subjects with a phenotype based at least in part on observed responses of the plurality of subjects to the pharmaceutical;
with at least one processor, accessing a database containing information defining physical and/or functional connections between a plurality of genes and/or a plurality of proteins;
with the at least one processor, selecting a genomic region based at least in part on the defined physical and/or functional connections in the database;
with at least one processor, collapsing at least some variant sites associated with the genomic region for a plurality of subjects classified at least in part by at least one phenotype, wherein, for each of the plurality of subjects, the collapsing comprises assigning a first category to a subject if the subject has at least one minor allele associated with the collapsed variant sites, and assigning a second category to the subject if the subject has no minor allele at any of the collapsed variant sites, wherein a category for each subject depends on one or more of a number of genomic variants associated with a set of genes for each subject, a functional significance of the genomic variants associated with the set of genes for each subject, a location of the genomic variants associated with the set of genes for each subject, and a frequency of the genomic variants associated with the set of genes for each subject; and
with at least one processor, performing a statistical test for a genotype to phenotype association in the plurality of subjects; and
selectively administering the pharmaceutical to the plurality of subjects and/or additional subjects based at least in part on the genotype to phenotype association.
14. A method of testing for a genotype to phenotype association, the method comprising:
with at least one processor, accessing a database containing information defining physical and/or functional connections between a plurality of genes and/or a plurality of proteins and searching for sets of proteins and/or sets of genes connected by physical and/or functional connections in the database;
with the at least one processor, retrieving a plurality of protein sets and/or gene sets found in the searching;
with the at least one processor, selecting a genomic region of a series of genomic regions defined by the plurality of protein sets and/or gene sets found during the searching;
with at least one processor, collapsing at least some variant sites associated with the genomic region for a plurality of subjects classified at least in part by at least one phenotype, wherein, for each of the plurality of subjects, the collapsing comprises assigning a first category to a subject if the subject has at least one minor allele associated with the collapsed variant sites, and assigning a second category to the subject if the subject has no minor allele at any of the collapsed variant sites, wherein a category for each subject depends on one or more of a number of genomic variants associated with a set of genes for each subject, a functional significance of the genomic variants associated with the set of genes for each subject, a location of the genomic variants associated with the set of genes for each subject, and a frequency of the genomic variants associated with the set of genes for each subject; and
with the at least one processor, performing a series of collapsed variant statistical genotype to phenotype correlation tests in the plurality of subjects using the series of genomic regions defined by the plurality of protein sets and/or gene sets found during the searching.
2. The method of
3. The method of
4. The method of
7. The system of
8. The system of
10. The system of
select a second genomic region based at least in part on the plurality of protein sets and/or gene sets and/or genomic regions of interest in the first database; and
repeat the collapsing and performing for the second genomic region.
11. The system of
12. The system of
13. The system of
The present application claims priority to U.S. Provisional Application Ser. No. 61/852,255, filed on Mar. 15, 2013, and to U.S. Provisional Application Ser. No. 61/868,895, filed on Aug. 22, 2013. The content of each of these applications is hereby incorporated by reference in its entirety.
Field of the Invention
The present invention relates to DNA sequencing and more particularly to interpretation of the many genetic variants generated in any sequencing project.
An automated computational system for producing known and predicted information about genetic variants, otherwise known as variant annotations, is also described.
Description of the Related Art
Advances in high-throughput DNA sequencing technologies have enabled the identification of millions of genetic variants in an individual human genome. Reductions in sequencing costs and increases in sequencing efficiency have brought these capabilities within the grasp of individual laboratories looking to use DNA sequencing as a powerful tool in their research endeavors, yet very few laboratories have the computational expertise and infrastructure to make sense of the genetic variants identified through these studies. While increasingly sophisticated tools continue to be developed for sequence assembly and variant calling, interpretation of the massive number of genetic variants generated by any sequencing project remains a major challenge. This problem is especially pronounced in the interpretation of noncoding variants that likely explain a major proportion of heritability in common complex diseases. Because of the extreme difficulty and computational burden associated with interpreting regulatory variants and variations across collections of genes, genome sequencing studies have focused on the analysis of non-synonymous coding variants in single genes. This strategy has been effective in identifying mutations associated with rare and severe familial disorders; however, analysis of other types of variants must be made accessible to the research community in order to address the locus and allelic heterogeneity that almost certainly underlies most common disease predisposition.
The availability of high-throughput DNA sequencing technologies has enabled nearly comprehensive investigations into the number and types of sequence variants possessed by individuals in different populations and with different diseases. For example, not only is it now possible to sequence a large number of genes in hundreds if not thousands of people, but it is also possible to sequence entire individual human genomes in the pursuit of inherited disease-causing variants or somatic cancer-causing variants. Whole genome sequencing as a relatively routine procedure may lie in the near future as high-throughput sequencing costs and efficiency continue to improve. In fact, as costs continue to decline, high-throughput sequencing is expected to become a commonly used tool, not only in human phenotype based sequencing projects, but also as an effective tool in forward genetics applications in model organisms, and for the diagnosis of diseases previously considered to be idiopathic, for which there are already some striking examples.
One particularly vexing problem that has accompanied the development and application of high-throughput sequencing is making sense of the millions of variants identified per genome. Recent successes at identifying variants associated with disease have generally been executed under clever yet restricted conditions. For example, a number of re-sequencing studies have focused on the identification of causal variants at significant genome-wide association study (GWAS) loci and have identified excesses of non-synonymous variants in nearby candidate genes. However, these potentially causal variants tend not to explain much more of the heritability than the GWAS tag SNP itself, a large proportion (˜80%) of GWAS hits are in intergenic regions with no protein-coding elements nearby, and, even with extremely large study populations, the GWAS strategy is not likely to individually identify tag-SNPs that explain even half the heritability of common diseases.
Nevertheless, GWAS has plenty left to offer in terms of identification of significant, or at least suspicious, candidate loci for re-sequencing studies. Other sequencing strategies have successfully identified non-synonymous variants associated with familial and/or severe disorders. However, if highly penetrant variants contribute to common disease predisposition, they should be detectable by linkage analysis. Linkage and straightforward association strategies have not identified the majority of variants predisposing to common diseases where variable penetrance, allelic and locus heterogeneity, epistasis, gene-gene interactions, and regulatory variation play a more important yet elusive role. If sequence-based association studies are to successfully identify variants associated with common diseases and expand our understanding of the heritable factors involved in disease predisposition, investigators must be armed with the tools necessary for identification of moderately penetrant disease causing variants, outside of GWAS hits, and beyond simple protein coding changes. In fact, the identification and interpretation of variants associated with inherited but not strongly familial disease, is a crucial step in translating sequencing efforts into a truly significant impact on public health.
If one accepts the rare variant hypothesis of disease predisposition, then one would expect rare variants predisposing to disease will be associated with high relative risk, but because of their low frequency, simple univariate analyses where each variant is tested for association with disease will require extremely large sample sizes to achieve sufficient power. This problem is compounded tremendously if disease predisposition results from the interaction and combination of extremely rare variants segregating and encountering one another throughout the population. Variant collapsing strategies have been shown to be a powerful approach to rare variant analysis; however, collapsing methods are extremely sensitive to the inclusion of noncausal variants within collapsed sets.
The key to unlocking the power of variant collapsing methods, and facilitating sequence based disease association studies in reasonable study sizes and at reasonable cost, is a logical approach to forming collapsed sets. In fact, regardless of the allelic frequency and penetrance landscape underlying common disease predisposing variants, set based analyses can expose what simple linkage or association studies have failed to reveal.
Recent successes in clinical genome sequencing, especially in family-based studies of individuals with rare, severe and likely single-gene disorders, have highlighted the potential for genome sequencing to greatly improve molecular diagnosis and clinical decision making. However, these successes have relied on large bioinformatics teams and in-depth literature surveys, an approach that is neither scalable nor rapid. The adoption of genome sequencing among the clinical community at large requires, among other things, the ability to rapidly identify a small set of candidate disease-causing (i.e., likely pathogenic) mutations from among the tens to hundreds of genes harboring variants consistent with plausible functional effects, inheritance patterns and population frequencies. Presented herein is a framework for the identification of rare disease-causing mutations, with a focus on a phenotype-informed network ranking (PIN Rank) algorithm for ordering candidate disease-causing mutations identified from genome sequencing. Our proposed algorithm's accuracy in prioritizing variations is demonstrated by applying it to a number of test cases in which the true causative variant is known.
Systems and methods that make variant interpretation as accessible as variant generation from high-throughput DNA sequencing are described herein.
In one embodiment the present invention provides a computer-based genomic annotation system. The system includes a database configured to store genomic data, non-transitory memory configured to store instructions, and at least one processor coupled with the memory, the processor configured to implement the instructions in order to implement an annotation pipeline and at least one module for filtering or analysis of genomic data.
In another embodiment the invention provides a computer-based method for predicting a risk of an individual developing a disease. The method includes the steps of: obtaining genetic variant data describing a plurality of genetic variants in a genome of the individual, the genome including a plurality of genes; using a microprocessor, determining a percent functionality for each gene based on the genetic variant data; using a microprocessor, generating a weighted genetic network including the plurality of genes of the genome having weighted connections therebetween; using a microprocessor, obtaining a global centrality score for each of the plurality of genes in the weighted genetic network; using a microprocessor, generating a weighted genetic disease network including a plurality of genes having weighted connections therebetween; assigning a high importance score in the weighted genetic disease network for at least one gene that is associated with the disease; using a microprocessor, obtaining a disease-specific centrality score for each of the plurality of genes in the weighted genetic disease network; for each of the plurality of genes, determining, using a microprocessor, a difference between the global centrality score for the gene and the disease-specific centrality score for the gene and multiplying the difference by the percent functionality for the gene to produce a product for each gene; using a microprocessor, summing the products of the genes to produce a disease score for the individual; and predicting a risk of developing a disease for the individual based at least in part on the disease score.
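The disease-score computation summarized above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the function and variable names are invented, and the gene names, centrality scores, and percent-functionality values are made-up toy numbers.

```python
# Hypothetical sketch of the disease score described above: for each gene,
# multiply (global centrality - disease-specific centrality) by the gene's
# percent functionality, then sum the products across genes.

def disease_score(genes, percent_functionality,
                  global_centrality, disease_centrality):
    score = 0.0
    for gene in genes:
        diff = global_centrality[gene] - disease_centrality[gene]
        score += diff * percent_functionality[gene]
    return score

# Toy inputs with invented values (not real centrality data):
genes = ["GENE_X", "GENE_Y"]
pf = {"GENE_X": 0.5, "GENE_Y": 1.0}   # percent functionality per gene
gc = {"GENE_X": 0.30, "GENE_Y": 0.20}  # global centrality
dc = {"GENE_X": 0.45, "GENE_Y": 0.25}  # disease-specific centrality
print(disease_score(genes, pf, gc, dc))  # sums to about -0.125
```

The risk prediction described in the embodiment would then compare this score against scores for other individuals or a threshold; that comparison step is not shown here.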
Other aspects of the invention will become apparent by consideration of the detailed description and accompanying drawings.
Features, aspects, and embodiments are described in conjunction with the attached drawings, in which:
The systems and methods described herein comprise an annotation system that includes tools, algorithms, and report functions configured to perform holistic, in-depth, annotations and functional predictions on variants generated from high-throughput sequencing data in multiple arenas of genomic relevance, including coding and regulatory variants as well as the biological processes that connect them. Appropriate annotation of sequence variants is crucial in putting into perspective the overabundance of a particular variant or set of variants in diseased individuals or the belief that a particular variant is likely to have a molecular and/or biological effect. Due to the computational burden of variant annotation, both in terms of required computing power and storage of extremely large reference databases, the system can be implemented on a computational cluster. In certain aspects, access to the system can be through a web portal.
Authority (102) can be interfaced with one or more databases (108) configured to store data, computations, reports, etc. Further, authority (102) can include or can be interfaced with storage and memory (not shown), at least some of which is not transitory, configured to store instructions, algorithms (104), programs, data, etc. Algorithms (104) can be configured to perform the computational analysis, trending, analytics, etc., as described herein, from which various reports (106) and analysis can be generated.
In certain embodiments, system 100 can be implemented as or include a web server that can be accessed via the internet (110) via a terminal (112) in order to perform the computational analysis described herein. Further, the annotation pipeline may utilize, as input, a list or file of variants where each variant can be identified according to information such as its genomic location such as by the chromosome with which the variant is associated, and the start and end positions of the variant, and the alleles of the variant. Variants can include any defined modification to a DNA sequence as compared to one or more other reference DNA sequences. Variants may involve any number of adjacent or spaced apart bases or series of bases, and may include single nucleotide substitutions, insertions, deletions, and block substitutions of nucleotides, structural variants, copy number variants, etc. The system 100 may in some implementations generate the list or file of variants directly from sequence information that is retrieved or may be generated by an automated sequencing machine associated with the system 100. Variants can be provided to system 100 in a number of ways, including via another terminal or system (112, 114).
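A variant record of the kind described above (chromosome, start and end positions, and alleles) might be represented as follows. The tab-delimited layout and field order are assumptions for illustration; the text does not mandate a specific file format.

```python
from dataclasses import dataclass

# Illustrative variant record carrying the fields named in the text:
# chromosome, start position, end position, and the alleles involved.

@dataclass(frozen=True)
class Variant:
    chrom: str
    start: int
    end: int
    ref: str   # reference allele
    alt: str   # alternate (variant) allele

def parse_variant_line(line: str) -> Variant:
    """Parse one tab-delimited line of a hypothetical variant file."""
    chrom, start, end, ref, alt = line.rstrip("\n").split("\t")
    return Variant(chrom, int(start), int(end), ref, alt)

v = parse_variant_line("chr7\t117199644\t117199644\tG\tA")
print(v.chrom, v.start, v.alt)
```

Insertions, deletions, and block substitutions would use the same fields, with the start/end span and allele strings differing in length.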
Annotation pipeline (202) can include at least the annotations illustrated in
The variants within the variant file (302) can then be associated with common polymorphisms deposited in The Single Nucleotide Polymorphism Database (dbSNP) in step 304. The dbSNP is a free public archive for genetic variation within and across different species developed and hosted by the National Center for Biotechnology Information (NCBI) in collaboration with the National Human Genome Research Institute (NHGRI). In addition to single nucleotide polymorphisms (SNPs), dbSNP also includes other types of variants or polymorphisms such as short deletion and insertion polymorphisms (indels/DIPs), multinucleotide polymorphisms (MNPs), heterozygous sequences, and named variants. The dbSNP accepts apparently neutral polymorphisms, polymorphisms corresponding to known phenotypes, and regions of no variation.
As of February 2010, dbSNP included over 184 million submissions representing more than 64 million distinct variants for 55 organisms, including Homo sapiens, Mus musculus, Oryza sativa, and many other species.
In system 100, annotations for dbSNP polymorphisms can be precomputed, e.g., using the systems and methods described herein. Thus, in step (306), in order to significantly speed up the annotation process, the variants in variant file (302) can be compared to precomputed variant annotation information. In some implementations, annotations for previously unobserved variants can be pre-computed. For example, all possible coding variants could be precomputed. The precomputed annotations may also, for example, have been previously computed during the process of annotating a variant file produced from a different individual's genome. These precomputed annotations can include functional element associations and impact predictions, to be described below, as well as clinical annotations for known phenotype associations and drug metabolism effects, including clinical associations listed in The Pharmacogenomics Knowledge Base (PharmGKB), the Genome-Wide Association Studies (GWAS) catalog, and the National Institutes of Health (NIH) Genetic Association Database. In this implementation, the database 108 of
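The fast-path lookup described for step (306) can be sketched as a cache keyed on variant location and alleles. The key layout, field names, and the placeholder dbSNP identifier are assumptions, not details from the specification.

```python
# Hedged sketch of the precomputed-annotation lookup: variants whose
# annotations already exist (e.g. known dbSNP sites) are served from a
# cache; everything else falls through to de novo annotation.

def annotate_all(variants, precomputed, annotate_de_novo):
    results = {}
    for v in variants:
        key = (v["chrom"], v["start"], v["ref"], v["alt"])
        if key in precomputed:
            results[key] = precomputed[key]      # fast path: cache hit
        else:
            results[key] = annotate_de_novo(v)   # slow path: compute fresh
    return results

# Hypothetical cache entry; "rs0000001" is a placeholder identifier.
cache = {("chr1", 100, "A", "G"): {"dbsnp": "rs0000001"}}
variants = [
    {"chrom": "chr1", "start": 100, "ref": "A", "alt": "G"},
    {"chrom": "chr2", "start": 500, "ref": "C", "alt": "T"},
]
out = annotate_all(variants, cache, lambda v: {"dbsnp": None})
print(out[("chr1", 100, "A", "G")])  # served from the cache
```

In a real pipeline the de novo branch would dispatch to the parallel annotation steps described below rather than return a stub.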
In certain embodiments, annotations for variants not found in dbSNP can be computed de novo in an automated parallel computing environment as described below. For the tools described below, alternate tools and or extensions may be applied.
Genomic element mapping and conservation analysis can then occur in step (308). Annotation pipeline (202) can be configured to begin this step by mapping variants to a known gene from, e.g., the University of California, Santa Cruz (UCSC) Genome Browser known gene database. In advantageous implementations, every (or at least substantially every) variant in the variant file 302 is mapped to a gene, even if the location of a particular variant has no apparent connection to any expressed, regulatory, or other known functionally important portion of any known gene. Traditionally, variants are only mapped to a gene if they are found to overlap some functionally significant portion of the gene, often only exons. Although it has long been recognized that variants associated with regulatory or other functionally significant gene segments that are unrelated to the amino acid sequence of the associated transcript(s) can contribute to disease and other phenotypic manifestations, this fact has not been applied significantly to variant annotation, especially not in the whole genome context. Accordingly, the variants of the variant file can be associated with transcripts of the “nearest” gene (or perhaps more than one gene). The “nearest” gene can be defined in a variety of ways. In one implementation, the system may find the closest start or stop codon of a gene in a set of known genes (such as the set of known genes in the UCSC Genome Browser), and map the variant to the gene associated with the nearest stop or start codon. If the variant falls within a known gene, its position within gene elements (e.g. exons, introns, untranslated regions, etc.) can be recorded for future impact predictions depending on the impacted gene element. If the gene has multiple transcripts, impact information can be computed separately for each transcript. 
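The "nearest gene" rule described above can be sketched directly: every variant, including intergenic ones, is assigned to the gene whose start or stop codon lies closest. The gene tuples and coordinates below are invented for illustration.

```python
# Sketch of nearest-gene mapping per the text: a variant maps to the gene
# with the closest start or stop codon, even when the variant overlaps no
# known functional element.

def nearest_gene(pos, genes):
    """genes: list of (name, start_codon_pos, stop_codon_pos) tuples on
    one chromosome. Returns the name of the gene whose start or stop
    codon is nearest to pos."""
    def distance(gene):
        _, start, stop = gene
        return min(abs(pos - start), abs(pos - stop))
    return min(genes, key=distance)[0]

# Invented coordinates for two hypothetical genes:
genes = [("GENE_A", 1000, 5000), ("GENE_B", 20000, 26000)]
print(nearest_gene(12000, genes))  # GENE_A: its stop codon (5000) is closest
```

A full implementation would index genes per chromosome (e.g. a sorted array with binary search) rather than scanning linearly, and would record the within-gene element (exon, intron, UTR) per transcript as the text describes.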
Furthermore, variants falling within an exon between the start and stop codons for the gene can be analyzed for their impact on the amino acid sequence (e.g. synonymous, nonsynonymous, nonsense, frameshift, in-frame, and intercodon). It will be appreciated, however, that a variant may not fall between a start and stop codon of any gene, may be in an intergenic region, and may in fact not be in a transcribed region of the genome at all. Even in this case, though, the system may map the variant to the gene having the closest stop or start codon. This has surprisingly been found quite useful because the functional effect of variants across the whole genome remains highly uncertain, and this allows previously unknown functional associations to be uncovered in a wide variety of applications. In some implementations, variants are mapped to genes based on considerations other than or in addition to small linear distance along the base sequence from the variant. As one example, rather than mapping to the gene that is the closest linear distance from the variant, one could map to a gene with an estimated small 3D distance in the chromosome. Experimental data providing 3D associations between different sections of the DNA sequence when in the chromosome is available, and could be leveraged to map some or all of the variants in a variant file to one or more genes. Any type of information that may indicate that the location of a variant could make the variant relevant to a known gene could be used to map a gene to the variant, whether statistical, regulatory, or related to physical interactions such as protein-protein interactions or DNA proximity interactions. Annotations mapping the variant to multiple different genes could be created, with different measures of gene to variant relevance being used for each mapping.
All variants can also be associated with conservation information in two ways. First, variants can be associated with the PhastCons conserved elements (28way, 44way, 28wayPlacental, 44wayPlacental, and 44wayPrimates). These conserved elements represent potential functional elements preserved across species. Conservation can also be assessed at the specific nucleotide positions impacted by the variant using the phyloP method. The same conservation levels as PhastCons can be used in order to gain higher resolution into the potential functional importance of the specific nucleotide impacted by the variant.
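Associating a variant position with a conserved element is an interval-overlap query. A minimal sketch, assuming the elements for a chromosome are stored as sorted, non-overlapping (start, end) intervals; the coordinates below are invented, not PhastCons data.

```python
import bisect

# Sketch of looking up the conserved element (if any) containing a
# variant position, via binary search over sorted intervals.

def overlapping_element(pos, elements):
    """elements: sorted, non-overlapping list of (start, end) tuples.
    Returns the interval containing pos, or None."""
    i = bisect.bisect_right(elements, (pos, float("inf"))) - 1
    if i >= 0 and elements[i][0] <= pos <= elements[i][1]:
        return elements[i]
    return None

# Invented conserved-element intervals for one chromosome:
conserved = [(100, 250), (400, 480), (900, 1200)]
print(overlapping_element(450, conserved))   # falls inside (400, 480)
print(overlapping_element(300, conserved))   # between elements: None
```

Per-nucleotide phyloP scores would instead be a direct array or tabix-style lookup at the variant position; only the element-level association is shown here.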
Next, gene-based annotations can be produced in step (310) based on the gene(s) nearest to each variant. These annotations are relevant regardless of the specific gene element impacted by the variant. First, if the gene is associated with a phenotype in the Online Mendelian Inheritance in Man (OMIM) database, the OMIM identifier is reported. Other gene-based annotations that can be included are known associations with disease in the Human Gene Mutation Database (HGMD) or the GWAS catalog and known associations with drug metabolism in the Pharmacogenomics Knowledge Base (PharmGKB). Gene function can then be annotated in one of many ways. Certain examples include: (1) PFAM protein domains identified by InterProScan; (2) Gene Ontology Molecular Functions associated with those protein domains, and (3) Gene Ontology Biological Processes associated with the gene. These annotations provide higher level functional and biological categories that can be used to connect disparate variants with one another in set-based analyses.
Once the gene-based annotations are produced in step (310), various annotation processes can take place in parallel on all or some of the variants in order to provide annotation information that can then be synthesized and used by various other modules, e.g., depicted in
For example, in step (320), variants, regardless of their genomic position, can be associated with predicted transcription factor binding sites (TFBS) and scored for their potential impact on transcription factor binding. In certain embodiments, predicted TFBS can be pre-computed by utilizing the human transcription factors listed in the JASPAR database and The Transcription Factor Database (TRANSFAC) transcription-factor binding profiles to scan the human genome using the MOODS (MOtif Occurrence Detection Suite) algorithm. The probability that a site corresponds to a TFBS is calculated by MOODS based on the background distribution of nucleotides in the human genome.
TFBS can be identified based on a relaxed threshold (p-value <0.0002) in conserved, hypersensitive, or promoter regions, and at a more stringent threshold (p-value <0.00001) for other locations in order to capture sites that are more likely to correspond to true functional TFBS. Conserved sites correspond to the phastCons conserved elements; hypersensitive sites correspond to ENCODE DNase hypersensitive sites annotated in the UCSC genome browser, while promoters correspond to regions annotated by TRANSPro, and 2 kb upstream of known gene transcription start sites, identified by SwitchGear Genomics ENCODE tracks. The potential impact of variants on TFBS can be scored in step (330) by calculating the difference between the mutant and wild-type sequence scores using the position weighted matrix method (described in 16(1):16-23, incorporated herein by reference) and shown to identify regulatory variants in Andersen et al. (Andersen M C, Engström P G, Lithwick S, Arenillas D, Eriksson P, Lenhard B, Wasserman W W, Odeberg J. In silico detection of sequence variations modifying transcriptional regulation. PLoS Comput Biol. 2008 January; 4(1):e5, incorporated herein by reference).
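The wild-type-versus-mutant scoring in step (330) can be sketched with a toy position weight matrix. The 4-position log-odds matrix below is invented for illustration and is not a JASPAR or TRANSFAC profile.

```python
# Illustrative PWM scoring of TFBS impact: score both alleles' sequences
# against the motif and report the difference, as described for step (330).

PWM = {  # made-up log-odds scores for a hypothetical 4-bp motif
    0: {"A": 1.2, "C": -0.8, "G": -0.5, "T": -1.0},
    1: {"A": -1.0, "C": 1.1, "G": -0.7, "T": -0.9},
    2: {"A": -0.6, "C": -0.9, "G": 1.3, "T": -1.1},
    3: {"A": -1.2, "C": -0.5, "G": -0.8, "T": 1.0},
}

def pwm_score(seq):
    """Sum of per-position log-odds scores for a sequence of motif length."""
    return sum(PWM[i][base] for i, base in enumerate(seq))

def tfbs_impact(wild_type, mutant):
    """Positive values suggest the variant weakens the predicted site."""
    return pwm_score(wild_type) - pwm_score(mutant)

print(tfbs_impact("ACGT", "ACAT"))  # G->A at motif position 2 lowers the score
```

A production scorer would additionally handle reverse-strand matches and normalize against the genomic background distribution, as MOODS does for site detection.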
Variants falling near exon-intron boundaries can then be evaluated in step (318) for their impact on splicing, for example, using the maximum entropy method of MaxEntScan. Maximum entropy scores can be calculated for the wild-type and mutant sequences independently, and can be compared to predict the variant's impact on splicing in step (328). Changes from a positive wild-type score to a negative mutant score can suggest a splice site disruption. Variants falling within exons are also analyzed in steps (312) and (322) for their impact on exonic splicing enhancers (ESE) and/or exonic splicing silencers (ESS). The number of ESE and ESS sequences created or destroyed based on the hexanucleotides reported as potential exonic splicing regulatory elements (e.g. as shown by Stadler M B, Shomron N, Yeo G W, Schneider A, Xiao X, Burge C B. Inference of splicing regulatory activities by sequence neighborhood analysis. PLoS Genet. 2006 Nov. 24; 2(11):e191, incorporated herein by reference) has been shown to be the most informative for identification of splice-affecting variants by Woolfe et al. (Woolfe A, Mullikin J C, Elnitski L. Genomic features defining exonic variants that modulate splicing. Genome Biol. 2010; 11(2):R20, incorporated herein by reference).
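The comparison rule in step (328) reduces to a sign test on the two scores. A minimal sketch, assuming the maximum entropy scores have already been produced (e.g. by MaxEntScan); the numeric values below are placeholders, not real MaxEntScan output.

```python
# Sketch of the splice-impact call described for step (328): a swing from
# a positive wild-type score to a negative mutant score is flagged as a
# likely splice-site disruption.

def splice_disruption(wt_score: float, mut_score: float) -> bool:
    """True when a positive wild-type score becomes negative."""
    return wt_score > 0 and mut_score < 0

print(splice_disruption(8.4, -2.1))  # positive -> negative: likely disruption
print(splice_disruption(8.4, 6.0))   # still positive: site probably retained
```

The ESE/ESS analysis in steps (312) and (322) would instead count hexamer matches created or destroyed by the variant, which is a separate tally not shown here.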
Variants falling within 3′UTRs can be analyzed in steps (314) and (324) for their impact on microRNA binding in two different manners. First, 3′UTRs can be associated with pre-computed microRNA binding sites using the TargetScan algorithm and database. Variant 3′UTR sequences can be rescanned by TargetScan in order to determine if microRNA binding sites are lost due to the impact of the variation. Second, the binding strength of the microRNA with its wild-type and variant binding site can be calculated by the RNAcofold algorithm to return a ΔΔG score for the change in microRNA binding strength induced by introduction of the variant.
While interpretation of frame shift and nonsense mutations is fairly straightforward, the functional impact of nonsynonymous changes and in-frame indels or multi-nucleotide substitutions is highly variable. Currently, annotation pipeline (202) can be configured to use the PolyPhen-2 algorithm, which performs favorably in comparison to other available algorithms, for prioritization of nonsynonymous single nucleotide substitutions in steps (316) and (326). PolyPhen-2 utilizes a combination of sequence-based and structural predictors to predict the probability that an amino acid substitution is damaging, and classifies variants as benign, possibly damaging, or probably damaging. These outputs can be reported by annotation pipeline (202) along with the probability score of the variant being deleterious.
A major drawback to predictors such as PolyPhen-2 is the inability to address more complex amino acid substitutions. To address this issue, annotation pipeline (202) can also be configured to report the Log R.E-value score of variants in step (326), which is the log ratio of the E-value of the HMMER match of PFAM protein motifs between the variant and wild-type amino acid sequences. This score has been shown to be capable of accurately identifying known deleterious mutations. More importantly, this score measures the fit of a full protein sequence to a PFAM motif, therefore multinucleotide substitutions are capable of being scored by this approach. As phased genomes gain in prevalence, phased nonsynonymous variants can be analyzed for their combined impact on protein function.
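The Log R.E-value score reduces to a simple ratio computation once the two E-values are in hand; the E-values themselves would come from HMMER searches of the wild-type and variant protein sequences against PFAM models. The logarithm base is not specified in the text, so base 10 is assumed in this sketch:

```python
import math

def log_r_evalue(wildtype_evalue: float, variant_evalue: float) -> float:
    # Log ratio of the E-value of the HMMER match of a PFAM motif
    # between the variant and wild-type amino acid sequences.  A larger
    # (weaker) variant E-value yields a positive score, indicating a
    # poorer fit of the variant protein to the motif.
    return math.log10(variant_evalue / wildtype_evalue)
```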
Annotation pipeline (202) output can feed directly into statistical and bioinformatic strategies for variant analysis. Annotation and prioritization of variants prior to statistical analysis can be performed and can be crucial to the success of sequence based disease association studies. Annotation and prioritization is also directly applicable to the identification of causal variants in post-GWAS/linkage sequencing studies, forward genetic screens, carrier testing, or even the identification of causal variants in clinical sequencing applications such as unknown disease diagnosis and cancer driver identification.
This multi-tiered approach to variant annotation (
Annotation execution in pipeline (202) proceeds in highly parallel fashion and includes classes of variant annotations that are entirely independent of one another, serially dependent annotations whose execution are dependent upon the completion and status of prior annotations, and synthetic annotations that generate new information through the combination of multiple annotation outputs. A description of the annotation processes used in pipeline (202) in terms of classes of information is presented first, followed by a description of the computational processes producing those results.
The physical mapping information provides the most basic level of information regarding the location of the variant and its relationship with basic elements. The chromosome, physical start position, physical end position, variant type (e.g. snp, insertion, deletion, substitution, structural variants, copy number variants, etc.) and reference and alternate alleles are supplied by one of a few standard file formats, including VCF (variant call format) (see: www.1000genomes.org), Complete Genomics native file format, or basic tab delimited BED-like file format. Additionally, haplotype information, used to track whether variants occur in cis or trans relative to one another, can be provided, and that information will be preserved. In certain embodiments, system 100 can take into account multiple variants on the same haplotype.
The above location information is utilized to execute a basic mapping step which determines what the nearest gene/transcripts are, what type of gene (coding vs. non-coding) is nearby, the location of the variant relative to the gene (exonic, intronic, upstream, downstream) and the distance from the gene-body. Determinations are based upon physical distances within the genome. Either known or predicted genes can be utilized for this step; in one specific implementation the UCSC Known Genes database is used, which is a compilation of information from RefSeq, GenBank, Consensus CDS project, UniProt, and other sources of evidence for genes and whether they are coding vs. non-coding genes. Custom conversion scripts included in pipeline (202) can transform the various variant data files into a common input format.
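The basic mapping step can be sketched as an interval comparison against gene coordinates. This is an illustrative single-chromosome version with hypothetical field names; strand orientation and exon/intron resolution are omitted:

```python
def map_to_nearest_gene(pos, genes):
    # genes: list of dicts with "name", "start", "end" (0-based,
    # end-exclusive) on one chromosome.  Returns (gene, location,
    # distance); "upstream"/"downstream" here ignore strand.
    best = None
    for g in genes:
        if g["start"] <= pos < g["end"]:
            return (g["name"], "genic", 0)
        dist = min(abs(pos - g["start"]), abs(pos - (g["end"] - 1)))
        loc = "upstream" if pos < g["start"] else "downstream"
        if best is None or dist < best[2]:
            best = (g["name"], loc, dist)
    return best
```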
Non-limiting examples of alternative reference genomes include a version of the human genome or different species. Non-limiting examples of alternative gene databases include ENSEMBL and predicted genes, such as Genscan, for mapping. Alternate variant input formats include copy number variation, structural variation, and upstream integration of variant calling from sequence data or alignment files. Non-limiting examples of other gene-types or more fine-grain determination of gene-types include microRNA, snoRNA, piwiRNA, pseudogenes, physical mapping to chromosome bands, STS markers, distance to recombination hotspots, other physical landmarks in the genome, mapping to predicted exons or predicted open reading frames, and ancestral nucleotide status (e.g. Neanderthal, chimpanzee, etc.).
If a variant is mapped to an exon of a protein coding gene, its impact upon the coding sequence is inferred based upon the standard rules of the genetic code, and prediction of impact is performed based upon a series of functional impact prediction algorithms. First, the position of the variant within the protein sequence and the distance of the variant relative to the N-terminal and C-terminal ends of the protein are determined and used in determining the impact of truncating variants. Next, the basic coding impact, e.g. synonymous, nonsynonymous, frameshift, etc., is determined, as well as the original and variant amino acids based on the standard genetic code. Then, dependent upon the status of the previous annotations, a series of functional impact predictions are performed. For nonsynonymous variants, the predicted impact of the amino acid substitution on protein function is determined based upon the SIFT, Polyphen-2, and Condel algorithms. For coding variants, including in-frame insertions and deletions, a Log R.E-value score of variants is calculated, which is the log ratio of the E-value of the HMMER match of PFAM protein motifs between the variant and original amino acid sequences. This score has been shown to be capable of accurately identifying known deleterious mutations. One example of a suggested threshold is a Log R.E-value greater than 0.7 for predicted damaging variants. More importantly, this score measures the fit of a full protein sequence to a PFAM motif; therefore, multinucleotide substitutions or separate substitutions on the same haplotype are capable of being scored by this approach. As phased genomes gain in prevalence, phased nonsynonymous variants can be analyzed for their combined impact on protein function.
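The determination of basic coding impact from the standard genetic code can be sketched as a codon lookup. The table below is a small illustrative excerpt only; a real implementation would use the complete 64-codon table:

```python
# Small excerpt of the standard genetic code ("*" = stop codon);
# illustrative only, not the full table.
CODON_TABLE = {"ATG": "M", "AAA": "K", "AAG": "K", "GAA": "E", "TAA": "*"}

def basic_coding_impact(ref_codon: str, alt_codon: str) -> str:
    # Classify a single-codon substitution; frameshifts, which arise
    # from length-changing variants, are handled separately.
    ref_aa, alt_aa = CODON_TABLE[ref_codon], CODON_TABLE[alt_codon]
    if alt_aa == ref_aa:
        return "synonymous"
    if alt_aa == "*":
        return "nonsense"
    return "nonsynonymous"
```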
For truncating variants (nonsense and frameshift), the percentage of the conserved upstream and downstream coding sequence removed by the truncation (conserved elements), taking into account alternate start sites, is determined and utilized to predict whether the truncation is damaging or not. The threshold for prediction of a damaging truncation is removal of >4% of the conserved portion of the protein, a threshold empirically defined as having the greatest accuracy.
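The truncation rule can be sketched as follows, assuming conserved elements have already been mapped into protein coordinates; alternate start sites are ignored in this illustration:

```python
def conserved_fraction_removed(conserved_blocks, truncation_pos):
    # conserved_blocks: (start, end) intervals of conserved coding
    # sequence in protein coordinates (end-exclusive).  Returns the
    # fraction of conserved sequence at or beyond the truncation point.
    total = sum(e - s for s, e in conserved_blocks)
    removed = sum(max(0, e - max(s, truncation_pos)) for s, e in conserved_blocks)
    return removed / total if total else 0.0

def truncation_damaging(fraction_removed: float) -> bool:
    # Empirically defined threshold from the text: removal of >4% of
    # the conserved portion of the protein predicts a damaging truncation.
    return fraction_removed > 0.04
```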
Also contemplated is the generation or destruction of post-translational modification sites; inclusion of additional predictive algorithms, such as SNPs3D and MutationTaster; and the basic physiochemical properties of changed amino acids, such as hydrophobicity, polarity, or side-chain volume.
Variants falling near exon-intron boundaries are evaluated for their impact on splicing in a couple of ways. One method is a simple determination of whether or not the variant impacts the invariant splice donor and acceptor sequences—returning an annotation that a splice donor or acceptor is damaged. A second method is a prediction of the impact of variants near gene splice junctions based on the maximum entropy method of maxENTscan. Maximum entropy scores are calculated for the original and variant sequence independently, and considered for their impact on splicing. Changes from a positive original score to a negative variant score suggest a splice site disruption. Variants falling within exons are also analyzed for their impact on exonic splicing enhancers and/or silencers (ESE/ESS). The number of ESE and ESS sequences created or destroyed is based on the hexanucleotides reported as potential exonic splicing regulatory elements and shown to be the most informative for identification of splice-affecting variants.
Also contemplated are splice site generation, noncanonical splice sites, intronic splicing enhancers/silencers, and splicing cofactor binding sites. Non-limiting alternative splicing prediction tools include NNSplice and ESE-Finder.
Regional information refers to sequence-based, cross-species inferred and structural characteristics of the specific region of the genome containing the genetic variant. Two primary annotations are the repeat structure of the genomic region and its conservation across species. Segmental duplications, duplicated regions of the genome which increase the likelihood of mismapped reads and false variant calls, are annotated. Variants are also associated with conservation information in two ways. First, variants are associated with conserved elements from the phastCons conserved elements at various depths of conservation. These conserved elements represent potential functional elements preserved across species. Conservation is also assessed at the specific nucleotide positions impacted by the variant by the phyloP method.
Also included are mappability, recombination rate, other conservation levels, simple repeats, mobile elements, complex repeats such as satellite sequences, nuclear mitochondrial sequence, and horizontally transferred sequence from viruses and other organisms.
Population-based information refers to known rates and identifiers in populations already sequenced or genotyped. These variants are generally associated with dbSNP identifiers; however, the system 100 platform also dynamically tracks and updates the allele frequency of variants processed through the system and derived from reference panels. System 100 reports the population allele frequencies for HapMap populations as well as allele frequencies in available reference populations, including the 1,000 Genomes Project, publicly available genomes provided by Complete Genomics, and Wellderly samples.
Also included are positive/negative/purifying selection rates.
All variants, regardless of their genomic position, are associated with predicted transcription factor binding sites (TFBS) and scored for their potential impact on transcription factor binding. Predicted TFBS are pre-computed by utilizing the human transcription factors listed in the JASPAR and TRANSFAC transcription-factor binding profile to scan the human genome using the MOODS algorithm. The probability that a site corresponds to a TFBS is calculated by MOODS based on the background distribution of nucleotides in the human genome.
TFBS are called at a relaxed threshold (p-value <1·10−6) in conserved, hypersensitive, or promoter regions, and at a more stringent threshold (p-value <1·10−8) for other locations in order to capture sites that are more likely to correspond to true functional TFBS. Conserved and hypersensitive sites correspond to the phastCons conserved elements and ENCODE DNase hypersensitive sites annotated in the UCSC genome browser, while promoters correspond to 2 kb upstream of known gene transcription start sites, promoter regions annotated by TRANSPro, and transcription start sites identified by SwitchGear Genomics ENCODE tracks.
The potential impact of variants on TFBS is scored by calculating the difference between the variant and original sequence scores using a position weight matrix method shown to identify regulatory variants. A suggested threshold for damaged TFBS is either deleted TFBS or those with a delta score of less than −7.0. Variants known to influence expression levels, as determined by eQTL analyses, are also annotated from the NCBI GTEx database.
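The position weight matrix scoring and delta threshold can be sketched as follows. The matrix format and weights here are illustrative; real matrices would be derived from JASPAR/TRANSFAC profiles:

```python
def pwm_score(seq, pwm):
    # pwm: per-position dicts mapping base -> log-odds weight.
    return sum(pwm[i][base] for i, base in enumerate(seq))

def tfbs_delta(original_seq, variant_seq, pwm):
    # Difference between variant and original binding-site scores.
    return pwm_score(variant_seq, pwm) - pwm_score(original_seq, pwm)

def tfbs_damaged(delta, site_deleted=False):
    # Suggested rule from the text: a TFBS is considered damaged if it
    # is deleted outright or its delta score is less than -7.0.
    return site_deleted or delta < -7.0
```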
Variants falling within 3′Untranslated Regions (3′UTRs) are analyzed for their impact on microRNA binding in two different ways. First, 3′UTRs are associated with pre-computed microRNA binding sites using the TargetScan algorithm and database. Variant 3′UTR sequences are rescanned by TargetScan in order to determine if microRNA binding sites are lost due to the impact of the variation. Directly impacted microRNA binding sites are listed as well, and the binding strength of the microRNA with its original and variant binding site is calculated by the RNAcofold algorithm to return a ΔΔG score for the change in microRNA binding strength induced by introduction of the variant. For microRNA transcripts (rather than their binding sites) bearing variants, a change in folding and binding energy, based on annealing with the consensus binding site, is also calculated by the RNAcofold algorithm. Moreover, a list of predicted lost and gained targets due to the new microRNA sequence is determined using the TargetScan algorithm to scan the novel microRNA sequence against known transcript 3′UTRs.
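The ΔΔG computation reduces to a difference of duplex free energies; the per-duplex ΔG values would come from RNAcofold and are assumed given in this sketch:

```python
def delta_delta_g(original_dg: float, variant_dg: float) -> float:
    # Change in microRNA/target binding free energy induced by the
    # variant.  With the usual sign convention (more negative dG =
    # stronger binding), a positive result indicates weakened binding.
    return variant_dg - original_dg
```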
Also included are the location of enhancers, silencers, DNase hypersensitivity sites, known TFBS based upon experimental data, long distance genome interaction sites, chromatin modification sites, ENCODE data, and mRNA-based predictions based on changes in untranslated region motifs, changes in RNA folding, translation efficiency due to synonymous mutations, or alternatively spliced exons.
Clinical annotations include both return of information contained within clinical variant databases as well as predicted clinical influences based upon the synthesis of gene-phenotype relationships and gene-variant impact predictions. On a variant by variant basis, system 100 determines whether the specific reported variant is contained within the Human Gene Mutation Database (HGMD), PharmGKB, GET-Evidence, and the COSMIC Database. HGMD cross-reference returns the disease associated with the genetic variant, PharmGKB cross-reference returns the PharmGKB entry name and the drug whose metabolism is perturbed by the variant, GET-Evidence cross-reference returns the inheritance, penetrance, severity, and treatability of the variant and disease if it is known, and COSMIC Database cross-reference returns the number of cancer samples in the COSMIC database bearing that specific variant.
If the nearest gene, rather than the specific variant, is a gene known to be clinically relevant, its association with disease as annotated by either OMIM, HGMD, or the NCBI Genetic Association Database is returned. Finally, if the variant falls nearest to a gene associated with cancer, that information is returned via cross-reference to the COSMIC database, Memorial Sloan Kettering Cancer Center, Atlas Oncology (http://atlasgeneticsoncology.org), the Sanger Cancer Gene Census, or network residence near known cancer genes. Drugs known to target the gene are also returned from DrugBank.
Finally, two different modified American College of Medical Genetics (ACMG) scores are returned: one based upon variants, or variants in genes, known to be causally associated with a phenotype (Clinical), and a second score which includes genes known to carry genetic variants that are statistically associated risk factors for the development of a disease (Research). The ACMG scoring guidelines with categories 1-6 are modified to include a 2* and 4* category to provide more granularity to variant stratification. Variants of categories 1-2* are of most clinical relevance, while category 6 contains more common risk factors for disease. ACMG category 1 variants are rare (<1% allele frequency) variants with good evidence for their association with disease. ACMG category 2 includes more common variants (1-5% allele frequency) associated with disease, as well as novel variants in known disease genes predicted to impact gene function by either removing a splice site donor or acceptor, producing an amino acid substitution predicted to functionally impact the protein, or truncating the protein in a damaging manner. Category 2* includes truncating variants not predicted to damage protein function. Variants less confidently predicted to be associated with disease, either through neutral coding changes or impact upon regulatory function, are placed in categories 3, 4, and 4*, with predicted neutral variants and known phenotype associated variants assigned to categories 5 and 6, respectively.
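The modified stratification above can be sketched as a rule cascade. This simplified version covers only categories 1, 2, and 2*, collapsing the remaining tiers, and its inputs are hypothetical precomputed annotations rather than the full scoring logic:

```python
def modified_acmg_category(allele_freq, known_disease_variant,
                           in_known_disease_gene, predicted_impact):
    # predicted_impact: "splice_loss", "damaging_missense",
    # "damaging_truncation", "benign_truncation", or "other".
    if known_disease_variant and allele_freq < 0.01:
        return "1"          # rare variant with good disease evidence
    if known_disease_variant and allele_freq <= 0.05:
        return "2"          # more common (1-5%) disease-associated variant
    if in_known_disease_gene:
        if predicted_impact in ("splice_loss", "damaging_missense",
                                "damaging_truncation"):
            return "2"      # novel predicted-damaging variant in disease gene
        if predicted_impact == "benign_truncation":
            return "2*"     # truncating but not predicted damaging
    return "3-6"            # less confident / neutral / risk-factor tiers
```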
Also included is the use of mouse knockout phenotypes, model organism phenotypes, and predicted phenotypes.
This category of annotation includes information that can link genes and variants to one another based upon biological, molecular, and/or functional relationships. These relationships are useful for pathway or process based collapsed association methods or for inferring the phenotypic influence of particular variants. In one specific implementation, Reactome pathways and Gene Ontology biological processes of the nearest gene are utilized to provide biological relationships. Disease Ontology annotations are utilized to provide phenotypic relationships. Protein domain information and molecular functions (as annotated by Gene Ontology) utilized by the nearest gene are used to provide molecular and functional relationships.
Also included is the use of tissue expression levels, tissue specific expression status, other pathway and network resources, and co-expression networks.
It is one drawback to currently available annotation systems that they are limited in the annotations they compute, retrieve, and provide for a given variant. Different annotation information is stored in a variety of different databases, often maintained by particular institutions with specific interests. The system 100 aims to consolidate a wider range of annotation information than has generally been provided. Accordingly, in some implementations, each variant processed by the system is annotated with at least 50 predefined annotation categories, each of which may be one particular category of information within one of the eight annotation classes described above. Generally, although not necessarily, out of these 50 annotation categories, at least 10 will involve accessing a different separately maintained database to search for annotation information. Although the databases are separately maintained, most of the institutions that maintain these databases allow them to be mirrored at system 100 for faster direct access. In some implementations, each variant processed by the system is annotated with at least 55, 60, 65, 70, or 75 different annotation categories. In some embodiments found to be especially suitable, over 80 annotation categories are provided. Including more annotations is surprisingly valuable because it is often not known what kind of information may be desirable in future data mining projects through a database of annotation data. It will also be appreciated that even though at least 50, at least 80, or more annotation categories are provided for each variant and may be stored and output by the system in an annotation file, some number of the category entries for a given variant will contain no information because the variant will not appear in all databases, will not necessarily be in an exon, etc.
The computational processes underlying the system 100 output do not necessarily follow the structure given above in the annotation classes. Rather, annotation execution proceeds in highly parallel fashion on a high-performance computational cluster and includes classes of variant annotations that are entirely independent of one another, serially dependent annotations whose execution are dependent upon the completion and status of prior annotations, and synthetic annotations that generate new information through the combination of multiple annotation outputs.
The input requirements are a list of variants including the chromosome, start position, end position, and reference and variant alleles—transferred via a network connection for automated processing or provided via a local file.
One of a few standard file formats, including VCF (variant call format), Complete Genomics native file format, or basic tab delimited BED-like file format is accepted, but then converted to the following tab-delimited structure: (1) Haplotype number (can be a placeholder); (2) Chromosome (with syntax: “chr1”, “chr22”, “chrX”, etc.); (3) Start position (0-based coordinates); (4) End position (0-based coordinates); (5) Variant type (“snp”, “ins”, “del”, “delins”); (6) Reference allele(s); (7) Observed allele(s); and (8) Notes.
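The conversion target can be sketched as a single formatting helper; `to_internal_record` is a hypothetical name used for illustration:

```python
def to_internal_record(haplotype, chrom, start, end, vtype, ref, obs, notes=""):
    # Emit one record in the tab-delimited structure listed above:
    # haplotype, chromosome ("chr1" syntax), 0-based start and end
    # positions, variant type, reference allele(s), observed
    # allele(s), and notes.
    assert chrom.startswith("chr")
    assert vtype in ("snp", "ins", "del", "delins")
    return "\t".join([str(haplotype), chrom, str(start), str(end),
                      vtype, ref, obs, notes])
```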
A pre-annotated database (
Previously annotated variants are extracted from this database, and novel variant annotations are stored within the database upon completion in order to speed up the annotation of subsequent genomes. Annotations for variants not found in the database are computed de novo in an automated parallel computing environment as described below. Subsequent retrieval of pre-annotated variants contained within variant files submitted for annotation is based purely upon physical coordinate based queries against the pre-annotation database.
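The extract-or-compute flow above can be sketched as a coordinate-keyed cache; `compute_de_novo` stands in for the parallel annotation pipeline described below:

```python
def annotate_with_cache(variants, cache, compute_de_novo):
    # Look up each variant by physical coordinates and alleles; reuse
    # stored annotations when present, otherwise compute de novo and
    # store the result to speed up annotation of subsequent genomes.
    annotations = []
    for v in variants:
        key = (v["chrom"], v["start"], v["end"], v["ref"], v["alt"])
        if key not in cache:
            cache[key] = compute_de_novo(v)
        annotations.append(cache[key])
    return annotations
```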
Transcript-based annotations (
Annotation depends upon a database of the physical location of genes and gene components and a measurement of the physical distance or occupancy of a variant relative to these gene components. In more complex instances, a reference genome sequence is utilized to extract the genomic sequence relevant to the annotation in question; the sequence is processed based upon the reported coding frame and then trimmed based upon proximity to gene components or converted to other biological sequences by utilizing the standard genetic code to produce input formats compatible with downstream tools. Based upon this information a series of transcript based annotations are produced.
The simplest case is annotations dependent upon the identity of the nearest gene. Prior knowledge regarding the nearest gene/transcript is produced from knowledge bases, including the type of gene, the relationship of the gene to phenotypes and biological processes as determined by clinical phenotype databases (OMIM, HGMD, COSMIC, Disease Ontology, or other cancer gene databases), the relationship of the gene to other compounds (e.g. DrugBank), and the relationship of the nearest gene to biological pathways, networks, or molecular function (e.g. Gene Ontology, Reactome Pathways, etc.). These cross-references are based upon gene synonym tables, for example those provided by the UCSC Genome Browser.
More complex cases require utilization of the position of the variant relative to the transcript body as well as the sequence of the surrounding nucleotide context. In these cases, given the position of the variant within the transcript body, a series of annotations and predictions that are dependent upon the location of a variant relative to functional transcript elements and calculation of results based upon the specific surrounding nucleotide sequence are produced—these include inferred and predicted influences upon splicing, and splicing machinery, as well as inferred and predicted influences upon microRNA binding sites within the untranslated regions.
Finally, given the gene type and position within the transcript, a series of annotations are produced that depend upon defining the protein produced by the gene and the changes to the amino acid sequence based upon the standard genetic code. These annotations include the position of the variant within the protein sequence and the distance of the variant from the coding start and stop sites, the affected amino acids, and predictions which utilize the protein sequence and perturbed amino acids including functional predictions (SIFT, Polyphen-2, Condel), and protein domain matching and scoring (e.g. HMMER scanning of the protein sequence against protein family models).
The functional element-based annotation class (
More complex annotations rely upon the residence of a variant within a functional bin along with calculations that depend upon the identity of the original and variant sequence. These annotations include impact scoring for variants within transcription factor binding sites and determination of the influence of variants upon microRNA genes. Thus, for functional element-based annotations, the locations of functional elements in the genome are either defined a priori in knowledge bases and mapped to the reference genome based upon physical location, or defined by searching for patterns within the reference genome that define functional elements. The bins defined by physical location coordinates are utilized to determine the residence of genetic variants within a functional element, and the sequence of those bins is extracted from the reference genome, modified with the variant information, and scored using the various predictive algorithms described above.
A variant-based annotation (
Most clinical annotations, and other prior knowledge derived from variant-centric databases (such as the reported associations of variants with particular molecular or physiological phenotypes in HGMD, GET-Evidence, or eQTL databases), are variant-based. Cross-referencing is based upon conversion of data within external knowledge bases to physical coordinates in the genome (as defined in Data Inputs), execution of the system 100 annotation pipeline (202), and deposition of the resulting annotations in the pre-annotated database.
Certain annotations depend upon the synthesis of multiple annotations, termed synthetic annotations (
Hybrid-synthetic annotations involve the combination and merging of any subset of the previously described annotations in order to produce a novel layer of annotation. One non-limiting example is the prediction of the impact of truncating variants, which relies upon the definition of a truncating variant, its position within the coding sequence, and the amount of flanking conserved sequence removed by the variant. In that example, production of a synthetic annotation requires information from the annotation process as well as reference to the various knowledge bases maintained by system 100.
There also exist annotations that are pure synthesis of variant based annotations completed in prior steps. One non-limiting example of this sort of annotation is ACMG scoring, which relies upon the identity of the nearest gene, the association of the nearest gene with disease phenotypes, and the predicted impact of the variant upon the gene based upon the above described coding, splicing, and regulatory prediction tools. These annotations rely upon the combination of previous annotation outputs through logical operators and rules to define a novel annotation result.
In summary, after a population based sequencing study is carried out and variants are identified, the variants can be passed through annotation pipeline (202), predicted functional variants filtered based on the prediction algorithms implemented in the pipeline, prioritized variant sets generated based on the linking elements generated by the pipeline, and finally statistical tests can be performed on these variant sets in order to identify variant sets associated with disease (226).
These results can then be used by the various modules illustrated in
Large population based sequencing studies are underway in order to identify mutations that underlie disease predisposition. If one accepts the rare variant hypothesis of disease predisposition, one would expect that rare variants predisposing to disease will be associated with high relative risk, but, because of their low frequency, simple univariate analyses in which each variant is tested for association with disease will require extremely large sample sizes to achieve sufficient power. This problem is compounded tremendously if disease predisposition results from the interaction and combination of extremely rare variants segregating and encountering one another throughout the population. Variant collapsing strategies have been shown to be a powerful approach to rare variant analysis; however, collapsing methods are extremely sensitive to the inclusion of noncausal variants within collapsed sets. The key to unlocking the power of variant collapsing methods, and to facilitating sequence based disease association studies in reasonable study sizes and at reasonable cost, is a logical approach to forming collapsed sets. In fact, regardless of the allelic frequency and penetrance landscape underlying common disease predisposing variants, set based analyses can expose what simple linkage or association studies have failed to reveal.
Another application of the Population Sequencing Module 24 is in the field of pharmacogenomics. It is known that the safety and efficacy of drug related therapies can be associated with the genetic makeup of the individual patients. This fact can be especially significant in clinical trials of new drugs and other therapies, but correlating variants in clinical trial subjects with safety and efficacy data gathered for the drug or therapy during the clinical trials has proven difficult. Variant collapsing methods are important tools for discovering relevant associations in this context, and these variant collapsing methods may depend on functional annotations to provide a logical approach to selecting the variants for collapsed sets. The annotations and algorithms described herein are especially valuable in this application.
In genomic association studies related to disease pre-disposition or drug trials, for example, a group of subjects exhibiting a particular phenotype (typically referred to as the case subjects) and a group of subjects not exhibiting the phenotype (typically referred to as the control subjects) are sequenced, and the nature of the variant sites found in the genomes or portions thereof is compared to find a statistical correlation between the genomic variants of one group relative to the other. For the present discussion, a “variant site” is a location on the genome where at least one subject of the association study (case or control) has a genetic sequence different from a reference genetic sequence. A variety of well-known techniques have been used to examine the statistical significance of different observed minor allele frequencies (MAF) in a case group as compared to a control group for a given variant site or set of variant sites. When statistically significant, such a difference indicates that the variant(s) contribute to the phenotype.
Some techniques have been developed to address a problem with detecting rare variant correlations using the techniques applied to common variant correlations. This problem is that if an individual rare variant (generally considered less than 5% MAF, sometimes less than 1% MAF) is in fact physiologically associated with a phenotype of interest, extremely large numbers of subjects, typically thousands, would be required as cases and controls in an association study to detect the association with reasonable confidence. In one method to help address this issue with rare variant analysis, rather than testing rare variants individually, statistical power for association studies is improved by using multivariate techniques such as Hotelling's T2 test to detect an association between a combined set of rare variant sites and a phenotype.
Another method that has been utilized to improve statistical power is known as variant collapsing. When variant collapsing techniques are used, a set of variant sites will be collapsed effectively into a single variant site for statistical association analysis. In the simplest method of variant collapsing, a subject of the study is classified as “variant positive” if they have a minor allele at any of the collapsed variant sites. Subjects that have no minor allele at any of the collapsed variant sites are classified as “variant negative.” This variant site collapsing transforms a series of individual tests for phenotype association of each minor allele separately into a single test that looks for a statistically significant excess of variant positive subjects in the case group as compared to the control group. The term “collapsing” variant sites is thus used herein to mean converting subject status measured at a plurality of variant sites into at least a smaller plurality of subject classification variables, usually a single classification variable, for statistical association analysis. This technique increases the power of the statistical tests used to detect phenotype associations for rare variants because if more than one rare variant in the collapsed set has a physiological association with the phenotype, their effects on the statistical evaluation will be additive, increasing the detectable association between variant bearing subjects and case subjects.
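The simplest collapsing scheme described above can be sketched in a few lines of code. This is an illustrative sketch only; the subject identifiers, variant site names, and genotype encoding (minor allele counts of 0, 1, or 2) are invented for the example.

```python
# Sketch of the simplest variant-collapsing scheme: a subject is "variant
# positive" if they carry at least one minor allele at any collapsed site.
# `genotypes[subject][site]` counts minor alleles (0, 1, or 2); the subject
# and site names here are hypothetical.

def collapse(genotypes, sites):
    """Map each subject to 'variant positive' or 'variant negative'."""
    status = {}
    for subject, calls in genotypes.items():
        positive = any(calls.get(site, 0) > 0 for site in sites)
        status[subject] = "variant positive" if positive else "variant negative"
    return status

genotypes = {
    "S1": {"rsA": 1, "rsB": 0},
    "S2": {"rsA": 0, "rsB": 0},
    "S3": {"rsB": 2},
}
print(collapse(genotypes, ["rsA", "rsB"]))
# S1 and S3 are variant positive; S2 is variant negative
```

The many per-site genotype variables are thus reduced to a single binary classification variable per subject, suitable for a univariate association test.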
It will be appreciated that the power of these statistical tests can depend strongly on the appropriate selection of the variant sites placed in a collapsed set. Ideally, the collapsed set would include all variant sites having a functional contribution to the phenotype, and would include no variant sites that do not contribute to the phenotype. Any misclassification of variant sites, e.g., if the set includes non-contributing variant sites or excludes contributing variant sites, reduces the power of the test.
It will also be appreciated that there is often little or no prior existing information available to reliably classify variant sites as contributing or not contributing to the phenotype. Researchers performing such a study generally select some region of interest of the genome for variant site collapsing and statistical analysis. The region of interest may be a gene, a set of genes associated with a biological pathway, a defined chromosomal region, or the like that they suspect may be associated with the phenotype. This requires significant research resources, cannot be automated, and is difficult to use when the basis for the phenotype may arise from a variety of physiological sources. This remains a significant hurdle in genetic association studies. To help resolve these difficulties, in some novel variant collapsing implementations set forth herein, a database of gene and/or protein connections can be utilized to define sets of variant sites for collapsing in an automated or semi-automated manner.
At block 514, a genomic region of interest is selected based at least in part on the connections in the database. If the stringDB database is being used as the connection database, for example, this process may involve three fundamental steps. First, a group of proteins are selected based at least in part on their connection to one another as defined in the database. More details on particular advantageous implementations of this step are provided below. Second, the proteins of this group are mapped to their associated coding genes (using, for example, the Ensembl database). Third, a genomic region defined by these associated coding genes may be selected as the region of interest for variant collapsing. It will be appreciated that the genomic region of interest may have non-contiguous segments, and need not be limited to the bases between the start and stop codons of the genes mapped from the selected group of proteins. As will be seen further below, a genomic region of interest need not even be a “region” per se, but means any definition of genomic content that defines the variant sites considered for membership in a collapsed set, which content is at least partially determined by the connections of the database.
For the protein group selection, the connections of the database are evaluated to define one or more sets of proteins, extracting a subgraph of nodes from the entire database. For example, “cliques” of proteins may be identified in the database. As used herein, a clique of a graph is a set of nodes of the graph wherein every pair of nodes in the set is connected by an edge. A “maximal clique” of a graph is a clique that is not entirely subsumed as a portion of a larger clique.
For any undirected graph made up of nodes and connections, algorithms such as the Bron-Kerbosch algorithm have been developed that can identify every maximal clique in the graph. To find every clique in the graph, maximal or not, each maximal clique identified from, for example, the Bron-Kerbosch algorithm, can be similarly processed as a separate graph to find the maximal cliques within each one, and so on, down to maximal cliques with only two vertices, although this method will produce duplicates that should be discarded. When processing the information provided in the database to identify interconnected protein sets, various different sets of connected proteins may be defined. As examples, each maximal clique present in the protein database can form a set, each maximal clique of the protein database larger than a threshold number of nodes can form a set, each clique (maximal or not) of the protein database larger than a threshold number of nodes can form a set, etc.
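The clique enumeration described above can be illustrated with a minimal Bron-Kerbosch implementation (the basic form, without pivoting). The protein names and edges below are invented for the example; a real run would use the connection database's interaction graph.

```python
# Minimal Bron-Kerbosch (without pivoting) over an undirected graph given
# as {node: set_of_neighbors}. Returns every maximal clique.

def maximal_cliques(adj):
    cliques = []

    def expand(r, p, x):
        # r: clique under construction; p: candidates; x: already-processed nodes
        if not p and not x:
            cliques.append(frozenset(r))
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    expand(set(), set(adj), set())
    return cliques

# Hypothetical protein interaction edges
edges = [("P1", "P2"), ("P1", "P3"), ("P2", "P3"), ("P3", "P4")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

print(sorted(sorted(c) for c in maximal_cliques(adj)))
# maximal cliques: {P1, P2, P3} and {P3, P4}
```

As the text notes, each maximal clique can then be recursively processed as its own subgraph to enumerate smaller cliques, with duplicates discarded.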
Although it is not necessary to rely on cliques to define the protein sets, in implementations where at least some sets are defined by cliques, these sets will each be formed by a very tightly interconnected set of proteins. The inventors have recognized that such a set is a promising candidate, based only on its multiple interconnections, for containing two or more variant sites that contribute to the presence of a phenotype among a population of subjects. These situations are where variant collapsing methods are especially able to detect correlations with higher confidence than single or multivariate analyses, and so such cliques contain especially promising variant sites for forming collapsed sets.
Defining sets of proteins in this manner has the advantage that the process can be automated, requiring little or no additional researcher input or knowledge regarding the biological significance of the connections or the biological significance of the proteins of the generated sets to the phenotype of interest in the association study being performed. This can be especially advantageous in association studies related to clinical trials and the development of genetic companion diagnostics. Although there may be currently known information about the biological target and mode of action of a given pharmaceutical, patient response differences and adverse reactions can be due to metabolism differences, interaction of the pharmaceutical with unknown targets, or other mechanisms that are often not well understood, especially at the clinical trial stage of drug development. The presently described method of generating sets of variant sites for collapsing using a pre-populated and extensive database, and without requiring additional insight or research into possible biological causes of the phenotype of interest, can be very beneficial.
Furthermore, even though essentially complete automation is possible, the system and methods described herein can also support the use of additional biological information to assist in defining protein sets from such a database. For example, a set of proteins may be defined as one or more specifically identified proteins plus all proteins directly connected to those one or more identified proteins, which set does not necessarily form a clique. In these cases, some biological insight may be used to select the protein or proteins that are the foundation of the set definition. In some implementations, only the existence of the connections is considered when constructing a set. In some implementations however, information contained in the attributes of the connections can be considered when generating a clique or other subgraph of the database. For example, only connections having a connection confidence score above a threshold or only connections indicating a particular type of protein-protein relationship can be considered as part of the graph from which a clique or other subgraph is derived.
In a complex graph with a large number of nodes, exhaustive clique finding can be computationally expensive. However, this part of the process need not be performed again every time genomes are to be analyzed for an association study. Instead, the process of block 512 and some or all of block 514 can be performed beforehand to find the cliques or other desired sets of proteins. This process may be performed on a periodic schedule as new information is added to the database. The pre-computed protein sets and/or related gene sets and/or genomic regions of interest can be stored in a database and used for any one or series of association studies. Furthermore, additional biological information about the sets derived from the database can be stored with the pre-computed sets or regions, such as which biological pathways they are associated with. This information can be used to select set(s) and/or associated genomic region(s) that are used for variant collapsing.
Referring back to block 514, when defining a genomic region of interest in block 514, proteins that are part of only one set definition, or proteins that are part of multiple set definitions may be selected as the group of proteins for mapping to genes and defining, at least in part, the genomic region of interest. For example, proteins contained in only one clique may be selected as the group, or the proteins contained in two or more cliques may be selected as the group for mapping to genes for defining the region of interest for variant collapsing. As another example, sets of proteins defined by the nodes of maximal cliques containing a node or nodes corresponding to a selected one or more specifically identified proteins may be selected as the group. As another example, one or more cliques plus one or more additional specifically identified proteins may be selected for the group.
Moving now to block 516, at least some variant sites that are present in the selected genomic region of interest in case and control subjects of the association study are identified. In one implementation, the annotation pipeline described herein is used for this identification. In this implementation, the genomes of the case and control subjects are sequenced and variants in the genomes are annotated as described herein. As described above, one annotation may be one or more genes that have been associated with the variant during the annotation process described above. At previous block 514, a set of genes was identified as at least in part defining the genomic region of interest. The variant sites associated with the genomic region of interest at block 516 may be those variant sites of the case and control subjects that have been annotated with one of the genes that were identified in block 514 during the protein mapping process. Other methods may be used as well to select the variants associated with the genomic region of interest, such as defining segments of bases that span the genes identified in block 514 and including those variants of the case and control subjects that are located within those spans.
At block 518, at least some of the identified variant sites are collapsed for statistical correlation analysis. Although it is possible that all of the identified variant sites may be collapsed at block 518, it is typically advantageous to eliminate some of the identified variant sites from the collapsed set and remove them from consideration in the statistical analysis. For example, variant collapsing analysis works best when the minor allele frequencies of the collapsed set are low, generally under 5%, and often under 1%. If the variant sites identified in block 516 include variant sites where the minor allele frequency is greater than a threshold such as 1% or 5%, those variant sites may be excluded from membership in the collapsed set. The allele frequency may be determined based on the allele frequency in the subject population, or in some cases may be provided as a variant annotation from the annotation pipeline process from databases utilized during the annotation process, such as the 1000 Genomes Project database, which may provide allele frequencies for variants in its database and which may be included in the set of annotations for each variant when available.
It may also be advantageous to reject variant sites identified in block 516 from inclusion in the collapsed set that are less likely to be functionally relevant to the phenotype. The annotations produced by the annotation pipeline described herein for each variant can be used to filter the variant sites identified in block 516 to only retain, for example, non-synonymous SNP variants, or variants predicted for that and/or other reasons to be deleterious to protein function. The stringency of the annotation filtering utilized here is a balance between potentially excluding actually functional variant sites from the collapsed set with a filter, and potentially including actually non-functional variant sites in the collapsed set by limiting or foregoing this filtering. Both will reduce the power of the applied statistical test.
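The MAF and functional-annotation filtering of blocks 516 and 518 can be sketched as a simple filter over annotated variant records. The field names (`maf`, `effect`), the effect vocabulary, and the thresholds below are assumptions for illustration; real annotations from the pipeline would use its own schema.

```python
# Hedged sketch of selecting rare, likely-functional variant sites for the
# collapsed set. Field names, effect labels, and thresholds are assumptions.

def select_for_collapsing(variant_sites, maf_threshold=0.01,
                          kept_effects=("non_synonymous", "stop_gain", "frameshift")):
    """Return annotated variant sites eligible for the collapsed set."""
    return [
        v for v in variant_sites
        if v["maf"] <= maf_threshold and v["effect"] in kept_effects
    ]

sites = [
    {"id": "v1", "maf": 0.002, "effect": "non_synonymous"},
    {"id": "v2", "maf": 0.20,  "effect": "non_synonymous"},  # too common
    {"id": "v3", "maf": 0.004, "effect": "synonymous"},      # unlikely functional
]
print([v["id"] for v in select_for_collapsing(sites)])  # ['v1']
```

Loosening `kept_effects` or raising `maf_threshold` trades the risk of admitting non-functional sites against the risk of excluding functional ones, the stringency balance described above.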
At block 520, testing for a statistically significant correlation between at least some of the identified variant sites and the phenotype classification of the case and control subjects is performed, using the collapsed variant set. If a simple collapsed variant analysis is performed where subjects are classified as variant bearing or non-variant bearing based on the presence of at least one minor allele at the variant sites of the collapsed set or the absence of any minor allele at the variant sites of the collapsed set, standard univariate tests such as a χ2 test or Fisher's exact test may be used to detect a correlation. In some implementations, the variants identified at block 516 can be further separated into groups to form separate collapsed sets, or one or more collapsed sets and one or more additional single variant sites. If separated in this way, the collapsed sets and individual variant sites can be statistically analyzed for phenotype correlation with a multivariate test such as Hotelling's T2. It has been suggested that rather than rejecting relatively high MAF variants from the analysis completely by removing them from the collapsed set, statistical power is improved when the higher MAF variant sites are retained in the analysis as individual variant sites for use in a multivariate analysis along with one or more collapsed sets of rare variant sites.
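For the simple case/control contingency test, a one-sided Fisher's exact p-value can be computed directly from the hypergeometric distribution using only the standard library. The counts below are invented for illustration; in practice a statistics package's two-sided test would typically be used.

```python
from math import comb

# One-sided Fisher's exact test on the collapsed 2x2 table
# (variant positive / variant negative x case / control). Counts are invented.

def fisher_one_sided(a, b, c, d):
    """P(observing >= a variant-positive cases) under the hypergeometric null.
    Table rows: cases (a, b), controls (c, d)."""
    n, k, m = a + b + c + d, a + c, a + b   # total, variant positives, cases
    denom = comb(n, k)
    return sum(comb(m, i) * comb(n - m, k - i)
               for i in range(a, min(k, m) + 1)) / denom

# Hypothetical study: 9 of 50 cases vs 2 of 50 controls are variant positive
p = fisher_one_sided(9, 41, 2, 48)
print(round(p, 4))
```

A small p-value here would indicate a statistically significant excess of variant-positive subjects among the cases, the signature the collapsing test is designed to detect.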
Use of the database defining protein and/or gene connections can further allow a logical progression through multiple repetitions of blocks 514 through 520.
As one advantageous use of the above described methods, a genetic basis for a phenotype such as an adverse reaction to a drug being tested in a clinical trial can be explored in a thorough, logical, and more likely fruitful manner than current methods. If a genetic correlation is found, potential clinical trial subjects can be screened for one or more of the correlated variants. Prescreening to enrich a clinical trial with positive responders can also be accomplished with an understanding of genetic variant correlation with positive response achieved with an association study as described above. Results of an association study as described above can also be used in a clinical trial setting to provide information about other biological effects of the drug being tested, even if the detected correlation between the variants and the phenotype is not strong enough to justify genetic prescreening to select or exclude clinical trial participants. For example, identifying the proteins encoded by genes containing variants having a weak but identified correlation to subject response, either positive or negative, could identify previously unknown druggable mechanisms which can help guide further research on the pharmaceutical under investigation and/or related molecules.
The methods described above can also be adapted for use in the additional analysis modules described below.
The family sequencing module (204) can work in a manner similar to the population sequencing module (224); in this case, however, the diseases to be considered will generally be more severe, and the filtering of variants will follow a genetic model. For example, a family of four with unaffected parents, one affected child, and one unaffected child, but having various different family structures, can be accommodated by the analysis strategy implemented in certain embodiments of module (204).
Again, the process begins with a long list of variants (302) identified in the mother, father, and children. Variants are annotated through pipeline (202) as described above.
Other filtration schemes can be generated for different genetic models. For example, in a sex-specific inheritance model, the family sequencing module (204) can be configured to look for genes on the X-chromosome where only one copy of the gene carries functional variants in the unaffected mother, no functional variants are carried by the unaffected father, and the functional variant from the mother is inherited by the affected son.
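The X-linked filter just described can be sketched as a predicate over per-gene variant-carrying copy counts. The gene names and the copy-count representation (functional-variant-carrying copies per gene) are invented for illustration, not the module's actual data model.

```python
# Sketch of the X-linked recessive filter described above. Inputs map each
# X-chromosome gene to the number of copies carrying functional variants;
# gene names and counts are hypothetical.

def x_linked_candidates(mother, father, son):
    """Genes where the unaffected mother is a carrier (1 of 2 copies),
    the unaffected father carries no functional variant, and the affected
    son inherited the variant on his single X copy."""
    return [
        gene for gene in son
        if mother.get(gene, 0) == 1 and father.get(gene, 0) == 0 and son[gene] >= 1
    ]

mother = {"GENE_A": 1, "GENE_B": 2}
father = {"GENE_A": 0, "GENE_B": 0}
son    = {"GENE_A": 1, "GENE_B": 1}
print(x_linked_candidates(mother, father, son))  # ['GENE_A']
```

GENE_B is rejected because both of the unaffected mother's copies carry functional variants, which is inconsistent with the model.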
If multiple candidate genes are identified by the variant filtering schemes implemented in family sequencing module (204), then phenotypic information can be used to filter the variant list. This analysis can use the linking elements generated by annotation pipeline (202), or potentially the network based prediction of disease genes to be described below. For example, if the disease in question is an autoimmune disease, the candidate genes can be ranked based upon the annotation of immune related functions generated by the linking elements.
The purpose of the tumor-normal sequencing module (208) is to identify somatic mutations in the tumor and identify causative somatic mutations that may inform treatment strategies (210). First, the list of variants identified in the tumor and normal genomes can be used to isolate genetic variants observed only in the tumor genome. These tumor specific somatic variants can be passed through annotation pipeline (202) and filtered for functional variants or variants in genes known to be involved in tumorigenic processes. This list of variants can then be cross-referenced with a database of known gene-mutation-drug interactions (to be constructed) and passed through a decision tree to identify promising therapeutic interventions (210).
A sample decision tree would flow as follows: If EGFR is amplified→and KRAS is not mutated→recommend Cetuximab.
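The sample decision tree above can be expressed as a simple rule function. The EGFR/KRAS conditions and the Cetuximab recommendation come from the text; the profile encoding (gene-to-state strings) is an assumption for illustration.

```python
# The sample decision tree from the text, written as a rule function over a
# hypothetical tumor profile mapping gene names to observed states.

def recommend(tumor_profile):
    """Walk the EGFR/KRAS decision tree for a tumor's somatic profile."""
    if tumor_profile.get("EGFR") == "amplified":
        if tumor_profile.get("KRAS") != "mutated":
            return "Cetuximab"
        return "EGFR amplified but KRAS mutated: no recommendation"
    return "no recommendation"

print(recommend({"EGFR": "amplified", "KRAS": "wild-type"}))  # Cetuximab
```

A full implementation would cross-reference the filtered somatic variant list against the gene-mutation-drug interaction database before walking such a tree.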
In a single genome sequencing setting, some conclusions can be made directly from the mutations observed in single genes, e.g. the Myriad BRCA test or other genetic tests for known drug metabolism variants. Annotation pipeline (202) can be configured to take these tests one step further by including predictions based on variants in genes known to be associated with specific diseases or drug metabolism phenotypes, but where the variant itself has not been specifically observed to impact disease or drug metabolism.
Mutations in BRCA are known to predispose to breast cancer. A woman's genome is sequenced, variants are passed through the pipeline (202), and mutations that are predicted to impact BRCA are observed. These variants are flagged for risk of breast cancer, and information regarding the mutation and the predictions is returned.
CYP2C9 is the principal metabolizing enzyme for warfarin. Loss-of-function alleles are known to cause poor warfarin metabolism and sensitivity. Variants in CYP2C9 would be run through the annotation pipeline (202). Any variants not known to impact warfarin metabolism, but predicted to be damaging to CYP2C9, would be returned as predictions of sensitivity to warfarin.
Annotation pipeline (202) can easily recreate the reports generated by DTC genotyping companies such as 23andMe, Navigenics, etc. Variants from a genome are passed through annotation pipeline (202), and associated with known SNPs in dbSNP (the first step described in the annotation pipeline section). The dbSNP ID's are cross-referenced with the catalog of known disease associations and their odds ratios in the Genome Wide Association catalog. The cumulative impact of those variants across the whole genome, using the odds ratios of each variant, can then be combined to produce a disease risk.
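One common simplification for combining per-variant odds ratios into a cumulative risk score is to multiply them (equivalently, sum their logarithms), which assumes the variants act independently. The SNP IDs and odds ratios below are invented; real values would come from the Genome Wide Association catalog cross-reference described above.

```python
from math import log, exp

# Hedged sketch of combining per-variant odds ratios into a cumulative risk
# score under an independence assumption. SNP IDs and odds ratios are invented.

def cumulative_odds_ratio(carried_snps, catalog):
    """Multiply the catalog odds ratios of the risk alleles a genome carries."""
    return exp(sum(log(catalog[snp]) for snp in carried_snps if snp in catalog))

catalog = {"rs100": 1.3, "rs200": 1.1, "rs300": 0.9}
print(round(cumulative_odds_ratio(["rs100", "rs300"], catalog), 3))  # 1.17
```

Odds ratios below 1 (protective alleles) correctly pull the cumulative score down under this scheme, and a genome carrying none of the cataloged risk alleles scores a neutral 1.0.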
This can be an extension upon the 23andMe or Navigenics type predictions described above, using predictions for gene-disease association as well as predictions of variant impact on disease. Conventional genetic testing products focus on specific variants associated with disease or solely on genes known to be related to a disease and do not consider a whole genome perspective. The systems and methods described herein encompass analyses of specific variants and analyses of specific genes while producing more powerful risk predictions by including genes determined to be associated with disease by the network propagation strategy and interrogating possible forms of human genetic variation. In addition, most conventional approaches do not account for molecular networks and connections between genes attributable to their participation in common processes or biochemical pathways.
System 100 can be configured to process whole genome genetic variant data for an individual into disease risk quantification through computational analysis and network modeling. First, each variant can be processed through the annotation pipeline (202). After variants have been processed through this pipeline, the output is used to produce a weighted score for the combined impact of variants on each gene, which essentially represents an estimation of the percent functionality of each gene. These percent functionalities are used to produce a disease specific score as described below.
To identify genes that may be associated with specific diseases, first, a weighted genetic network is compiled through publicly available resources, where the connection between genes is weighted based upon the confidence that the connection truly exists. The importance of each gene within the network is calculated based upon the number of connections it makes to other genes, and the importance of the genes to which it is connected. For example, a pagerank algorithm, heat diffusion algorithm, or degree centrality calculation can be used to produce this global centrality score. Next, for each disease, a disease specific score is generated for each gene by assigning a high importance score to genes known to be associated with the particular disease and then propagating the scores through the network to generate a disease specific centrality score. Again, these disease specific centrality scores can be generated by propagating information through a pagerank algorithm, heat diffusion algorithm, degree centrality calculation, or other network centrality measure. For each gene, the difference between its global centrality score and the disease specific centrality score represents its importance in mediating the disease in question.
A whole genome sum of products can then be generated to produce a final disease score for the individual. The product is the importance of the gene to the specific disease state in question multiplied by the percent dysfunction scored for that gene through the first phase of the algorithm. These products are summed up across genes to produce a final disease score for the individual. The relative disease score across individuals represents an approximate relative risk of disease.
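The sum-of-products score can be sketched on a toy network. Here degree centrality stands in for the global score and a single round of propagation from known disease genes stands in for the disease-specific centrality; the graph, the propagation fraction, and the per-gene dysfunction percentages are all invented for illustration.

```python
# Toy sketch of the sum-of-products disease score described above. Degree
# centrality approximates the global score; one propagation step from seed
# disease genes approximates the disease-specific score. All values invented.

def degree_centrality(adj):
    n = len(adj)
    return {g: len(nbrs) / (n - 1) for g, nbrs in adj.items()}

def disease_centrality(adj, seeds, seed_score=1.0, spread=0.5):
    """Seed known disease genes, then push part of the score to neighbors."""
    score = {g: 0.0 for g in adj}
    for s in seeds:
        score[s] = seed_score
    for s in seeds:
        for nbr in adj[s]:
            score[nbr] += spread * seed_score / len(adj[s])
    return score

def disease_score(adj, seeds, dysfunction):
    """Sum over genes of disease importance times percent dysfunction."""
    global_c = degree_centrality(adj)
    disease_c = disease_centrality(adj, seeds)
    return sum((disease_c[g] - global_c[g]) * dysfunction.get(g, 0.0) for g in adj)

adj = {"G1": {"G2", "G3"}, "G2": {"G1"}, "G3": {"G1"}}
dysfunction = {"G1": 0.1, "G2": 0.8, "G3": 0.0}
print(round(disease_score(adj, ["G2"], dysfunction), 3))  # 0.35
```

A production system would substitute a pagerank or heat diffusion computation, run to convergence over the full weighted network, for both centrality terms.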
The multi-tiered annotations provided by the annotation pipeline (202) can be leveraged to identify disease associated variants within (1) single cases of idiopathic disease, (2) small pedigrees including affected individuals, and (3) larger groups of unrelated individuals with and without disease. The manner in which the disease-associated variant is identified relies on the creation of appropriate filters and/or statistical techniques as described below.
Given the high number of novel or very rare mutations that are present in any one genome, it is necessary to derive an appropriate filtering algorithm in order to determine the likely disease-causing variant from this set. The filter can be based on the relative “scores” attributed to the degree of damage introduced in a genomic element by a given mutation. The algorithm can use single scores or a weighted combination of scores to rank the most likely disease variants.
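The weighted-combination ranking can be sketched as below. The score names, weights, and variant records are hypothetical; a real filter would use the damage scores produced by the annotation pipeline (202).

```python
# Illustrative ranking of candidate variants by a weighted combination of
# damage scores; score names, weights, and variant records are hypothetical.

WEIGHTS = {"conservation": 0.4, "protein_impact": 0.6}

def rank_variants(variants):
    """Sort variants by weighted damage score, most damaging first."""
    def combined(v):
        return sum(WEIGHTS[k] * v["scores"].get(k, 0.0) for k in WEIGHTS)
    return sorted(variants, key=combined, reverse=True)

variants = [
    {"id": "v1", "scores": {"conservation": 0.2, "protein_impact": 0.9}},
    {"id": "v2", "scores": {"conservation": 0.9, "protein_impact": 0.8}},
    {"id": "v3", "scores": {"conservation": 0.1, "protein_impact": 0.1}},
]
print([v["id"] for v in rank_variants(variants)])  # ['v2', 'v1', 'v3']
```

The top-ranked variants would then be carried forward as the most likely disease-causing candidates.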
The filtering algorithm described above can be augmented with pedigree information in the case that related individuals are available for genome sequencing. Mendel's laws of inheritance can be overlaid on the basic filter to identify only those variants that track with the disease model (e.g., homozygous recessive, compound heterozygous, autosomal dominant).
When sufficient numbers of unrelated individuals with and without disease are sequenced, it is possible to perform statistical tests to identify disease-associated variants based on the frequency of variants in cases versus controls (or population A versus population B). Within this extension, annotations produced by system 100 can provide the basis for collapsing sets of variant types (e.g., non-synonymous coding SNPs within an exon/gene/set of genes/pathway) into frequencies for use in downstream statistical tests.
As genomic knowledge increases within system 100 and without, higher-order annotations such as (1) tissue-specific and transcript-specific expression; (2) methylation status; and (3) other epigenomic features, can be incorporated into both the basic annotation pipeline (202) as well as filtering algorithms as appropriate.
It should also be noted that the existing annotation and filtering pipelines can be leveraged to build empirically-derived variant classifiers to identify likely disease-associated variants based on features shared with known disease-causing mutations.
Thus, variant file (302) data can be stored in, e.g., database (108), as can the annotated and filtered data produced by annotation pipeline (202) and the various modules (204-226). This data can then be used to refine the annotation and filtering algorithms. For example, as the amount of data in database (108) increases, new links and patterns in the data can be identified that further inform the annotation and filtering being performed by system 100. Accordingly, algorithms (104) can comprise analytics that are configured to identify and refine linking information and patterns in the annotated and filtered data stored in database (108).
Thus, the invention provides, among other things, a genomic annotation system and a computer-based method for predicting a risk of an individual developing a disease.
Having now generally described the invention, the same will be more readily understood through reference to the following examples which are provided by way of illustration, and are not intended to be limiting of the present invention, unless specified.
Publicly available complete genome sequence data was obtained on 69 individuals of high quality (˜60× coverage, ˜97% bases called) produced by Complete Genomics, Inc. (CGI) by downloading data from the company's website (http://www.completegenomics.com/sequence-data/download-data/). The assembly of the genomes as well as variant calling for them has been described in the literature. (Drmanac et al., 2010, Roach et al., 2010)
The genotypes from the available “MasterVar Beta” files provided by CGI were directly used. Additional filtering steps for the analysis of genotypes beyond those that went into the construction of the public domain files were not required.
The 69 individual genomes consisted of 22 individuals of Northern European ancestry (abbreviated as CE for the CEPH or CEU HapMap Population), 10 individuals of Yoruban ancestry (YR), 5 individuals each of Mexican (ME) and African ancestry living in Dallas (AS), 4 individuals each of Japanese (JP), Han Chinese (CH), Italian (TS), East Indian (GI), Maasai Kenyan (MK) and Luhya Kenyan (LW) ancestry, and 3 individuals of Puerto Rican ancestry (PU).
13 CE individuals who were the offspring of a couple of other CE individuals; one YR individual who was the offspring of a YR couple; and the 3 PU individuals who were a mother-father-offspring trio were excluded from the analysis. Therefore, 52 individuals from 10 different global populations were ultimately considered in the analysis. To show how some of the results apply to other data sources, sequence data available from the 1000 genomes project (www.1000genomes.org/) was also leveraged.
For one set of analyses, the genome of an ancestry-verified (see below) female European individual, which was sequenced by CGI independently of the 69 genomes obtained from the public domain, was also considered.
The genetic background similarity of the 69 individual genomes downloaded from the CGI website, in addition to the single independently-sequenced European female's genome, was assessed by constructing identity-by-state (IBS) allele sharing similarity matrices using 16,411 markers which had also been genotyped on 4,123 individuals in various public domain databases for whom ancestry was known. IBS allele sharing matrices were also calculated based on 19,208,882 variants determined from the whole genome sequencing for the 52 individuals ultimately used in the analyses, in addition to the parents in the Puerto Rican trio.
Multidimensional scaling (MDS) analysis was then applied to the sharing matrices to determine patterns in genetic background similarity of the individuals.
To catalog the position-specific differences (i.e., variants) between the 52 genomes, two different strategies were considered. First, each genome was compared to the human genome reference (version hg18). Second, the ancestral allele of each variant was then determined by comparing the genomes to the available chimp genome reference.
The sequence position of each variant site relative to build hg18 of the human genome provided on the UCSC browser was determined. This was done for variant types that could be determined from the CGI variant files in the public domain, including SNVs, small insertion and deletion variants, and multinucleotide variants (i.e., small stretches of sequence where the adjacent nucleotides present differ from the reference genome). Thus, the number and type of ‘non-reference’ variants possessed by each of the 52 individual genomes studied was determined. Large structural variation, large copy number variants (CNVs), and other large repetitive element-based variants were not considered.
The use of the human genome reference for assessing inter-population differences in the frequency and rate of functional variants is problematic since the available UCSC Genome Browser human genome reference (hg18) is constructed from DNA of European individuals. Thus, if the human genome reference (hg18) is used, the labeling of nucleotides as ‘reference’ or ‘non-reference’ variants in other populations would be dictated by what is present on the genomes of individuals of European ancestry. This can easily lead to interpretive biases regarding the relationships between populations and genomic differences between those populations. In addition, functional element determination based on single individual genomes or genomes from individuals with a unique ancestry is problematic due to structural differences in genomes that may impact the very definition of a functional element. Thus, variants were characterized as ‘non-reference’ for the sake of consistency with the literature and to allow for the determination of a reasonable and accepted approximation of the functional impact of the variants observed in the 52 genomes.
The ancestral allele of each variant site was determined using the PanTro2 build of the chimpanzee genome. In essence, for each variant site among the 52 genomes studied, the allele also present on the chimpanzee genome (i.e., the ‘ancestral’ allele) and the allele not present on the chimpanzee genome (i.e., the ‘derived’ allele) were identified. Ancestral alleles were determined using alignment information between the PanTro2 build of the chimpanzee genome and the human genome (hg18) from the UCSC Genome Browser.
When ancestral alleles could not be determined from the chimpanzee alignment, alignments between the RheMac2 build of the macaque genome and the human genome (hg18) were used instead, and positions where both alignments failed to reveal ancestral information were ignored. Ultimately, non-reference variants (determined from the comparison to the human genome reference hg18 as described above) seen across individuals were pooled and it was determined whether these variants matched ancestral alleles. In such cases, these non-reference variants revealed that the deviation is actually in the human reference genome (hg18) and not in the individual genomes. Subsequently, individuals that harbored the non-reference variant no longer carried a derived variant, while other individuals with the reference allele now carried a ‘derived’ or non-ancestral variant.
Given information about which variants were reference or non-reference, and ultimately ancestral or derived, the labels ‘reference’ or ‘non-reference’ and ‘ancestral’ or ‘derived’ were assigned for each individual genome at each variant site. Additional genotype labels, e.g., ‘homozygous derived,’ ‘heterozygous,’ or ‘homozygous ancestral,’ were then assigned to each genome for variant site positions for which ancestral allele information had been determined.
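The per-site labeling described above can be sketched in code. The following is a minimal illustration only, not the actual pipeline implementation; the function name, argument shapes, and label strings are assumptions chosen to mirror the text.

```python
def label_site(alleles, ref_allele, ancestral_allele):
    """Assign labels to one individual's genotype at a variant site.

    alleles: the two alleles carried by the individual, e.g. ("A", "G").
    ref_allele: the hg18 reference allele at this position.
    ancestral_allele: the allele matching the chimpanzee (or macaque)
    genome, or None when ancestral information could not be determined,
    in which case no ancestral/derived genotype label is assigned.
    """
    labels = {
        "reference_status": tuple(
            "reference" if a == ref_allele else "non-reference"
            for a in alleles
        )
    }
    if ancestral_allele is not None:
        # Count derived (non-ancestral) alleles to pick the genotype label.
        n_derived = sum(1 for a in alleles if a != ancestral_allele)
        labels["genotype"] = {
            0: "homozygous ancestral",
            1: "heterozygous",
            2: "homozygous derived",
        }[n_derived]
    return labels
```

For example, where hg18 and the chimpanzee genome both carry ‘A’, an ‘A/G’ individual is labeled heterozygous with one non-reference allele, and a ‘G/G’ individual is labeled homozygous derived.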
With this information, derived variants (likely functional or not) that were only observed on a single genome (genome-specific or ‘novel’ variants), derived variants that were only seen among the genomes of individuals within a specific population (‘population-specific’ alleles or variants), as well as the overall and population-specific frequencies of the variants could be determined.
All variants were mapped to the UCSC Genome Browser human reference genome, version hg18. Subsequently, variant positions were taken and their proximity to known genes and functional genomic elements was determined using the databases available from the UCSC Genome Browser. Transcripts of the nearest gene(s) were associated with a variant, and functional impact predictions were made independently for each transcript. If the variant fell within a known gene, its position within gene elements (e.g., exons, introns, untranslated regions, etc.) was recorded for functional impact predictions depending on the impacted gene element. Variants falling within an exon were analyzed for their impact on the amino acid sequence (e.g., synonymous, nonsynonymous, nonsense, frameshift, in-frame, intercodon, etc.).
Once the genomic and functional element locations of each variant site were obtained, a suite of bioinformatics techniques and programs was leveraged to ‘score’ the derived alleles (i.e., derived variant nucleotides) for their likely functional effect on the genomic element in which they resided. Derived variants were assessed for potential functional effects in the following categories: nonsense SNVs, frameshift structural variants, splicing change variants, probably damaging non-synonymous coding (nsc) SNVs, possibly damaging nscSNVs, protein motif damaging variants, transcription factor binding site (TFBS) disrupting variants, miRNA-BS disrupting variants, exonic splicing enhancer (ESE)-BS disrupting variants, and exonic splicing silencer (ESS)-BS disrupting variants.
As illustrated below, the functional prediction algorithms used exploit a wide variety of methodologies and resources to predict variant functional effects, including conservation of nucleotides, known biophysical properties of DNA sequence, DNA-sequence determined protein and molecular structure, and DNA sequence motif or context pattern matching.
All variants were associated with conservation information in two ways. First, variants were associated with conserved elements from the phastCons conserved elements (28way, 44way, 28wayPlacental, 44wayPlacental, and 44wayPrimates). These conserved elements represent potential functional elements preserved across species. Conservation was also assessed at the specific nucleotide positions impacted by the variant using the phyloP method. The same conservation levels as phastCons were used in order to gain higher resolution into the potential functional importance of the specific nucleotide impacted by the variant.
All variants, regardless of their genomic position, were associated with predicted transcription factor binding sites (TFBS) and scored for their potential impact on transcription factor binding. Predicted TFBS were pre-computed by utilizing the human transcription factors listed in the JASPAR and TRANSFAC transcription-factor binding profiles to scan the human genome using the MOODS algorithm. The probability that a site corresponds to a TFBS was calculated by MOODS based on the background distribution of nucleotides in the human genome. TFBS were labeled at a relaxed threshold (p-value <0.0002) in conserved, hypersensitive, or promoter regions, and at a more stringent threshold (p-value <0.00001) for other locations, in order to capture sites that are more likely to correspond to true functional TFBS. Conserved sites correspond to the phastCons conserved elements; hypersensitive sites correspond to ENCODE DNase hypersensitive sites annotated in the UCSC Genome Browser; and promoters correspond to regions annotated by TRANSPro, plus the 2 kb upstream of known gene transcription start sites, identified by SwitchGear Genomics ENCODE tracks. The potential impact of variants on TFBS was scored by calculating the difference between the mutant and wild-type sequence scores using a position weight matrix method previously shown to identify regulatory variants.
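The mutant-minus-wild-type scoring idea can be illustrated with a toy position weight matrix (PWM). In practice the pipeline uses JASPAR/TRANSFAC profiles scanned by MOODS; the matrix, function names, and uniform background below are illustrative assumptions.

```python
import math

def pwm_score(seq, pwm, background=0.25):
    """Log-likelihood score of a sequence against a position weight matrix.

    pwm: one dict per position mapping base -> probability (a toy matrix
    here; real profiles would come from JASPAR or TRANSFAC). A uniform
    background of 0.25 per base is assumed for simplicity.
    """
    return sum(
        math.log2(pos.get(base, 1e-6) / background)
        for pos, base in zip(pwm, seq)
    )

def tfbs_impact(wild_seq, mut_seq, pwm):
    """Score a variant's impact as the mutant score minus the wild-type
    score; a strongly negative difference suggests binding-site disruption."""
    return pwm_score(mut_seq, pwm) - pwm_score(wild_seq, pwm)
```

With a matrix strongly preferring ‘AC’, a variant changing the first base to ‘C’ yields a negative impact score, flagging a potential TFBS disruption.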
Variants falling near exon-intron boundaries were evaluated for their impact on splicing by the maximum entropy method of maxENTscan. Maximum entropy scores were calculated for the wild-type and mutant sequences independently and compared to predict the variant's impact on splicing. Changes from a positive wild-type score to a negative mutant score suggested a splice site disruption. Variants falling within exons were also analyzed for their impact on exonic splicing enhancers and/or silencers (ESE/ESS). The numbers of ESE and ESS sequences created or destroyed were determined based on the hexanucleotides reported as potential exonic splicing regulatory elements and shown to be the most informative for identification of splice-affecting variants.
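The counting of ESE/ESS hexamers created or destroyed can be sketched as a set comparison between wild-type and mutant exon sequences. The motif sets below are placeholders; the text relies on published catalogs of splicing-regulatory hexanucleotides.

```python
def hexamers(seq):
    """All overlapping hexanucleotides present in a sequence."""
    return {seq[i:i + 6] for i in range(len(seq) - 5)}

def ese_ess_change(wild_exon, mut_exon, ese_set, ess_set):
    """Count ESE/ESS hexamers created or destroyed by an exonic variant.

    ese_set / ess_set: hexanucleotide motif catalogs (placeholders here;
    published splicing-regulatory hexamer lists would be used in practice).
    """
    wild, mut = hexamers(wild_exon), hexamers(mut_exon)
    return {
        "ese_destroyed": len((wild - mut) & ese_set),
        "ese_created": len((mut - wild) & ese_set),
        "ess_destroyed": len((wild - mut) & ess_set),
        "ess_created": len((mut - wild) & ess_set),
    }
```

A single substitution that ablates the hypothetical enhancer motif ‘GAAGAA’ would be counted as one ESE destroyed.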
Variants falling within 3′UTRs were analyzed for their impact on microRNA binding in two different manners. First, 3′UTRs were associated with pre-computed microRNA binding sites using the targetScan algorithm and database. Variant 3′UTR sequences were rescanned by targetScan in order to determine if microRNA binding sites were lost due to the impact of the variation. Second, the binding strength of the microRNA with its wild-type and variant binding site was calculated by the RNAcofold algorithm to return a ΔΔG score for the change in microRNA binding strength induced by introduction of the variant.
While interpretation of frameshift and nonsense mutations is fairly straightforward, the functional impact of nonsynonymous changes and in-frame indels or multi-nucleotide substitutions is highly variable. The PolyPhen-2 algorithm, which performs favorably in comparison to other available algorithms, was utilized for prioritization of nonsynonymous single nucleotide substitutions. A major drawback of predictors such as PolyPhen-2 is the inability to address more complex amino acid substitutions. To address this issue, the Log R.E-value score of variants, which is the log ratio of the E-value of the HMMER match of PFAM protein motifs between the variant and wild-type amino acid sequences, was also generated. This score has been shown to be capable of accurately identifying known deleterious mutations. More importantly, this score measures the fit of a full protein sequence to a PFAM motif; therefore, multinucleotide substitutions are capable of being scored by this approach.
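Given HMMER E-values for the wild-type and variant protein sequences against a PFAM motif, the Log R.E-value score reduces to a one-line ratio. The log base and sign convention below are assumptions for illustration; only the E-values themselves come from HMMER.

```python
import math

def log_r_e_value(e_variant, e_wildtype):
    """Log ratio of HMMER PFAM-match E-values, variant vs. wild-type.

    A positive score means the variant sequence matches the motif worse
    (larger E-value), the direction associated with deleterious change.
    Base-10 logarithm is an assumption made for readability.
    """
    return math.log10(e_variant / e_wildtype)
```

For instance, a variant that degrades a motif match from an E-value of 1e-5 to 1e-2 scores 3.0, whereas an unchanged match scores 0.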
The frequencies and rates of functional and non-functional derived variants among the genomes of individuals with different ancestries in a few different settings were compared. The methodologies associated with each of these settings are described briefly in isolation below.
To compare frequencies and rates of different types of variants (reference or derived; predicted functional or predicted non-functional; coding, TFBS, etc.) across the 10 populations, graphical displays and linear regression techniques were used. For the regression analyses, simple dummy variables for each of the 10 ancestral populations were created (i.e., a value of 1.0 was assigned to an individual genome that belonged to a specific ancestral population and 0.0 otherwise) and were used as independent variables in a regression analysis with either the absolute number of variants of a specific type on a genome, or the rate of that variant type as a fraction of an individual's total genomic variants, as a dependent variable. For these comparisons, the YR (Yoruban) population was taken as a reference, such that the estimated regression coefficients reflect deviations from the YR population. Tukey's ‘Honestly Significant Difference (HSD)’ method was used for evaluating pairwise differences between individual populations for the different variant types from an analysis of variance (ANOVA). The HSD method allowed the appropriate statistical inferences to be made given the number of pair-wise population comparisons made.
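The dummy-variable coding with a reference population can be sketched with an ordinary least-squares fit. This is a minimal illustration of the coding scheme only (no significance testing or Tukey HSD); the function name and data layout are assumptions.

```python
import numpy as np

def population_regression(populations, counts, reference="YR"):
    """OLS of per-genome variant counts on population dummy variables.

    populations: population label for each genome, e.g. ["YR", "CE", ...].
    counts: the per-genome variant count (or rate) for one variant type.
    The reference population gets no dummy variable, so each fitted
    coefficient is that population's mean deviation from the reference.
    """
    pops = sorted(set(populations) - {reference})
    # Design matrix: intercept column plus one 0/1 dummy per non-reference population.
    X = np.column_stack(
        [np.ones(len(populations))]
        + [[1.0 if p == q else 0.0 for p in populations] for q in pops]
    )
    beta, *_ = np.linalg.lstsq(X, np.asarray(counts, dtype=float), rcond=None)
    return dict(zip(["intercept"] + pops, beta))
```

With the YR genomes averaging 11 variants and CE genomes averaging 21, the intercept recovers the YR mean and the CE coefficient the +10 deviation.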
The frequency and rate of variants of the different types that were homozygous across the populations were compared using regression methods analogous to those described above. Graphical displays of the frequency and rate differences of homozygous variants across the populations were also considered.
All of the variants that were only found on genomes of individuals with ancestries associated with three major continental populations were determined. First, the genomes from CE and TS subpopulations were combined to form a European (EUR; n=13) population, the JP and CH subpopulations were combined to form an Asian (ASN; n=8) population, and the YR, MK, and LW subpopulations were combined to form an African (AFR; n=17) population. The AS subpopulation was excluded from the formation of the African (AFR) population because that population represents African American individuals sampled from within the United States and therefore could reflect admixed individuals.
The number of variants observed only within each population for each variant category was determined; the total number and rate of such variants in each population were aggregated, and the rate of such variants in each individual genome in each population was also assessed. Z-tests assessing the equality of these frequencies were performed.
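A standard way to test the equality of two frequencies, consistent with the Z-tests mentioned above, is the pooled two-proportion z statistic. This sketch assumes that form; the source does not specify the exact test variant used.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled-variance two-proportion z statistic for testing equality of
    two population-specific variant frequencies x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```

Equal observed frequencies give z = 0; the further |z| exceeds roughly 1.96, the stronger the evidence (at the 5% level) that the two frequencies differ.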
A regression analysis was also used to assess differences between the frequency and rates of African, European, and Asian population-specific variants. The African population was used as a reference and dummy variables for European and Asian ancestry were constructed. Pearson's correlation coefficients were calculated between rates of population-specific functional variants relative to population-specific variants and relative to all variants.
The impact of using inappropriately ancestry-matched reference panels was assessed in efforts to identify patient-specific pathogenic variants responsible for an idiopathic condition via simulation studies. These simulation studies leveraged both the data and insights associated with the assessment of global functional variant diversity involving the 52 CGI genomes.
First, 506 known Charcot-Marie-Tooth (CMT) syndrome-causing variants were taken from the OMIM database and their Polyphen2 and SIFT scores were computed (or rather, technically, 1.0−SIFT score, which will be referred to as the ‘SIFT score’), along with their averages (average Polyphen2 score=0.825, average SIFT score=0.931, and average of the average value of the Polyphen2/SIFT scores=0.878). In addition, 567 known Cystic Fibrosis (CF)-causing variants (average Polyphen2 score=0.769, average SIFT score=0.891, and average Polyphen2/SIFT score=0.830) were obtained, and variants reflecting these scores were ‘implanted’ in a European individual's whole genome sequence variant list.
Polyphen2 and SIFT are bioinformatics programs implementing procedures for determining the likely functional significance of non-synonymous coding SNVs and were included in the suite of programs used to characterize the likely functional effect of variants. This European individual was sequenced by Complete Genomics, Inc. in the same way as the 52 individuals taken from the CGI repository, but was not part of that panel of 52 individuals.
By placing the disease-causing coding variants among the other variants on this individual's genome, it was determined whether the method could identify them as likely pathogenic and disease-causative among the other coding variants on that individual's genome.
This was pursued by comparing the coding variants on this individual's genome to reference panel genomes made up of individual genomes from among the 52 CGI genomes studied with the same and different ancestries. The comparison was performed using different bioinformatics functional prediction tools to assess their impact on pathogenic variant identification as well. CMT variants and CF variants were chosen for exploration since CMT variants act in a dominant fashion and CF variants act in a recessive fashion. An individual not sequenced along with the 52 CGI public domain genomes was leveraged since the variants on this individual had not been deposited into dbSNP and other databases and thus many of them were not likely to have been studied by other groups.
CMT and CF variants were also implanted with the scores described above in the variant lists of a randomly chosen African (taken from the AS population, which could reflect African American ancestry), Mexican, East Indian, and Puerto Rican genome from the total of the 69 individuals for which WGS data was available from the CGI repository. The number of ns cSNVs (i.e., coding variants) that would be considered novel (i.e., patient-specific) was determined among these individuals' sets of variants with predicted functional scores from Polyphen2, SIFT, and the average Polyphen2/SIFT score greater than those associated with the implanted, known disease-causing CMT and CF mutations when compared to different reference panel genome sets.
These reference panel sets included the 1000 Genomes Project exome sequencing data (as of October 2011), both combined across populations considered in the Project and for each of the European, Asian, and African variant sets individually. Reference sets for variants from 52 individuals for which WGS data was available, as well as 8 randomly chosen Europeans, Asians, and Africans from these 52, were also created. Finally, a combined reference variant set that included the 1000 Genomes data and the WGS data for the 52 individuals was considered.
These analyses were pursued by assuming that the CMT mutation was dominant and the CF mutation was recessive (i.e., for the CF mutation only homozygous genotypes not observed in the reference panels were considered novel, whereas for the CMT mutation any genotype that was not observed in the reference panels, homozygous or heterozygous, was considered novel).
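The dominant versus recessive novelty rules can be expressed as a short filter. The genotype encoding (‘het’/‘hom’) and function name are illustrative assumptions; the logic mirrors the rule stated above.

```python
def novel_genotypes(patient_genotypes, reference_genotypes, mode):
    """Flag patient genotypes as 'novel' against a reference panel.

    patient_genotypes: {variant_site: 'het' or 'hom'} for the patient.
    reference_genotypes: {variant_site: set of genotypes seen in panel}.
    mode='dominant' (CMT-like): any genotype unseen in the panel is novel.
    mode='recessive' (CF-like): only homozygous genotypes unseen in the
    panel are considered novel.
    """
    novel = []
    for site, gt in patient_genotypes.items():
        seen = reference_genotypes.get(site, set())
        if gt in seen:
            continue  # genotype observed in the panel: not novel
        if mode == "recessive" and gt != "hom":
            continue  # heterozygotes cannot flag a recessive candidate
        novel.append(site)
    return novel
```

Under the recessive rule a heterozygous unseen genotype is discarded, while under the dominant rule it remains a candidate.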
From the 52 individual genomes, 24,277,549 ‘non-reference’ variants that deviated from build hg18 of the human reference genome represented in the UCSC browser were identified. This included 19,374,542 SNVs, 1,941,800 insertions, 2,282,925 deletions, and 678,282 multinucleotide variants. A variant in one genome that was not present on the other 51 genomes was defined as ‘novel’. A filter for novel variants using other publicly available databases was not performed since the DNA samples from the 52 individuals sequenced by CGI are available in the public domain and used often in polymorphism detection studies, such as the 1000 Genomes Project, and hence are likely to have genotype information in publicly accessible databases such as dbSNP.
In addition, it is known that different sequencing platforms vary in their ability to identify deviant nucleotides, especially with respect to complex genomic regions, such as regions with highly repetitive DNA. A total of 4,596,517 variants among the 52 individuals (2,921,142 SNVs, 667,458 insertions, 752,180 deletions, and 255,737 multinucleotide variants and rearrangements) were ‘novel’. For each of the 24,277,549 non-reference variant sites, the ancestral allele was identified using the chimp and macaque genome comparisons as described in the Methods section. The ancestral allele for 676,185 variants was not determined due to limitations in the available chimp and macaque reference assemblies. This amounted to 2.78% of the total variants observed. The likely functional effect of the derived allele was evaluated and the number and rate of variant functional category types per genome was catalogued.
The frequency of variants in each of the defined functional categories across the 10 populations was compared via graphical and linear regression analyses and dramatic and statistically significant differences were discovered.
As noted, due to the fact that the human reference genome available from the UCSC genome browser is based on the DNA from individuals of European ancestry, it was not relied on for making claims about the frequency and rates of functional variants on genomes from individuals with different ancestries. Rather, the frequency and rate of derived alleles across the genomes were considered as a complement to comparisons involving non-reference alleles.
Differences in the frequency and per-genome rate of functional derived homozygous genotypes across the populations were tested for.
Interestingly, although some evidence for consistency in the deviations of the non-Yoruban African and non-African populations from the Yoruban population with respect to numbers and rates of functional variants was discovered, there were more subtle, but statistically significant, differences in the total number and rates of different derived variant functional categories, including the number and rate of derived allele homozygous loci, between non-African populations (Table 2, contrast the entries above and below the diagonal). So, for example, the number of homozygous loci harboring derived, likely functional alleles differs between European and Asian as well as East Indian populations, but not necessarily between European populations and the admixed Mexican population (upper diagonal entries of Table 2).
To further characterize the population-level differences in the functional content of individual genomes, the number of population-specific variants in European, Asian, and African populations was determined in a manner analogous to the methods described in Example 16.
As noted, in addition to comparing population summaries, the rate of population-specific, likely functional variants in each individual genome within each population was also determined. This is important since sample size differences could impact the ability to identify and test frequency differences of rare and population-specific variants if only population summary statistics over the genomes are considered, as in Table 3. It was determined that there are higher rates of functional variants among the population-specific variants within European and Asian genomes relative to African genomes, despite the fact that the rate of such variants is higher across the combined variants (i.e., not just population-specific variants) in African rather than European and Asian genomes.
Two factors go into the inference that a variant is likely to be pathogenic and causative of an idiopathic condition: the variant must be unique to the patient with the condition (i.e., ‘novel’), and it must be predicted to be functional. Determining the novelty of a variant requires contrasting the patient's genomic variants with variants on other individuals' genomes (i.e., a reference set of genomes). Determining functionality requires the use of bioinformatics techniques, if not direct laboratory-based functional assays.
Thus, in order to determine the likely impact of these findings on searches for pathogenic variants influencing idiopathic diseases, the number of ns cSNVs in 5 target individuals' genomes (i.e., a European, African, Mexican, East Indian, and Puerto Rican simulated patient's genome) that would be considered likely pathogenic beyond the known dominant-acting CMT syndrome-inducing variant and recessive-acting CF-inducing variants was determined by comparison to different reference panel genomes' ns cSNV lists derived from the 52 individuals for which WGS information was available. The use of reference sets made up of data from the 1000 Genomes Project (www.1000genomes.org/) was also considered.
Polyphen2, SIFT, and the average Polyphen2 and SIFT scores were computed for the CMT and CF variants, the ns cSNVs in each of the 5 target individuals' genomes, and the ns cSNVs in each reference data set. The assessment was limited to ns cSNVs due to the low coverage sequencing in non-coding regions pursued in the 1000 Genomes Project.
Note that since the non-European target individuals we assessed were part of the 69 WGS individuals studied, the use of a combined reference set with 1000 Genomes and the 69 WGS genomes data (i.e., the ‘ALLDB’ column of Table 4) could not be considered. From Table 4, it can be seen that one could expect some 194 ns cSNVs to be called as ‘novel’ that have Polyphen2 scores greater (and hence likely to be functional) than the known CMT mutation for the European individual we studied based on the use of a 1000 Genomes-derived European reference ns cSNV panel; 680 if a 1000 Genomes-derived African reference panel is used; and 439 if an 8 member reference panel was constructed from the ns cSNVs from the WGS data studied. These would be out of a total of 1539 ns cSNVs for this European individual. These numbers represent the number of ‘false leads’ one would have to deal with in trying to identify the known causative variant (i.e., the ‘implanted’ CMT variant).
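The ‘false lead’ counts in Table 4 follow from a simple computation: among the patient's ns cSNVs that are absent from the reference panel, count those whose predicted functional score exceeds that of the implanted disease-causing variant. The sketch below assumes a dict of per-variant scores and a set of panel variant identifiers; it is an illustration of the counting logic, not the study code.

```python
def false_leads(patient_variants, reference_panel, implanted_score):
    """Count 'false leads' for one patient genome.

    patient_variants: {variant_id: predicted functional score, e.g. Polyphen2}.
    reference_panel: set of variant_ids observed in the reference genomes.
    implanted_score: score of the implanted known disease-causing variant.
    A false lead is a novel variant scoring higher than the implanted one,
    i.e., a variant that would have to be ruled out before finding the
    true causative variant.
    """
    return sum(
        1 for v, score in patient_variants.items()
        if v not in reference_panel and score > implanted_score
    )
```

For example, if a patient carries three coding variants scoring 0.90, 0.99, and 0.95, one of which (0.90) appears in the panel, then an implanted variant scoring 0.93 leaves two false leads.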
Table 4 also suggests that the use of different algorithms for predicting the likely functional significance of variants makes a difference (contrast the entries between the top, middle, and bottom sets of rows), possibly the use of sequencing platforms (as indicated by the small decrease in false positive results from the use of the 1000 Genomes reference panels vs. the only 8 member WGS panel provided by the CGI data) and most importantly the genetic background of the members in the panel (i.e., contrast the columns that only consider the 8 member panels derived from the WGS data). Similar results were observed when assessing the novelty of homozygous variants and the scoring of the likely functional significance of the known CF mutation.
The impact of the addition of genomes to a reference panel on potential ‘false lead’ rates in pathogenic variant identification was also considered.
The differences in the genome-wide rates of DNA sequence variants associated with different genomic functional elements across 10 contemporary global populations were assessed. Evidence was found that historical population-level phenomena of whatever sort, including possibly bottlenecks, unique migratory patterns, admixture, natural selection, and random drift, have left an imprint on the standing genetic variation that is likely to influence phenotypic expression in these populations. In this light, these results are consistent with previous reports, but extend them to the entire genomes of individuals from many different global populations. Important functional variant categories were considered and genomes sequenced on a single platform and to great depth (˜60×) were used. Importantly, it was determined that, on an individual genome-wide basis, there is both an absolute and proportionately greater number and rate of loci that are homozygous for derived alleles that are likely to be functional in non-African populations.
The results of the research described herein suggest that whole genome sequencing will not only be of tremendous value in future population genetic and human evolutionary studies, but also that global human population differences in rates of novel, deleterious or functional variants must be taken into account in certain clinical sequencing applications. Importantly, the results emphasize the need for care in evaluating the novelty or likely functional impact of variants in clinical sequencing studies focusing on the identification of disease-inducing ‘pathogenic’ variants in an individual genome based on comparisons of that genome to a reference panel of genomes. This is the case because of the tremendous diversity of variants across human populations, the existence of an abundance of likely functional variants that are population-specific, and population differences in the absolute number and rates of homozygous variants that are likely to impact phenotype expression. Thus, for example, it might be highly problematic to evaluate the novelty of variants in the genome of an African patient in order to filter out variants not likely to cause his or her unique disease by comparing that individual's genome to a reference panel that only includes genomes from individuals with European ancestry. This problem might be particularly pronounced in large urban centers where individuals with a wide variety of ancestries may require medical care.
Standard probabilistic inheritance-based filters have been described in the literature and are essentially extensions of classical linkage and pedigree-based haplotype-sharing analysis methods. Following the application of inheritance-based filters, if any are used, candidate disease-causing variants are further filtered based upon population data. Ideally, one would compare variants within an affected individual's genome to a reference panel of healthy genomes, such as a disease-free sample of individuals like the Wellderly population, and remove from further consideration variants observed within the healthy reference panel with an appreciable frequency. This filter is most useful for narrowing the number of potential compound heterozygotes and/or in the instances where parental genomes are unavailable. The ranking methodology described herein is applicable to candidate variants derived from any combination of genetic inheritance models and/or population-based filters.
Since the test cases explored did not include information on relatives and given the fact that a healthy genome reference panel does not currently exist, we restricted the application of methods for ranking and prioritizing candidate pathogenic variants to scenarios involving sets of novel variants within single affected individuals' genomes. As such, the first filtration step was to simply eliminate from consideration variants in an individual's genome that have been previously observed and catalogued in dbSNP135 and 1000 genomes databases, as well as in any other individual's genome dataset.
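The first filtration step, eliminating variants previously observed in any catalog, is a set-difference operation. The sketch below assumes variant identifiers and a list of known-variant sets (e.g., dbSNP135, 1000 Genomes, other individuals' genome datasets); it illustrates the step rather than reproducing the actual implementation.

```python
def novelty_filter(patient_variants, known_sets):
    """Keep only variants never observed before.

    patient_variants: iterable of variant identifiers in the affected
    individual's genome.
    known_sets: list of sets of variant identifiers from catalogs such as
    dbSNP135 and 1000 Genomes, and from other individuals' genome datasets.
    """
    known = set().union(*known_sets)  # union of all previously observed variants
    return [v for v in patient_variants if v not in known]
```

Only the variants surviving this filter proceed to functional filtration and PIN ranking.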
As an alternative to eliminating variants that are not novel or unique to a diseased individual's genome, it is possible to define reasonable allele frequency cutoff values consistent with related disease incidence and penetrance information. However, for most idiopathic diseases attributable to genetic variants, the responsible variants will be ultra-rare (i.e., likely <<1% allele frequency, de novo, or simply never observed before and hence novel). Ultimately, the number of candidate disease-causing mutations (described below) to be ranked with this analysis approach and the PIN rank algorithm is an order of magnitude larger than the number of likely candidate disease-causing mutations to be considered if parental information and/or a complete healthy genome reference panel were available, making the application and assessment of the proposed methodology on such a set of candidate mutations a true challenge.
The remaining candidate variants (i.e., those that are deemed novel or rare enough to be considered) were then filtered based upon functional annotations and the application of functional prediction algorithms, such as SIFT and Polyphen2. Since there is some danger that the disease-causative variant is not identified as a functional variant by available predictive algorithms, given that such algorithms do not have perfect accuracy, filtration based upon functional annotations was performed after candidate gene ranking. Nevertheless, the reported sensitivity and specificity of functional prediction algorithms suggests that this filter will improve accuracy in the majority of cases. The number of candidate-disease causing mutations passing this filtration step is on the order of what is expected to pass inheritance-based filters without functional annotation-based filtration, thus, this analysis also serves as a benchmark for the accuracy of our approach when family genomes are available.
In summary, the candidate disease-causing variants considered further by this approach must be novel and are deemed to impact protein function if they meet the following functional criteria: (1) they are nonsense SNVs (16.19% of known disease-causing variants in our test set); (2) they are frameshift indels (13.39%); or (3) they are nonsynonymous coding variants with no further functional annotation based filtration (70.42%). When functional annotation filters are applied, nonsynonymous coding variants must be within conserved elements as assessed by PhastCons, labeled as ‘Probably Damaging’ by PolyPhen-2, and labeled as ‘INTOLERANT’ by SIFT (21.34%). These candidate variants are then subjected to the PIN rank algorithm for further prioritization and ranking. These two extremes in variant annotation based filtration serve to represent the lower and upper bounds of the accuracy of this approach.
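The lenient and strict criteria above can be written as one predicate with a `strict` flag. The dict field names are illustrative assumptions, not from the source; the branching mirrors the stated criteria.

```python
def passes_filter(variant, strict=False):
    """Apply the candidate-variant criteria from the text.

    Nonsense SNVs and frameshift indels always pass. Nonsynonymous coding
    variants pass unconditionally in the lenient setting, or only when
    within a PhastCons conserved element, labeled 'Probably Damaging' by
    PolyPhen-2, and 'INTOLERANT' by SIFT in the strict setting.

    variant: dict with keys 'type', 'conserved', 'polyphen2', 'sift'
    (field names chosen here for illustration).
    """
    if variant["type"] in ("nonsense", "frameshift"):
        return True
    if variant["type"] == "nonsynonymous":
        if not strict:
            return True
        return (variant.get("conserved", False)
                and variant.get("polyphen2") == "Probably Damaging"
                and variant.get("sift") == "INTOLERANT")
    return False
```

Running the same candidate list through both settings yields the two filtration extremes that bound the approach's accuracy.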
To be certain that the results of the PIN rank algorithm accurately reflected its ability to detect novel gene-disease associations, steps were taken to ensure that the influence of knowledge generated after the discovery of the test disease genes was removed from the simulation.
To do this, a genetic network and a set of test gene-disease associations were curated in a manner that ensures no gene-gene interactions within the genetic networks were derived from publications, or other functional studies, that may have been pursued as a direct or indirect result of the discovery of the test gene-disease associations.
This was accomplished by compiling a list of recent (post 2011) gene-disease associations via the Human Gene Mutation Database (HGMD) and filtering that list for disease-causing genes not associated with any disease prior to 2011, but where the associated disease has been associated with at least one additional gene prior to 2011. The genetic network was derived from the String Database (StringDB), version 8.2(21), last compiled on May 26, 2010. This network selection and test gene filtration ensured that the list of test genes did not contain any genes that may have been associated with similar diseases prior to 2011 (which could have resulted in experimental investigations into gene-gene relationships favoring the performance of the ranking methodology), facilitated automated and unbiased selection of seed genes for the PIN Rank algorithm, and ensured that the gene rankings reflect rankings that would have been achieved prior to the discovery of the test gene-disease associations.
The total number of test gene-disease associations surviving this filtration was 132, broken down as 112 genes distributed across 109 diseases. The final list of test gene-disease associations and the seed genes used in our ranking approach are available in
Sixty-nine publicly available genomes were downloaded from Complete Genomics. The variants within these genomes were filtered as described in Example 10 and annotated according to the following method:
All variants were mapped to the closest gene from the UCSC Genome Browser known gene database. Variants were associated with transcripts of the nearest gene(s), with impact predictions made independently for each transcript. If a variant fell within a known gene, its position within gene elements (e.g., exons, introns, untranslated regions) was recorded for impact predictions specific to the impacted gene element. Furthermore, variants falling within an exon were analyzed for their impact on the amino acid sequence (e.g., nonsynonymous, nonsense, frameshift).
All variants were also associated with conservation information in two ways. First, variants were associated with conserved elements from the phastCons conserved elements (28way, 44way, 28wayPlacental, 44wayPlacental, and 44wayPrimates). These conserved elements represent potential functional elements preserved across species. Conservation was also assessed at the specific nucleotide positions impacted by the variant using the phyloP method. The same conservation levels as phastCons were used in order to gain higher resolution into the potential functional importance of the specific nucleotide impacted by the variant.
While the interpretation of frameshift and nonsense mutations is fairly straightforward, the functional impact of nonsynonymous changes is highly variable. A number of methods for predicting their functional impact are available; the PolyPhen-2 and SIFT algorithms, which perform favorably in comparison to other available algorithms, were selected for prioritization of nonsynonymous single nucleotide substitutions. PolyPhen-2 utilizes a combination of sequence-based and structural predictors to estimate the probability that an amino acid substitution is damaging, classifying variants as benign, possibly damaging, or probably damaging. SIFT estimates the probability of a damaging missense mutation by leveraging information such as conservation, hydrophobicity, and amino acid position, classifying variants as 'tolerant' or 'intolerant'.
These genomes consist of 26 individuals with European ancestry, 20 individuals with African ancestry, 8 individuals with Asian ancestry, 4 African-Americans, 4 Native Americans, 4 Mexicans, and 3 Puerto Ricans. Each test gene-disease association was then implanted into each of the 69 genomes, resulting in a total of 6,141 simulated diseased genomes (69 genomes×89 test gene-disease associations=6,141 simulated diseased genomes). Post population-based and annotation-based filtration, an average of 240 and 25 variants passed the filtration scheme per genome, respectively, with a range of 36-648 and 8-46 variants per genome (
For each implanted known disease causing mutation (implanted KDCM), post-filtration genomic variants plus the implanted KDCM were ranked using the PIN Rank algorithm (described in Example 16) (
Without functional filters, the known disease-causing variant was in the top 10% of candidate variants in 54% of cases (out of ˜240 candidates), was the #1 ranked variant in 15.80% of the cases, and was present in the top three ranked variants 24.18% of the time (
With functional filters, the known disease-causing variant was in the top 10% of candidate variants in 47% of cases (out of ˜25 candidates), was the #1 ranked variant in 37.16% of the cases, and was present in the top three ranked variants 57.99% of the time (
Clearly, the proposed ranking methodology produced a much greater proportion of successes than expected by random ranking (p-value <1e-06). It performs extremely well when the number of candidate variants is reduced by family-based or functional filtration, and performs well even in the most extreme case, where very little information is available to narrow down the list of candidate variants other than the proposed ranking approaches.
In order to confirm that the success of the ranking methodology stemmed from the methodology itself, and the use of appropriate known disease causing gene seeds, rather than some other general characteristic of the accurately-identified disease causing genes, the test gene-disease associations where the median rank across 69 genomes was at worst rank 3 were selected, and the set of seed genes used to rank each test gene-disease association were randomly swapped. Each high ranking test gene-disease association was then ranked with the seeds from other high ranking test gene-disease associations and the median achieved rank is presented in
In the scenario with no functional filters, only 2.11% of the test gene-disease association re-rankings achieved rank #1, and 3.23% of the test gene-disease association re-rankings scored within the top 3. Similarly, with functional filters only 5.51% of the test gene-disease association re-rankings achieved rank #1, and 11.51% of the test gene-disease association re-rankings scored within the top 3. This performance is consistent with the performance observed by choosing random ranks, confirming that the methodology and selection of an appropriate seed gene set, rather than some general characteristic of disease causing genes, drives the performance of our ranking algorithm.
The influence of network characteristics on the performance of the ranking methodology was investigated. To accomplish this, a determination was made as to whether disease gene connectivity, measured as degree and betweenness centrality, was correlated with ranking performance. Disease gene degree centrality was weakly but significantly correlated with ranking performance across 69 genomes (p-value=0.01, r2=0.05 without functional filters, p-value=0.06, r2=0.04 with functional filters) (
Moreover, the path lengths from genes that passed the filtration criteria to the nearest seed gene were compared to the path lengths from successfully and unsuccessfully identified disease genes to the nearest seed gene. A disease gene was considered correctly classified when it was the top ranked gene or when it was ranked within the top 3. As shown in
Individuals from different ethnicities carry population-specific variants as well as differing numbers of predicted deleterious mutations. Thus, affected individuals from ethnic backgrounds divergent from the ethnic profile of the reference panel genomes, and/or variants used to train functional prediction methods, will likely affect which genes pass our population-based and functional filters for ranking.
The influence of ethnicity on disease gene ranking was evaluated by stratifying the data set into European (28), African (18), and Asian (8) populations. Overall, the ranking methodology performed best for European subjects (p-value=4.12e-13 without annotation filters, p-value=0.002 with annotation filters; two-proportion z-test), with disease-causing variants achieving the top rank in 19.03%, 13.51%, and 14.40% of our test cases without annotation filters and 39.95%, 36.07%, and 39.36% of the test cases with annotation filters in European, African, and Asian populations, respectively (
Similarly, the disease-causing variant ranked within the top 3 variants in 29.59%, 20.95%, and 20.86% of the cases without functional filters and 61.42%, 56.05%, and 57.83% of the cases with functional filters, respectively. This major bias when no functional filters are used (and the number of candidates is large) demonstrates the substantial gains in predictive power when appropriate filtration schemes are applied (population-based filters are heavily biased toward European populations) and highlights the need for appropriate reference panels when conducting clinical sequencing studies in non-European populations.
The small bias between ethnicities when functional filters are utilized (and the number of candidates is moderate) also suggests that the ranking methodology is robust when additional filtration schemes, such as family information, are available.
Finally, given that in some instances the implanted KDCMs were not highly ranked, reflecting the likely discovery of a disease gene with unknown function or a novel mechanism of disease action, a measure of confidence in the ranking results would be useful for distinguishing true positive from false positive instances of the ranking algorithm.
This was accomplished by determining the specificity, sensitivity, and accuracy of the approach at different score cut-offs. Overall, a very high level of accuracy was achieved at a high score cutoff of 249.5 without functional filters (81.24% accuracy, 5.2% sensitivity, 95.52% specificity), and a more balanced performance was achieved with functional filters at a score cutoff of 5.3 (75.12% accuracy, 58.4% sensitivity, 84.9% specificity) (
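The cutoff calibration described above amounts to computing a confusion matrix at each candidate score threshold. The following sketch shows one way this could be done; the function name and the toy data are illustrative assumptions, not the actual implementation.

```python
def confusion_at_cutoff(scores, labels, cutoff):
    """Scores above the cutoff are called positive; labels are True for
    genuine disease-causing variants. Returns (sensitivity, specificity,
    accuracy) at the given cutoff."""
    tp = sum(1 for s, l in zip(scores, labels) if s > cutoff and l)
    fp = sum(1 for s, l in zip(scores, labels) if s > cutoff and not l)
    fn = sum(1 for s, l in zip(scores, labels) if s <= cutoff and l)
    tn = sum(1 for s, l in zip(scores, labels) if s <= cutoff and not l)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / len(scores)
    return sensitivity, specificity, accuracy
```

Sweeping the cutoff over the observed score range and inspecting these three quantities is how thresholds such as 249.5 and 5.3 would be selected.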
These studies confirm that the combination of genetic analysis filters, population variant frequency filters, variant functional prediction and annotation filters, and gene-disease prioritization methods hold tremendous promise for accelerating the discovery of novel disease gene associations in the context of rare idiopathic conditions.
The PIN Rank method, described herein, is similar to random walk-based methods but provides a number of advantages over previously described methods: (1) the PageRank core does not require a set path length but the diffusion length can be modified by varying its parameters; (2) computation over large graphs is straightforward and efficient; (3) it easily incorporates weighted edges so as to allow for confidence measures derived from different resources; (4) the fold change in PageRank vs. Personalized PageRank controls for the bias towards hub genes while appropriately enhancing the significance of hub genes local to seed genes; and (5) the teleportation matrix allows for the integration of numerous weighted selection strategies for the choice of seed genes.
The success of this approach clearly depends upon the selection of an appropriate seed gene list. In the example described herein, this selection was straightforward as it was based upon named diseases. However, the process can be streamlined to allow for extraction of seed genes, and weighting of those genes, for idiopathic diseases in a manner that is invisible to the user yet operates within their native vocabulary.
Previous work, in different contexts, has described systems that can convert phenotypic information, such as ICD9 codes, medical subject headings, phenotypic networks, or other phenotypic descriptions to seed gene lists. The integration of a seed gene list generation tool into the framework described herein would allow clinical implementation of this approach without the need for tremendous training or re-education.
Sixty-nine publicly available genomes were downloaded from the Complete Genomics website (http://www.completegenomics.com/sequence-data/download-data). The assembly of the genomes as well as the variant calling for these genomes has been described in the literature. Variants were mapped to the closest gene(s) from the UCSC Genome Browser known gene database, and nonsynonymous, nonsense, and frameshift variants were extracted.
Novel variants in each genome were extracted according to their absence in dbSNP (v135), downloaded from the UCSC genome browser (www.genome.ucsc.edu), 1000 genomes (v2010-08-04), taken from the NCBI website (ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/release/20100804/), and presence in other test genomes. The final variant sets included novel nonsense and frameshift variants, as well as novel nonsynonymous variants in the set without functional filters, or novel nonsynonymous variants within conserved elements as denoted by PhastCons, 'Probably Damaging' by PolyPhen-2, and 'INTOLERANT' by SIFT.
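The novelty filter can be sketched as simple set arithmetic over variant keys. The (chrom, pos, ref, alt) key representation, and the assumption that variants shared with the other test genomes are treated as known and excluded, are ours for illustration; the text does not specify the data structures used.

```python
def novel_variants(genome_variants, dbsnp, thousand_genomes, other_genomes):
    """Retain only variants absent from dbSNP, 1000 Genomes, and the
    other test genomes. Each variant is assumed to be a hashable key,
    e.g. a (chrom, pos, ref, alt) tuple."""
    known = dbsnp | thousand_genomes | other_genomes
    return {v for v in genome_variants if v not in known}
```

In practice the database sets would be built once and reused across all 69 genomes.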
The Human Gene Mutation Database (HGMD) (version 2011.3) was downloaded and used to build the test set of disease-causative variants and to select the disease seeds for the network-based ranking algorithm. The test set consisted of disease-causing variants mapped to genes that have not been associated with any disease prior to 2011. The set of seed genes consisted of other genes that have been associated to the test set diseases prior to 2011.
The phenotype-informed network ranking algorithm (PIN Rank) operates by ranking candidate disease causing genes based upon the fold-change in their basic PageRank vs. the phenotype-informed Personalized PageRank within a genetic network. The matrix notations for these two ranks are:
R=αAR+(1−α)T
Where A is a weighted undirected adjacency matrix containing the information regarding how different genes are linked to one another in the genetic network, T is a teleportation matrix containing the probabilities of randomly teleporting to each gene in the genetic network, α is an adjustable factor denoting how often one moves along the links within the adjacency matrix vs. teleporting to genes within the genetic network, and R is a vector of the ranks for each gene, or the equilibrium probability that one will arrive at each gene by following the links within the adjacency matrix or teleporting.
The final values within R are arrived upon by initiating R with equal probabilities for genes and solving by the power method—or iterating the above calculation until R stabilizes. For the basic PageRank, α is set at 0.99 and T is set at equal probabilities for every gene within the network, effectively removing any effect of teleportation upon the ranks of genes within the network while allowing R to stabilize in the face of dangling nodes or other factors known to disrupt R stabilization via the power method.
For the phenotype-informed Personalized PageRank, α is set at 0.95 and T is set so that teleportation results in equal probabilities of landing at a seed gene and zero probability of teleporting to any other genes within the genetic network. This effectively increases the rank of seed genes and genes within the network neighborhood of seed genes.
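A minimal sketch of the two computations, assuming a column-normalized adjacency matrix and the parameter settings given above (α=0.99 with uniform teleportation for the basic PageRank, α=0.95 with seed-only teleportation for the Personalized PageRank). The fold change between the two ranks is what the PIN Rank method uses for prioritization; function names are ours.

```python
import numpy as np

def page_rank(A, teleport, alpha, tol=1e-10, max_iter=1000):
    """Solve R = alpha*A*R + (1-alpha)*T by the power method, starting
    from equal probabilities and iterating until R stabilizes."""
    n = A.shape[0]
    R = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        R_new = alpha * (A @ R) + (1 - alpha) * teleport
        if np.abs(R_new - R).sum() < tol:
            return R_new
        R = R_new
    return R

def pin_rank_fold_change(A, seed_genes):
    """Fold change of phenotype-informed Personalized PageRank over the
    basic PageRank, per gene."""
    n = A.shape[0]
    uniform = np.full(n, 1.0 / n)              # basic PageRank teleportation
    seeds = np.zeros(n)
    seeds[seed_genes] = 1.0 / len(seed_genes)  # teleport only to seed genes
    basic = page_rank(A, uniform, alpha=0.99)
    personalized = page_rank(A, seeds, alpha=0.95)
    return personalized / basic
```

Genes with high fold change are those whose rank is boosted by proximity to the seed genes rather than by global hubness alone.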
The values within the adjacency matrix A are derived from the probability that each gene is connected to another via StringDB, which integrates genomic context, known protein-protein interactions, co-expression, and literature mining to derive these probabilities. Link probabilities for each gene are scaled to the third power to down-weight low-probability links and then normalized.
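The cubing and normalization step might look like the following. The choice of column-wise normalization is an assumption on our part, as the text does not specify the normalization axis.

```python
import numpy as np

def build_adjacency(prob_matrix):
    """Cube StringDB link probabilities to down-weight low-probability
    links (scale factor of 3), then normalize each column so the matrix
    is stochastic."""
    A = np.asarray(prob_matrix, dtype=float) ** 3
    col_sums = A.sum(axis=0)
    col_sums[col_sums == 0] = 1.0   # leave isolated genes' columns at zero
    return A / col_sums
```

Cubing sharpens the contrast between links: a 0.9 link ends up 27 times heavier than a 0.3 link, versus 3 times before scaling.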
For the phenotype-informed ranking algorithm (PIN Rank), α must be set to an appropriate value in order to balance the probability of moving along links within the adjacency matrix against the probability of teleporting to genes within the network, as shown in:
R=αAR+(1−α)T
Where A, T, α (the "alpha value"), and R are defined as above.
Additionally, protein-protein interaction probabilities must be raised to an appropriate power (the "scale factor") to distinguish stronger links from weaker ones. To determine the optimal values for these variables, we ran a preliminary analysis of the PIN Rank algorithm using combinations of scale factors between 1-5 and alpha values between 0.01-0.99, and compared the proportion of disease genes captured within the top 3 ranks with functional filters (
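The parameter sweep described above is an ordinary grid search over the two tuning parameters. A generic sketch, with a hypothetical evaluate callback standing in for the proportion of disease genes captured in the top 3 ranks:

```python
def grid_search(evaluate, scale_factors=(1, 2, 3, 4, 5),
                alphas=(0.01, 0.25, 0.5, 0.75, 0.95, 0.99)):
    """Try every (scale_factor, alpha) pair and return the best-scoring
    combination as (score, scale_factor, alpha). The candidate grids
    shown are illustrative values within the ranges stated in the text."""
    best = None
    for s in scale_factors:
        for a in alphas:
            score = evaluate(s, a)
            if best is None or score > best[0]:
                best = (score, s, a)
    return best
```

In the described analysis, evaluate would rerun the PIN Rank pipeline over the simulated genomes for each parameter pair, so the sweep is embarrassingly parallel.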
This ranking methodology effectively integrates seed gene information from multiple sources and places some emphasis on nearby hub genes vs. less well connected genes as the ranks depend not only on the distance between nodes but also the importance of each node and its interacting partners within the network at large.
While certain embodiments have been described above, it will be understood that the embodiments described are by way of example only. Accordingly, the systems and methods described herein should not be limited based on the described embodiments. Rather, the systems and methods described herein should only be limited in light of the claims that follow when taken in conjunction with the above description and accompanying drawings.
All references cited herein, including patents, patent applications, and publications, are hereby incorporated by reference in their entireties, whether previously specifically incorporated or not.
While the disclosure has been described in connection with specific embodiments thereof, it will be understood that it is capable of further modifications. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the disclosed principles and including such departures from the disclosure as come within known or customary practice within the art to which the disclosure pertains and as may be applied to the essential features hereinbefore set forth.
The term “comprising,” which is used interchangeably with “including,” “containing,” or “characterized by,” is inclusive or open-ended language and does not exclude additional, unrecited elements or method steps. The phrase “consisting of” excludes any element, step, or ingredient not specified in the claim. The phrase “consisting essentially of” limits the scope of a claim to the specified materials or steps and those that do not materially affect the basic and novel characteristics of the claimed invention. The present disclosure contemplates embodiments of the invention compositions and methods corresponding to the scope of each of these phrases.
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Mar 07 2014 | The Scripps Research Institute | (assignment on the face of the patent) | / | |||
Jan 04 2016 | SCHORK, NICHOLAS | SCRIPPS RESEARCH INSTITUTE, THE | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 037578 | /0262 | |
Jan 14 2016 | TORKAMANI, ALI | SCRIPPS RESEARCH INSTITUTE, THE | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 037578 | /0262 |