SpanSeq: Similarity-based sequence data splitting method for improved development and assessment of deep learning projects (2024)

Alfred Ferrer Florensa
Genomic Epidemiology
Technical University of Denmark
Kongens Lyngby, Denmark
alff@dtu.dk

Jose Juan Almagro Armenteros
Informatics and Predictive Sciences Research
Bristol Myers Squibb Company
Sevilla, Spain

Henrik Nielsen
Bioinformatics
Technical University of Denmark
Kongens Lyngby, Denmark

Frank Møller Aarestrup
Genomic Epidemiology
Technical University of Denmark
Kongens Lyngby, Denmark

Philip Thomas Lanken Conradsen Clausen
Genomic Epidemiology
Technical University of Denmark
Kongens Lyngby, Denmark
plan@food.dtu.dk

Abstract

The use of deep learning models in computational biology has increased massively in recent years, and is expected to grow further with the current advances in fields like Natural Language Processing. These models, although able to draw complex relations between input and target, are also prone to learning noisy deviations from the pool of data used during their development. In order to assess their performance on unseen data (their capacity to generalize), it is common to split the available data randomly into development (train/validation) and test sets. This procedure, although standard, has lately been shown to produce dubious assessments of generalization due to the existing similarity between samples in the databases used. In this work, we present SpanSeq, a database partition method for machine learning that can scale to most biological sequences (genes, proteins and genomes) in order to avoid data leakage between sets. We also explore the effect of not restraining similarity between sets by reproducing the development of the state-of-the-art model DeepLoc, not only confirming the consequences of randomly splitting databases on model assessment, but extending those repercussions to model development. SpanSeq is available for download and installation at https://github.com/genomicepidemiology/SpanSeq.

Keywords Deep learning, Generalization, DNA, Proteins

1 Introduction

The value of deep learning models resides in their ability to predict accurately on previously unseen data. To achieve this generalization capacity, they rely on their considerable plasticity, which allows them to learn underlying patterns in the available data, regardless of their complexity. However, this same plasticity also allows them to memorize individual samples, perfectly fitting input to output on the training data while decreasing the training loss. The memorization capacity of neural networks should not be underestimated, as they have been shown to be able to fit random data perfectly Zhang et al. (2021). On real datasets, it has been observed that this issue can be caused by noisy data Arpit et al. (2017) and out-of-distribution samples Carlini et al. (2019). Other factors include deeper architectures in neural networks Tirumala et al. (2022); Zhang et al. (2019); Arpit et al. (2017); Carlini et al. (2019), and repeated exposure to the same samples Carlini et al. (2019), which can come from the epoch iteration during the training process or from duplicated samples Carlini et al. (2022).
Although some of these memorization factors are shared with overfitting, the two phenomena are different and carry different consequences. While memorization concerns particular samples (mapping a specific input to its output), overfitting describes the fitting of the particularities of the training set, treating them as underlying patterns of the true data distribution. Overfitting is reflected in the performance difference between the training and test sets, which is notably lower on the latter Tetko et al. (1995). When that happens, the value of the model is dubious, as its generalization capacity is not optimal. Other differences have been found between memorization and overfitting, such as the former happening earlier during the training process Tirumala et al. (2022), and methods like regularization, used to control overfitting, showing very limited effect on memorization Carlini et al. (2019). Nevertheless, the effect of memorization on the efficiency of a deep learning model is ambiguous Chatterjee (2018). Being able to memorize out-of-distribution samples can be beneficial, particularly with certain data distributions Feldman and Zhang (2020); Feldman (2020). But it can also provoke complications, as with privacy issues in language models Carlini et al. (2022); Lee et al. (2021), or by confusing generalization with memorization, thereby overestimating the efficiency of a model Elangovan et al. (2021).

A long-established procedure when developing a deep learning model is to split the available data into different sets, which are employed for different purposes: frequently, for fitting the model (train set), tuning its hyperparameters (training and development or validation set) and evaluating its performance (test set) Hastie et al. (2009). Thus, the train and validation sets are used during the development of the model, the latter to evaluate the model in pursuance of choosing a set of hyperparameters that provides a better model. Meanwhile, the test set is used to evaluate the performance of the model Westerhuis et al. (2008), to estimate how it will perform once deployed. Multiple variants of this rigid split scheme have been proposed, handling issues related to single splits, unequal distribution of targets and representation of the true distribution of the data Kohavi (1995); Daszykowski et al. (2002); Harrington (2018); Cawley and Talbot (2010); Xu and Goodacre (2018). However, using a hold out set for testing the performance of the model has remained the standard evaluation of its efficiency Westerhuis et al. (2008). The split between data for training/validation and testing should ensure a truthful evaluation of the generalization capacity of the model, without being affected by its memorization capacity Hastie et al. (2009); Westerhuis et al. (2008). The most common strategy for distributing the available data into different splits is to do it randomly (with respect to the input of the model), in an attempt to avoid any influence from the developer's previous knowledge of the dataset. This strategy is based on the assumption that the data are independent and identically distributed (i.i.d.) Vapnik (1995), which is not always the case, especially with certain types of data. Duplicates and similarity among samples populate existing databases, and they spread among the train/validation/test sets if these are created by random splitting. When that happens, similar points appear in both training and test sets, inducing data leakage. This has been found when developing and testing deep learning models on images Tampu et al. (2022), text Elangovan et al. (2021); Søgaard et al. (2020) and code Allamanis (2019), meaning that the prediction on the test set can be solved by memorization, leading to an overestimation of the capacity to generalize. However, data leakage through similarity is not restricted to those types of data, as the features that make them prone to data leakage can be found elsewhere, including in biological sequential data, like genes, proteins or genomes, due to the evolutionary relationships that exist among them and the redundancy in their databases Hobohm et al. (1992).

In fact, it is well known that closeness between biological sequences tends to lead to similarity of their phenotypes Lund et al. (1997). This has been the principle behind alignment-based prediction methods Pearson (2013), but it has also been a source of concern, as it introduces data leakage when creating machine learning models. Several strategies have been used in recent years to address the splitting of datasets of biological sequences, mainly built around the concepts of data reduction and data partitioning. The first clusters the data around representative sample points of the data itself, which are separated by at least a certain threshold of similarity. It has several benefits, such as solving the issue of similarity inside sets Allamanis (2019). However, it also brings consequences: choosing a representative data point over the rest can bias the dataset, the amount of data left might be insufficient for proper learning, and the possible beneficial effects similar samples can bring to training (as adversarial examples Goodfellow et al. (2014)) are lost. The second, data partitioning, groups similar sequences into clusters, which are then distributed among the partitions, preventing similar samples from ending up in different partitions. These two strategies are not exclusive, as the first can be used for removing duplicates, and the second for removing any data leakage from less similar sequences. For the research in this paper, we have chosen to evaluate the latter, as it is a feasible step no matter the quantity of data available.

Grouping sequences by similarity is a step common to both reduction and partition strategies. For measuring similarity between biological sequences, traits from pairwise alignment methods Needleman and Wunsch (1970); Gotoh (1982) have been widely used, as alignment is an established method in the field; percent identity in particular, due to the ease of its interpretation. However, these methods have a complexity of $\mathcal{O}(l_1 l_2)$, where $l_1, l_2$ are the lengths of the two sequences aligned. With this complexity, calculating all the distances of a large dataset (as required for deep learning) with pairwise alignment can easily become infeasible, especially as the length of the sequences grows. In order to address large datasets, most clustering strategies for biological sequences attempt to reduce the number of alignments, like CD-HIT Fu et al. (2012), uCLUST Prasad et al. (2015), HHblits Remmert et al. (2012), and MMseqs Hauser et al. (2016), in line with the data reduction strategy. While some of those methods might be adequate for data reduction, they are not suitable for data partitioning, because they apply the similarity threshold only to the representatives of each cluster. A few methods have recently addressed this issue, such as attempting to create appropriate train and test sets of biological sequences Petti and Eddy (2022), splitting molecular databases to reduce data leakage (DataSAIL Joeres et al. (2023)), or partitioning databases using graph-based approaches (GraphPart Teufel et al. (2023)). Although all of them split the data necessary for deep learning development in a reasonable time, as alignment-based approaches their usefulness is limited by sequence length.

In this work we present SpanSeq, an alignment-free approach to data partitioning for deep learning applications. With it, we are able to cluster and partition datasets on the order of $10^5$ biological sequences, such as genes, proteins or even genomes, while using an all-vs-all clustering calculation. Moreover, we apply this new method to the state-of-the-art protein subcellular location model DeepLoc Almagro Armenteros et al. (2017); Thumuluri et al. (2022), demonstrating the importance of similarity-based data splitting not only during model assessment, but throughout the whole model development process.

2 Materials and Methods

2.1 SpanSeq method

The SpanSeq method is based on three steps:

  1. Similarity calculation among all the sequences of the dataset, performed with Mash Ondov et al. (2016) (amino acids and nucleotides) or KMA Clausen et al. (2018) (nucleotides).

  2. Clustering of similar sequences (DBSCAN).

  3. Partition creation by distributing the clusters into k partitions (makespan), minimizing the difference in the number of samples between partitions.

The software that forms SpanSeq is implemented in C++ and organized using the workflow management system Snakemake Mölder et al. (2021). The pipeline can be installed through the public GitHub repository (https://github.com/genomicepidemiology/SpanSeq).
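The three steps can be summarized in a minimal, self-contained Python sketch. This is a conceptual illustration only, not SpanSeq's actual C++ implementation; the `distance` callable, the `eps` threshold and the simple load-balancing heuristic stand in for the k-mer distances, DBSCAN clustering and tabu search described in the following subsections.

```python
from itertools import combinations

def spanseq_like_split(seqs, distance, eps=0.3, n_partitions=5):
    """Conceptual sketch of SpanSeq: cluster sequences closer than eps,
    then distribute whole clusters over balanced partitions."""
    n = len(seqs)
    # Steps 1 + 2: all-vs-all distances, single-linkage clustering via union-find.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        if distance(seqs[i], seqs[j]) < eps:
            parent[find(i)] = find(j)      # merge the two clusters
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    # Step 3: makespan-style balancing, here with the simple heuristic of
    # assigning the largest remaining cluster to the smallest partition.
    partitions = [[] for _ in range(n_partitions)]
    for members in sorted(clusters.values(), key=len, reverse=True):
        min(partitions, key=len).extend(members)
    return partitions
```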

2.1.1 Similarity Calculation

As an alternative to pairwise alignment, SpanSeq takes advantage of k-mer comparisons, where sets of k-mers are compared and used as an unbiased estimate of the pairwise alignment Ondov et al. (2016). Although the comparison of k-mer sets is computationally efficient compared to pairwise alignment, it is still a computationally heavy process when the length of the sequences exceeds a few thousand bases Clausen et al. (2016). To overcome this burden, SpanSeq offers the possibility of sub-sampling k-mers using either MinHash (through Mash Ondov et al. (2016)), minimizers Li (2016) or k-mer prefixes (through KMA Clausen et al. (2018)). These techniques shrink the sizes of the individual k-mer sets, which both makes the set comparisons faster and requires less memory. Currently, the distances between amino acid sequences can only be calculated with the Mash distance Ondov et al. (2016), while distances between nucleotide sequences can be calculated with the Mash Ondov et al. (2016), Cosine, Inverse K-mer Coverage, Jaccard and Szymkiewicz-Simpson distances (Supplementary Material, Equations S1, S2, S3, S4).
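To make the k-mer idea concrete, the sketch below estimates a Mash-like distance from MinHash-style bottom sketches of two sequences. It is a simplification under stated assumptions: the hash function, k-mer size and sketch size are illustrative, and unlike Mash proper, the Jaccard index is estimated on the two individual sketches rather than on the bottom sketch of their union. The distance formula is the one published for Mash, $d = -\ln(2j/(1+j))/k$.

```python
import hashlib
from math import log

def kmer_sketch(seq, k=16, sketch_size=1000):
    """Bottom sketch: keep the sketch_size smallest k-mer hashes of seq."""
    hashes = {int.from_bytes(hashlib.blake2b(seq[i:i + k].encode(),
                                             digest_size=8).digest(), "big")
              for i in range(len(seq) - k + 1)}
    return set(sorted(hashes)[:sketch_size])

def mash_like_distance(sketch_a, sketch_b, k=16):
    """Mash distance from a Jaccard estimate j: d = -ln(2j / (1 + j)) / k."""
    j = len(sketch_a & sketch_b) / len(sketch_a | sketch_b)
    if j == 0.0:
        return 1.0                # no shared k-mers: maximal distance
    return min(1.0, -log(2 * j / (1 + j)) / k)
```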

2.1.2 Clustering

To avoid similar sequences between partitions, SpanSeq clusters similar sequences together using DBSCAN Ester et al. (1996) (implemented in the software CCPhylo Clausen (2023)). The parameters $\epsilon$ (minimum distance between clusters) and $minPoints$ (minimum number of close points to make a cluster) are used differently by SpanSeq: the latter is fixed to one, which turns DBSCAN into a single-linkage clustering method, while the former is used as the minimum distance between points located in different partitions. Due to the worst-case complexity of $\mathcal{O}(N^2)$ Gan and Tao (2015), DBSCAN is suitable for most datasets of up to a few hundred thousand data points. Nevertheless, to allow for massive data clustering, a Hobohm 1 Hobohm et al. (1992) clustering with a runtime of $\mathcal{O}(MN)$ can optionally be applied prior to the distance calculation step, where $N$ is the number of sequences and $M$ is the average sequence length. This step can either serve to collapse very close data points, or be applied as the main clustering approach, although the latter can introduce similarity between partitions.
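The Hobohm 1 pre-clustering admits a compact sketch: scan the sequences once, keep a list of representatives, and collapse each sequence into the first cluster whose representative it is close to. This illustrative Python version assumes a pairwise `distance` callable and a collapse threshold `eps`, both hypothetical names.

```python
def hobohm1(seqs, distance, eps=0.05):
    """Hobohm 1 single-pass clustering: each sequence either joins the
    cluster of the first representative within eps, or becomes a new one."""
    representatives = []   # one sequence per cluster
    clusters = []          # member indices of each cluster
    for idx, seq in enumerate(seqs):
        for c, rep in enumerate(representatives):
            if distance(seq, rep) < eps:   # close to an existing representative
                clusters[c].append(idx)    # collapse into that cluster
                break
        else:                              # no representative close enough
            representatives.append(seq)
            clusters.append([idx])
    return representatives, clusters
```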

2.1.3 Partitions creation

In SpanSeq, the partitioning of clusters into separate cross-validation folds is treated as a makespan problem Hübscher and Glover (1994). By default, it minimizes the difference in the number of samples between partitions, while the optimization criterion can be changed at runtime to weigh cluster size differently (e.g. log- or squared weights) and/or to prioritize equal numbers of clusters between partitions with a secondary minimization of the total size difference. Likewise, a minimization criterion based on class imbalance was added, to balance prediction labels between partitions by minimizing the difference between all labels in each partition rather than just the total size. To minimize the makespan of the cross-validation partitions, a tabu search algorithm Hübscher and Glover (1994) was implemented, initiated by a longest processing time solution (also known as a decreasing best first solution). Through the tabu search, pairs of clusters are exchanged between partitions, where the optimal exchange between all partitions and clusters is accepted as the new best solution, given that it is better than the previous best. This process is repeated until no more exchanges that benefit the overall optimization criterion can be made. To limit the chances of getting stuck in a local minimum, exchanges providing equally well suited solutions are accepted if they instead provide heightened flexibility by minimizing the average number of clusters. The search for an acceptable exchange requires $\mathcal{O}(M^2 (N/M)^2) = \mathcal{O}(N^2)$ in each iteration (with $M$ being the number of machines and $N$ the number of clusters). However, by keeping the clusters sorted within each partition, the optimal exchange between any two partitions can be identified in a single pass through the clusters of the two partitions, giving a worst-case runtime of $\mathcal{O}(M^2 N/M) = \mathcal{O}(MN)$.
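As a rough illustration of this scheme, the sketch below seeds partitions with the longest-processing-time rule and then applies pairwise cluster exchanges that reduce the size spread. It is a naive $\mathcal{O}(N^2)$ version of the exchange search, without the tabu list or the sorted-cluster speed-up described above.

```python
def lpt_partition(sizes, m):
    """Longest processing time first: largest cluster to the lightest partition."""
    loads, assign = [0] * m, [[] for _ in range(m)]
    for c in sorted(range(len(sizes)), key=lambda c: -sizes[c]):
        p = min(range(m), key=loads.__getitem__)
        assign[p].append(c)
        loads[p] += sizes[c]
    return assign, loads

def improve_by_exchange(assign, loads, sizes):
    """Greedily apply the best cluster swap between any two partitions
    while it reduces the spread max(loads) - min(loads)."""
    while True:
        best, spread = None, max(loads) - min(loads)
        for p in range(len(assign)):
            for q in range(p + 1, len(assign)):
                for i in assign[p]:
                    for j in assign[q]:
                        d = sizes[i] - sizes[j]   # net size moved from p to q
                        trial = loads[:]
                        trial[p] -= d
                        trial[q] += d
                        if max(trial) - min(trial) < spread:
                            spread, best = max(trial) - min(trial), (p, q, i, j, d)
        if best is None:
            return assign, loads
        p, q, i, j, d = best
        assign[p].remove(i); assign[q].remove(j)
        assign[p].append(j); assign[q].append(i)
        loads[p] -= d; loads[q] += d
```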

2.2 Performance Evaluation

Three datasets were used for assessing the performance of SpanSeq: one of proteins, one of genes and one of genomes.

The clustering by SpanSeq on proteins and genes was tested by comparing the different distance measures available against the identity reported by global pairwise alignment performed with ggsearch36, part of the package FASTA36 (version 36.3.8i) Pearson (2016). As that was not feasible with genomes, the clusters provided by SpanSeq were instead compared against the phylogeny of the samples.

The performance evaluation was done on a machine with an AMD EPYC 7352 24-Core Processor (96 CPUs) running Ubuntu 22.04.3 LTS. The CPUs run at 1500-2100 MHz, and the machine has 2.0 TB of RAM and 8.0 GB of swap.

2.3 Similarity-based partition effect on DeepLoc

To evaluate the effect of similarity-based partitioning, the development of the deep learning model DeepLoc 1.0 Almagro Armenteros et al. (2017) was reproduced (hyperparameter selection, training, model assessment). This deep neural network predicts the subcellular location of eukaryotic proteins, using a combination of convolutional, LSTM, and attention layers. Although DeepLoc 2.0 Thumuluri et al. (2022) had been released at the time of the SpanSeq research, we decided to evaluate the first version, as the newer one uses a pre-trained model, so that part of the model could not be re-developed with different similarity splits.

As DeepLoc works on amino acid sequences, a fixed similarity threshold of 0.3 was chosen, below which two sequences are not considered close enough to be prone to data leakage. This number was chosen based on previous studies of the relationship between amino acid sequence and general structural features Sander and Schneider (1991); Lund et al. (1997), and its usage in protein-based deep learning models Almagro Armenteros et al. (2017). However, the strategy for selecting a threshold that defines data leakage by similarity is outside the scope of this research.


2.3.1 Datasets

The dataset of DeepLoc 2.0 Thumuluri et al. (2022) was used for developing DeepLoc 1.0 Almagro Armenteros et al. (2017), as it is newer and larger Thumuluri et al. (2022). Proteins with multiple locations, as well as those longer than 1,000 amino acids, were dropped because DeepLoc 1.0, unlike DeepLoc 2.0, cannot classify them, leaving a dataset of 19,171 sequences. In order to analyze the effect of similarity-based data partitioning during the development of a deep learning model, this dataset was split in different ways:

  • Hold Out Test Set: The dataset was first partitioned into six subsets using SpanSeq with Mash, so that there was a maximum similarity of 0.3 between sequences in different sets. From that split, one set was selected randomly to be the hold out test set, while the other five were merged into the development set.

  • Development Set: This set was divided into five splits: one for testing and four for training and validation. Four different partitioning methods were used, each with a different approach towards similarity between splits.

    • SpanSeq (Mash) Split: The development set was split using SpanSeq, with Mash Ondov et al. (2016) for calculating the distance among sequences. Thus, no pair of sequences in different partitions is more than 0.3 similar (Mash distance).

    • SpanSeq (Pairwise Alignment) Split: The development set was split using SpanSeq, with global pairwise alignment (with ggsearch36 Pearson (2016)) for calculating the distance among sequences. Thus, no pair of sequences in different partitions shares more than 30% identity.

    • Random Split: The development set was split randomly, with no control over where similar sequences end up.

    • Increased Similarity Split: The development set was split with similar sequences deliberately spread over different partitions, so that the similarity between partitions is increased. This was done by inverting the distance measure between sequences, so that closely related sequences were treated as distantly related ones and vice versa, thus spreading them into different partitions (see the sketch after this list).
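The inversion used for the increased similarity split can be expressed in a few lines; the following hedged sketch assumes an all-vs-all distance matrix as a NumPy array (the function and variable names are illustrative).

```python
import numpy as np

def invert_distances(dist_matrix):
    """Treat close sequences as distant and vice versa, so that a
    similarity-aware partitioner spreads similar sequences across partitions."""
    d = np.asarray(dist_matrix, dtype=float)
    inv = d.max() - d           # closest pairs now look the most distant
    np.fill_diagonal(inv, 0.0)  # keep self-distances at zero
    return inv
```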

2.3.2 Development of DeepLoc with different datasets

The generalization of the MCC (Matthews Correlation Coefficient) Matthews (1975) to $K$ categories, known as the Gorodkin measure Gorodkin (2004), was used as the performance measure.
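For reference, the Gorodkin measure can be computed directly from a $K \times K$ confusion matrix; the sketch below follows the formula in Gorodkin (2004) (it coincides with the multiclass Matthews correlation as implemented in, e.g., scikit-learn).

```python
import numpy as np

def gorodkin(confusion):
    """K-category correlation coefficient from a KxK confusion matrix C,
    where C[i, j] counts samples of true class i predicted as class j."""
    C = np.asarray(confusion, dtype=float)
    s = C.sum()            # total number of samples
    c = np.trace(C)        # correctly classified samples
    t = C.sum(axis=1)      # true class counts
    p = C.sum(axis=0)      # predicted class counts
    denom = np.sqrt((s**2 - (p**2).sum()) * (s**2 - (t**2).sum()))
    return (c * s - (t * p).sum()) / denom if denom else 0.0
```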

With each of the four different dataset splits (SpanSeq-Mash, SpanSeq-Alignment, Random, Increased Similarity), the same process was followed for developing DeepLoc. The hyperparameter selection was performed on the four training/validation sets with Bayesian hyperparameter optimization using SigOpt Hayes et al. (2019), with the same settings for all four dataset splits (number of combinations tried, hyperparameter ranges explored).

Each of the four dataset splits, with its selected combination of hyperparameters, was used for training a DeepLoc model. In order to avoid overfitting, each model was trained four times, where each time three partitions were used for training while one served as a validation set. The training lasted 800 epochs, and the model iteration with the best performance on the validation set was selected, in order to assess the effect of similarity on classic methods like early stopping.
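A hedged sketch of this cross-validation loop follows (framework-agnostic; `build_model`, `train_one_epoch` and `evaluate` are placeholder callables, not DeepLoc's actual code):

```python
def cross_validate(partitions, build_model, train_one_epoch, evaluate, epochs=800):
    """Train once per fold: three partitions for training, one for validation,
    keeping the epoch with the best validation score (early-stopping style)."""
    results = []
    for v, val_set in enumerate(partitions):
        train_set = [s for p, part in enumerate(partitions) if p != v for s in part]
        model = build_model()
        best_score, best_epoch = float("-inf"), None
        for epoch in range(epochs):
            train_one_epoch(model, train_set)
            score = evaluate(model, val_set)   # e.g. the Gorodkin measure
            if score > best_score:
                best_score, best_epoch = score, epoch
        results.append((best_score, best_epoch))
    return results
```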

The hyperparameter optimization and training were performed on one Node of 2 Tesla V100 16 GB GPUs.


3 Results

3.1 SpanSeq features

3.1.1 Distance measure

The correlation between the k-mer based distance measures used by SpanSeq and the identity from global alignment can be visualized on the protein (Figure 1(a)) and gene (Figures 1(b)-1(f)) datasets. While the Mash, Cosine, K-mer Inverse Coverage and Szymkiewicz-Simpson distances seem to follow identity, the Jaccard distance gave values higher than the identity predicted on the same pair of sequences. The relation between the k-mer distances and sequence lengths was explored further (Supplementary Data, Figure S3), as well as the effects of k-mer and minimizer size, although an in-depth study of them is outside the scope of this research.

3.1.2 Taxonomic relations in clustering

The clustering results of SpanSeq on the RefSeq database O'Leary et al. (2016) ($10^4$ complete genomes) show differences between the distance measures in how they reflect the taxonomic relations among the genomes clustered (Figure 2). The Cosine, Szymkiewicz-Simpson and K-mer Inverse Coverage distances show similar distributions for the five NCBI taxonomic ranks (species, genus, family, order and class): for the last four, the distance is almost always close to 1, while at the species rank the majority of the distances lie between 0.2 and 0.4. With the Jaccard distance, we observe that the distribution of distances for species is higher, a behavior similar to that seen in Figure 1(f). Due to the distribution of distances at the other taxonomic levels, it is difficult to compare the behavior of Jaccard there with the other distances from KMA Clausen et al. (2018). With the Mash distance, the distributions at the taxonomic ranks differ considerably from the other distances: most of the distances at the species rank are below 0.05, while at the genus, family and order ranks most of the distances are between 0.4 and 1, although plenty of distances are close to 0.

3.1.3 Performance evaluation

The evaluation of the performance of SpanSeq was done on subsets of the genomic dataset created from RefSeq, in order to evaluate the method not only on datasets of different sizes (from $10^1$ to $10^5$ genomes), but also on sequences long enough to be prohibitive for most alignment methods. The results show that SpanSeq was able to partition the datasets in a reasonable time relative to their size (Supplementary Data, Figure S4) without using the Hobohm 1 strategy, while using different sub-sampling strategies (minimizers/prefixes) depending on the size of the dataset.


3.2 Similarity partitioning

3.2.1 Hyperparameter selection

During the hyperparameter selection, the cross-validation performance on the different dataset splits was directly correlated with the similarity between the folds (Supplementary Data, Table S1). The selected hyperparameter configuration also differs between the dataset splits, although SigOpt Hayes et al. (2019) was provided with the same range of hyperparameters for each split of the data (Supplementary Data, Table S1).

3.2.2 Training process

The training curves of the DeepLoc algorithm show clear differences among partition strategies. On the increased similarity and random splits, the training set reaches a Gorodkin value higher than 0.9 before 100 (Figure 3(a)) and 200 epochs (Figure 3(b)), respectively, after which improvement largely plateaus. The Gorodkin measure of the validation set follows a similar behavior, but around 0.2 points below. In the case of the two splits made with SpanSeq, the training curves show very different dynamics: they increase sharply during the first epochs, but afterwards keep improving only slowly, without reaching 0.9 during the 800 epochs trained (Figures 3(c), 3(d)). The Gorodkin curve on the validation set also differs from the other two splits, as the values on both SpanSeq splits seem to decrease after the 100th epoch, unlike the trainings done without SpanSeq, which plateau or even increase (Figures 3(a), 3(b)). We also saw differences in the location of the epochs where the performance on the validation set is highest among the different ways of splitting the dataset; those epochs appear considerably earlier during the training on SpanSeq-split datasets (vertical lines in Figure 3; Supplementary Material, Figure S5).

3.2.3 Model evaluation

When evaluating the performance of DeepLoc (Figure 4; Supplementary Material, Table S2), we observe the same differences between splitting methods seen in Figure 3. However, the performances on the test set and the hold out test set differ depending on the split method. Using SpanSeq, the performances on the test set and the hold out test set are almost equal, as is the performance on the validation set. But when splitting randomly or increasing the similarity among sets, the validation and test sets show notably higher Gorodkin measures than the hold out test set, whose values are slightly lower than those of the models developed with SpanSeq.


4 Discussion

4.1 Sequence similarity calculation using distance measures

SpanSeq is able to use an all-vs-all clustering strategy and compare long sequences with a reasonable amount of computational resources (Supplementary Material, Figure S4) by using k-mer comparisons instead of alignment-based measures. Given the high correlation between those two strategies (Figure 1), it is fair to assume that the k-mer distances (Jaccard aside) gather similar information to the identity from alignments. However, the points spread more across both sides of the diagonal as identity gets lower, which could indicate that some of the differences between the measures come from alignment constructs. Besides, for most of the distance measures, clusters of sequences are found that have a much smaller distance between them than the inverse of their identity. This could be related to the fact that k-mer distances are unaffected by recombination events, unlike global alignment. The choice of SpanSeq hyperparameters (k-mer size, minimizer size, etc.) can affect the correlation between distance and identity. Moreover, there are clear relationships between their optimal values and the length of the sequences to be clustered (Supplementary Data, Figure S3). However, a study of strategies for choosing these hyperparameters is outside the scope of this study.

The use of k-mer distances as a measure of similarity makes it possible to standardize methods for avoiding data leakage when working with long sequences such as genomes. Where such partitioning was previously limited to using taxonomic nomenclature, partitions made with SpanSeq should be more consistent, while still showing clear connections to taxonomy (Figure 2). The K-mer Inverse Coverage, Cosine and Szymkiewicz-Simpson distance measures show a very similar behavior, with most of the intra-species distances being below 0.7. At the other ranks, those distances almost all collapse at 1 (the maximum distance), which makes them unable to differentiate genomic distances there. Looking at those results, we believe that setting a minimum distance of 0.7 between sequences in different folds should limit data leakage when working on genomes. The other two distance measures behave differently and seem less adequate for genome dataset partitioning. While the Jaccard distance should be discarded for overestimating the distance between sequences, Mash shows a behavior not observed with shorter sequences, as it seems to underestimate the distance between very dissimilar long sequences. This inaccuracy between dissimilar sequences is a result of the logarithmic error profile of the Jaccard estimate calculated through Mash, giving a small error margin for similar sequences which grows as the sequences become more dissimilar.
Beyond comparing distances and taxonomic ranks, the distribution of clusters among the species in the $10^6$ genomes provided insights into intra-species dispersion and species designation (Supplementary Data, Figure S6, Tables S4, S3). In fact, we believe distances can reveal deficiencies in the sequencing of certain species, or taxonomic annotations presenting deceiving differences between genomes. An example of the latter is shown in Figure 6(b): although most clusters contained one or two species, some clusters contained up to seven species. When looking at those clusters (Table S3), it is noticeable that some of them are actually the same bacterial species but from different hosts (Wolbachia endosymbiont), or species previously considered biotypes and difficult to differentiate (Brucella) Moreno et al. (2002). Although in this case we used a Cosine distance of 0.4 to allow some species to be placed in different clusters (as seen in Figure 2, a value of 0.7 would be more adequate to cluster most of the species together), we observe some differences between the number of genomes of each species in the dataset and the number of clusters formed (Figure 6(a)). Firstly, most of the species with the most genomes in the database are related to human health (Table S4), as more effort has gone into sequencing them. Secondly, while for most species there is a correlation between the number of samples (genomes) and the number of clusters formed (E. coli, K. pneumoniae, S. enterica), species like H. pylori show a higher cluster-per-genome ratio, which agrees with the higher genomic variability found in this species Göttke et al. (2000). To sum up, while the use of k-mer distances and the taxonomic landscape of bacteria is outside the scope of this study, the results indicate that k-mer distances can be a more consistent measure for clustering genomes by similarity than taxonomic rank.

4.2 Effect of data similarity on deep learning models

The clustering (and subsequent makespan step) performed by SpanSeq does not search for a representative clustering of the dataset, as other algorithms do Fu et al. (2012), but aims to build independent partitions and avoid data leakage between them. The outcome of this dataset partition strategy has been tested by training DeepLoc with four different approaches, which show clear differences in the Gorodkin values on the test and validation sets. Seeing such variation on the DeepLoc dataset is remarkable, as it does not contain a large number of similar sequences (Figure 1(a)).

The purpose of using a separate test set when evaluating a model is to assess its generalization capacity, as its data should be different from those seen during training. In this case, for each partition we have a test set (which is a common procedure in the development of a model) and a hold out test set to evaluate that test set. In doing so, it is found that the test sets created with SpanSeq have a Gorodkin measure very similar to the one shown by the hold out test set, while the other two partition strategies show much higher test Gorodkin values (Supplementary Material, Table S2), overestimating their generalization capacity. This behavior coincides with previous studies of similarity between training and test sets Elangovan et al. (2021), questioning whether a test set created by random splitting is a truthful method to report the capacity of a model to generalize (especially when working with sequential data). In fact, it is reasonable to claim that the difference between the Gorodkin values on the test and hold out test sets of the random and increased similarity partitions comes from similar sequences shared between training and test sets (which the hold out test set is free of). Thus, when evaluating the model, some of the claimed generalization ability of the model is, indeed, its memorization capacity.

The consequences of having similarity between sets are not limited to model assessment. The Gorodkin value on the validation set is also similar to the hold out test set when using SpanSeq (although both sets contain completely different data points), while it is higher in the cases where similarity is not restricted. This indicates that similarity between partitions also affects steps of model development that involve validation sets, such as hyperparameter selection. Moreover, common methods to avoid overfitting that depend on a validation set, such as early stopping, are also affected by similarity between the development sets (Figure 3). When using similarity-restricted partitions, the Gorodkin measure of the validation set follows that of the training set, until it plateaus and slightly diminishes as the epochs increase. This usual dynamic is commonly diagnosed as the neural network starting to fit the particularities and noise of the training set, to the detriment of fitting the real distribution of the data (overfitting). However, on the other two dataset splits, the validation Gorodkin value does not decrease (it even increases slightly), thus making early stopping and other overfitting detection methods useless. Having a deceiving validation set can also affect our understanding of the model's development. Looking at the DeepLoc training curves and performances on the different sets created with SpanSeq, we could conclude that the model is underfitting before it overfits (the training Gorodkin measure is low at the epoch with the best performance on the validation set). However, this conclusion is hard to reach when looking at the splits without similarity restriction.

The effects of similarity between sequences are not limited to performance and model assessment, as the time necessary for developing the deep learning model (hyperparameter selection and training) can be reduced drastically. As seen in Figure 3, the epochs that generalize best appear dramatically earlier when using similarity-based splitting (SpanSeq). This can be exploited not only in the training process (with early stopping, or by reducing the number of epochs the model is trained for), but also in hyperparameter selection (the number of epochs necessary for assessing a parameter configuration). Considering the power consumption of training deep learning models Kaack et al. (2022), the use of SpanSeq can help make their development more ecologically sustainable and economically affordable for smaller teams.

5 Conclusion

In this work, we present SpanSeq, a dataset partition method specially created for the development of deep learning models. Unlike other clustering methods for biological sequences Fu et al. (2012), SpanSeq uses a clustering method based on k-mer distances and all-vs-all comparisons to prevent data leakage from similarity during data partitioning, while providing balanced partitions with respect to partition size as well as the option to minimize class imbalance. The method is able to handle large datasets of biological sequences regardless of their length (proteins, genes and genomes).

Furthermore, we demonstrate the need to use SpanSeq over random splits to limit the similarity between partitions. First, it prevents reporting overconfident performance on the test set caused by the memorization capacity of the model. Second, it protects methods based on the performance on validation sets, such as hyperparameter selection or early stopping, so that they favor models with generalization capacity. And last, it largely reduces the resources necessary for developing a deep learning model, without compromising the generalization capacity of the model.

6 Data Availability

The software SpanSeq is available at the GitHub repository https://github.com/genomicepidemiology/SpanSeq.git. SpanSeq uses the software packages CCPhylo Clausen (2023), KMA Clausen et al. (2018) and Mash Ondov et al. (2016). The software for DeepLoc 1.0 Almagro Armenteros et al. (2017) is available at https://github.com/JJAlmagro/subcellular_localization, and the database of DeepLoc 2.0 Thumuluri et al. (2022) is available at https://services.healthtech.dtu.dk/services/DeepLoc-2.0/.

7 Supplementary Data

Supplementary data are available in the Supplementary Material file.

8 Funding

This study was supported by the European Union’s Horizon 2020 research and innovation program under VEO grant agreement No. 874735.

9 Conflict of interests

Jose Juan Almagro Armenteros was an employee of Bristol Myers Squibb Company at the time of publication; however, that did not influence the research in any way.

References

  • Zhang et al. [2021] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–115, 2021.
  • Arpit et al. [2017] Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. In International Conference on Machine Learning, pages 233–242. PMLR, 2017.
  • Carlini et al. [2019] Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX Security 19), pages 267–284, 2019.
  • Tirumala et al. [2022] Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems, 35:38274–38290, 2022.
  • Zhang et al. [2019] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Michael C. Mozer, and Yoram Singer. Identity crisis: Memorization and generalization under extreme overparameterization. arXiv preprint arXiv:1902.04698, 2019.
  • Carlini et al. [2022] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646, 2022.
  • Tetko et al. [1995] Igor V. Tetko, David J. Livingstone, and Alexander I. Luik. Neural network studies. 1. Comparison of overfitting and overtraining. Journal of Chemical Information and Computer Sciences, 35(5):826–833, 1995.
  • Chatterjee [2018] Satrajit Chatterjee. Learning and memorization. In International Conference on Machine Learning, pages 755–763. PMLR, 2018.
  • Feldman and Zhang [2020] Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. Advances in Neural Information Processing Systems, 33:2881–2891, 2020.
  • Feldman [2020] Vitaly Feldman. Does learning require memorization? A short tale about a long tail. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 954–959, 2020.
  • Lee et al. [2021] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. Deduplicating training data makes language models better. arXiv preprint arXiv:2107.06499, 2021.
  • Elangovan et al. [2021] Aparna Elangovan, Jiayuan He, and Karin Verspoor. Memorization vs. generalization: Quantifying data leakage in NLP performance evaluation. arXiv preprint arXiv:2102.01818, 2021.
  • Hastie et al. [2009] Trevor Hastie, Jerome H. Friedman, and Robert Tibshirani. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, volume 2. Springer, 2009.
  • Westerhuis et al. [2008] Johan A. Westerhuis, Huub C. J. Hoefsloot, Suzanne Smit, Daniel J. Vis, Age K. Smilde, Ewoud J. J. van Velzen, John P. M. van Duijnhoven, and Ferdi A. van Dorsten. Assessment of PLSDA cross validation. Metabolomics, 4:81–89, 2008.
  • Kohavi [1995] Ron Kohavi. A study of cross-validation and bootstrap for accuracy estimation and model selection. In The International Joint Conference on AI, volume 14, pages 1137–1145. Montreal, Canada, 1995.
  • Daszykowski et al. [2002] Michal Daszykowski, Beata Walczak, and D. L. Massart. Representative subset selection. Analytica Chimica Acta, 468(1):91–103, 2002.
  • Harrington [2018] Peter de Boves Harrington. Multiple versus single set validation of multivariate models to avoid mistakes. Critical Reviews in Analytical Chemistry, 48(1):33–46, 2018.
  • Cawley and Talbot [2010] Gavin C. Cawley and Nicola L. C. Talbot. On over-fitting in model selection and subsequent selection bias in performance evaluation. The Journal of Machine Learning Research, 11:2079–2107, 2010.
  • Xu and Goodacre [2018] Yun Xu and Royston Goodacre. On splitting training and validation set: A comparative study of cross-validation, bootstrap and systematic sampling for estimating the generalization performance of supervised learning. Journal of Analysis and Testing, 2(3):249–262, 2018.
  • Vapnik [1995] Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
  • Tampu et al. [2022] Iulian Emil Tampu, Anders Eklund, and Neda Haj-Hosseini. Inflation of test accuracy due to data leakage in deep learning-based classification of OCT images. Scientific Data, 9(1):580, 2022.
  • Søgaard et al. [2020] Anders Søgaard, Sebastian Ebert, Jasmijn Bastings, and Katja Filippova. We need to talk about random splits. arXiv preprint arXiv:2005.00636, 2020.
  • Allamanis [2019] Miltiadis Allamanis. The adverse effects of code duplication in machine learning models of code. In Proceedings of the 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, pages 143–153, 2019.
  • Hobohm et al. [1992] Uwe Hobohm, Michael Scharf, Reinhard Schneider, and Chris Sander. Selection of representative protein data sets. Protein Science, 1(3):409–417, 1992.
  • Lund et al. [1997] Ole Lund, Kenneth Frimand, Jan Gorodkin, Henrik Bohr, Jakob Bohr, Jan Hansen, and Søren Brunak. Protein distance constraints predicted by neural networks and probability density functions. Protein Engineering, 10(11):1241–1248, 1997.
  • Pearson [2013] William R. Pearson. An introduction to sequence similarity ("homology") searching. Current Protocols in Bioinformatics, 42(1):3–1, 2013.
  • Goodfellow et al. [2014] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  • Needleman and Wunsch [1970] Saul B. Needleman and Christian D. Wunsch. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology, 48(3):443–453, 1970.
  • Gotoh [1982] Osamu Gotoh. An improved algorithm for matching biological sequences. Journal of Molecular Biology, 162(3):705–708, 1982.
  • Fu et al. [2012] Limin Fu, Beifang Niu, Zhengwei Zhu, Sitao Wu, and Weizhong Li. CD-HIT: accelerated for clustering the next-generation sequencing data. Bioinformatics, 28(23):3150–3152, 2012.
  • Prasad et al. [2015] D. Venkatavara Prasad, Sathya Madhusudanan, and Suresh Jaganathan. uCLUST – a new algorithm for clustering unstructured data. ARPN Journal of Engineering and Applied Sciences, 10(5):2108–2117, 2015.
  • Remmert et al. [2012] Michael Remmert, Andreas Biegert, Andreas Hauser, and Johannes Söding. HHblits: lightning-fast iterative protein sequence searching by HMM-HMM alignment. Nature Methods, 9(2):173–175, 2012.
  • Hauser et al. [2016] Maria Hauser, Martin Steinegger, and Johannes Söding. MMseqs software suite for fast and deep clustering and searching of large protein sequence sets. Bioinformatics, 32(9):1323–1330, 2016.
  • Petti and Eddy [2022] Samantha Petti and Sean R. Eddy. Constructing benchmark test sets for biological sequence analysis using independent set algorithms. PLOS Computational Biology, 18(3):e1009492, 2022.
  • Joeres et al. [2023] Roman Joeres, David B. Blumenthal, and Olga V. Kalinina. DataSAIL: Data splitting against information leakage. bioRxiv, page 2023.11.15.566305, 2023.
  • Teufel et al. [2023] Felix Teufel, Magnús Halldór Gíslason, José Juan Almagro Armenteros, Alexander Rosenberg Johansen, Ole Winther, and Henrik Nielsen. GraphPart: homology partitioning for biological sequence analysis. NAR Genomics and Bioinformatics, 5(4):lqad088, 2023.
  • Almagro Armenteros et al. [2017] José Juan Almagro Armenteros, Casper Kaae Sønderby, Søren Kaae Sønderby, Henrik Nielsen, and Ole Winther. DeepLoc: prediction of protein subcellular localization using deep learning. Bioinformatics, 33(21):3387–3395, 2017.
  • Thumuluri et al. [2022] Vineet Thumuluri, José Juan Almagro Armenteros, Alexander Rosenberg Johansen, Henrik Nielsen, and Ole Winther. DeepLoc 2.0: multi-label subcellular localization prediction using protein language models. Nucleic Acids Research, 50(W1):W228–W234, 2022.
  • Ondov et al. [2016] Brian D. Ondov, Todd J. Treangen, Páll Melsted, Adam B. Mallonee, Nicholas H. Bergman, Sergey Koren, and Adam M. Phillippy. Mash: fast genome and metagenome distance estimation using MinHash. Genome Biology, 17(1):1–14, 2016.
  • Clausen et al. [2018] Philip T. L. C. Clausen, Frank M. Aarestrup, and Ole Lund. Rapid and precise alignment of raw reads against redundant databases with KMA. BMC Bioinformatics, 19:1–8, 2018.
  • Mölder et al. [2021] Felix Mölder, Kim Philipp Jablonski, Brice Letcher, Michael B. Hall, Christopher H. Tomkins-Tinch, Vanessa Sochat, Jan Forster, Soohyun Lee, Sven O. Twardziok, Alexander Kanitz, et al. Sustainable data analysis with Snakemake. F1000Research, 10:33, 2021.
  • Clausen et al. [2016] Philip T. L. C. Clausen, Ea Zankari, Frank M. Aarestrup, and Ole Lund. Benchmarking of methods for identification of antimicrobial resistance genes in bacterial whole genome data. Journal of Antimicrobial Chemotherapy, 71(9):2484–2488, 2016.
  • Li [2016] Heng Li. Minimap and miniasm: fast mapping and de novo assembly for noisy long sequences. Bioinformatics, 32(14):2103–2110, 2016.
  • Ester et al. [1996] Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, volume 96, pages 226–231. AAAI Press, 1996.
  • Clausen [2023] Philip T. L. C. Clausen. Scaling neighbor joining to one million taxa with dynamic and heuristic neighbor joining. Bioinformatics, 39(1):btac774, 2023.
  • Gan and Tao [2015] Junhao Gan and Yufei Tao. DBSCAN revisited: Mis-claim, un-fixability, and approximation. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pages 519–530, 2015.
  • Hübscher and Glover [1994] Roland Hübscher and Fred Glover. Applying tabu search with influential diversification to multiprocessor scheduling. Computers & Operations Research, 21(8):877–884, 1994.
  • Bortolaia et al. [2020] Valeria Bortolaia, Rolf S. Kaas, Etienne Ruppe, Marilyn C. Roberts, Stefan Schwarz, Vincent Cattoir, Alain Philippon, Rosa L. Allesoe, Ana Rita Rebelo, Alfred Ferrer Florensa, et al. ResFinder 4.0 for predictions of phenotypes from genotypes. Journal of Antimicrobial Chemotherapy, 75(12):3491–3500, 2020.
  • O'Leary et al. [2016] Nuala A. O'Leary, Mathew W. Wright, J. Rodney Brister, Stacy Ciufo, Diana Haddad, Rich McVeigh, Bhanu Rajput, Barbara Robbertse, Brian Smith-White, Danso Ako-Adjei, et al. Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation. Nucleic Acids Research, 44(D1):D733–D745, 2016.
  • Pearson [2016] William R. Pearson. Finding protein and nucleotide similarities with FASTA. Current Protocols in Bioinformatics, 53(1):3–9, 2016.
  • Sander and Schneider [1991] Chris Sander and Reinhard Schneider. Database of homology-derived protein structures and the structural meaning of sequence alignment. Proteins: Structure, Function, and Bioinformatics, 9(1):56–68, 1991.
  • Rost [1999] Burkhard Rost. Twilight zone of protein sequence alignments. Protein Engineering, 12(2):85–94, 1999.
  • Matthews [1975] Brian W. Matthews. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochimica et Biophysica Acta (BBA) - Protein Structure, 405(2):442–451, 1975.
  • Gorodkin [2004] Jan Gorodkin. Comparing two K-category assignments by a K-category correlation coefficient. Computational Biology and Chemistry, 28(5-6):367–374, 2004.
  • Hayes et al. [2019] Patrick Hayes, Dan Anderson, Bolong Cheng, Taylor Jackle Spriggs, Alexandra Johnson, and Michael McCourt. SigOpt documentation. Technical Report SO-12/14 – Revision 1.07, SigOpt, Inc., 2019. URL https://sigopt.com/docs.
  • Fay and Proschan [2010] Michael P. Fay and Michael A. Proschan. Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules. Statistics Surveys, 4:1, 2010.
  • Moreno et al. [2002] Edgardo Moreno, Axel Cloeckaert, and Ignacio Moriyón. Brucella evolution and taxonomy. Veterinary Microbiology, 90(1-4):209–227, 2002.
  • Göttke et al. [2000] Markus U. Göttke, Carlo A. Fallone, Alan N. Barkun, Konstanze Vogt, Vivian Loo, Matthias Trautmann, Jian Z. Tong, Thanh N. Nguyen, Toby Fainsilber, Helmut H. Hahn, et al. Genetic variability determinants of Helicobacter pylori: influence of clinical background and geographic origin of isolates. The Journal of Infectious Diseases, 181(5):1674–1681, 2000.
  • Kaack et al. [2022] Lynn H. Kaack, Priya L. Donti, Emma Strubell, George Kamiya, Felix Creutzig, and David Rolnick. Aligning artificial intelligence with climate change mitigation. Nature Climate Change, 12(6):518–527, 2022.
  • Rice et al. [2000] Peter Rice, Ian Longden, and Alan Bleasby. EMBOSS: the European Molecular Biology Open Software Suite. Trends in Genetics, 16(6):276–277, 2000.

Appendix A SUPPLEMENTARY MATERIAL

A.1 FIGURES


A.2 Tables

Table S1: Hyperparameter configurations selected for each split method, with the cross-validation Gorodkin value.

Hyperparameter                   Increased Similarity   Random         SpanSeq (Mash)   SpanSeq (Alignment)
Gorodkin value                   0.698300               0.653506       0.612709         0.614149
Batch size                       16                     64             96               96
Attention size                   256                    64             484              64
Clip                             3                      5              9                9
Convolutional kernels            5,1,1,21,29,1          21,1,1,9,3,1   1,3,29,1,1,21    1,1,21,1,1,9
Dropouts                         0.25, 0.1              0.1, 0.25      0.5, 0.1         0.5, 0.25
Learning rate                    0.0005                 0.001          0.001            0.001
Number of filters                10                     20             10               10
Number of features               50                     30             50               20
Hidden feed-forward layer size   484                    256            128              384
Table S2: Gorodkin measure (standard deviation) of DeepLoc on each set, for each split method.

Set / Split method   Increased Similarity   Random          SpanSeq (Mash)   SpanSeq (Alignment)
Train                0.9839 (0.01)          0.9563 (0.02)   0.7415 (0.01)    0.6887 (0.01)
Validation           0.6745 (0.02)          0.6658 (0.01)   0.6198 (0.01)    0.6204 (0.00)
Test                 0.7096 (0.00)          0.7102 (0.00)   0.6196 (0.00)    0.6269 (0.01)
Hold Out Test        0.6375 (0.00)          0.6374 (0.00)   0.6450 (0.00)    0.6431 (0.00)
Table S3: Examples of species clustered together.

Cluster 1: Wolbachia endosymbiont of Drosophila santomea, Wolbachia endosymbiont of Drosophila melanogaster, Wolbachia pipientis, Wolbachia endosymbiont of Aedes aegypti, Wolbachia endosymbiont of Drosophila simulans, Wolbachia endosymbiont of Drosophila innubila, Wolbachia endosymbiont (group A) of Coremacera marginata

Cluster 2: Brucella melitensis, Brucella suis, Brucella abortus, Brucella ceti, Brucella canis, Brucella pinnipedialis

Cluster 3: Rhizobium sp. N1341, Rhizobium esperanzae, Rhizobium sp. N113, Rhizobium sp. N621
Table S4: Species divided into the most clusters.

Number of clusters   Species name
210                  Escherichia coli
90                   Klebsiella pneumoniae
86                   Helicobacter pylori
78                   Salmonella enterica
77                   Pseudomonas aeruginosa

A.3 Distance Formulas

Cosine Distance=DC(A,B)=1SC(A,B):=1cos(θ)=1ABABCosine Distancesubscript𝐷𝐶𝐴𝐵1subscript𝑆𝐶𝐴𝐵assign1𝜃1𝐴𝐵norm𝐴norm𝐵\textbf{{Cosine Distance}}=D_{C}{(A,B)}=1-S_{C}{(A,B)}:=1-\cos{(\theta)}=1-\frac{A\cdot B}{\left\|A\right\|\cdot\left\|B\right\|}(S1)
Jaccard Distance=DJ(A,B)=1SJ(A,B)=1|AB||A|+|B||AB|Jaccard Distancesubscript𝐷𝐽𝐴𝐵1subscript𝑆𝐽𝐴𝐵1𝐴𝐵𝐴𝐵𝐴𝐵\textbf{{Jaccard Distance}}=D_{J}{(A,B)}=1-S_{J}{(A,B)}=1-\frac{\left|A\cap B\right|}{\left|A\right|+\left|B\right|-\left|A\cap B\right|}(S2)
Inverse Coverage=DI(A,B)=1SI(A,B)=12|AB|A+BInverse Coveragesubscript𝐷𝐼𝐴𝐵1subscript𝑆𝐼𝐴𝐵12𝐴𝐵norm𝐴norm𝐵\textbf{{Inverse Coverage}}=D_{I}{(A,B)}=1-S_{I}{(A,B)}=1-2\frac{\left|A\cap B\right|}{\left\|A\right\|+\left\|B\right\|}(S3)
Szymkiewicz–Simpson Distance=DS(A,B)=1SS(A,B)=1|AB|min(|A|,|B|)Szymkiewicz–Simpson Distancesubscript𝐷𝑆𝐴𝐵1subscript𝑆𝑆𝐴𝐵1𝐴𝐵𝐴𝐵\textbf{{Szymkiewicz\textendash Simpson Distance}}=D_{S}{(A,B)}=1-S_{S}{(A,B)}=1-\frac{\left|A\cap B\right|}{\min{(\left|A\right|,\left|B\right|})}(S4)
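A sketch implementing these four distances on k-mer count profiles follows. The readings of the symbols are inferred from the formulas above and should be taken as assumptions: $|\cdot|$ is interpreted as the number of distinct k-mers, and $\|\cdot\|$ as the Euclidean norm of the count vector in S1 and the total k-mer count in S3.

```python
from collections import Counter
from math import sqrt

def kmer_profile(seq, k=16):
    """k-mer counts of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_distance(a, b):                       # Equation S1
    dot = sum(a[x] * b[x] for x in a.keys() & b.keys())
    norm = lambda c: sqrt(sum(v * v for v in c.values()))
    return 1 - dot / (norm(a) * norm(b))

def jaccard_distance(a, b):                      # Equation S2
    inter = len(a.keys() & b.keys())
    return 1 - inter / (len(a) + len(b) - inter)

def inverse_coverage_distance(a, b):             # Equation S3
    inter = len(a.keys() & b.keys())
    return 1 - 2 * inter / (sum(a.values()) + sum(b.values()))

def szymkiewicz_simpson_distance(a, b):          # Equation S4
    return 1 - len(a.keys() & b.keys()) / min(len(a), len(b))
```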