Provides all {@link de.jstacs.sequenceScores.statisticalModels.trainable.TrainableStatisticalModel}s, which can be learned from a single {@link de.jstacs.data.DataSet}. Parameter learning typically follows a learning principle such as maximum likelihood or maximum a posteriori, and is often performed analytically, as for the homogeneous and inhomogeneous models in the {@link de.jstacs.sequenceScores.statisticalModels.trainable.discrete} sub-package.
Notable exceptions are hidden Markov models ({@link de.jstacs.sequenceScores.statisticalModels.trainable.hmm}), which are learned by Baum-Welch or Viterbi training, and mixture models ({@link de.jstacs.sequenceScores.statisticalModels.trainable.mixture}), which are learned by expectation-maximization (EM) or Gibbs sampling.
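As a minimal sketch of the analytically learned case, a position weight matrix (an inhomogeneous model from the discrete sub-package) could be created via the {@link de.jstacs.sequenceScores.statisticalModels.trainable.TrainableStatisticalModelFactory} and trained on a {@link de.jstacs.data.DataSet}; the file name and the equivalent sample size are only illustrative, and exact constructor and factory signatures may differ between Jstacs versions.
<pre>{@code
// a minimal sketch: read DNA sequences from an illustrative FASTA file
DataSet data = new DataSet( DNAAlphabetContainer.SINGLETON,
        new SparseStringExtractor( "foreground.fa", StringExtractor.FASTA ) );

// create a position weight matrix (an inhomogeneous model) of matching length;
// the equivalent sample size of 4 is an arbitrary choice for the prior
TrainableStatisticalModel pwm = TrainableStatisticalModelFactory.createPWM(
        data.getAlphabetContainer(), data.getElementLength(), 4 );

// estimate the parameters analytically from the data
pwm.train( data );
}</pre>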
After a {@link de.jstacs.sequenceScores.statisticalModels.trainable.TrainableStatisticalModel} has been trained, it can be used to compute the likelihood of new sequences.
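Continuing the sketch above, the log-likelihood of each sequence in an (illustrative) test {@link de.jstacs.data.DataSet} could be computed as follows.
<pre>{@code
// a sketch, assuming the trained model pwm from above and an illustrative test file
DataSet test = new DataSet( DNAAlphabetContainer.SINGLETON,
        new SparseStringExtractor( "test.fa", StringExtractor.FASTA ) );

for( int i = 0; i < test.getNumberOfElements(); i++ ) {
    Sequence seq = test.getElementAt( i );
    // log-likelihood of the sequence under the trained model
    System.out.println( seq + "\t" + pwm.getLogProbFor( seq ) );
}
}</pre>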
Arbitrary {@link de.jstacs.sequenceScores.statisticalModels.trainable.TrainableStatisticalModel}s can be combined into a {@link de.jstacs.classifiers.trainSMBased.TrainSMBasedClassifier}, which classifies new sequences and can be evaluated using a {@link de.jstacs.classifiers.assessment.ClassifierAssessment}.
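As a sketch of this workflow (assuming two data sets {@code fg} and {@code bg} for the two classes; the factory methods and their exact signatures are assumptions about the Jstacs API), a two-class classifier could be built, trained, and applied as follows.
<pre>{@code
// a sketch, assuming two DNA data sets fg and bg for the two classes
TrainableStatisticalModel fgModel = TrainableStatisticalModelFactory.createPWM(
        fg.getAlphabetContainer(), fg.getElementLength(), 4 );
TrainableStatisticalModel bgModel = TrainableStatisticalModelFactory.createHomogeneousMarkovModel(
        bg.getAlphabetContainer(), 4, (byte) 0 );

// one model per class; train( fg, bg ) trains each model on its class-specific data
TrainSMBasedClassifier cls = new TrainSMBasedClassifier( fgModel, bgModel );
cls.train( fg, bg );

// classify a single sequence: returns the index of the predicted class
byte predictedClass = cls.classify( fg.getElementAt( 0 ) );
}</pre>
The trained classifier could then be assessed, for instance, by a {@link de.jstacs.classifiers.assessment.KFoldCrossValidation} that repeatedly partitions the data into training and test sets.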