Provides all {@link de.jstacs.sequenceScores.statisticalModels.differentiable.DifferentiableStatisticalModel}s, which can compute the gradient with
respect to their parameters for a given input {@link de.jstacs.data.sequences.Sequence}.
The parameters of a {@link de.jstacs.sequenceScores.statisticalModels.differentiable.DifferentiableStatisticalModel} are learned numerically, typically by
gradient-based methods like those provided in {@link de.jstacs.algorithms.optimization.Optimizer}.
In Jstacs, this is used especially for learning parameters according to discriminative learning principles such as maximum conditional likelihood or
maximum supervised posterior (see {@link de.jstacs.classifiers.differentiableSequenceScoreBased.msp.MSPClassifier}), or according to a unified learning principle (see
{@link de.jstacs.classifiers.differentiableSequenceScoreBased.gendismix.GenDisMixClassifier}).
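To illustrate the idea of gradient-based numerical learning, the following is a minimal, self-contained sketch. It does <em>not</em> use the actual Jstacs API: the interface <code>ToyDifferentiableModel</code> and all names in it are hypothetical stand-ins for a model that, like a {@link de.jstacs.sequenceScores.statisticalModels.differentiable.DifferentiableStatisticalModel}, exposes a score and its gradient with respect to the parameters, which an optimizer can then follow.

```java
// Illustrative sketch only: all names below are hypothetical stand-ins,
// not the Jstacs API.
public class GradientAscentSketch {

    // Hypothetical minimal analogue of a differentiable model:
    // a (log-)score and its gradient with respect to the parameters.
    interface ToyDifferentiableModel {
        double logScore(double[] params);
        double[] gradient(double[] params);
    }

    // Plain gradient ascent; real optimizers (e.g. conjugate gradients)
    // are more sophisticated, but follow the same gradient information.
    static double[] learn() {
        // Toy concave objective: -(p0-1)^2 - (p1+2)^2, maximum at (1, -2).
        ToyDifferentiableModel model = new ToyDifferentiableModel() {
            public double logScore(double[] p) {
                return -(p[0] - 1) * (p[0] - 1) - (p[1] + 2) * (p[1] + 2);
            }
            public double[] gradient(double[] p) {
                return new double[]{ -2 * (p[0] - 1), -2 * (p[1] + 2) };
            }
        };

        double[] params = new double[]{0.0, 0.0};
        double learningRate = 0.1;
        for (int iter = 0; iter < 200; iter++) {
            double[] grad = model.gradient(params);
            for (int i = 0; i < params.length; i++) {
                params[i] += learningRate * grad[i]; // ascend the gradient
            }
        }
        return params;
    }

    public static void main(String[] args) {
        double[] p = learn();
        System.out.printf("%.3f %.3f%n", p[0], p[1]);
    }
}
```

In the same spirit, a discriminative objective such as the conditional likelihood is a differentiable function of the model parameters, so the same kind of gradient information drives its optimization.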
The sub-package {@link de.jstacs.sequenceScores.statisticalModels.differentiable.directedGraphicalModels} contains Bayesian networks and inhomogeneous Markov models.
The sub-package {@link de.jstacs.sequenceScores.statisticalModels.differentiable.homogeneous} provides homogeneous models like homogeneous Markov models.
The sub-package {@link de.jstacs.sequenceScores.statisticalModels.differentiable.mixture} provides mixture models, including an extended ZOOPS model
for de novo motif discovery.
Some of the provided {@link de.jstacs.sequenceScores.statisticalModels.differentiable.DifferentiableStatisticalModel}s also implement the interface
{@link de.jstacs.sequenceScores.statisticalModels.differentiable.SamplingDifferentiableStatisticalModel} and can be used for
Metropolis-Hastings parameter sampling in a {@link de.jstacs.classifiers.differentiableSequenceScoreBased.sampling.SamplingGenDisMixClassifier}.
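As a rough, generic illustration of Metropolis-Hastings parameter sampling (again not Jstacs code: the class, the toy log-posterior, and the step size are hypothetical), the sketch below draws samples of a single parameter from a standard normal "posterior" using a symmetric Gaussian random-walk proposal. In the sampling classifiers, the role of the log-posterior is played by the (log) supervised posterior of the model parameters.

```java
import java.util.Random;

// Illustrative sketch only; not the Jstacs sampling implementation.
public class MetropolisHastingsSketch {

    // Hypothetical log-posterior over one parameter: standard normal
    // (up to an additive constant, which cancels in the acceptance ratio).
    static double logPosterior(double p) {
        return -0.5 * p * p;
    }

    // Draw n dependent samples with a Gaussian random-walk proposal.
    static double[] sample(int n, long seed) {
        Random rng = new Random(seed);
        double[] samples = new double[n];
        double current = 0.0;
        double logPCurrent = logPosterior(current);
        for (int i = 0; i < n; i++) {
            double proposal = current + 0.5 * rng.nextGaussian();
            double logPProposal = logPosterior(proposal);
            // Accept with probability min(1, p(proposal)/p(current));
            // the proposal is symmetric, so the Hastings correction cancels.
            if (Math.log(rng.nextDouble()) < logPProposal - logPCurrent) {
                current = proposal;
                logPCurrent = logPProposal;
            }
            samples[i] = current; // on rejection, the old state is repeated
        }
        return samples;
    }

    public static void main(String[] args) {
        double[] s = sample(20000, 42L);
        double mean = 0.0, var = 0.0;
        for (double v : s) mean += v;
        mean /= s.length;
        for (double v : s) var += (v - mean) * (v - mean);
        var /= s.length;
        System.out.printf("mean=%.2f var=%.2f%n", mean, var);
    }
}
```

For a long enough chain, the empirical mean and variance of the samples approach those of the target distribution (here 0 and 1), which is the property the sampling classifiers exploit to average predictions over parameter uncertainty.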