Taming the Zoo
of Discrete HMM Subspecies & Some of their Relatives
by Henning Christiansen, Christian Theil Have, Ole Torp Lassen and Matthieu Petit
Proceedings of the 1st International Work-Conference on Linguistics, Biology and Computer Science: Interplays; Tarragona, Spain, March 14-18, 2011.
Hidden Markov Models, or HMMs, are a family of probabilistic models used for describing and analyzing sequential phenomena such as written and spoken text, biological sequences, and sensor data from the monitoring of hospital patients and industrial plants. An inherent characteristic of all HMM subspecies is that they are controlled by some sort of probabilistic finite state machine, though they may differ in the detailed structure and in the specific sorts of conditional probabilities involved. In the literature, however, the different HMM subspecies tend to be described as separate kingdoms, with their entrails and inference methods defined from scratch in each particular case. Here we suggest a unified characterization using a generic, probabilistic-logic framework and generic inference methods, which also promote experiments with new hybrids and mutations. This may even involve context dependencies that are traditionally considered beyond the reach of HMMs.
The paper refers to a set of example programs that run under the PRISM system developed by Taisuke Sato, Yoshitaka Kameya and Neng-Fa Zhou. These programs are given in full text here, and when I get time, I'll add a bit more text that explains how to work with them.
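For readers who have not seen PRISM before, the following is a minimal sketch of a plain discrete HMM in PRISM notation, in the style of the standard examples shipped with the system; the state and symbol names are illustrative and not taken from the paper's programs.

```prolog
% A two-state, two-symbol HMM sketch in PRISM (names are illustrative).
values(init,   [s0, s1]).   % initial-state distribution
values(tr(_),  [s0, s1]).   % per-state transition distribution
values(out(_), [a, b]).     % per-state emission distribution

% hmm(N, Os): generate or explain an output sequence Os of length N.
hmm(N, Os) :- msw(init, S), hmm(N, S, Os).

hmm(0, _, []).
hmm(N, S, [O|Os]) :-
    N > 0,
    msw(out(S), O),         % emit a symbol from the current state
    msw(tr(S), Next),       % choose the next state
    N1 is N - 1,
    hmm(N1, Next, Os).
```

Parameters can then be estimated from observed sequences with PRISM's built-in EM learning, e.g. `learn([hmm(3,[a,b,a]), hmm(3,[b,b,a])])`, and the most probable state sequence recovered with the Viterbi built-ins, e.g. `viterbif(hmm(3,[a,b,a]))`. The paper's point is that this generic logic-program skeleton, rather than a hand-coded forward-backward implementation, is what all the HMM subspecies share.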
You may need the autoAnnotation system on top of PRISM for efficient prediction.
See also the conditions of use.
© Henning Christiansen 2011 (website and source code)
The paper is copyrighted by all authors jointly, or by any publisher to whom they may have transferred copyright.