
machine learning - Issue in training a hidden Markov model and using it for classification

I am having a tough time figuring out how to use Kevin Murphy's HMM Toolbox. It would be a great help if anyone who has experience with it could clarify some conceptual questions. I have somehow understood the theory behind HMMs, but it's confusing how to actually implement them and specify all the parameter settings.

There are 2 classes, so we need 2 HMMs.
Let's say the training vectors are: class 1: O1 = {4 3 5 1 2} and class 2: O2 = {1 4 3 2 4}.
Now the system has to classify an unknown sequence O3 = {1 3 2 4 4} as either class 1 or class 2.

  1. What is going to go in obsmat0 and obsmat1?
  2. What is the syntax for specifying the transition probabilities transmat0 and transmat1?
  3. What is the variable data going to be in this case?
  4. Would the number of states be Q=5, since there are five unique numbers/symbols used?
  5. Is the number of output symbols 5?
  6. How do I set the transition probabilities transmat0 and transmat1?


1 Reply


Instead of answering each individual question, let me illustrate how to use the HMM toolbox with an example -- the weather example, which is usually used when introducing hidden Markov models.

Basically, the states of the model are the three possible types of weather: sunny, rainy, and foggy. On any given day, we assume the weather can be only one of these values. Thus the set of HMM states is:

S = {sunny, rainy, foggy}

However, in this example we can't observe the weather directly (apparently we are locked in the basement!). Instead, the only evidence we have is whether the person who checks on us every day is carrying an umbrella or not. In HMM terminology, these are the discrete observations:

x = {umbrella, no umbrella}

The HMM model is characterized by three things:

  • The prior probabilities: a vector with the probability of each state being the first state of a sequence.
  • The transition probabilities: a matrix describing the probability of going from one state of weather to another.
  • The emission probabilities: a matrix describing the probability of observing an output (umbrella or not) given a state (weather).

Next, we are either given these probabilities, or we have to learn them from a training set. Once that's done, we can do reasoning, like computing the likelihood of an observation sequence with respect to an HMM model (or a bunch of models, and picking the most likely one)...

1) Known model parameters

Here is sample code that shows how to fill in known probabilities to build the model:

Q = 3;    %# number of states (sun,rain,fog)
O = 2;    %# number of discrete observations (umbrella, no umbrella)

%#  prior probabilities
prior = [1 0 0];

%# state transition matrix (1: sun, 2: rain, 3:fog)
A = [0.8 0.05 0.15; 0.2 0.6 0.2; 0.2 0.3 0.5];

%# observation emission matrix (1: umbrella, 2: no umbrella)
B = [0.1 0.9; 0.8 0.2; 0.3 0.7];
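
Note that each row of A and B is a probability distribution (over next states and over observations, respectively), so each row must sum to 1. A quick sanity check (a minimal sketch using base MATLAB, not a toolbox function):

%# every row of a stochastic matrix must sum to 1
assert(all(abs(sum(A,2) - 1) < 1e-10))
assert(all(abs(sum(B,2) - 1) < 1e-10))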

Then we can sample a bunch of sequences from this model:

num = 20;           %# 20 sequences
T = 10;             %# each of length 10 (days)
[seqs,states] = dhmm_sample(prior, A, B, num, T);

For example, the 5th sampled sequence was:

>> seqs(5,:)        %# observation sequence
ans =
     2     2     1     2     1     1     1     2     2     2

>> states(5,:)      %# hidden states sequence
ans =
     1     1     1     3     2     2     2     1     1     1

We can evaluate the log-likelihood of an observation sequence, or of a given observation/state path:

dhmm_logprob(seqs(5,:), prior, A, B)          %# log-likelihood of the observations

dhmm_logprob_path(prior, A, B, states(5,:))   %# log-likelihood along the given state path

or compute the Viterbi path (most probable state sequence):

vPath = viterbi_path(prior, A, multinomial_prob(seqs(5,:),B))

[figure: plot of the 5th sampled sequence (observations and hidden states)]
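
Since we generated the data ourselves, we can sanity-check the decoder by comparing the Viterbi path against the true hidden states (a quick sketch; the two won't always agree exactly):

%# fraction of time steps where the Viterbi decode matches the true state
mean(vPath == states(5,:))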

2) Unknown model parameters

Training is performed using the EM algorithm (Baum-Welch), and is best done with a set of observation sequences.

Continuing with the same example, we can use the data generated above to train a new model and compare it to the original:

%# we start with a randomly initialized model
prior_hat = normalise(rand(Q,1));
A_hat = mk_stochastic(rand(Q,Q));
B_hat = mk_stochastic(rand(Q,O));  

%# learn from data by performing many iterations of EM
[LL,prior_hat,A_hat,B_hat] = dhmm_em(seqs, prior_hat,A_hat,B_hat, 'max_iter',50);

%# plot learning curve
plot(LL), xlabel('iterations'), ylabel('log likelihood'), grid on

[figure: EM learning curve (log-likelihood vs. iteration)]

Keep in mind that the state order doesn't have to match: EM can recover an equivalent model with the states labeled in a different order. That's why we need to permute the states before comparing the two models.
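
The toolbox has no helper for finding this permutation (as far as I know); a brute-force sketch that picks the ordering whose reordered emission matrix best matches the original (feasible since Q is small):

%# try all Q! orderings and keep the one minimizing the emission mismatch
P = perms(1:Q);
errs = arrayfun(@(i) norm(B - B_hat(P(i,:),:), 'fro'), 1:size(P,1));
[minErr,best] = min(errs);
p = P(best,:);

In this example that gives p = [2 3 1], and the trained model looks close to the original one: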

>> p = [2 3 1];              %# states permutation

>> prior, prior_hat(p)
prior =
     1     0     0
ans =
      0.97401
  7.5499e-005
      0.02591

>> A, A_hat(p,p)
A =
          0.8         0.05         0.15
          0.2          0.6          0.2
          0.2          0.3          0.5
ans =
      0.75967      0.05898      0.18135
     0.037482      0.77118      0.19134
      0.22003      0.53381      0.24616

>> B, B_hat(p,[1 2])
B =
          0.1          0.9
          0.8          0.2
          0.3          0.7
ans =
      0.11237      0.88763
      0.72839      0.27161
      0.25889      0.74111

There are more things you can do with hidden Markov models, such as classification or pattern recognition. You would have different sets of observation sequences belonging to different classes. You start by training a model for each set. Then, given a new observation sequence, you classify it by computing its likelihood with respect to each model, and predicting the class whose model gives the highest log-likelihood:

argmax[ log P(X|model_i) ] over all model_i
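
To make that concrete for the two-class setup in the question, here is a sketch. The training sets seqs1 and seqs2 are hypothetical placeholders (one observation sequence per row), and the number of hidden states Q is a modeling choice you would have to make (it is not forced to equal the number of symbols):

%# train one discrete HMM per class, then classify by log-likelihood
Q = 2;  O = 5;                       %# assumed: 2 hidden states, 5 symbols

seqs1 = [4 3 5 1 2; 4 3 5 2 1];      %# hypothetical class-1 training sequences
seqs2 = [1 4 3 2 4; 2 4 3 1 4];      %# hypothetical class-2 training sequences
allSeqs = {seqs1, seqs2};

models = cell(1,2);
for c = 1:2
    %# random initialization, as in the training section above
    prior0 = normalise(rand(Q,1));
    trans0 = mk_stochastic(rand(Q,Q));
    obs0   = mk_stochastic(rand(Q,O));
    [LL,pr,tr,ob] = dhmm_em(allSeqs{c}, prior0, trans0, obs0, 'max_iter',50);
    models{c} = struct('prior',pr, 'trans',tr, 'obs',ob);
end

%# classify the unknown sequence: pick the class with the highest log-likelihood
O3 = [1 3 2 4 4];
logliks = cellfun(@(m) dhmm_logprob(O3, m.prior, m.trans, m.obs), models);
[bestLL,predictedClass] = max(logliks)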
