Calculating Mutual Information Between Responses and Stimuli

Hi all,
I have collected neuronal data over time and arranged them into matrices, so that matrix S represents a stimulus (distance, for instance) and matrix R represents my neuronal responses to that stimulus. My aim is to estimate the mutual information between stimulus and response.
The part of the code doing this is:
Hr = -nansum(pR .* log2(pR));    % H(R): entropy of the response
hrs = zeros(1, 5);               % preallocate instead of growing in the loop
for s = 1:5                      % loop over the 5 stimulus bins
    % sum over r of p(r|s)*log2 p(r|s) for this stimulus bin
    hrs(s) = nansum(CondProb(s,:) .* log2(CondProb(s,:)));
end
condent = -nansum(pS .* hrs);    % H(R|S): conditional entropy of R given S
MI = Hr - condent;               % I(R;S) = H(R) - H(R|S)
Specifically, pR is the probability array for the response and pS is the probability array for the stimulus (both obtained from histcounts of R and S, respectively), and CondProb holds the conditional probabilities, generated by manipulating the histcounts2 output with R and S as inputs.
Hr (the entropy of R), condent (the conditional entropy of R given S), and MI (the mutual information) are estimated using the standard formulae.
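For reference, written out, the quantities the code is computing are (standard definitions):

```latex
H(R) = -\sum_r p(r)\,\log_2 p(r), \qquad
H(R \mid S) = -\sum_s p(s) \sum_r p(r \mid s)\,\log_2 p(r \mid s),
```

```latex
I(R;S) = H(R) - H(R \mid S)
       = \sum_{r,s} p(r,s)\,\log_2 \frac{p(r,s)}{p(r)\,p(s)} \;\ge\; 0,
```

where the last expression is a Kullback–Leibler divergence and is therefore non-negative, provided p(r), p(s), and p(r,s) are consistent with one another.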
Now, my problem is that MI comes out negative, which, by the non-negativity of mutual information (a consequence of Jensen's inequality), should not happen. I know that this plug-in way of calculating MI is sensitive to the histogram binning, which in my case uses 5 bins. I have tweaked this number without achieving non-negativity, and adding more bins creates gaps in the histogram. Can someone suggest another way to calculate these variables, or perhaps a change to the code above?
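A common cause of a negative plug-in MI is that pR and pS come from separate histcounts calls while CondProb comes from histcounts2, so the marginals are not exactly consistent with the joint. One possible alternative (a sketch, assuming R and S are same-length vectors; the names N, pRS, ratio, and terms below are my own, not from the code above) is to derive everything from a single joint histogram, which keeps the marginals consistent and makes the estimate non-negative up to floating-point rounding:

```matlab
% Sketch: estimate I(R;S) directly from the joint histogram, taking the
% marginals from the same joint table so they are mutually consistent.
N   = histcounts2(R, S, 5);    % 5x5 joint bin counts
pRS = N / sum(N(:));           % joint probabilities p(r,s)
pR  = sum(pRS, 2);             % marginal p(r): 5x1, summed over stimuli
pS  = sum(pRS, 1);             % marginal p(s): 1x5, summed over responses
ratio = pRS ./ (pR * pS);      % p(r,s) / (p(r)*p(s)), via outer product
terms = pRS .* log2(ratio);    % per-bin contributions to I(R;S)
MI = nansum(terms(:));         % empty bins give 0*log2(0) = NaN, treated as 0
```

Because the marginals are summed out of the same joint table, the result is the KL divergence between p(r,s) and p(r)p(s), so it cannot go negative regardless of the bin count.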

Answers (0)

Release

R2018b
