Hyper-parameter optimization

When training an ECOC classifier for multiclass classification, with kNN as the base learner, how can I change the function that is minimized (from the classification error to a loss function I define myself)?
I'm now using the code below (the loss function is in the last line). If resPreds are the predicted classes, labels are the true classes, and N is the number of samples, my loss is:
myLoss = double(sum(abs(resPreds - labels)))/double(N); % this is the loss function I wish to minimize
% variable labels contains the labels of training data
tknn = templateKNN('Distance', @distKNN); % I WOULD LIKE TO USE THIS DISTANCE
N = size(XKnn,1);
c = cvpartition(N,'LeaveOut'); % use leave-one-out cross-validation
mdlknnCecoc = fitcecoc(XKnn, labels, ...
    'OptimizeHyperparameters', 'auto', ...
    'HyperparameterOptimizationOptions', struct('UseParallel', true, 'CVPartition', c), ...
    'Learners', tknn);
resPreds = predict(mdlknnCecoc, XKnn); % I don't know why the kfoldPredict function does not work
myLoss = double(sum(abs(resPreds - labels)))/double(N); % this is the loss function I wish to minimize
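Note: kfoldPredict applies only to cross-validated (partitioned) models; with 'OptimizeHyperparameters', fitcecoc returns an ordinary ClassificationECOC refit on all the data, which is presumably why kfoldPredict fails here. A minimal sketch, assuming the same XKnn, labels, tknn and c as above (cvMdl and oofPreds are illustrative names):
% Train an explicitly partitioned model; kfoldPredict then returns
% out-of-fold (here leave-one-out) predictions.
cvMdl = fitcecoc(XKnn, labels, 'Learners', tknn, 'CVPartition', c);
oofPreds = kfoldPredict(cvMdl);
myLoss = double(sum(abs(oofPreds - labels)))/double(N);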
  2 Comments
Don Mathis on 23 Sep 2019
Is it important for you to use ECOC for this? fitcknn directly supports multiclass problems.
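A minimal sketch of that alternative, assuming the question's XKnn, labels, c and distKNN (mdlKnn is an illustrative name):
% Multiclass kNN directly; Bayesian optimization of NumNeighbors over
% the same leave-one-out partition.
mdlKnn = fitcknn(XKnn, labels, 'Distance', @distKNN, ...
    'OptimizeHyperparameters', {'NumNeighbors'}, ...
    'HyperparameterOptimizationOptions', struct('CVPartition', c));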
Elena Casiraghi on 24 Sep 2019
ECOC classifiers provide better results (at least in my first experiments).
Anyhow, I solved the problem; see my accepted answer below.


Accepted Answer

Elena Casiraghi on 24 Sep 2019
ECOC classifiers provide better results (at least in my first experiments).
Anyhow, I solved the problem.
I defined a function myLoss:
function loss = myLoss(C,S,W,Cost)
% C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs.
% The column order corresponds to the class order in CVMdl.ClassNames.
% Construct C by setting C(p,q) = 1 if observation p is in class q, for each row.
% Set all other elements of row p to 0.
%
% S is an n-by-K numeric matrix of negated loss values for the classes.
% Each row corresponds to an observation.
% The column order corresponds to the class order in CVMdl.ClassNames.
% The input S resembles the output argument NegLoss of kfoldPredict.
%
% W is an n-by-1 numeric vector of observation weights.
% If you pass W, the software normalizes its elements to sum to 1.
%
% Cost is a K-by-K numeric matrix of misclassification costs.
% For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct
% classification and 1 for misclassification.
if size(C,2) > 1
    [~,Y] = max(C,[],2);        % true class index per observation
else
    Y = C;
end
if size(S,2) > 1
    [~,Yfit] = max(S,[],2);     % predicted class index per observation
else
    Yfit = S;
end
cost = diag(Cost(Y,Yfit));      % per-observation cost Cost(Y(i),Yfit(i))
loss = sum(abs(Y - Yfit).*W.*cost); % weighted absolute class-distance loss
end
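A quick sanity check of myLoss on toy values (not from the original post): four observations, three classes, only the last observation misclassified.
K = 3;
C = logical([1 0 0; 0 1 0; 0 0 1; 1 0 0]);                 % true classes 1,2,3,1
S = [0.9 0.1 0.0; 0.2 0.7 0.1; 0.1 0.1 0.8; 0.0 0.3 0.7]; % predicted 1,2,3,3
W = ones(4,1)/4;                                            % uniform weights
Cost = ones(K) - eye(K);                                    % 0/1 misclassification cost
myLoss(C, S, W, Cost)                                       % |1-3| * 0.25 * 1 = 0.5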
I'm now using the code below, which works fine :) (to test it, I also optimized the ECOC with an exhaustive search). However, I will follow your suggestion and also test fitcknn.
c = cvpartition(nTrain,'LeaveOut');
% train knn on the reduced training set
% I don't use cost matrices!
num = optimizableVariable('n',[1,maxK],'Type','integer');
cod = optimizableVariable('coding',{'onevsone','onevsall','ternarycomplete'},'Type','categorical');
fun = @(x) kfoldLoss(fitcecoc(X, Y, 'CVPartition', c, 'Coding', char(x.coding), ...
    'Learners', templateKNN('Distance', @distKNN, 'NumNeighbors', x.n)), ...
    'LossFun', @myLoss);
bestParams = bayesopt(fun, [num, cod], 'Verbose', 0);
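One possible follow-up (a sketch; best and finalMdl are illustrative names): bayesopt returns a BayesianOptimization object, so the chosen parameters can be read back with bestPoint and used to refit the final model on all the training data.
best = bestPoint(bestParams);   % table with variables 'n' and 'coding'
finalMdl = fitcecoc(X, Y, 'Coding', char(best.coding), ...
    'Learners', templateKNN('Distance', @distKNN, 'NumNeighbors', best.n));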


Release

R2019b
