Matlab: Convolutional Network for Dice Number Recognition
This post describes a Matlab function that uses a 7-layer convolutional network for dice number identification (numbers 1-12). It relies on the image processing method proposed earlier, shown in this post.
Mode of operation:
Two methods were tested. As shown in the previous post, the die is only a tiny part of the whole image. With image processing, the die itself is extracted and saved in a new folder, sorted by number. These crops are loaded into "digitData" and then split into a training set (4/5) and a test set (1/5 of all images). In total I have 996 images, and I tested both the unprocessed crops and the binary-thresholded ones; surprisingly, the unprocessed images gave the higher accuracy. An overall accuracy of 99.6% on the training set and 99.2% on the test set was achieved, around 5% higher than for the binary images. In this image the accuracy of the predictions of the convolutional net can be seen; the unprocessed images clearly improve on the binary ones.
The network trained on the unprocessed images made 4 wrong predictions, the one trained on the binary images 15.
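To break the wrong predictions down by dice value, a confusion matrix can be computed from the test-set predictions. This is a minimal sketch, assuming the variables YTest and TTest from the source code below:
% Confusion matrix: rows are the true dice values, columns the predicted ones.
C = confusionmat(TTest,YTest);
disp(C)
% Off-diagonal entries are the wrong predictions.
fprintf('%d misclassified test images\n', sum(C(:)) - trace(C));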
The Matlab function "trainNetwork" was used for the training, which took around 40 seconds for images of size 25x25. The training curves are shown here (the upper one is for the binary images, the lower one for the unprocessed ones):
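The network below expects single-channel crops of size new_sz x new_sz (25x25 here); the resizing itself belongs to the image processing of the previous post. As a hypothetical stand-in, a custom ReadFcn on the datastore from the source code below could take care of it:
new_sz = 25;   % edge length of the network input (assumed value)
% Assumes the saved crops are already grayscale; only the size is adjusted here.
digitData.ReadFcn = @(file) imresize(imread(file),[new_sz new_sz]);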
Source code:
% folder_save, new_sz, num_cases and folder come from the image processing script of the previous post.
digitDatasetPath = fullfile(folder_save);
digitData = imageDatastore(digitDatasetPath,...
    'IncludeSubfolders',true,'LabelSource','foldernames');
% _______________Begin Training_______________________%
% Despite the name, this is the per-label size of the test set (1/5 of the images).
trainingNumFiles = round(digitData.countEachLabel.Count.*1./5);
digitData.countEachLabel   % display the number of images per label
rng(1) % For reproducibility
% The first output gets min(trainingNumFiles) images per label (test set),
% the second output gets the remaining ~4/5 (training set).
[testDigitData,trainDigitData] = splitEachLabel(digitData,...
    min(trainingNumFiles));
layers = [
    imageInputLayer([new_sz new_sz 1])      % grayscale input, new_sz x new_sz pixels

    convolution2dLayer(3,16,'Padding',1)
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)

    convolution2dLayer(3,32,'Padding',1)
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)

    convolution2dLayer(3,64,'Padding',1)
    batchNormalizationLayer
    reluLayer

    fullyConnectedLayer(num_cases)          % one output per dice value (1-12)
    softmaxLayer
    classificationLayer];
options = trainingOptions('sgdm',...
'InitialLearnRate',0.03,...
'MaxEpochs',50,...
'MiniBatchSize',256,...
'VerboseFrequency',200,...
'ValidationFrequency',200,...
'ValidationPatience',Inf,...
'Plots','training-progress',...
'OutputFcn',@(info)stopIfAccuracyNotImproving(info,5)); % custom stopping criterion, sketched below
tic;
convnet = trainNetwork(trainDigitData,layers,options);
time = toc;
YTest = classify(convnet,testDigitData);   % predictions on the held-out test set
TTest = testDigitData.Labels;
Y = classify(convnet,digitData);           % predictions on all images
T = digitData.Labels;
% Change to the parent directory of 'folder' before saving the network.
cd(folder)
cd ..
% Save the trained network.
name_net = sprintf('dice_net_sz_%d',new_sz);
save(name_net,'convnet');
% Calculate the accuracy on the test set and on all images.
accuracy = sum(YTest == TTest)/numel(TTest);
acc = sum(Y == T)/numel(T);
fprintf('Test accuracy is %.4f and the full accuracy %.4f\n', accuracy, acc);
fprintf('dice_net_sz_%d generated!\n',new_sz);
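The output function "stopIfAccuracyNotImproving" used in the training options is not listed above; it aborts training once the validation accuracy has not improved for N validations. A minimal sketch, following the example in the Matlab documentation for 'OutputFcn' (it only has an effect if validation data is passed to trainingOptions):
function stop = stopIfAccuracyNotImproving(info,N)
% Stop training when the validation accuracy has not improved for N validations.
stop = false;

persistent bestValAccuracy
persistent valLag

if info.State == "start"
    % Reset the counters at the beginning of training.
    bestValAccuracy = 0;
    valLag = 0;
elseif ~isempty(info.ValidationAccuracy)
    if info.ValidationAccuracy > bestValAccuracy
        % New best validation accuracy: reset the lag counter.
        bestValAccuracy = info.ValidationAccuracy;
        valLag = 0;
    else
        valLag = valLag + 1;
    end
    % Give up after N validations without improvement.
    if valLag >= N
        stop = true;
    end
end
end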