How to declare the weight and bias values for a convolution layer?

Greetings,
I want to add a convolution layer to the existing squeezenet network. However, I get the following errors:
Error using assembleNetwork (line 47)
Invalid network.
Error in trainyolov3 (line 80)
newbaseNetwork = assembleNetwork(lgraph); % for tiny-yolov3-coco
Caused by:
Layer 'add_conv': Empty Weights property. Specify a nonempty value for the Weights property.
Layer 'add_conv': Empty Bias property. Specify a nonempty value for the Bias property.
Therefore, I want to ask how to declare the weight and bias values for that layer. My code is shown below:
lgraph = disconnectLayers(lgraph,'fire8-concat','fire9-squeeze1x1');
layer = [
maxPooling2dLayer([3 3],"Name","pool6","Padding","same","Stride",[2 2])
convolution2dLayer([1 1],512,"Name","add_conv","Padding",[1 1 1 1],"Stride",[2 2])
reluLayer("Name","relu_add_conv")];
lgraph = addLayers(lgraph, layer);
lgraph = connectLayers(lgraph,'fire8-concat','pool6');
lgraph = connectLayers(lgraph,'relu_add_conv','fire9-squeeze1x1');
newbaseNetwork = assembleNetwork(lgraph);

Answers (1)

Manish on 27 Aug 2024
Edited: Manish on 27 Aug 2024
Hi,
It seems the error you're encountering is due to the weights and biases not being initialized.
You can verify this by inspecting the layer properties, for example layer(2).Weights. In the code provided, the Weights and Bias properties are uninitialized (empty).
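For example, with the layer array from your code, the check looks like this:
% Inspect the convolution layer from the question (layer(2) is 'add_conv')
layer(2).Weights   % returns [], because no weights were specified
layer(2).Bias      % returns [], for the same reason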
To resolve this issue, explicitly initialize the weights and biases and pass them when defining the "add_conv" layer.
Refer to the code snippet below, which demonstrates how to initialize the weights and biases:
% Load the network and convert to layer graph
net = squeezenet('Weights', 'imagenet');
lgraph = layerGraph(net);
% Disconnect the specified layers
lgraph = disconnectLayers(lgraph, "fire8-concat", "fire9-squeeze1x1");
% Define the input channels based on the architecture
inputChannels = 512; % This should match the number of output channels from the previous layer
% Initialize weights and biases
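% Expected sizes: Weights is FilterSize(1)-by-FilterSize(2)-by-NumChannels-by-NumFilters,
% and Bias is 1-by-1-by-NumFilters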
weights = randn([1, 1, inputChannels, 512]) * 0.01; % Random initialization
bias = zeros([1, 1, 512]); % Zero initialization
% Define the new layers with initialized weights and biases
layer = [
    maxPooling2dLayer([3 3], "Name", "pool6", "Padding", "same", "Stride", [2 2])
    convolution2dLayer([1 1], 512, "Name", "add_conv", "Padding", [1 1 1 1], ...
        "Stride", [2 2], "Weights", weights, "Bias", bias)
    reluLayer("Name", "relu_add_conv")
    ];
% Add the new layers to the graph
lgraph = addLayers(lgraph, layer);
% Connect the layers
lgraph = connectLayers(lgraph, 'fire8-concat', 'pool6');
lgraph = connectLayers(lgraph, 'relu_add_conv', 'fire9-squeeze1x1');
% Assemble the network
newbaseNetwork = assembleNetwork(lgraph);
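Optionally, you can sanity-check the result; a minimal check, assuming you locate the new layer by name:
% Optional: the Network Analyzer flags empty learnables and size mismatches
analyzeNetwork(lgraph);
% Confirm 'add_conv' now carries nonempty parameters in the assembled network
idx = find(strcmp({newbaseNetwork.Layers.Name}, 'add_conv'));
size(newbaseNetwork.Layers(idx).Weights)   % expected: 1 1 512 512
size(newbaseNetwork.Layers(idx).Bias)      % expected: 1 1 512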
For more information on initializing the weights and biases explicitly, you can refer to the "Parameters and Initialization" section in the convolution2dLayer documentation.
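Note that the initializer name-value options described there (for example "WeightsInitializer" and "BiasInitializer") take effect when the network is trained with trainNetwork or initialized as a dlnetwork; assembleNetwork still requires explicit numeric Weights and Bias values, which is why the snippet above sets them directly. A minimal sketch of the initializer-based definition, for comparison:
% Sketch only: initializers apply at training/initialization time,
% not when assembleNetwork is called on untrained layers
convLayerAlt = convolution2dLayer([1 1], 512, "Name", "add_conv", ...
    "Padding", [1 1 1 1], "Stride", [2 2], ...
    "WeightsInitializer", "glorot", "BiasInitializer", "zeros");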
Hope it helps!
