Semantic Segmentation input size
According to this example https://www.mathworks.com/examples/computer-vision/mw/vision-ex90050995-create-a-semantic-segmentation-network
it states that "A semantic segmentation network starts with an imageInputLayer, which defines the smallest image size the network can process."
I tried to run this code with images of different sizes, but I get the error:
The training images are of size 192x144x1 but the input layer expects images of size 32x32x1.
I only changed the input from an RGB channel to a grayscale channel. My ImageDatastore contains grayscale images in 5 different sizes, from 128x128 to 320x320, and some are not square. Why can't this network process images that are larger than the specified input size?
Vishal Bhutani on 21 Sep 2018
By my understanding, you want to train a semantic segmentation network on a set of images of varying sizes. Although the imageInputLayer defines the smallest image size the network can process, trainNetwork expects every training image to exactly match the input layer size, which is why you see that error. So first resize all of your images to one common size (they do not have to be square, but they must all match), and then set the input layer to that size with the following commands:
>> inputSize = [size1 size2 3];
>> imgLayer = imageInputLayer(inputSize)
where size1 and size2 specify your common image size. Use 3 channels for RGB images and 1 for grayscale images. Hope it helps.
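As a minimal sketch of the resizing step, you can attach a custom ReadFcn to the datastore so every image is resized on read. The folder name and the target size [192 144] here are assumptions; substitute your own folder and whatever size you choose for the input layer:

```matlab
% Assumed common target size [rows cols]; pick any size, but it must
% match the imageInputLayer used for training.
targetSize = [192 144];

% 'myImages' is a hypothetical folder of grayscale training images.
imds = imageDatastore('myImages');

% Resize every image as it is read so all images match targetSize.
imds.ReadFcn = @(file) imresize(imread(file), targetSize);

% The input layer must then use the same size, with 1 channel
% for grayscale (3 for RGB).
imgLayer = imageInputLayer([targetSize 1]);
```

For semantic segmentation, remember that the corresponding pixel label images must be resized to the same size as well, using nearest-neighbor interpolation so no new label values are invented.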