checkLayer
Check validity of custom or function layer
Description
checkLayer(layer,validInputSize) checks the validity of a custom or function layer using generated data of the sizes in validInputSize. For layers with a single input, set validInputSize to a typical size of input data to the layer. For layers with multiple inputs, set validInputSize to a cell array of typical sizes, where each element corresponds to a layer input.
checkLayer(layer,validInputSize,Name=Value) specifies additional options using one or more name-value arguments.
Examples
Check Custom Layer Validity
Check the validity of the example custom layer preluLayer
.
The custom layer preluLayer
, attached to this example as a supporting file, applies the PReLU operation to the input data. To access this layer, open this example as a live script.
Create an instance of the layer.
layer = preluLayer;
Because the layer has a custom initialize function, initialize the layer using a networkDataLayout
object that specifies the expected input size and format of a single observation of typical input to the layer.
Specify a valid input size of [24 24 20]
, where the dimensions correspond to the height, width, and number of channels of the previous layer output.
validInputSize = [24 24 20];
layout = networkDataLayout(validInputSize,"SSC");
layer = initialize(layer,layout);
Check the layer validity using checkLayer. Specify the valid input size as the size used to initialize the layer. When you pass data through the network, the layer expects 4-D array inputs, where the first three dimensions correspond to the height, width, and number of channels of the previous layer output, and the fourth dimension corresponds to the observations.
checkLayer(layer,validInputSize)
Skipping multi-observation tests. To enable tests with multiple observations, specify the 'ObservationDimension' option.
For 2-D image data, set 'ObservationDimension' to 4.
For 3-D image data, set 'ObservationDimension' to 5.
For sequence data, set 'ObservationDimension' to 2.
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ..
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 12 Passed, 0 Failed, 0 Incomplete, 16 Skipped.
	 Time elapsed: 0.054851 seconds.
The results show the number of passed, failed, and skipped tests. If you do not specify the ObservationDimension
option, or do not have a GPU, then the function skips the corresponding tests.
Check Multiple Observations
For multi-observation image input, the layer expects an array of observations of size h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels, respectively, and N is the number of observations.
To check the layer validity for multiple observations, specify the typical size of an observation and set the ObservationDimension
option to 4.
checkLayer(layer,validInputSize,ObservationDimension=4)
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ........
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 18 Passed, 0 Failed, 0 Incomplete, 10 Skipped.
	 Time elapsed: 0.030498 seconds.
In this case, the function does not detect any issues with the layer.
Check Function Layer Validity
Create a function layer object that applies the softsign operation to the input. The softsign operation is given by the function f(x) = x/(1 + |x|).
layer = functionLayer(@(X) X./(1 + abs(X)))
layer = 
  FunctionLayer with properties:

             Name: ''
       PredictFcn: @(X)X./(1+abs(X))
      Formattable: 0
    Acceleratable: 0

  Learnable Parameters
    No properties.

  State Parameters
    No properties.

  Show all properties
Check that the layer is valid using the checkLayer
function. Set the valid input size to the typical size of a single observation input to the layer. For example, for a single input, the layer expects observations of size h-by-w-by-c, where h, w, and c are the height, width, and number of channels of the previous layer output, respectively.
Specify validInputSize
as the typical size of an input array.
validInputSize = [5 5 20];
checkLayer(layer,validInputSize)
Skipping multi-observation tests. To enable tests with multiple observations, specify the 'ObservationDimension' option.
For 2-D image data, set 'ObservationDimension' to 4.
For 3-D image data, set 'ObservationDimension' to 5.
For sequence data, set 'ObservationDimension' to 2.
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ..
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 12 Passed, 0 Failed, 0 Incomplete, 16 Skipped.
	 Time elapsed: 0.29567 seconds.
The results show the number of passed, failed, and skipped tests. If you do not specify the ObservationDimension
option, or do not have a GPU, then the function skips the corresponding tests.
Check Multiple Observations
For multi-observation image input, the layer expects an array of observations of size h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels, respectively, and N is the number of observations.
To check the layer validity for multiple observations, specify the typical size of an observation and set the ObservationDimension
option to 4.
layer = functionLayer(@(X) X./(1 + abs(X)));
validInputSize = [5 5 20];
checkLayer(layer,validInputSize,ObservationDimension=4)
Skipping GPU tests. No compatible GPU device found.
Skipping code generation compatibility tests. To check validity of the layer for code generation, specify the 'CheckCodegenCompatibility' and 'ObservationDimension' options.

Running nnet.checklayer.TestLayerWithoutBackward
.......... ........
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 18 Passed, 0 Failed, 0 Incomplete, 10 Skipped.
	 Time elapsed: 0.14659 seconds.
In this case, the function does not detect any issues with the layer.
Check Custom Layer for Code Generation Compatibility
Check the code generation compatibility of the custom layer codegenPreluLayer
.
The custom layer codegenPreluLayer
, attached to this example as a supporting file, applies the PReLU operation to the input data. To access this layer, open this example as a live script.
Create an instance of the layer and check its validity using checkLayer
. Specify the valid input size as the size of a single observation of typical input to the layer. The layer expects 4-D array inputs, where the first three dimensions correspond to the height, width, and number of channels of the previous layer output, and the fourth dimension corresponds to the observations.
Specify the typical size of a single input observation and set the ObservationDimension
option to 4. To check for code generation compatibility, set the CheckCodegenCompatibility
option to true
. The checkLayer
function does not check that functions used by the layer are compatible with code generation. To check that the custom layer definition is supported for code generation, first use the Code Generation Readiness app. For more information, see Check Code by Using the Code Generation Readiness Tool (MATLAB Coder).
layer = codegenPreluLayer(20,"prelu");
validInputSize = [24 24 20];
checkLayer(layer,validInputSize,ObservationDimension=4,CheckCodegenCompatibility=true)
Skipping GPU tests. No compatible GPU device found.

Running nnet.checklayer.TestLayerWithoutBackward
.......... .......... ...
Done nnet.checklayer.TestLayerWithoutBackward
__________

Test Summary:
	 23 Passed, 0 Failed, 0 Incomplete, 5 Skipped.
	 Time elapsed: 0.67454 seconds.
The function does not detect any issues with the layer.
Input Arguments
layer
— Layer to check
nnet.layer.Layer
object | nnet.layer.ClassificationLayer
object | nnet.layer.RegressionLayer
object | FunctionLayer
object
Layer to check, specified as an nnet.layer.Layer, nnet.layer.ClassificationLayer, nnet.layer.RegressionLayer, or FunctionLayer object.
If layer
has learnable or state parameters, then the layer must be initialized. If the layer has a custom
initialize
function, then first initialize the layer by using the initialize function with networkDataLayout
objects.
The checkLayer
function does not support layers that
inherit from nnet.layer.Formattable
.
For an example showing how to define your own custom layer, see Define Custom Deep Learning Layer with Learnable Parameters. To
create a layer that applies a specified function, use functionLayer
.
validInputSize
— Valid input sizes
vector of positive integers | cell array of vectors of positive integers
Valid input sizes of the layer, specified as a vector of positive integers or a cell array of vectors of positive integers.
For layers with a single input, specify validInputSize
as a vector of integers corresponding to the dimensions of the input data. For example, [5 5 10]
corresponds to valid input data of size 5-by-5-by-10.
For layers with multiple inputs, specify validInputSize
as a cell array of vectors, where each vector corresponds to a layer input and the elements of the vectors correspond to the dimensions of the corresponding input data. For example, {[24 24 20],[24 24 10]}
corresponds to the valid input sizes of two inputs, where 24-by-24-by-20 is a valid input size for the first input and 24-by-24-by-10 is a valid input size for the second input.
For more information, see Layer Input Sizes.
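For instance, checking a layer with two inputs might look like the following sketch. Here, myTwoInputLayer is a hypothetical custom layer class name (not a layer shipped with the toolbox or defined on this page), used only to illustrate the cell-array form of validInputSize.

```matlab
% Hypothetical custom layer with two inputs; myTwoInputLayer is an
% assumed class name, not part of the toolbox.
layer = myTwoInputLayer;

% One cell element per layer input: the first input expects
% 24-by-24-by-20 data, the second 24-by-24-by-10 data.
validInputSize = {[24 24 20],[24 24 10]};
checkLayer(layer,validInputSize,ObservationDimension=4)
```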
For large input sizes, the gradient checks take longer to run. To speed up the check, specify a smaller valid input size.
Example: [5 5 10]
Example: {[24 24 20],[24 24 10]}
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | cell
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN
, where Name
is the argument name and Value
is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Example: ObservationDimension=4
sets the observation dimension to 4.
ObservationDimension
— Observation dimension
positive integer
Observation dimension, specified as a positive integer.
The observation dimension specifies which dimension of the layer input data corresponds to observations. For example, if the layer expects input data of size h-by-w-by-c-by-N, where h, w, and c correspond to the height, width, and number of channels of the input data, respectively, and N corresponds to the number of observations, then the observation dimension is 4. For more information, see Layer Input Sizes.
If you specify the observation dimension, then the
checkLayer
function checks that the layer
functions are valid using generated data with mini-batches of size 1 and
2. If you do not specify the observation dimension, then the function
skips the corresponding tests.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
CheckCodegenCompatibility
— Flag to enable code generation tests
0 (false) (default) | 1 (true)
Flag to enable code generation tests, specified as 0 (false) or 1 (true).
If CheckCodegenCompatibility
is
1
(true), then you must specify the ObservationDimension
option.
Code generation supports intermediate layers with 2-D image or feature input only. Code generation does not support layers with state properties (properties with attribute State
).
The checkLayer
function does not check that functions used by the layer
are compatible with code generation. To check that functions used by the custom layer also
support code generation, first use the Code Generation Readiness app. For more
information, see Check Code by Using the Code Generation Readiness Tool (MATLAB Coder).
For an example showing how to define a custom layer that supports code generation, see Define Custom Deep Learning Layer for Code Generation.
Data Types: logical
More About
Layer Input Sizes
For each layer, the valid input size and the observation dimension depend on the output of the previous layer.
For intermediate layers (layers of type nnet.layer.Layer
),
the valid input size and the observation dimension depend on the type of data
input to the layer.
For layers with a single input, specify
validInputSize
as a vector of integers corresponding to the dimensions of the input data.
For layers with multiple inputs, specify
validInputSize
as a cell array of vectors, where each vector corresponds to a layer input and the elements of the vectors correspond to the dimensions of the corresponding input data.
For large input sizes, the gradient checks take longer to run. To speed up the check, specify a smaller valid input size.
| Layer Input | Input Size | Observation Dimension |
| --- | --- | --- |
| Feature vectors | c-by-N, where c corresponds to the number of channels and N is the number of observations | 2 |
| 2-D images | h-by-w-by-c-by-N, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, and N is the number of observations | 4 |
| 3-D images | h-by-w-by-d-by-c-by-N, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, and N is the number of observations | 5 |
| Vector sequences | c-by-N-by-S, where c is the number of features of the sequences, N is the number of observations, and S is the sequence length | 2 |
| 2-D image sequences | h-by-w-by-c-by-N-by-S, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, N is the number of observations, and S is the sequence length | 4 |
| 3-D image sequences | h-by-w-by-d-by-c-by-N-by-S, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, N is the number of observations, and S is the sequence length | 5 |
For example, for 2-D image classification problems, set validInputSize
to [h w c]
, where h
, w
, and c
correspond to the height, width, and number of channels of the images, respectively, and ObservationDimension
to 4
.
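As a sketch, a check for a custom intermediate layer that processes 2-D image input might look like this, where layer is assumed to be an already-initialized custom layer object and the 28-by-28-by-3 size is an illustrative choice:

```matlab
% h-by-w-by-c size of one observation; for 2-D image data, the
% observation (batch) dimension is the fourth dimension.
validInputSize = [28 28 3];
checkLayer(layer,validInputSize,ObservationDimension=4)
```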
Code generation supports intermediate layers with 2-D image input only.
For output layers (layers of type
nnet.layer.ClassificationLayer
or
nnet.layer.RegressionLayer
), set validInputSize
to the typical size of a single input observation Y
to the layer.
For classification problems, the valid input size and the observation dimension of Y
depend on the type of problem:
| Classification Task | Input Size | Observation Dimension |
| --- | --- | --- |
| 2-D image classification | 1-by-1-by-K-by-N, where K is the number of classes and N is the number of observations | 4 |
| 3-D image classification | 1-by-1-by-1-by-K-by-N, where K is the number of classes and N is the number of observations | 5 |
| Sequence-to-label classification | K-by-N, where K is the number of classes and N is the number of observations | 2 |
| Sequence-to-sequence classification | K-by-N-by-S, where K is the number of classes, N is the number of observations, and S is the sequence length | 2 |
For example, for 2-D image classification problems, set validInputSize
to [1 1 K]
, where K
is the number of classes, and ObservationDimension
to 4
.
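For instance, to check a custom classification output layer for a hypothetical 10-class problem (assuming layer is a custom output layer object):

```matlab
% 1-by-1-by-K size of one observation of predictions, with K = 10
% classes; observations lie along the fourth dimension.
validInputSize = [1 1 10];
checkLayer(layer,validInputSize,ObservationDimension=4)
```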
For regression problems, the dimensions of Y
also depend on
the type of problem. The following table describes the dimensions of
Y
.
| Regression Task | Input Size | Observation Dimension |
| --- | --- | --- |
| 2-D image regression | 1-by-1-by-R-by-N, where R is the number of responses and N is the number of observations | 4 |
| 2-D image-to-image regression | h-by-w-by-c-by-N, where h, w, and c are the height, width, and number of channels of the output, respectively, and N is the number of observations | 4 |
| 3-D image regression | 1-by-1-by-1-by-R-by-N, where R is the number of responses and N is the number of observations | 5 |
| 3-D image-to-image regression | h-by-w-by-d-by-c-by-N, where h, w, d, and c are the height, width, depth, and number of channels of the output, respectively, and N is the number of observations | 5 |
| Sequence-to-one regression | R-by-N, where R is the number of responses and N is the number of observations | 2 |
| Sequence-to-sequence regression | R-by-N-by-S, where R is the number of responses, N is the number of observations, and S is the sequence length | 2 |
For example, for 2-D image regression problems, set validInputSize
to [1 1 R]
, where R
is the number of responses, and ObservationDimension
to 4
.
Algorithms
List of Tests
The checkLayer
function checks the validity of a custom layer
by performing a series of tests, described in these tables. For more information on
the tests used by checkLayer
, see Check Custom Layer Validity.
The checkLayer
function uses these tests to check the validity of custom
intermediate layers (layers of type nnet.layer.Layer
).
| Test | Description |
| --- | --- |
| functionSyntaxesAreCorrect | The syntaxes of the layer functions are correctly defined. |
| predictDoesNotError | The predict function does not error. |
| forwardDoesNotError | When specified, the forward function does not error. |
| forwardPredictAreConsistentInSize | When forward is specified, forward and predict output values of the same size. |
| backwardDoesNotError | When specified, backward does not error. |
| backwardIsConsistentInSize | When backward is specified, the outputs of backward are consistent in size: dLdX is the same size as X, and each dLdW is the same size as W. |
| predictIsConsistentInType | The outputs of predict are consistent in type with the inputs. |
| forwardIsConsistentInType | When forward is specified, the outputs of forward are consistent in type with the inputs. |
| backwardIsConsistentInType | When backward is specified, the outputs of backward are consistent in type with the inputs. |
| gradientsAreNumericallyCorrect | When backward is specified, the gradients computed in backward are consistent with the numerical gradients. |
| backwardPropagationDoesNotError | When backward is not specified, the derivatives can be computed using automatic differentiation. |
| predictReturnsValidStates | For layers with state properties, the predict function returns valid states. |
| forwardReturnsValidStates | For layers with state properties, the forward function, if specified, returns valid states. |
| resetStateDoesNotError | For layers with state properties, the resetState function, if specified, does not error and resets the states to valid states. |
| codegenPragmaDefinedInClassDef | The pragma "%#codegen" for code generation is specified in the class file. |
| layerPropertiesSupportCodegen | The layer properties support code generation. |
| predictSupportsCodegen | predict is valid for code generation. |
| doesNotHaveStateProperties | For code generation, the layer does not have state properties. |
| functionLayerSupportsCodegen | For code generation, the layer function must be a named function on the path and the Formattable property must be 0 (false). |
Some tests run multiple times. These tests also check different data types and for GPU compatibility:
- predictIsConsistentInType
- forwardIsConsistentInType
- backwardIsConsistentInType
To execute the layer functions on a GPU, the functions must support inputs and outputs of
type gpuArray
with the underlying data type
single
.
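For example, a predict function written with element-wise operations typically satisfies this requirement, because such operations accept gpuArray inputs with underlying type single. This minimal sketch assumes a simple ReLU-style layer:

```matlab
function Z = predict(layer,X)
    % X can be a numeric array or a gpuArray with underlying type
    % single; max operates element-wise and preserves the array type,
    % so Z is a gpuArray whenever X is.
    Z = max(X,0);
end
```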
The checkLayer
function uses these tests to check the
validity of custom output layers (layers of type
nnet.layer.ClassificationLayer
or
nnet.layer.RegressionLayer
).
| Test | Description |
| --- | --- |
| forwardLossDoesNotError | forwardLoss does not error. |
| backwardLossDoesNotError | backwardLoss does not error. |
| forwardLossIsScalar | The output of forwardLoss is scalar. |
| backwardLossIsConsistentInSize | When backwardLoss is specified, the output of backwardLoss is consistent in size: dLdY is the same size as the predictions Y. |
| forwardLossIsConsistentInType | The output of forwardLoss is consistent in type with the inputs. |
| backwardLossIsConsistentInType | When backwardLoss is specified, the output of backwardLoss is consistent in type with the inputs. |
| gradientsAreNumericallyCorrect | When backwardLoss is specified, the gradients computed in backwardLoss are numerically correct. |
| backwardPropagationDoesNotError | When backwardLoss is not specified, the derivatives can be computed using automatic differentiation. |
The forwardLossIsConsistentInType
and
backwardLossIsConsistentInType
tests also check for GPU compatibility. To
execute the layer functions on a GPU, the functions must support inputs and outputs of type
gpuArray
with the underlying data type single
.
Version History
Introduced in R2018a
See Also
trainNetwork
| trainingOptions
| analyzeNetwork
Topics
- Check Custom Layer Validity
- Define Custom Deep Learning Layers
- Define Custom Deep Learning Layer with Learnable Parameters
- Define Custom Deep Learning Layer with Multiple Inputs
- Define Custom Classification Output Layer
- Define Custom Regression Output Layer
- Define Custom Deep Learning Layer for Code Generation
- List of Deep Learning Layers
- Deep Learning Tips and Tricks