Dense

The Dense block represents a fully connected layer of artificial nodes.

Each node (as many as the Nodes attribute specifies) has one weight per input feature plus a bias, and its output is a function of its inputs, according to the formula:

$$y = f\left(\sum_{i=1}^{P} w_i x_i + b\right) = f(\mathbf{w} \cdot \mathbf{x} + b)$$

The sum inside the activation function f is the dot product of the node's weights vector w with the features vector x, plus the bias b.
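As an illustration (not the platform's own code), here is the same computation in plain NumPy; the array shapes and the default ReLU activation are assumptions for the sketch:

```python
import numpy as np

def dense_forward(x, W, b, activation=lambda z: np.maximum(z, 0.0)):
    # y = f(W @ x + b): one row of W (and one bias) per node.
    return activation(W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=3)         # P = 3 input features
W = rng.normal(size=(2, 3))    # N = 2 nodes, one weight per (node, feature) pair
b = np.zeros(2)                # one bias per node
print(dense_forward(x, W, b))  # 2 outputs, one per node
```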

Figure 1. Dense layers

Dense blocks are the only blocks with nodes used in multilayer perceptrons, the simplest form of deep neural network: a multilayer perceptron is just a stack of dense layers, as sketched below.
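A minimal sketch of a multilayer perceptron in Keras (assuming TensorFlow as the framework; the layer sizes are arbitrary):

```python
import tensorflow as tf

# A multilayer perceptron is nothing but stacked dense layers.
mlp = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),                      # 16 input features
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g. a 10-class output
])
mlp.summary()
```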

Flatten input to Dense blocks

If the input has more than one feature dimension (for example, an image has height, width, and channels), the data should be flattened with a Flatten block before it is fed to the nodes, as in the sketch below.
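A sketch of the same pattern in Keras (again an assumption about the framework): the Flatten layer turns a height × width × channels tensor into a single feature vector before the dense layer sees it.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 3)),  # a small RGB image: height, width, channels
    tf.keras.layers.Flatten(),          # -> a vector of 28 * 28 * 3 = 2352 features
    tf.keras.layers.Dense(64, activation="relu"),
])
model.summary()
```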

Dense layers grow quickly in memory usage

A layer with N nodes and P input features has N × P weights, since every node is fully connected and each feature is treated separately. Memory usage therefore grows quickly with the size of the input data, which makes these layers unsuitable for data like images, where P = height × width × number of channels is at least 1 million for a standard-definition image.
Convolutional blocks, e.g., the 2D Convolution block, take better advantage of the structure within image data and need less memory to work, as the worked count below shows.
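A worked count makes the difference concrete. This is only an illustrative sketch; the 720 × 480 frame size and the choice of 32 nodes/filters are assumptions:

```python
# Weights of a dense layer vs. a 2D convolution on the same image.
height, width, channels = 480, 720, 3
P = height * width * channels             # 1,036,800 input features
N = 32                                    # nodes (dense) / filters (conv)

dense_weights = N * P + N                 # one weight per (node, feature), plus biases
conv_weights = 3 * 3 * channels * N + N   # a 3x3 kernel is reused across the whole image

print(f"Dense:  {dense_weights:,}")       # 33,177,632
print(f"Conv2D: {conv_weights:,}")        # 896
```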

Parameters

Nodes: the number of nodes in the layer.

Initializer: the procedure used to set the initial values of the weights and the bias before starting training.
Default: Glorot uniform initialization

Activation: the function used to transform the output of the dot product inside the layer.
Default: ReLU

Trainable: whether the training algorithm is allowed to change the values of the weights during training. In some cases you will want to keep parts of the network static, for instance when using the encoder part of an autoencoder as preprocessing for another model.
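As an illustration of how these parameters map onto code, here is a minimal Keras sketch (assuming TensorFlow; this is not the platform's own implementation):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(
    units=64,                             # Nodes
    kernel_initializer="glorot_uniform",  # Initializer (the block's default)
    activation="relu",                    # Activation (the block's default; Keras's own default is linear)
    trainable=True,                       # set to False to freeze the weights, e.g. a pretrained encoder
)
```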
