1D Convolution

The 1D Convolution block represents a layer that can be used to detect features in a vector.

The 1D Convolution block is composed of a configurable number of filters, each with a set size. A convolution operation is performed between the input vector and each filter, producing as output a new vector with as many channels as there are filters. Every value in the output tensor is then fed through an activation function to introduce nonlinearity.
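The operation can be sketched in a few lines of NumPy. This is a minimal illustration, not the platform's actual implementation; the function name and the example filter are our own:

```python
import numpy as np

def conv1d(x, filters, stride=1):
    """Valid 1D convolution of a single-channel vector `x` with a bank
    of filters of shape (num_filters, width), followed by ReLU.
    Illustrative sketch only."""
    num_filters, width = filters.shape
    out_len = (len(x) - width) // stride + 1
    out = np.empty((out_len, num_filters))
    for i in range(out_len):
        window = x[i * stride : i * stride + width]
        out[i] = filters @ window          # one dot product per filter
    return np.maximum(out, 0.0)            # ReLU activation

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
filters = np.array([[-1.0, 0.0, 1.0]])     # a single difference filter
print(conv1d(x, filters))                  # each output = x[i+2] - x[i] = 2
```

The output has one channel per filter, so stacking more filters widens the output tensor without changing its length.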

How does 1D convolution work?

Figure 1. A 1D convolution with a kernel of size 3 and stride 1.

By default, a filter of the set Width is moved one element at a time when performing the convolution; this step size is called the Horizontal stride and can be changed by the user.

The bigger the stride, the smaller the output vector. A larger stride reduces the number of parameters and the memory used, but at the cost of some information.
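For an unpadded ("valid") convolution, the output length follows directly from the input length, the filter width, and the stride. The formula below is standard; the function name is our own:

```python
# Output length of a "valid" 1D convolution for a given stride.
def output_length(input_length, width, stride):
    return (input_length - width) // stride + 1

print(output_length(100, 3, 1))  # 98
print(output_length(100, 3, 2))  # 49 -> doubling the stride roughly halves the output
```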


Padding is the process of adding one or more zero elements at the boundaries of a vector, in order to increase its effective size.

Convolutional layers return by default a smaller tensor than the input. If a lot of convolutional layers are strung together, the output tensor is progressively reduced in size until, eventually, it might become unusable.

By padding a tensor before a convolutional layer, i.e., "increasing" its size, this effect can be mitigated.
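For a stride of 1, "same" padding adds a total of Width − 1 zeros, split between the two ends, so the output keeps the input's length. A sketch, with a helper name of our own:

```python
import numpy as np

def pad_same(x, width):
    """Zero-pad `x` so a stride-1 convolution of the given width
    produces an output the same length as the input (sketch)."""
    total = width - 1           # zeros needed at stride 1
    left = total // 2
    return np.pad(x, (left, total - left))

x = np.ones(10)
padded = pad_same(x, width=3)
print(len(padded))              # 12 -> convolving with width 3 gives 10 outputs
```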


Filters: The number of convolutional filters to include in the layer. Default: 32

Width: The size of a single filter. Default: 3

Horizontal stride: The number of cells to move while performing the convolution along the vector. Default: 1

Activation: The function that will be applied to each element of the output. Default: ReLU

Padding: Same pads the input so that the output has the same length as the original input. Valid means no padding.

Use bias: Adds a trainable bias vector to the output.

Trainable: Whether we want the training algorithm to change the value of the weights during training. In some cases, one will want to keep parts of the network static, for instance when using the encoder part of an autoencoder as preprocessing for another model.
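The parameters above can be tied together in one minimal sketch. This is our own plain-NumPy illustration of how the block's settings interact, not the platform's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

class Conv1DBlock:
    """Minimal sketch of the block's parameters (illustrative only)."""
    def __init__(self, filters=32, width=3, stride=1, padding="valid",
                 use_bias=True):
        self.w = rng.standard_normal((filters, width))
        self.b = np.zeros(filters) if use_bias else None
        self.width, self.stride, self.padding = width, stride, padding

    def __call__(self, x):
        if self.padding == "same":      # keep output length == input length (stride 1)
            total = self.width - 1
            x = np.pad(x, (total // 2, total - total // 2))
        out_len = (len(x) - self.width) // self.stride + 1
        out = np.stack([self.w @ x[i * self.stride : i * self.stride + self.width]
                        for i in range(out_len)])
        if self.b is not None:
            out += self.b
        return np.maximum(out, 0.0)     # ReLU, the default activation

layer = Conv1DBlock(filters=4, width=3, stride=1, padding="same")
print(layer(np.ones(8)).shape)          # (8, 4): same length, one channel per filter
```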
