Explain the Keras core layers.

1 Answer

The Dense layer is a regular, densely-connected neural network layer. It implements the operation output = activation(dot(input, kernel) + bias), where activation is the element-wise activation function passed via the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True).

Note that if the input to the layer has a rank greater than two, it is flattened prior to the initial dot product with the kernel.

Example

  # First layer in a Sequential model:
  from keras.models import Sequential
  from keras.layers import Dense

  model = Sequential()
  model.add(Dense(32, input_shape=(16,)))
  # The model takes input arrays of shape (*, 16) and outputs arrays of shape (*, 32)
  # After the first layer, you don't need to specify the size of the input:
  model.add(Dense(32))

Arguments

  • units: A positive integer specifying the dimensionality of the output space.
  • activation: The element-wise activation function to use. If none is specified, no activation is applied (i.e., the "linear" activation a(x) = x).
  • use_bias: An optional Boolean indicating whether the layer uses a bias vector.
  • kernel_initializer: The initializer for the kernel weights matrix (several of these arguments are combined in the sketch after this list).
  • bias_initializer: The initializer for the bias vector; by default, Keras uses the zeros initializer, which sets the bias vector to all zeros.
  • kernel_regularizer: A regularizer function applied to the kernel weights matrix.
  • bias_regularizer: A regularizer function applied to the bias vector.
  • activity_regularizer: A regularizer function applied to the output of the layer (its "activation").
  • kernel_constraint: A constraint function applied to the kernel weights matrix.
  • bias_constraint: A constraint function applied to the bias vector.
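
As an illustration not present in the original answer, the following sketch combines several of these arguments on a single Dense layer; the particular choices (glorot_uniform, an l2 penalty, a max_norm constraint) are arbitrary examples:

  from keras.models import Sequential
  from keras.layers import Dense
  from keras import initializers, regularizers, constraints

  model = Sequential()
  model.add(Dense(
      64,                                           # units: output dimensionality
      activation='relu',                            # element-wise activation function
      use_bias=True,                                # include a bias vector
      kernel_initializer=initializers.glorot_uniform(),
      bias_initializer='zeros',                     # the default zero initializer
      kernel_regularizer=regularizers.l2(0.01),     # L2 penalty on the kernel weights
      kernel_constraint=constraints.max_norm(3.0),  # cap the norm of the kernel weights
      input_shape=(16,),
  ))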

Input shape

The layer accepts an nD tensor of shape (batch_size, …, input_dim); the most common situation is a 2D input of shape (batch_size, input_dim).

Output shape

It outputs an nD tensor of shape (batch_size, …, units). For instance, for a 2D input of shape (batch_size, input_dim), the output will have shape (batch_size, units).

Activation

  keras.layers.Activation(activation)

The Activation layer applies an activation function to an output.

Arguments

  • activation: The name of an activation function to use (e.g., 'relu'), or alternatively a Theano or TensorFlow operation.

Input Shape

It accepts an arbitrary input shape. When using this layer as the first layer in a model, use the keyword argument input_shape, a tuple of integers that does not include the samples axis.

Output Shape

The output shape is the same as that of the input shape.
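
Example

The original answer gives no example for this layer; a minimal sketch (the layer sizes are arbitrary) could look like:

  from keras.models import Sequential
  from keras.layers import Dense, Activation

  model = Sequential()
  model.add(Dense(64, input_shape=(16,)))
  # Equivalent to passing activation='relu' directly to the Dense layer above:
  model.add(Activation('relu'))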

Dropout

  keras.layers.Dropout(rate, noise_shape=None, seed=None)

Dropout is applied to the input; it helps prevent overfitting by randomly setting a fraction rate of the input units to 0 at each update during training.

Arguments

  • rate: A float between 0 and 1, representing the fraction of the input units to drop (a sketch follows this list).
  • noise_shape: A 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if the input has shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features).
  • seed: A Python integer to use as a random seed.
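
Example

As a minimal sketch not present in the original answer (the rate of 0.5 and the layer sizes are arbitrary choices):

  from keras.models import Sequential
  from keras.layers import Dense, Dropout

  model = Sequential()
  model.add(Dense(64, activation='relu', input_shape=(20,)))
  # Randomly set half of the 64 units to 0 at each update during training:
  model.add(Dropout(0.5, seed=42))
  model.add(Dense(10, activation='softmax'))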

Flatten

  keras.layers.Flatten(data_format=None)

The Flatten layer flattens the input; it does not affect the batch size.

Arguments

  • data_format: A string, one of channels_last (the default) or channels_first, specifying the ordering of the input dimensions. Its purpose is to preserve weight ordering when switching a model from one data format to another. channels_last corresponds to inputs of shape (batch, …, channels), while channels_first corresponds to inputs of shape (batch, channels, …). The default is the image_data_format value found in your Keras config file at ~/.keras/keras.json; if you never set it, it will be "channels_last".

Example

  from keras.models import Sequential
  from keras.layers import Conv2D, Flatten

  model = Sequential()
  # Assumes channels_first image data format:
  model.add(Conv2D(64, (3, 3),
                   input_shape=(3, 32, 32), padding='same'))
  # Now: model.output_shape == (None, 64, 32, 32)

  model.add(Flatten())
  # Now: model.output_shape == (None, 65536)

Input

  keras.engine.input_layer.Input()

The Input layer uses Input() to instantiate a Keras tensor, which is simply a tensor object from the underlying backend (Theano, TensorFlow, or CNTK), augmented with certain attributes that allow us to build a Keras model just from the inputs and outputs.

For instance, if m, n, and o are Keras tensors, we can build a model with model = Model(inputs=[m, n], outputs=o).

The added Keras attributes are _keras_shape, an integer shape tuple propagated via Keras-side shape inference, and _keras_history, the last layer applied to the tensor. The entire layer graph can be retrieved recursively from that layer.

Arguments

  • shape: A shape tuple of integers that does not include the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors.
  • batch_shape: A shape tuple of integers that includes the batch size. For instance, batch_shape=(10, 32) indicates that the expected input will be batches of ten 32-dimensional vectors, while batch_shape=(None, 32) indicates batches of an arbitrary number of 32-dimensional vectors (demonstrated in the second sketch below).
  • name: An optional string name for the layer; it must be unique, and it is generated automatically if not provided.
  • dtype: The expected data type of the input, as a string (float32, float64, int32, …).
  • sparse: A Boolean specifying whether the placeholder to be created is sparse.
  • tensor: An optional existing tensor to wrap into the Input layer. If set, the layer will not create a placeholder tensor.

Returns

It returns a tensor.

Example

  # Logistic regression in Keras
  from keras.layers import Input, Dense
  from keras.models import Model

  x = Input(shape=(32,))
  y = Dense(16, activation='softmax')(x)
  model = Model(x, y)
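
As an additional sketch not in the original answer, the batch_shape, name, and dtype arguments can be combined to declare a fixed batch size; the sizes here are arbitrary:

  from keras.layers import Input, Dense
  from keras.models import Model

  # Expect batches of exactly ten 32-dimensional vectors:
  x = Input(batch_shape=(10, 32), name='fixed_batch_input', dtype='float32')
  y = Dense(1, activation='sigmoid')(x)
  model = Model(x, y)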

Reshape

  keras.layers.Reshape(target_shape)

It reshapes an output to a given target shape.

Arguments

  • target_shape: A tuple of integers specifying the target shape; it does not include the batch axis.

Input shape

The input shape is arbitrary, although all dimensions in it must be fixed. Use the input_shape keyword argument when using this layer as the first layer in a model.

Output shape

  (batch_size,) + target_shape

Example

  # First layer in a Sequential model
  from keras.models import Sequential
  from keras.layers import Reshape

  model = Sequential()
  model.add(Reshape((3, 4), input_shape=(12,)))
  # Now: model.output_shape == (None, 3, 4)
  # Note: Here `None` represents the batch dimension

  # An intermediate layer in a Sequential model
  model.add(Reshape((6, 2)))
  # Now: model.output_shape == (None, 6, 2)

  # It also supports shape inference using `-1` as a dimension
  model.add(Reshape((-1, 2, 2)))
  # Now: model.output_shape == (None, 3, 2, 2)

Permute

  keras.layers.Permute(dims)

It permutes the dimensions of the input according to a given pattern. It is useful, for example, for connecting RNNs and convnets together.

Example

  from keras.models import Sequential
  from keras.layers import Permute

  model = Sequential()
  model.add(Permute((2, 1), input_shape=(10, 64)))
  # now: model.output_shape == (None, 64, 10)
  # note: `None` is the batch dimension

Argument

  • dims: A tuple of integers specifying the permutation pattern; it does not include the samples dimension. Indexing starts at 1, so, for instance, (2, 1) permutes the first and second dimensions of the input.

Input shape

The input shape is arbitrary. Use the input_shape keyword argument (a tuple of integers that does not include the samples axis) when using this layer as the first layer in a model.

Output shape

The output shape is the same as the input shape, but with the dimensions re-ordered according to the specified pattern.

RepeatVector

  keras.layers.RepeatVector(n)

The RepeatVector layer repeats the input n times.

Example

  from keras.models import Sequential
  from keras.layers import Dense, RepeatVector

  model = Sequential()
  model.add(Dense(32, input_dim=32))
  # now: model.output_shape == (None, 32)
  # note: `None` is the batch dimension

  model.add(RepeatVector(3))
  # now: model.output_shape == (None, 3, 32)

Arguments

  • n: An integer, the repetition factor.

Input shape

The input is a 2D tensor of shape (num_samples, features).

Output shape

The output is a 3D tensor of shape (num_samples, n, features).

Lambda

  keras.layers.Lambda(function, output_shape=None, mask=None, arguments=None)

This layer wraps an arbitrary expression as a Layer object.

Examples

  # Add an x -> x^2 layer
  model.add(Lambda(lambda x: x ** 2))
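
The arguments parameter from the signature above is not demonstrated in the original answer; as a hedged sketch, it passes a dictionary of extra keyword arguments to the wrapped function:

  from keras.models import Sequential
  from keras.layers import InputLayer, Lambda

  model = Sequential()
  model.add(InputLayer(input_shape=(4,)))
  # `arguments` supplies the extra `power` keyword to the wrapped function:
  model.add(Lambda(lambda x, power: x ** power, arguments={'power': 3}))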

  # Now add a layer that returns the concatenation of the positive part
  # of the input and the opposite of the negative part
  from keras import backend as K

  def antirectifier(x):
      x -= K.mean(x, axis=1, keepdims=True)
      x = K.l2_normalize(x, axis=1)
      pos = K.relu(x)
      neg = K.relu(-x)
      return K.concatenate([pos, neg], axis=1)

  def antirectifier_output_shape(input_shape):
      shape = list(input_shape)
      assert len(shape) == 2  # only valid for 2D tensors
      shape[-1] *= 2
      return tuple(shape)

  model.add(Lambda(antirectifier,
                   output_shape=antirectifier_output_shape))

  # Now add a layer that returns the Hadamard product of two input tensors
  # and its sum along the last axis
  from keras import backend as K
  from keras.layers import Dense, Lambda

  def hadamard_product_sum(tensors):
      out1 = tensors[0] * tensors[1]
      out2 = K.sum(out1, axis=-1)
      return [out1, out2]

  def hadamard_product_sum_output_shape(input_shapes):
      shape1 = list(input_shapes[0])
      shape2 = list(input_shapes[1])
      assert shape1 == shape2  # else the Hadamard product isn't possible
      return [tuple(shape1), tuple(shape2[:-1])]

  # input_1 and input_2 are assumed to be existing Keras tensors:
  x1 = Dense(32)(input_1)
  x2 = Dense(32)(input_2)
  layer = Lambda(hadamard_product_sum, hadamard_product_sum_output_shape)
  x_hadamard, x_sum = layer([x1, x2])
...