AUTOENCODERS FOR
NETWORK SIGNAL
ANOMALY DETECTION
FACUNDO CALCAGNO
@FMCALCAGNO
1.
Tessella
2.
Unsupervised
Learning
3.
Anomaly
Detection
○ Common cases: bank fraud, medical problems, abnormal server activity, etc.
4.
Deep Learning
RECURRENT NEURAL NETWORKS
RNNs
LONG SHORT-TERM MEMORY NETWORKS
LSTMs
AUTOENCODERS
Dimensionality Reduction
An autoencoder neural network is an unsupervised learning algorithm that applies
backpropagation, setting the target values to be equal to the inputs.
By making the network learn to reproduce its input, it forces the Compressing
Layer (or Latent Variables) to learn a "compressed" representation of the input.
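For illustration, a minimal dense autoencoder in Keras (not from the slides; the
layer sizes and loss are assumptions):

import numpy
from keras.layers import Input, Dense
from keras.models import Model

input_dim = 784     # assumed input size
encoding_dim = 32   # assumed size of the compressing (latent) layer

x = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation='relu')(x)        # compressing layer
decoded = Dense(input_dim, activation='sigmoid')(encoded)  # reconstruction

autoencoder = Model(x, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
# Backpropagation with targets equal to the inputs:
# autoencoder.fit(X, X, epochs=..., batch_size=...)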
VARIATIONAL AUTOENCODERS
Kullback-Leibler Divergence
The KL divergence is a measure of how "off" two probability distributions P(X)
and Q(X) are. In other words, it measures the distance between two probability
distributions.
Variational Autoencoders use a reversed KL divergence to minimize the difference
between the true distribution P(z|x) and a Gaussian distribution.
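For reference, the standard definition (not spelled out on the slide); the
reversed form simply swaps the two arguments:

D_{\mathrm{KL}}(P \,\|\, Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)}
\qquad
D_{\mathrm{KL}}(Q \,\|\, P) = \sum_x Q(x) \log \frac{Q(x)}{P(x)}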
8.
Croissant Model
[Architecture diagram: input → Bi-Directional LSTM encoder (forward/backward
averaged) → Mean and Sigma latent variables → random sampling → Bi-Directional
LSTM decoder (forward/backward averaged)]
CROISSANT MODEL
Inference Time
[Same architecture diagram as above, traced at inference time]
# Imports added for completeness (Keras 2.x with the TensorFlow backend,
# matching the CuDNNLSTM / multi_gpu_model API used on the next slides)
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Dense, Lambda, Bidirectional, CuDNNLSTM, RepeatVector
from keras.models import Model
from keras.optimizers import RMSprop
from keras.utils import multi_gpu_model

class CroissantModel:
    def __init__(self,
                 input_dim,
                 timesteps,
                 batch_size,
                 intermediate_dim,
                 latent_dim,
                 epsilon_std=1.,
                 gpus=1,
                 learning_rate=0.001):
        # Model parameters
        self.input_dim = input_dim
        self.timesteps = timesteps
        self.batch_size = batch_size
        self.intermediate_dim = intermediate_dim
        self.latent_dim = latent_dim
        self.epsilon_std = epsilon_std
        self.gpus = gpus
        self.learning_rate = learning_rate
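A hypothetical instantiation (the parameter values are illustrative, not from
the talk):

model = CroissantModel(input_dim=1, timesteps=64, batch_size=32,
                       intermediate_dim=128, latent_dim=16)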
TensorFlow Code

def generate_model(self):
    with tf.device('/gpu:0'):
        x = Input(shape=(self.timesteps, self.input_dim,),
                  name="Main_input_VAE")
        # The Encoder: a Bi-Directional LSTM whose forward and backward
        # outputs are averaged (merge_mode='ave')
        h1 = Bidirectional(
            CuDNNLSTM(self.intermediate_dim,
                      kernel_initializer='random_uniform',
                      input_shape=(self.timesteps, self.input_dim,)),
            merge_mode='ave')(x)

        # The Latent Variables: z mean and z log sigma
        z_mean = Dense(self.latent_dim)(h1)
        z_log_sigma = Dense(self.latent_dim)(h1)

        # Sampling: draw from the generated distribution using the
        # z mean and z log sigma parameters
        def sampling(args):
            z_mean, z_log_sigma = args
            epsilon = K.random_normal(
                shape=(self.batch_size, self.latent_dim),
                mean=0., stddev=self.epsilon_std)
            # Note: the textbook reparameterization scales epsilon by
            # K.exp(z_log_sigma); the slides multiply directly.
            return z_mean + z_log_sigma * epsilon

        z = Lambda(sampling,
                   output_shape=(self.latent_dim,))([z_mean, z_log_sigma])
TensorFlow Code

        # The Decoder: a Bi-Directional LSTM, again averaging the
        # forward and backward passes
        decoder_h = Bidirectional(
            CuDNNLSTM(
                self.intermediate_dim,
                kernel_initializer='random_uniform',
                input_shape=(self.timesteps, self.latent_dim,),
                return_sequences=True
            ),
            merge_mode='ave'
        )
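The assembly on the next slide references decoder_mean, decoder_input and
_h_decoded, whose definitions fall on slides omitted from this extract; in the
common Keras LSTM-VAE recipe they look roughly like this (an assumption, not
the speaker's exact code):

        # Assumed definitions (from omitted slides), following the
        # common Keras LSTM-VAE pattern:
        decoder_mean = CuDNNLSTM(self.input_dim, return_sequences=True)

        # Inputs for the standalone generator path
        decoder_input = Input(shape=(self.latent_dim,))
        _h_decoded = RepeatVector(self.timesteps)(decoder_input)
        _h_decoded = decoder_h(_h_decoded)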
TensorFlow Code

        h_decoded = RepeatVector(self.timesteps)(z)
        h_decoded = decoder_h(h_decoded)
        # decoded layer
        x_decoded_mean = decoder_mean(h_decoded)
        # end-to-end autoencoder
        vae = Model(x, x_decoded_mean)
        # standalone generator, from latent space to reconstructed signal
        _x_decoded_mean = decoder_mean(_h_decoded)
        generator = Model(decoder_input, _x_decoded_mean)
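The compile call on the next slide uses opt_rmsprop and vae_loss, also defined
on slides omitted here; a typical choice for this architecture, sketched under
that assumption, is RMSprop plus a reconstruction-error-plus-KL loss:

        # Assumed definitions (omitted slides): RMSprop optimizer and a
        # standard VAE loss, i.e. reconstruction error plus the KL
        # divergence between the learned posterior and a unit Gaussian.
        opt_rmsprop = RMSprop(lr=self.learning_rate)

        def vae_loss(x, x_decoded_mean):
            mse_loss = K.mean(K.square(x - x_decoded_mean))
            kl_loss = -0.5 * K.mean(
                1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma))
            return mse_loss + kl_loss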
TensorFlow Code

        # Optionally replicate the model across several GPUs
        if self.gpus > 1:
            try:
                vae = multi_gpu_model(vae, gpus=self.gpus)
            except Exception:
                print("Error in Multi GPU")
        vae.compile(optimizer=opt_rmsprop, loss=vae_loss)
Results

[Six slides of result charts; the images are not reproduced in this extract.]
“Deep learning will transform every single industry. Healthcare and
transportation will be transformed by deep learning. I want to live in an
AI-powered society.”
Andrew Ng
LSTM VARIATIONAL
AUTOENCODERS FOR
NETWORK SIGNAL
ANOMALY DETECTION
FACUNDO CALCAGNO
@FMCALCAGNO