FGSM Practical Fundamentals

This will be our first hands-on post, and we’re starting with a very visual type of attack that we can implement for free using services like Google Colab, or locally with Anaconda/Miniconda, a Python virtual environment, Docker, or whatever setup you’re most comfortable with.

Neural networks, and convolutional neural networks (CNNs) in particular, are commonly used in computer vision tasks such as image classification, object detection, and facial recognition, which makes them a natural target for this kind of attack.
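Since the post is hands-on, here is a minimal sketch of the core idea in PyTorch (an assumption; any framework with automatic differentiation works the same way). FGSM nudges every pixel by a small amount epsilon in the direction of the sign of the loss gradient with respect to the input. The tiny CNN, the random input, and the epsilon value below are placeholders for illustration, not part of the original post:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy CNN classifier; a stand-in for whatever model is being attacked.
class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

def fgsm_attack(model, x, y, epsilon):
    """FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # step in the direction that raises the loss
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

model = TinyCNN().eval()
x = torch.rand(1, 3, 32, 32)   # placeholder image batch with pixels in [0, 1]
y = model(x).argmax(dim=1)     # attack the model's own current prediction
x_adv = fgsm_attack(model, x, y, epsilon=0.03)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```

With an untrained toy model the prediction is arbitrary, but the same `fgsm_attack` function applied to a trained classifier and a real image is often enough to flip the predicted class while the perturbation stays nearly imperceptible.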


Adversarial Machine Learning Introduction

Adversarial Machine Learning (AML) is a discipline focused on studying vulnerabilities and security flaws in machine learning models with the aim of making them more secure. It seeks to understand how these models can be deceived and manipulated.

In recent years, machine learning in particular, and artificial intelligence in general, have experienced tremendous growth and are being integrated into critical decision-making processes, such as medical diagnosis and, in the future, autonomous vehicles. For this reason, it is essential to minimize the possibility that such decisions could be manipulated by malicious actors. A striking example is the Optical Adversarial Attack (OPAD), in which researchers from Purdue University demonstrated that a “stop” sign could be made to be interpreted as a speed limit sign.
