
Posts

Convolutional Neural Network : The Beginning

Thanks to Deep Learning, Computer Vision is one of the areas that has been advancing rapidly in recent times. Much of this progress has been built and refined around one algorithm: the Convolutional Neural Network. A Convolutional Neural Network (CNN/ConvNet) is a Deep Learning algorithm which can take in an input image, assign learnable weights to the different aspects of that image, and, through learning, differentiate one image from another. Let us look into the basic steps required for a CNN model. The Input Image: the computer sees an image as an array of pixels, whose size depends on the image resolution. As discussed in my previous blogs, the array mainly consists of RGB pixel values. For example, let's say the input image is the following RGB matrix: here we have a 4x4x3 RGB image matrix, where the height of the image is 4 pixels, the width is 4 pixels, and the 3 refers to the RGB channels. This is just…
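The image-as-array idea from the excerpt can be sketched in a few lines. This is a minimal illustration assuming NumPy; the pixel values are randomly generated, not the matrix from the post:

```python
import numpy as np

# A toy 4x4 RGB image: height 4, width 4, and 3 colour channels (R, G, B).
# Values in 0-255, as in a typical 8-bit image; these are illustrative only.
image = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

print(image.shape)   # (4, 4, 3)
print(image[0, 0])   # the [R, G, B] triple of the top-left pixel
```

A CNN consumes exactly this kind of height × width × channels array as its input layer.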

Forward Propagation and backpropagation

As promised in the previous blog, we will dive deep into the mathematical implementation of forward propagation and backpropagation. Forward Propagation: as the name suggests, the input data is fed into the model in the forward direction. Each hidden layer in the network gets access to the input data, processes it through its respective activation function, and then passes it on to the successive layer. Two processing steps take place in each neuron of a hidden layer. 1. Preactivation: in this step, the weighted sum of the inputs to the neuron is computed. 2. Activation: the weighted sum is then passed to an activation function, which adds non-linearity to the network. Based on the result of the activation function applied to the weighted sum, the neuron decides whether or not to pass this information on to the successive neurons. Coming to the mathematical approach, as we know, for the ith training example in a 2…
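The two steps above can be sketched for a single neuron. This is a minimal sketch assuming NumPy and a sigmoid activation; the inputs, weights, and bias are illustrative values, not taken from the post:

```python
import numpy as np

def sigmoid(z):
    """Squash a real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative inputs, weights, and bias for one neuron.
x = np.array([0.5, -1.2, 3.0])   # inputs from the previous layer
w = np.array([0.4, 0.3, -0.2])   # weights
b = 0.1                          # bias

z = np.dot(w, x) + b   # 1. Preactivation: weighted sum of the inputs
a = sigmoid(z)         # 2. Activation: non-linearity applied to z
print(z, a)            # z = -0.66, a ≈ 0.34
```

The activated output `a` is what gets passed forward to the next layer's neurons.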

Activation Functions

In the last blog, we were introduced to the concept of Neural Networks and given an overview of the different steps required to build one. In this blog we will learn in detail about the first few of those steps. When we build a Neural Network, one of the choices we get to make is which Activation Function to use for the hidden layers. Before we move on to what Activation Functions are, let us refresh our memory on how Neural Networks operate: they take in the input parameters with their associated weights and biases, and compute the output from the weighted sums passed through the "activated" neurons. In the last blog we used the sigmoid activation function, but an activation function that almost always works better than the sigmoid is the Hyperbolic Tangent Function. The tanh function is defined as: a(z) = tanh(z) = (e^z − e^−z)/(e^z + e^−z), where −1 ≤ a(z) ≤ 1. This function is almost always better than the sigmoid function because…
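The tanh definition above can be checked directly in code. A minimal sketch assuming NumPy, comparing tanh (zero-centred, range (−1, 1)) with sigmoid (range (0, 1)):

```python
import numpy as np

def tanh(z):
    # tanh(z) = (e^z - e^-z) / (e^z + e^-z); outputs lie in (-1, 1)
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

def sigmoid(z):
    # sigmoid(z) = 1 / (1 + e^-z); outputs lie in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-3, 3, 7)
print(tanh(z))     # symmetric about 0: tanh(0) = 0
print(sigmoid(z))  # centred at 0.5: sigmoid(0) = 0.5
```

Because tanh outputs are centred at zero rather than 0.5, the data flowing into the next layer tends to have mean closer to zero, which usually makes learning easier.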

Introduction to Neural Networks

With the evolution of Machine Learning, Artificial Intelligence has also taken a vast leap. Deep Learning is an advanced take on Machine Learning which uses Neural Networks to solve distinctly complex problems involving the analysis of multi-dimensional data. A Neural Network is built to mimic the functionality of the human brain. A Neural Network mainly consists of 3 different layers: 1. Input Layer 2. Hidden Layer 3. Output Layer. Since the entire concept of Neural Networks is very broad, we will proceed step by step. Firstly, we will implement some ideas using Logistic Regression, an algorithm for Binary Classification. This is a learning algorithm we use when the output Y in a Supervised Learning model is either zero or one. Taking a very basic example: if we want to write an algorithm which recognizes whether an image is a picture of a dog or not, we will output our prediction (let's call it ŷ), which will be the o…
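The logistic-regression prediction described above can be sketched as follows. This is a minimal illustration assuming NumPy; the feature vector, weights, and bias are made-up values, not parameters from the post:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    """y_hat = sigmoid(w·x + b): estimated probability that y = 1
    (e.g. that the image is a picture of a dog)."""
    return sigmoid(np.dot(w, x) + b)

# Illustrative flattened feature vector and learned parameters.
x = np.array([0.2, 0.8, 0.5])
w = np.array([1.0, -0.5, 2.0])
b = -0.3

y_hat = predict(x, w, b)          # probability in (0, 1)
label = int(y_hat >= 0.5)         # threshold at 0.5 for the binary label
print(y_hat, label)
```

Training consists of adjusting `w` and `b` so that `y_hat` agrees with the true labels; that optimization is what later blogs on forward propagation and backpropagation cover.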

Starting a New Journey

Hello everyone! This is Ankita, and I am thrilled that you stopped by my corner of BlogLand! I recently started writing blogs on a whim in the summer of 2020. Techaeris is a space designated for letting creativity splish, splash and spill. Well, as the saying goes, "Creativity is very messy", and I am very creative. So I hope you will join me in making a mess and in the journey of becoming imperfect and creative. These blogs will contain small excerpts from life and topics related to modern technology. Now more about myself: I am a twenty-one-year-old aspiring software engineer who dreams of living life on her own terms. I am still discovering myself, but being in my twenties, I have plenty of time for discovering, don't you think? I like looking into the depths of things, always holding on to the questions "why" and "how" whenever I am introduced to any new element of life. Well, being an engineer and a science student does explain my curious nature to…

Unknown Facts about JavaScript

Along with HTML and CSS, JavaScript is one of the three core technologies of the World Wide Web. Today I will be discussing a few facts which might help you respect JavaScript even more. Even for developers who interact with it daily, some parts of the language remain unexplored. I myself was surprised, as I had no idea about most of the facts I am going to discuss now. 1. JavaScript has two zeros: -0 and +0, although both of them are considered to be equal. This is because both (-0).toString() and (+0).toString() result in "0", and hence the console shows both -0 and +0 as simply 0.

+0 → 0
-0 → 0

2. The XOR operator (^) is used in cryptography.

// Alice and Bob share the same secret key:
let key = 123;
// Alice wants to send Bob a number
let msg = 42;
// But before sending it, Alice encrypts it:
msg = msg ^ key; // or directly: msg ^= key → 81
// Bob receives 81, but knowing…