Hello everyone, welcome to the Machine Learning course offered by the Google Developer Society.
In this course, we cover the essentials of classical Machine Learning, i.e. the pre-Deep Learning era. Readers are then introduced to Deep Learning, the backbone of Artificial Intelligence today. The course requires familiarity with the basics of Python and some background in Calculus.
- Week 1-1: Brushing up the Basics and Concepts
- Week 1-2: Linear Models and hyperparameters
- Week 2-1: Classification - 1
- Week 2-2: Classification - 2
- Week 3-1: Unsupervised Learning
- Week 3-2: Introduction to Neural Networks
- Week 4-1: Introduction to TensorFlow
- Week 4-2: Introduction to Convolutional Neural Networks
- Exploring Beyond...
You may be wondering, "Why should I study this course when there are numerous other courses and tutorials available online?" It's a valid question, considering the overwhelming amount of information on the internet.
The internet is brimming with tens of thousands of resources for learning Machine Learning, but not all of them are of equal quality. Some fail to provide in-depth insights into concepts or explanations of why things work the way they do. Additionally, there are resources that contain inaccurate or misleading information.
We've encountered these challenges ourselves while learning Machine Learning, which is why we've developed this course with a unique and user-friendly approach. Instead of overwhelming you with exhaustive analyses and intricate details, we've carefully curated the best available resources for each topic. These references are chosen to offer you comprehensive insights and to address any doubts you may have, ensuring a crystal-clear understanding.
In essence, we've taken on the Herculean task of conducting thorough internet searches for you, so you don't have to. Our aim is to empower you on your journey to explore Data Science by providing concise explanations and guiding you to the most valuable resources. Happy Learning!
Our journey into the realm of Machine Learning begins with establishing a solid groundwork in several interconnected areas, such as Mathematics and coding skills.
- We commence with Python, undeniably one of the most widely used programming languages worldwide, particularly in Machine Learning and broader programming applications.
- Our next step involves delving into one of the fundamental skills in Machine Learning: Data Plotting and Visualization. This skill serves as an initial and crucial stage in addressing Machine Learning challenges. Understanding the data you're working with is imperative for developing effective Machine Learning models tailored to the task at hand.
- We also explore the concept of Data Distribution, which is central to understanding the types of models suitable for solving specific problems.
- Assuming we have a grasp of the data we're dealing with and understand its distribution, we need to prepare and pre-process the data to align it with the requirements of our chosen model. To address this, we introduce a highly useful library known as Pandas.
- Once the data is prepared, the next question is how to process it efficiently. This is where NumPy (Numerical Python) comes into play. NumPy is extensively employed for numerical computation and is renowned for accelerating a wide range of computational tasks.
- We then aim to familiarize readers with the mathematical foundations of Machine Learning, namely Calculus and Linear Algebra.
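The plotting-and-visualization step above can be sketched in a few lines with matplotlib. This is a minimal illustration, not course material: the data is synthetic and the output file name `scatter.png` is an arbitrary choice.

```python
# A first look at data before modelling: scatter-plot a noisy linear trend.
import matplotlib
matplotlib.use("Agg")            # non-interactive backend; no display needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2 * x + 1 + rng.normal(scale=1.0, size=100)   # synthetic noisy linear data

plt.scatter(x, y, s=10, label="observations")
plt.xlabel("feature")
plt.ylabel("target")
plt.legend()
plt.savefig("scatter.png")
```

Even a plot this simple already answers the key question the course raises: what shape does the data have, and which model family might fit it?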
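The Pandas preparation step can likewise be sketched. The column names and values below are invented purely for illustration; the pattern (impute missing values, then one-hot encode categoricals) is what matters.

```python
import numpy as np
import pandas as pd

# Invented toy data with the two usual problems: missing numbers and categories.
df = pd.DataFrame({
    "age": [22, 35, np.nan, 58],
    "city": ["Delhi", "Mumbai", "Delhi", None],
})

df["age"] = df["age"].fillna(df["age"].mean())    # impute missing numbers
df["city"] = df["city"].fillna("unknown")         # impute missing categories
df = pd.get_dummies(df, columns=["city"])         # one-hot encode the categorical
print(list(df.columns))
```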
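To see why NumPy accelerates computation, compare a plain Python loop with a single vectorized call; the array size below is arbitrary.

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Python loop (slow): sum of squares over just the first 1000 elements
total_loop = 0.0
for v in x[:1000]:
    total_loop += v * v

# Vectorized (fast): sum of squares over the full million elements in one call
total_vec = np.dot(x, x)

print(total_loop)
```

The vectorized call runs in optimized C under the hood, which is why NumPy underpins nearly every numerical library used later in this course.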
With this foundational knowledge in place, we are primed to delve deeply into the world of Machine Learning and explore every facet of it in detail, commencing in week 2.
This week, we begin our walk with the basics of Machine Learning, i.e. Regression and Classification.
- Linear Regression stands as one of the most fundamental concepts in Machine Learning, serving as a cornerstone upon which much of the field is built.
- Moving beyond simple Linear Regression, we explore a more flexible model known as Segmented Regression and assess its performance relative to Simple Linear Regression.
- We delve into the intricacies of Locally Weighted Regression, a non-parametric extension of Linear Regression that fits a separate weighted model around each query point, and we provide an in-depth exploration of this approach.
- Transitioning from the purely linear realm, we venture into Logistic Regression, a pivotal tool for data classification tasks.
- We introduce the Naive Bayes Classifier, a simple yet effective probabilistic model that remains extensively used, notably for text classification.
- Building upon the Data Analysis skills acquired in Week 1, we delve into Exploratory Data Analysis and Data Pre-Processing.
- Our journey culminates in the exploration of Generalized Linear Models (GLMs), a comprehensive framework that unifies various linear models. Although this topic involves mathematical concepts, we strive to present it intuitively, minimizing the focus on mathematical intricacies.
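As a preview of the Linear Regression topic above, here is a minimal least-squares fit in NumPy. The data is synthetic, generated with a true slope of 3 and intercept of 2, so we can check the fit.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=100)   # true slope 3, intercept 2

Xb = np.column_stack([np.ones_like(x), x])            # prepend a bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)            # least-squares solution

print(w.round(2))                                     # approximately [2. 3.]
```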
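The Logistic Regression bullet above can also be sketched from scratch: a sigmoid, the log-loss gradient, and plain gradient descent. The dataset is a synthetic, linearly separable toy problem; the learning rate and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -1.0])
y = (X @ true_w > 0).astype(float)        # linearly separable toy labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)         # gradient of the mean log-loss
    w -= 0.1 * grad                       # learning rate 0.1

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(acc)                                # high accuracy on this separable data
```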
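For the Naive Bayes topic, a toy Gaussian Naive Bayes written from scratch exposes the "naive" independence assumption directly; a real project would typically use a library implementation instead.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=-2.0, size=(100, 2)),   # class 0
               rng.normal(loc=+2.0, size=(100, 2))])  # class 1
y = np.array([0] * 100 + [1] * 100)

def gaussian_log_likelihood(x, mu, var):
    # sum of per-feature log-densities: the "naive" independence assumption
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# per-class feature means and variances (equal priors, so we can ignore them)
stats = {c: (X[y == c].mean(axis=0), X[y == c].var(axis=0)) for c in (0, 1)}
preds = np.array([max((0, 1), key=lambda c: gaussian_log_likelihood(x, *stats[c]))
                  for x in X])
acc = (preds == y).mean()
print(acc)   # near-perfect on this well-separated toy data
</imports>
```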
In Week 3, we move beyond Linear Models and delve into the realm of non-linear models while also exploring some Unsupervised Learning techniques.
- We kick things off with a look at the non-parametric learning model known as the Decision Tree. While it might appear deceptively simple at first, this model proves to be remarkably powerful, serving both Regression and Classification tasks. Understanding the intricacies of Decision Trees is crucial, given their extensive historical and ongoing usage.
- Additionally, we discuss the algorithm employed for constructing Decision Trees, providing insight into the inner workings of this model.
- Our exploration extends to Ensemble Learning, where we delve into an ensemble of Decision Trees known as the Random Forest. We dissect the various ways to combine Decision Trees effectively and weigh the advantages and disadvantages of this approach against standalone Decision Trees.
- We address the challenge posed by high-dimensional images and the feasibility of feeding raw image data to conventional models. To tackle this, we introduce Principal Component Analysis (PCA), a Dimensionality Reduction technique widely used in applications such as Face Recognition.
- We also introduce the k Nearest Neighbours (kNN) model, a non-parametric, instance-based learner most often employed as a Classifier. Although kNN is frequently discussed alongside clustering techniques, it is itself a supervised method: it predicts a point's label from the labels of its nearest training examples.
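The heart of the tree-construction algorithm mentioned above is choosing the split that minimizes impurity. Here is a minimal single-feature sketch using the Gini criterion; real implementations add recursion, multiple features, and stopping rules.

```python
import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    best_t, best_score = None, float("inf")
    for t in np.unique(x)[:-1]:                     # candidate thresholds
        left, right = y[x <= t], y[x > t]
        # impurity of the two children, weighted by their sizes
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0, 0, 0, 1, 1, 1])
t, score = best_split(x, y)
print(t, score)   # 3.0 0.0 — splitting at 3.0 separates the classes perfectly
```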
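The combining idea behind Random Forests is bootstrap aggregation ("bagging") plus majority voting. The sketch below illustrates only that mechanism, using a trivial stand-in "model" (predicting its bootstrap sample's majority class) instead of full Decision Trees.

```python
import numpy as np

rng = np.random.default_rng(3)
y = np.array([0] * 40 + [1] * 60)           # toy labels: class 1 is the majority

votes = []
for _ in range(25):                          # 25 ensemble members
    idx = rng.integers(0, len(y), len(y))    # bootstrap sample (with replacement)
    boot = y[idx]
    votes.append(int(boot.mean() > 0.5))     # each member's "prediction"

majority = int(np.mean(votes) > 0.5)
print(majority)                              # the ensemble recovers the majority class
```

In a real Random Forest each member is a Decision Tree trained on its own bootstrap sample (plus random feature subsets), but the vote-combining step is exactly this.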
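The PCA idea above can be sketched via the eigendecomposition of the covariance matrix. The synthetic 2-D data is constructed so that nearly all variance lies along one direction.

```python
import numpy as np

rng = np.random.default_rng(4)
base = 3.0 * rng.normal(size=200)                 # dominant direction of variation
X = np.column_stack([base, 0.5 * base + rng.normal(scale=0.1, size=200)])

Xc = X - X.mean(axis=0)                           # centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
top = eigvecs[:, -1]                              # first principal component

Z = Xc @ top                                      # 2-D points reduced to 1-D scores
explained = eigvals[-1] / eigvals.sum()
print(round(explained, 3))                        # close to 1: one direction dominates
```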
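And kNN itself fits in a few lines: measure distances, take the k closest training points, vote. The training points and queries below are invented toy data.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)     # Euclidean distances
    nearest = y_train[np.argsort(dists)[:k]]        # labels of the k closest points
    return np.bincount(nearest).argmax()            # majority vote

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
y_train = np.array([0, 0, 1, 1])

print(knn_predict(X_train, y_train, np.array([0.2, 0.1])))  # 0
print(knn_predict(X_train, y_train, np.array([4.8, 5.2])))  # 1
```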
In this week's module, we explore the realm of Neural Networks, a groundbreaking development that has transformed the field of Artificial Intelligence. Neural Networks are extensively employed across various domains of Artificial Intelligence, and their advent has reshaped the landscape of machine learning. In many ways, they serve as a unifying model capable of handling a wide range of tasks that traditional machine learning models address.
- We delve into the fundamental concepts of Neural Networks, along with the associated mathematics, to provide a comprehensive understanding of how they operate.
- We also cover essential concepts like Loss Functions and Optimization Techniques that play pivotal roles in the training and performance of Neural Networks.
- Starting this week, we shift our focus to the practical aspects of machine learning models. These practical considerations are crucial determinants of a model's efficiency and effectiveness, and they represent critical factors to address before tackling real-world problem-solving tasks.
- We introduce relevant statistical concepts that closely intertwine with Machine Learning, forming the bedrock of understanding for various machine learning models.
- To reinforce your understanding of Neural Networks, we provide a set of practice exercises aimed at clarifying key concepts.
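The loss-function and optimization concepts above can be sketched independently of any network: two common losses, plus gradient descent minimizing a simple one-dimensional loss whose gradient we can write by hand.

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p):
    eps = 1e-12                                    # guard against log(0)
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(mse(np.array([1.0, 2.0]), np.array([1.5, 2.5])))                    # 0.25
bce = binary_cross_entropy(np.array([1.0]), np.array([0.9]))
print(round(bce, 3))                                                      # 0.105

# Gradient descent on L(w) = (w - 3)^2, whose gradient is 2(w - 3):
w = 0.0
for _ in range(100):
    w -= 0.1 * 2 * (w - 3.0)                       # step against the gradient
print(round(w, 4))                                 # converges to the minimum at 3.0
```

Training a neural network is the same loop, with the scalar `w` replaced by all the network's weights and the hand-written gradient replaced by backpropagation.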
This week, we start with the implementation details of Neural Networks in standard libraries like TensorFlow and Keras. We also further our knowledge of Neural Nets with more advanced concepts.
- We start with tutorials on TensorFlow and Keras, where we learn how to implement Neural Nets and apply them to tasks.
- We then head on to Convolutional Neural Nets, probably one of the greatest developments in the field of Deep Learning, and one that has changed the field of Computer Vision completely.
- We discuss CNNs in detail and introduce you to Conv Layers and Pooling Layers.
- A few exercises are provided to acquaint readers with Neural Nets and Convolutional Neural Nets.
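The two CNN layer types above can be sketched with explicit loops. Note that deep-learning libraries actually compute cross-correlation (no kernel flip) and call it convolution, which is what we do here; the example image and filter are arbitrary.

```python
import numpy as np

def conv2d(image, kernel):
    # "convolution" as used in deep learning (cross-correlation, no kernel flip)
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    # non-overlapping max pooling: keep the largest value in each block
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
edges = conv2d(image, np.array([[1.0, -1.0]]))    # horizontal difference filter
print(edges.shape)                                # (4, 3)
print(max_pool(image))                            # [[ 5.  7.] [13. 15.]]
```

Library implementations add padding, strides, and channels on top of this core operation, but the sliding window is the same.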
In this week, we dive into the practical intricacies of Neural Networks and their pivotal role in shaping our model's performance.
- We commence by delving into the Initialization of Parameters and provide a comprehensive discussion on how it significantly impacts the learning process of a model.
- We consolidate the various initialization techniques into a standardized expression for the weights' Standard Deviation, as elucidated in a research paper included in the content section.
- Our final topic of discussion this week is Dropout. Introduced by Geoffrey Hinton and his collaborators, this ingenious technique in effect trains an ensemble of networks within a single neural network, resulting in substantial performance improvements.
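The initialization idea above boils down to scaling the weight standard deviation by the layer's fan-in (and fan-out). A sketch of two standard schemes, Xavier/Glorot and He initialization; the layer sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def xavier_init(fan_in, fan_out):
    # Glorot/Xavier: suited to tanh/sigmoid activations
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    # He: suited to ReLU activations, which zero half the inputs on average
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

W = he_init(512, 256)
print(round(float(W.std()), 3))   # close to sqrt(2/512) ≈ 0.0625
```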
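Dropout itself is a one-liner at heart. A sketch of "inverted" dropout, the variant most libraries use: units are zeroed at random during training and the survivors rescaled, so inference needs no change at all.

```python
import numpy as np

rng = np.random.default_rng(6)

def dropout(a, p_drop=0.5, training=True):
    if not training:
        return a                                # inference: no change needed
    mask = rng.random(a.shape) >= p_drop        # keep each unit with prob 1 - p_drop
    return a * mask / (1.0 - p_drop)            # rescale survivors ("inverted" dropout)

a = np.ones(1000)
out = dropout(a, p_drop=0.5)
print(round(float(out.mean()), 2))              # close to 1.0 in expectation
```

Because each training step samples a different mask, the network is effectively an average over many thinned sub-networks, which is the ensemble effect described above.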
In this final week, we give a well-thought-out assignment to readers so that they can try their hands at a problem using all the concepts they have learnt throughout the course. The assignment is also used to evaluate each student's performance, based on how well the code is written and on the accuracy of the model.
This assignment is based on Neural Networks, first designed by Frank Rosenblatt in the late 1950s. Neural networks form the backbone of the entire Artificial Intelligence industry today. Deep Learning has really taken off recently, and through this assignment we try to give you a flavour of the Deep Learning domain with a very basic exercise.
- Towards Data Science: This website is frequently utilized by Data Scientists and offers a vast repository of articles contributed by individuals ranging from students to seasoned Data Scientists.
- Stanford CS 229: Taught by renowned AI expert Andrew Ng at Stanford, this course provides advanced content in Machine Learning. It delves deep into the mathematical foundations of Machine Learning and is a comprehensive resource for those seeking in-depth knowledge.
Created by GDSC 🌊