This machine learning tutorial distills the essential concepts of machine learning for beginners.
What is Machine Learning?
The term Machine Learning may sound like it involves physical machines, hardware you can touch and interact with. To some extent that is partially correct, but machine learning is much more than that.
An obvious question may arise: do I need to know about hardware before learning machine learning?
The answer is definitely not. Machine learning is quite a vast field, and it is expanding rapidly.
The main motivation for building machine learning models was to develop intelligent systems and to analyze data in science and engineering. Machine learning models enable systems such as Siri, Kinect, and the Google self-driving car, to name a few examples. At the same time, machine learning methods help decipher the information in our DNA and make sense of the flood of information gathered on the web.
There are some definitions that describe the core, or essence, of machine learning, such as statistical learning theory, which we will draw on when building machine learning models.
Machine Learning deals with systems that are trained from data rather than being explicitly programmed. Here we describe the data model considered in statistical learning theory.
Statistical learning theory is a framework for machine learning drawing from the fields of statistics (the discipline concerned with the collection, organization, analysis, interpretation, and presentation of data) and functional analysis.
Statistical learning theory deals with the problem of finding a predictive function based on data. Statistical learning theory has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.
The goals of learning are understanding and prediction. Learning can be categorized into four types: supervised learning, unsupervised learning, online learning, and reinforcement learning.
From the perspective of statistical learning theory, supervised learning is best understood. Supervised learning involves learning from a training set of data. Every point in the training set is an input-output pair, where the input maps to an output. The learning problem consists of inferring the function that maps between the input and the output, such that the learned function can be used to predict the output from future input.
Depending on the type of output, supervised learning problems are either problems of regression or problems of classification.
If the output takes a continuous range of values, it is a regression problem.
For example, let's take Ohm's law.
Ohm's law states that the current through a conductor between two points is directly proportional to the voltage across those two points. Introducing the constant of proportionality, the resistance, we can express this relationship with the equation:
I = V / R
where I is the current, V is the voltage, and R is the resistance.
A regression could be performed with voltage as the input and current as the output. The regression would recover the functional relationship between voltage and current, I = V / R: a line through the origin whose slope is 1/R.
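This regression can be sketched in a few lines of Python. The measurements below are hypothetical, generated from an assumed 5-ohm resistor; fitting a line to the (voltage, current) pairs recovers the slope 1/R, and hence the resistance.

```python
import numpy as np

# Hypothetical measurements: voltages applied across an assumed 5-ohm
# resistor, with currents following Ohm's law I = V / R (no noise).
R_true = 5.0
voltages = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
currents = voltages / R_true

# Fit a degree-1 polynomial (a line): the slope is the estimate of 1/R.
slope, intercept = np.polyfit(voltages, currents, 1)
estimated_R = 1.0 / slope
print(estimated_R)  # recovers approximately 5.0
```

With real, noisy measurements the fit would not be exact, but least squares would still find the slope that best explains the data.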
Classification problems are those for which the output will be an element from a discrete set of labels. Classification is very common for machine learning applications.
In facial recognition, for instance, a picture of a person's face would be the input, and the output label would be that person's name. The input would be represented by a large multidimensional vector whose elements represent pixels in the picture.
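A minimal sketch of this idea, using a nearest-neighbor rule on toy 4-pixel "images" (the vectors, labels, and helper function below are illustrative assumptions, not a real face-recognition system): the classifier outputs the label of the closest training vector.

```python
import numpy as np

def nearest_neighbor_label(train_X, train_y, x):
    """Return the label of the training point closest to x (1-nearest neighbor)."""
    distances = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(np.argmin(distances))]

# Hypothetical training set: each row is a tiny 4-pixel image vector.
train_X = np.array([[0.9, 0.8, 0.1, 0.2],
                    [0.1, 0.2, 0.9, 0.8]])
train_y = ["alice", "bob"]

# A new image close to the first training vector gets its label.
print(nearest_neighbor_label(train_X, train_y, np.array([0.85, 0.75, 0.2, 0.1])))
# prints: alice
```

Real systems use far larger vectors and learned features rather than raw pixels, but the output is the same kind of object: one label from a discrete set.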
After learning a function based on the training set data, that function is validated on a test set of data, data that did not appear in the training set.
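The train/test procedure above can be sketched as follows. The data here is synthetic (an assumed linear relationship y = 2x plus noise): a fraction of the points is held out as a test set, the model is fit only on the training portion, and its error is then measured on the unseen points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: an assumed linear relationship y = 2x plus small noise.
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + rng.normal(0, 0.1, size=100)

# Hold out 20 of the 100 points as a test set the model never sees in training.
idx = rng.permutation(100)
train_idx, test_idx = idx[20:], idx[:20]

# Fit a line on the training portion only.
slope, intercept = np.polyfit(x[train_idx], y[train_idx], 1)

# Validate on the held-out points: mean squared error of the predictions.
pred = slope * x[test_idx] + intercept
test_mse = float(np.mean((pred - y[test_idx]) ** 2))
print(slope, test_mse)
```

A low test error suggests the learned function generalizes beyond the points it was trained on, which is the whole point of holding data out.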