Statistics is the science of collecting, interpreting, and presenting data. It provides methods for analyzing and assessing the significance of data. Statistics enables the transformation of data into information that can then serve as the basis for decision-making. A good and comprehensive source, Statistics: Methods and Applications, is available online.
Descriptive statistics organize, describe, and summarize data using numbers and graphical techniques. This branch of statistics uses a set of standard measures such as percentages, averages, and measures of variability, as well as simple graphs, charts, and tables. Descriptive statistics help you better understand your data by describing and summarizing its basic features. You learn how to generate and understand numerical summaries. These include frequency; measures of location, including minimum, maximum, percentiles, quartiles, and central tendency (mean, median, and mode); and measures of dispersion or variability, including range, interquartile range, variance, and standard deviation. The graphical summaries you learn include the histogram, normal probability plot, and box plot. The goals when you’re describing data are to:
• screen for unusual data values,
• inspect the spread and shape of your data,
• characterize the central tendency, and
• draw preliminary conclusions about your data.
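The numerical summaries listed above can be sketched in a few lines of Python using only the standard library. The sample values here are hypothetical, chosen so that one observation stands out as a candidate outlier:

```python
import statistics

# Hypothetical sample of 12 measurements (illustrative data only)
data = [4.2, 5.1, 3.8, 4.9, 5.5, 4.0, 4.7, 5.2, 3.9, 4.4, 5.0, 9.8]

mean = statistics.mean(data)        # central tendency
median = statistics.median(data)    # robust measure of the center
stdev = statistics.stdev(data)      # sample standard deviation
data_range = max(data) - min(data)  # simplest measure of spread

# Quartiles via statistics.quantiles (n=4 yields Q1, Q2, Q3)
q1, q2, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1

# A value outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] is a candidate outlier,
# the same fences a box plot uses to flag unusual data values
outliers = [x for x in data if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]
print(outliers)  # prints [9.8]
```

This mirrors the screening goal above: the fences derived from the quartiles flag 9.8 as unusual relative to the rest of the sample.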
Inferential statistics is the branch of statistics concerned with drawing conclusions about a population from analysis of a random sample drawn from that population. It is also concerned with the precision and reliability of those inferences. Inferential statistics generalize from data you observe to the population that you have not observed.
Descriptive statistics describe your sample data, but inferential statistics help you draw conclusions about the entire population of data.
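As a minimal sketch of that distinction, the snippet below takes a hypothetical random sample and forms an approximate 95% confidence interval for the unobserved population mean, using the large-sample z value of 1.96 (the sample values are illustrative only):

```python
import math
import statistics

# Hypothetical random sample drawn from a larger, unobserved population
sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9, 12.4, 11.7,
          12.6, 12.0, 11.8, 12.1, 12.3, 11.9, 12.2, 12.0, 12.4, 11.7,
          12.1, 12.5, 11.8, 12.2, 12.0, 11.9, 12.3, 12.1, 11.6, 12.4]

n = len(sample)
mean = statistics.mean(sample)                  # descriptive: summarizes the sample
se = statistics.stdev(sample) / math.sqrt(n)    # standard error of the mean

# Inferential: an approximate 95% confidence interval for the population mean
lower, upper = mean - 1.96 * se, mean + 1.96 * se
```

The sample mean describes the data you observed; the interval generalizes to the population you have not observed, with a stated level of reliability.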
http://rstatistics.net/ provides a very complete set of topics on Statistics – both basic and advanced – including regression, decision trees, principal component analysis, cluster analysis, support vector machines, and many others, with clear explanations and examples in R.
Dr. Mark Gardener also gives wide and comprehensive coverage of statistical analysis methods on his web pages, with clear numerical examples in his explanations.
Good coverage of statistics is also available in this online course, though it uses the Minitab software.
Linear vs Logistic Regression
In statistics, regression analysis is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps one understand how the typical value of the dependent variable (response variable) changes when any one of the independent variables (explanatory / predictor variables) is varied, while the other independent variables are held fixed.
Linear Regression is used to model a response variable against one or more explanatory variables. The variables must be paired and continuous, and are assumed to have a linear relationship. This technique is widely used in predictive analysis. A very good explanation with numerical examples is shown here.
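As a sketch of the idea, the least-squares line for a single predictor can be computed directly from the deviation sums, with no libraries beyond the standard one. The paired data below (hours studied vs. exam score) are hypothetical:

```python
import statistics

# Hypothetical paired, continuous observations: x = hours studied, y = exam score
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [52, 55, 61, 64, 70, 72, 78, 83]

mean_x, mean_y = statistics.mean(x), statistics.mean(y)

# Ordinary least squares: slope b1 = Sxy / Sxx, intercept b0 = ybar - b1 * xbar
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

def predict(new_x):
    """Predicted response for a new explanatory value."""
    return intercept + slope * new_x
```

The fitted slope answers the question the paragraph poses: how the typical value of the response changes when the predictor is varied by one unit.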
Logistic Regression models a binary (0 or 1) or categorical (high-income, middle-income, low-income) response variable, where the predictor variables can be continuous or ordinal (ranked). It is a widely used linear classification technique. See a good R example on logistic regression.
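A minimal sketch of the binary case, with one continuous predictor and coefficients fit by gradient descent on the log-loss (the training data and learning settings are hypothetical):

```python
import math

# Hypothetical training data: small feature values tend to be class 0,
# large values class 1 (illustrative only)
xs = [0.5, 1.0, 1.5, 2.0, 3.0, 3.5, 4.0, 4.5]
ys = [0,   0,   0,   0,   1,   1,   1,   1]

def sigmoid(z):
    """Logistic function: maps a linear score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Fit weight w and bias b by batch gradient descent on the log-loss
w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        err = sigmoid(w * x + b) - y   # predicted probability minus label
        grad_w += err * x
        grad_b += err
    w -= lr * grad_w / len(xs)
    b -= lr * grad_b / len(xs)

def predict(x):
    """Classify by thresholding the predicted probability at 0.5."""
    return 1 if sigmoid(w * x + b) >= 0.5 else 0
```

The model is linear in the score w*x + b, which is why logistic regression counts as a linear classification technique even though the sigmoid itself is nonlinear.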
Classification (Decision tree) vs Regression tree
Classification trees are used to separate datasets into classes belonging to the response variable. Usually the response variable has two classes: Yes or No (1 or 0). The ID3 algorithm builds classification trees for categorical targets; its successor, C4.5, extends it with support for continuous attributes and pruning. The standard CART procedure, by contrast, always produces binary splits. Thus classification trees are used when the response or target variable is categorical in nature. A very good explanation with numerical examples is shown here.
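The split criterion behind a CART-style classification tree can be sketched as follows: each candidate threshold on a feature is scored by the weighted Gini impurity of the two resulting groups, and the purest split wins. The dataset below is hypothetical:

```python
# Hypothetical training set: (feature value, class label)
points = [(1.0, "No"), (1.5, "No"), (2.0, "No"), (3.0, "Yes"),
          (3.5, "Yes"), (4.0, "No"), (4.5, "Yes"), (5.0, "Yes")]

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    if not labels:
        return 0.0
    total = len(labels)
    return 1.0 - sum((labels.count(c) / total) ** 2 for c in set(labels))

def split_impurity(threshold):
    """Weighted impurity of the binary split at `threshold` (CART-style)."""
    left = [c for v, c in points if v <= threshold]
    right = [c for v, c in points if v > threshold]
    n = len(points)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

# Try midpoints between consecutive feature values; keep the purest split
candidates = [1.25, 1.75, 2.5, 3.25, 3.75, 4.25, 4.75]
best = min(candidates, key=split_impurity)
print(best)  # prints 2.5: the left side is then pure "No"
```

A full tree repeats this search recursively on each side of the chosen split, but the scoring step shown here is the heart of the method.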
Regression trees are needed when the response variable is numeric or continuous, for example, when you need to predict the price of a consumer good based on several input factors. Thus regression trees are applicable for prediction type problems as opposed to classification.
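A regression tree runs the same split search, but scores each candidate threshold by the variability of the numeric target within the two groups rather than by class impurity. A sketch with hypothetical data:

```python
# Hypothetical training set: (feature value, numeric target)
points = [(1.0, 10.0), (2.0, 12.0), (3.0, 11.0), (4.0, 30.0),
          (5.0, 32.0), (6.0, 31.0)]

def sse(targets):
    """Sum of squared errors around the node mean (the node's prediction)."""
    if not targets:
        return 0.0
    mean = sum(targets) / len(targets)
    return sum((t - mean) ** 2 for t in targets)

def split_sse(threshold):
    """Total squared error after splitting the node at `threshold`."""
    left = [t for v, t in points if v <= threshold]
    right = [t for v, t in points if v > threshold]
    return sse(left) + sse(right)

# Midpoints between consecutive feature values; keep the lowest-error split
candidates = [1.5, 2.5, 3.5, 4.5, 5.5]
best = min(candidates, key=split_sse)
print(best)  # prints 3.5: it separates the low targets (~11) from the high (~31)
```

Each leaf of such a tree predicts the mean of its targets, which is why minimizing the squared error around the mean is the natural split criterion for prediction-type problems.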
Informally, classification predicts whether something will happen, whereas regression predicts how much something will happen. Regression involves a numerical target while classification involves a categorical (often binary) target.
Keep in mind that in either case, the predictors or independent variables may be categorical or numeric. It is the target variable that determines the type of decision tree needed. A detailed explanation of decision trees is attached here. The figure below shows, at a high level, the types of decision trees used in practice.
Supervised vs Unsupervised Learning
The difference is that in supervised learning the ‘categories’ are known. In unsupervised learning, they are not, and the learning process attempts to find appropriate ‘categories’. In both kinds of learning, all attributes are considered to determine which are most appropriate for performing the classification or clustering. Supervised learning discovers patterns in the data that relate data attributes to a target (class) attribute. These patterns are then used to predict the values of the target attribute in future data instances. In unsupervised learning, the data have no target attribute. We want to explore the data to find some intrinsic structure in them, for example, by clustering. Clustering attempts to group individuals in a population by their similarity, but is not driven by any specific purpose.
Whether you choose supervised or unsupervised learning should be based on whether or not you know what the ‘categories’ of your data are. If you know them, use supervised learning. If you do not, use unsupervised learning.
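As a sketch of the unsupervised case, the toy k-means pass below groups unlabeled values into two clusters purely by similarity, with no target attribute involved. The data and the naive initialization are illustrative only:

```python
# Hypothetical unlabeled measurements that happen to form two natural groups
values = [1.0, 1.2, 0.8, 1.1, 5.0, 5.3, 4.8, 5.1]

# Minimal 1-D k-means with k = 2 (toy sketch, not an optimized implementation)
centers = [values[0], values[-1]]  # naive initialization from the data itself
for _ in range(10):                # a few assign-and-update passes
    clusters = [[], []]
    for v in values:
        # Assign each value to its nearest center
        nearest = min((0, 1), key=lambda i: abs(v - centers[i]))
        clusters[nearest].append(v)
    # Move each center to the mean of its cluster
    # (with this data neither cluster is ever empty)
    centers = [sum(c) / len(c) for c in clusters]
```

No labels guide the process: the two ‘categories’ emerge from the structure of the data, which is exactly the contrast with supervised learning drawn above.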