ML implements logistic regression, which is a probabilistic classification technique.
Like SVM, Logistic Regression can be extended to work on multi-class classification problems such as digit recognition (i.e. recognizing digits like 0, 1, 2, 3, ... from the given images).
This version of Logistic Regression supports both binary and multi-class classification (for multi-class it creates multiple 2-class classifiers).
In order to train the logistic regression classifier, Batch Gradient Descent and Mini-Batch Gradient Descent algorithms are used (see [BatchDesWiki]_).
Logistic Regression is a discriminative classifier (see [LogRegTomMitch]_ for more details) and is implemented as the C++ class ``LogisticRegression``.
In Logistic Regression, we try to optimize the training parameter :math:`\theta` such that the hypothesis :math:`0 \leq h_\theta(x) \leq 1` is achieved.
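Here :math:`h_\theta(x)` is the standard logistic (sigmoid) hypothesis applied to the linear combination of the input features, shown below for reference (the implementation's internal notation may differ):

.. math::

    h_\theta(x) = \frac{1}{1 + e^{-\theta^{T} x}}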
The model predicts class 1 if :math:`h_\theta(x) \geq 0.5` or class 0 if :math:`h_\theta(x) < 0.5`.
In Logistic Regression, choosing the right parameters is of utmost importance for reducing the training error and ensuring high training accuracy.
``LogisticRegressionParams`` is the structure that defines the parameters required to train a Logistic Regression classifier.
The learning rate is determined by ``LogisticRegressionParams.alpha``. It is a positive real number that determines how fast we approach the solution.
Optimization algorithms like Batch Gradient Descent and Mini-Batch Gradient Descent are supported in ``LogisticRegression``.
It is important to specify the number of iterations these optimization algorithms have to run.
The number of iterations is specified by ``LogisticRegressionParams.num_iters``.
The number of iterations can be thought of as the number of steps taken, while the learning rate specifies whether each step is long or short. These two parameters define how fast we arrive at a possible solution.
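As a sketch of what a single step looks like, the standard batch gradient descent update for logistic regression (with the regularization term omitted; the exact update used by the implementation may differ) is

.. math::

    \theta := \theta - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}

where :math:`m` is the number of samples used per step (all training samples for Batch Gradient Descent, ``mini_batch_size`` of them for Mini-Batch Gradient Descent) and :math:`\alpha` is the learning rate.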
To compensate for overfitting, regularization is performed, which can be enabled by setting ``LogisticRegressionParams.regularized`` to a positive integer (greater than zero).
One can specify the kind of regularization to be performed by setting ``LogisticRegressionParams.norm`` to ``LogisticRegression::REG_L1`` or ``LogisticRegression::REG_L2``.
``LogisticRegression`` provides a choice of two training methods: Batch Gradient Descent and Mini-Batch Gradient Descent. To specify one of them, set ``LogisticRegressionParams.train_method`` to either ``LogisticRegression::BATCH`` or ``LogisticRegression::MINI_BATCH``.
If ``LogisticRegressionParams.train_method`` is set to ``LogisticRegression::MINI_BATCH``, the size of the mini batch has to be set to a positive integer using ``LogisticRegressionParams.mini_batch_size``.
A sample set of training parameters for the Logistic Regression classifier can be initialized as follows:
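A minimal sketch, using the ``LogisticRegressionParams`` constructor documented below; the numeric values are purely illustrative, not recommended defaults::

    // alpha, num_iters, norm, regularized, train_method, mini_batch_size
    LogisticRegressionParams params(0.001,                        // learning rate
                                    1000,                         // number of iterations
                                    LogisticRegression::REG_L2,   // L2 regularization
                                    1,                            // regularization enabled
                                    LogisticRegression::BATCH,    // Batch Gradient Descent
                                    1);                           // mini batch size (unused for BATCH)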
.. [LogRegWiki] http://en.wikipedia.org/wiki/Logistic_regression. Wikipedia article about the Logistic Regression algorithm.
.. [LogRegTomMitch] http://www.cs.cmu.edu/~tom/NewChapters.html. "Generative and Discriminative Classifiers: Naive Bayes and Logistic Regression" in Machine Learning, Tom Mitchell.
.. [BatchDesWiki] http://en.wikipedia.org/wiki/Gradient_descent_optimization. Wikipedia article about Gradient Descent based optimization.
LogisticRegressionParams
------------------------

.. ocv:struct:: LogisticRegressionParams
Parameters of the Logistic Regression training algorithm. You can initialize the structure using a constructor or by declaring the variable and initializing the individual parameters.
.. ocv:member:: int norm

The type of regularization applied. It takes the value ``LogisticRegression::REG_L1`` or ``LogisticRegression::REG_L2``.
.. ocv:member:: int regularized
.. ocv:member:: int train_method

The kind of training method used to train the classifier. It should be set to either ``LogisticRegression::BATCH`` or ``LogisticRegression::MINI_BATCH``.
.. ocv:member:: int mini_batch_size

If the training method is set to ``LogisticRegression::MINI_BATCH``, it has to be set to a positive integer. It can range from 1 to the number of training samples.
.. ocv:function:: LogisticRegressionParams::LogisticRegressionParams(double alpha, int num_iters, int norm, int regularized, int train_method, int mini_batch_size)
:param alpha: Specifies the learning rate.

:param num_iters: Specifies the number of iterations.

:param norm: Specifies the kind of regularization to be applied: ``LogisticRegression::REG_L1`` or ``LogisticRegression::REG_L2``. To use this, set ``LogisticRegressionParams.regularized`` to an integer greater than zero.

:param regularized: Enables or disables regularization. Set to a positive integer (greater than zero) to enable and to 0 to disable.

:param train_method: Specifies the kind of training method used. It should be set to either ``LogisticRegression::BATCH`` or ``LogisticRegression::MINI_BATCH``. If using ``LogisticRegression::MINI_BATCH``, set ``LogisticRegressionParams.mini_batch_size`` to a positive integer.

:param mini_batch_size: Specifies the number of training samples taken in each step of Mini-Batch Gradient Descent.
By initializing this structure, one can set all the parameters required for the Logistic Regression classifier.
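Alternatively, here is a sketch of declaring the variable and setting the individual members documented above (this assumes the structure is default-constructible; the values are again illustrative only)::

    LogisticRegressionParams params;
    params.alpha = 0.01;                                   // learning rate
    params.num_iters = 10000;                              // number of iterations
    params.norm = LogisticRegression::REG_L1;              // L1 regularization
    params.regularized = 1;                                // enable regularization
    params.train_method = LogisticRegression::MINI_BATCH;  // Mini-Batch Gradient Descent
    params.mini_batch_size = 10;                           // samples per mini batch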
LogisticRegression
------------------

.. ocv:class:: LogisticRegression : public CvStatModel
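A hypothetical end-to-end usage sketch, assuming ``LogisticRegression`` follows the usual ``CvStatModel`` pattern of training from a data matrix and a label matrix and predicting into a response matrix; the constructor and method signatures below are assumptions, not taken from this section::

    // data_train: one CV_32F sample per row, labels_train: one class label per row
    cv::Mat data_train, labels_train;   // filled by the application
    cv::Mat data_test, responses;

    LogisticRegressionParams params(0.001, 1000, LogisticRegression::REG_L2, 1,
                                    LogisticRegression::BATCH, 1);
    LogisticRegression classifier(data_train, labels_train, params); // assumed constructor
    classifier.predict(data_test, responses);                        // assumed signature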