
Tuesday, April 3, 2018

In computer science, online machine learning is a method of machine learning in which data becomes available in a sequential order and is used to update our best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique used in areas of machine learning where it is computationally infeasible to train over the entire dataset, necessitating out-of-core algorithms. It is also used in situations where it is necessary for the algorithm to dynamically adapt to new patterns in the data, or when the data itself is generated as a function of time, e.g. stock price prediction. Online learning algorithms may be prone to catastrophic interference, a problem that is tackled by incremental learning approaches.

Introduction





In the setting of supervised learning, a function f : X → Y is to be learned, where X is thought of as a space of inputs and Y as a space of outputs, that predicts well on instances drawn from a joint probability distribution p(x, y) on X × Y. In reality, the learner never knows the true distribution p(x, y) over instances. Instead, the learner usually has access to a training set of examples (x_1, y_1), …, (x_n, y_n). In this setting, the loss function is given as V : Y × Y → ℝ, such that V(f(x), y) measures the difference between the predicted value f(x) and the true value y. The ideal goal is to select a function f ∈ H, where H is a space of functions called a hypothesis space, so that some notion of total loss is minimised. Depending on the type of model (statistical or adversarial), one can devise different notions of loss, which lead to different learning algorithms.

Statistical view of online learning



In statistical learning models, the training samples (x_i, y_i) are assumed to have been drawn from the true distribution p(x, y), and the objective is to minimize the expected "risk"

I[f] = \mathbb{E}[V(f(x), y)] = \int V(f(x), y)\, dp(x, y) .

A common paradigm in this situation is to estimate a function f̂ through empirical risk minimization or regularized empirical risk minimization (usually Tikhonov regularization). The choice of loss function here gives rise to several well-known learning algorithms such as regularized least squares and support vector machines. A purely online model in this category would learn based on just the new input (x_{t+1}, y_{t+1}), the current best predictor f_t and some extra stored information (which is usually expected to have storage requirements independent of training data size). For many formulations, for example nonlinear kernel methods, true online learning is not possible, though a form of hybrid online learning with recursive algorithms can be used where f_{t+1} is permitted to depend on f_t and all previous data points (x_1, y_1), …, (x_t, y_t). In this case, the space requirements are no longer guaranteed to be constant since all previous data points must be stored, but the solution may take less time to compute with the addition of a new data point than with batch learning techniques.

A common strategy to overcome the above issues is to learn using mini-batches, which process a small batch of b ≥ 1 data points at a time; this can be considered pseudo-online learning for b much smaller than the total number of training points. Mini-batch techniques are used with repeated passes over the training data to obtain optimized out-of-core versions of machine learning algorithms, e.g. stochastic gradient descent. When combined with backpropagation, this is currently the de facto method for training artificial neural networks.
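
As a concrete illustration, here is a minimal mini-batch SGD sketch for the least squares setting developed below; the synthetic stream, batch size b = 32 and fixed step size are hypothetical choices, not prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stream: y = <w_true, x> + noise (hypothetical data for illustration).
d, n = 5, 10_000
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)      # current predictor
b = 32               # mini-batch size (b >= 1, much smaller than n)
step = 0.01          # fixed step size, a hypothetical tuning choice

# One pass over the stream, b points at a time (pseudo-online learning).
for start in range(0, n, b):
    Xb, yb = X[start:start + b], y[start:start + b]
    grad = 2 * Xb.T @ (Xb @ w - yb) / len(yb)   # gradient of the mean squared loss
    w -= step * grad

print(np.linalg.norm(w - w_true))   # should be small after one pass
```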

Example: linear least squares

The simple example of linear least squares is used to explain a variety of ideas in online learning. The ideas are general enough to be applied to other settings, e.g. to other convex loss functions.

Batch learning

In the setting of supervised learning with the square loss function, the intent is to minimize the empirical loss,

I_n[w] = \sum_{j=1}^{n} V(\langle w, x_j \rangle, y_j) = \sum_{j=1}^{n} (x_j^T w - y_j)^2

where x_j ∈ ℝ^d, w ∈ ℝ^d, y_j ∈ ℝ.

Let X be the i × d data matrix and Y the i × 1 vector of target values after the arrival of the first i data points. Assuming that the covariance matrix Σ_i = X^T X is invertible (otherwise it is preferable to proceed in a similar fashion with Tikhonov regularization), the best solution f*(x) = ⟨w*, x⟩ to the linear least squares problem is given by

w^* = (X^T X)^{-1} X^T Y = \Sigma_i^{-1} \sum_{j=1}^{i} x_j y_j .

Now, calculating the covariance matrix Σ_i = \sum_{j=1}^{i} x_j x_j^T takes time O(i d²), inverting the d × d matrix takes time O(d³), and the rest of the multiplication takes time O(d²), giving a total time of O(i d² + d³). With n total points in the dataset, recomputing the solution after the arrival of every data point i = 1, …, n gives the naive approach a total complexity of O(n² d² + n d³). Note that when the matrix Σ_i is stored, updating it at each step only requires adding x_{i+1} x_{i+1}^T, which takes O(d²) time, reducing the total time to O(n d² + n d³) = O(n d³), at the cost of an additional O(d²) of storage for Σ_i.
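
A sketch of this incremental-covariance recomputation; the synthetic data and the tiny ridge term added for numerical safety when Σ_i is not yet invertible are my choices, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 200
xs = rng.normal(size=(n, d))
ys = xs @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

Sigma = np.zeros((d, d))   # running covariance  Sigma_i = sum_j x_j x_j^T
b = np.zeros(d)            # running vector      sum_j x_j y_j

for i in range(n):
    Sigma += np.outer(xs[i], xs[i])   # O(d^2) rank-one update
    b += xs[i] * ys[i]
    # Recomputing w* still costs O(d^3) per step for the solve.
    w_star = np.linalg.solve(Sigma + 1e-8 * np.eye(d), b)  # tiny ridge for safety

# Sanity check against the one-shot batch solution on all n points.
w_batch, *_ = np.linalg.lstsq(xs, ys, rcond=None)
print(np.allclose(w_star, w_batch, atol=1e-4))
```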

Online learning: recursive least squares

The recursive least squares (RLS) algorithm considers an online approach to the least squares problem. It can be shown that by initialising w_0 = 0 ∈ ℝ^d and Γ_0 = I ∈ ℝ^{d×d}, the solution of the linear least squares problem given in the previous section can be computed by the following iteration:

\Gamma_i = \Gamma_{i-1} - \frac{\Gamma_{i-1} x_i x_i^T \Gamma_{i-1}}{1 + x_i^T \Gamma_{i-1} x_i}
w_i = w_{i-1} - \Gamma_i x_i (x_i^T w_{i-1} - y_i)

The above iteration algorithm can be proved using induction on i. The proof also shows that Γ_i = Σ_i^{-1}. One can also view RLS in the context of adaptive filters (see RLS).

The complexity for n steps of this algorithm is O(n d²), which is an order of magnitude faster than the corresponding batch learning complexity. The storage requirement at every step i is the matrix Γ_i, which is constant at O(d²). For the case when Σ_i is not invertible, consider the regularised version of the problem loss function \sum_{j=1}^{n} (x_j^T w - y_j)^2 + λ‖w‖₂². Then, it is easy to show that the same algorithm works with Γ_0 = (λI)^{-1}, and the iterations proceed to give Γ_i = (Σ_i + λI)^{-1}.
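
The iteration above translates almost line-for-line into code. Below is a minimal sketch of the regularised variant (Γ_0 = (λI)^{-1}), with hypothetical variable names; each step costs O(d²) time and memory:

```python
import numpy as np

def rls(stream, d, lam=1.0):
    """Recursive least squares: O(d^2) time and memory per step."""
    w = np.zeros(d)
    Gamma = np.eye(d) / lam            # Gamma_0 = (lam * I)^{-1}, the regularised start
    for x, y in stream:
        Gx = Gamma @ x
        Gamma -= np.outer(Gx, Gx) / (1.0 + x @ Gx)   # rank-one (Sherman-Morrison) update
        w -= Gamma @ x * (x @ w - y)                  # uses the freshly updated Gamma_i
        yield w.copy()
```

By the inductive argument above, each yielded w_i coincides with the Tikhonov-regularised batch solution computed on the first i points.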

Stochastic gradient descent

When this

w_i = w_{i-1} - \Gamma_i x_i (x_i^T w_{i-1} - y_i)

is replaced by

w_i = w_{i-1} - \gamma_i x_i (x_i^T w_{i-1} - y_i) = w_{i-1} - \gamma_i \nabla V(\langle w_{i-1}, x_i \rangle, y_i)

i.e. when the matrix Γ_i ∈ ℝ^{d×d} is replaced by a scalar step size γ_i ∈ ℝ, this becomes the stochastic gradient descent algorithm. In this case, the complexity for n steps of this algorithm reduces to O(n d). The storage requirement at every step i is constant at O(d).

However, the stepsize γ_i needs to be chosen carefully to solve the expected risk minimization problem, as detailed above. By choosing a decaying step size γ_i ≈ 1/√i, one can prove the convergence of the average iterate w̄_n = (1/n) \sum_{i=1}^{n} w_i. This setting is a special case of stochastic optimization, a well-known problem in optimization.
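
A sketch of this scalar-step SGD with the decaying step size γ_i ≈ 1/√i and the averaged iterate w̄_n; the constant c in the step size is a hypothetical tuning knob:

```python
import numpy as np

def sgd_least_squares(stream, d, c=0.1):
    """SGD for the square loss: O(d) time and memory per step."""
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for i, (x, y) in enumerate(stream, start=1):
        gamma = c / np.sqrt(i)           # decaying step size gamma_i ~ 1/sqrt(i)
        w -= gamma * x * (x @ w - y)     # gradient of the square loss, up to a
                                         # constant factor absorbed into gamma
        w_avg += (w - w_avg) / i         # running average of the iterates
    return w_avg
```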

Incremental stochastic gradient descent

In practice, one can perform multiple stochastic gradient passes (also called cycles or epochs) over the data. The algorithm thus obtained is called the incremental gradient method and corresponds to the iteration

w_i = w_{i-1} - \gamma_i \nabla V(\langle w_{i-1}, x_{t_i} \rangle, y_{t_i})

The main difference with the stochastic gradient method is that here a sequence t_i is chosen to decide which training point is visited in the i-th step. Such a sequence can be stochastic or deterministic. The number of iterations is then decoupled from the number of points (each point can be considered more than once). The incremental gradient method can be shown to provide a minimizer to the empirical risk. Incremental techniques can be advantageous when considering objective functions made up of a sum of many terms, e.g. an empirical error corresponding to a very large dataset.
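
A sketch of the incremental gradient method on a finite training set, where the sequence t_i is realised either as a fresh random shuffle per epoch (stochastic) or as a fixed cyclic order (deterministic); both are common choices, not mandated by the text:

```python
import numpy as np

def incremental_gradient(X, y, epochs=5, gamma=0.01, shuffle=True, seed=0):
    """Multiple passes over a finite set; t_i selects the point visited at step i."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Stochastic t_i (shuffle) or deterministic cyclic t_i (arange).
        order = rng.permutation(n) if shuffle else np.arange(n)
        for t in order:
            w -= gamma * X[t] * (X[t] @ w - y[t])   # square-loss gradient step
    return w
```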

Kernel methods

Kernels can be used to extend the above algorithms to non-parametric models (or models where the parameters form an infinite-dimensional space). The corresponding procedure will no longer be truly online and instead involves storing all the data points, but is still faster than the brute force method. This discussion is restricted to the case of the square loss, though it can be extended to any convex loss. It can be shown by an easy induction that if X_i is the data matrix and w_i is the output after i steps of the SGD algorithm, then

w_i = X_i^T c_i

where c_i = ((c_i)_1, (c_i)_2, ..., (c_i)_i) ∈ ℝ^i and the sequence c_i satisfies the recursion:

c_0 = 0
(c_i)_j = (c_{i-1})_j,  j = 1, 2, ..., i-1, and
(c_i)_i = \gamma_i \Big( y_i - \sum_{j=1}^{i-1} (c_{i-1})_j \langle x_j, x_i \rangle \Big)

Notice that here ⟨x_j, x_i⟩ is just the standard (linear) kernel on ℝ^d, and the predictor is of the form

f_i(x) = \langle w_{i-1}, x \rangle = \sum_{j=1}^{i-1} (c_{i-1})_j \langle x_j, x \rangle .

Now, if a general kernel K is introduced instead and the predictor is taken to be

f_i(x) = \sum_{j=1}^{i-1} (c_{i-1})_j K(x_j, x)

then the same proof will also show that the predictor minimising the least squares loss is obtained by changing the above recursion to

(c_i)_i = \gamma_i \Big( y_i - \sum_{j=1}^{i-1} (c_{i-1})_j K(x_j, x_i) \Big)

The above expression requires storing all the data for updating c_i. The total time complexity for the recursion when evaluating for the n-th datapoint is O(n² d k), where k is the cost of evaluating the kernel on a single pair of points. Thus, the use of the kernel has allowed the movement from a finite-dimensional parameter space w_i ∈ ℝ^d to a possibly infinite-dimensional feature space represented by a kernel K, by instead performing the recursion on the space of parameters c_i ∈ ℝ^i, whose dimension is the same as the size of the training dataset. In general, this is a consequence of the representer theorem.
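
The recursion on c_i can be coded directly. This sketch stores all past points, as required, and uses a Gaussian kernel and a constant step size γ as illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * sigma ** 2))

def kernel_online_ls(stream, gamma=0.5, kernel=gaussian_kernel):
    """Kernelised online least squares via the recursion on c_i."""
    xs, cs = [], []                      # all past points and coefficients are stored
    for x, y in stream:
        # Predictor before the update: f_i(x) = sum_j (c_{i-1})_j K(x_j, x).
        pred = sum(c * kernel(xj, x) for c, xj in zip(cs, xs))
        cs.append(gamma * (y - pred))    # new coefficient (c_i)_i; earlier ones unchanged
        xs.append(x)
        yield pred                       # step i costs O(i * k) kernel evaluations
```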

Progressive learning

Progressive learning is an effective learning model which is demonstrated by the human learning process. It is the process of learning continuously from direct experience. The progressive learning technique (PLT) in machine learning can learn new classes/labels dynamically on the run. Though online learning can learn new samples of data that arrive sequentially, it cannot learn new classes of data introduced to the model. The learning paradigm of progressive learning is independent of the number of class constraints, and it can learn new classes while still retaining the knowledge of previous classes. Whenever a new class (non-native to the knowledge learnt thus far) is encountered, the classifier gets remodeled automatically and the parameters are calculated in such a way that it retains the knowledge learnt thus far. This technique is suitable for real-world applications where the number of classes is often unknown and online learning from real-time data is required.

Online convex optimisation

In online convex optimisation (OCO), the hypothesis set and the loss functions are forced to be convex to obtain stronger learning bounds. The modified sequential game is now as follows:

For t = 1, 2, ..., T:

  • Learner receives input x_t
  • Learner outputs w_t from a fixed convex set S
  • Nature sends back a convex loss function v_t : S → ℝ
  • Learner suffers loss v_t(w_t) and updates its model

Thus, minimising regret amounts to competing against the best fixed weight vector u ∈ S in hindsight. As an example, consider the case of online least squares linear regression. Here, the weight vectors come from the convex set S = ℝ^d, and nature sends back the convex loss function v_t(w) = (⟨w, x_t⟩ − y_t)². Note here that y_t is implicitly sent with v_t.

Some online prediction problems however cannot fit in the framework of OCO. For example, in online classification, the prediction domain and the loss functions are not convex. In such scenarios, two simple techniques for convexification are used: randomisation and surrogate loss functions.

Some simple online convex optimisation algorithms are:

Follow the leader (FTL)

The simplest learning rule to try is to select (at the current step) the hypothesis that has the least loss over all past rounds. This algorithm is called Follow the leader, and round t is simply given by:

w_t = \operatorname{arg\,min}_{w \in S} \sum_{i=1}^{t-1} v_i(w)

This method can thus be viewed as a greedy algorithm. For the case of online quadratic optimization (where the loss function is v_t(w) = ‖w − x_t‖₂²), one can show a regret bound that grows as log(T). However, similar bounds cannot be obtained for the FTL algorithm for other important families of models like online linear optimization. To obtain them, one modifies FTL by adding regularisation.
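
For these quadratic losses over S = ℝ^d, the FTL minimiser has a closed form: the mean of the points seen so far. A minimal sketch (the first-round prediction is arbitrary and taken here to be zero):

```python
import numpy as np

def ftl_quadratic(points):
    """Follow the leader for v_t(w) = ||w - x_t||_2^2: play the mean of past points."""
    s, t = None, 0
    for x in points:
        # argmin_w sum_{i<t} ||w - x_i||^2 is the mean; arbitrary (zero) on round 1.
        w = np.zeros_like(x) if t == 0 else s / t
        yield w                                   # prediction; loss v_t(w) is then suffered
        s = x.copy() if s is None else s + x
        t += 1
```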

Follow the regularised leader (FTRL)

This is a natural modification of FTL that is used to stabilise the FTL solutions and obtain better regret bounds. A regularisation function R : S → ℝ is chosen, and learning is performed in round t as follows:

w_t = \operatorname{arg\,min}_{w \in S} \left( \sum_{i=1}^{t-1} v_i(w) + R(w) \right)

As a special example, consider the case of online linear optimisation, i.e. where nature sends back loss functions of the form v_t(w) = ⟨w, z_t⟩. Also, let S = ℝ^d. Suppose the regularisation function R(w) = (1/2η)‖w‖₂² is chosen for some positive number η. Then, one can show that the regret minimising iteration becomes

w_{t+1} = -\eta \sum_{i=1}^{t} z_i = w_t - \eta z_t

Note that this can be rewritten as w_{t+1} = w_t − η∇v_t(w_t), which looks exactly like online gradient descent.

If S is instead some convex subset of ℝ^d, the iterate would need to be projected onto it, leading to the modified update rule

w_{t+1} = \Pi_S \left( -\eta \sum_{i=1}^{t} z_i \right) = \Pi_S(-\eta\, \theta_{t+1})

This algorithm is known as lazy projection, as the vector θ_{t+1} accumulates the gradients. It is also known as Nesterov's dual averaging algorithm. In this scenario of linear loss functions and quadratic regularisation, the regret is bounded by O(√T), and thus the average regret goes to 0, as desired.
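
A sketch of the lazily projected iteration for linear losses; the Euclidean ball of radius r stands in for S because its projection has a simple closed form (a hypothetical choice of constraint set):

```python
import numpy as np

def project_ball(v, r):
    """Euclidean projection onto the ball S = {w : ||w||_2 <= r}."""
    nrm = np.linalg.norm(v)
    return v if nrm <= r else (r / nrm) * v

def lazy_projection(gradients, d, eta=0.1, r=1.0):
    """FTRL with quadratic regularisation and linear losses v_t(w) = <w, z_t>."""
    theta = np.zeros(d)                      # theta_{t+1} accumulates the gradients z_t
    for z in gradients:
        theta += z
        yield project_ball(-eta * theta, r)  # w_{t+1} = Pi_S(-eta * theta_{t+1})
```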

Online subgradient descent (OSD)

The above proved a regret bound for linear loss functions v_t(w) = ⟨w, z_t⟩. To generalise the algorithm to an arbitrary convex loss function, the subgradient ∂v_t(w_t) of v_t is used as a linear approximation to v_t near w_t, leading to the online subgradient descent algorithm:

Initialise parameters η, w_1 = 0

For t = 1, 2, ..., T:

  • Predict using w_t, receive v_t from nature.
  • Choose z_t ∈ ∂v_t(w_t)
  • If S = ℝ^d, update as w_{t+1} = w_t − η z_t
  • If S ⊂ ℝ^d, project the cumulative gradients onto S, i.e. θ_{t+1} = θ_t + z_t, w_{t+1} = Π_S(−η θ_{t+1})

One can use the OSD algorithm to derive O(√T) regret bounds for the online version of SVMs for classification, which use the hinge loss v_t(w) = max{0, 1 − y_t(w · x_t)}.
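
A sketch of OSD with the hinge loss: when the margin is violated, −y_t x_t is a valid subgradient, otherwise 0; the ball projection again stands in for a generic Π_S (a hypothetical choice):

```python
import numpy as np

def osd_hinge(stream, d, eta=0.1, r=10.0):
    """Online subgradient descent for v_t(w) = max(0, 1 - y_t <w, x_t>)."""
    w = np.zeros(d)
    theta = np.zeros(d)                     # cumulative subgradients
    for x, y in stream:                     # labels y in {-1, +1}
        yield 1.0 if w @ x >= 0 else -1.0   # predict with the current w_t
        if y * (w @ x) < 1:                 # margin violated:
            theta += -y * x                 #   z_t = -y_t x_t is a subgradient
        # otherwise z_t = 0 and theta is unchanged
        nrm = np.linalg.norm(theta)
        w = -eta * theta                    # w_{t+1} = Pi_S(-eta * theta_{t+1}) ...
        if eta * nrm > r:
            w *= r / (eta * nrm)            # ... projected onto the ball {||w|| <= r}
```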

Other algorithms

Quadratically regularised FTRL algorithms lead to lazily projected gradient algorithms as described above. To use the above for arbitrary convex functions and regularisers, one uses online mirror descent. Another algorithm is called prediction with expert advice. In this case, the hypothesis set consists of d functions. A distribution w_t ∈ Δ_d over the d experts is maintained, and prediction is performed by sampling an expert from this distribution. For the Euclidean regularisation, one can show a regret bound of O(√T), which can be improved to an O(√(T log d)) bound by using a better (entropic) regulariser.
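
A sketch of prediction with expert advice using multiplicative (entropically regularised) weights; per-round losses are assumed to lie in [0, 1] and the learning rate η is a hypothetical fixed choice:

```python
import numpy as np

def hedge(expert_losses, d, eta=0.5, seed=0):
    """Prediction with expert advice: keep a distribution w_t over d experts."""
    rng = np.random.default_rng(seed)
    w = np.full(d, 1.0 / d)                      # start uniform over the experts
    for losses in expert_losses:                 # length-d array, entries in [0, 1]
        yield rng.choice(d, p=w)                 # predict by sampling an expert from w_t
        w *= np.exp(-eta * np.asarray(losses))   # exponential (multiplicative) reweighting
        w /= w.sum()                             # renormalise to a distribution
```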

Interpretations of online learning



The paradigm of online learning has, interestingly, different interpretations depending on the choice of the learning model, each of which has distinct implications about the predictive quality of the sequence of functions f_1, f_2, …, f_n. The prototypical stochastic gradient descent algorithm is used for this discussion. As noted above, its recursion is given by

w_t = w_{t-1} - \gamma_t \nabla V(\langle w_{t-1}, x_t \rangle, y_t)

The first interpretation considers the stochastic gradient descent method as applied to the problem of minimizing the expected risk I[w] defined above. Indeed, in the case of an infinite stream of data, since the examples (x_1, y_1), (x_2, y_2), … are assumed to be drawn i.i.d. from the distribution p(x, y), the sequence of gradients of V(·, ·) in the above iteration is an i.i.d. sample of stochastic estimates of the gradient of the expected risk I[w]; therefore one can apply complexity results for the stochastic gradient descent method to bound the deviation I[w_t] − I[w*], where w* is the minimizer of I[w]. This interpretation is also valid in the case of a finite training set; although with multiple passes through the data the gradients are no longer independent, complexity results can still be obtained in special cases.

The second interpretation applies to the case of a finite training set and considers the SGD algorithm as an instance of the incremental gradient descent method. In this case, one instead looks at the empirical risk:

I_n[w] = \frac{1}{n} \sum_{i=1}^{n} V(\langle w, x_i \rangle, y_i) .

Since the gradients of V(·, ·) in the incremental gradient descent iterations are also stochastic estimates of the gradient of I_n[w], this interpretation is also related to the stochastic gradient descent method, but applied to minimize the empirical risk as opposed to the expected risk. Since this interpretation concerns the empirical risk and not the expected risk, multiple passes through the data are readily allowed and actually lead to tighter bounds on the deviations I_n[w_t] − I_n[w_n*], where w_n* is the minimizer of I_n[w].

Implementations



  • Vowpal Wabbit: Open-source fast out-of-core online learning system which is notable for supporting a number of machine learning reductions, importance weighting and a selection of different loss functions and optimisation algorithms. It uses the hashing trick for bounding the size of the set of features independent of the amount of training data.
  • scikit-learn: Provides out-of-core implementations of algorithms for (see the minimal partial_fit sketch after this list):
    • Classification: Perceptron, SGD classifier, Naive Bayes classifier.
    • Regression: SGD Regressor, Passive Aggressive regressor.
    • Clustering: Mini-batch k-means.
    • Feature extraction: Mini-batch dictionary learning, Incremental PCA.
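
A minimal out-of-core sketch using scikit-learn's partial_fit interface (the full set of classes must be declared on the first call); the streamed chunks here are synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="hinge")              # a linear SVM fitted by SGD
classes = np.array([0, 1])                     # all labels must be declared up front

for i in range(100):                           # pretend each chunk streams from disk
    X_chunk = rng.normal(size=(64, 10))
    y_chunk = (X_chunk[:, 0] > 0).astype(int)  # synthetic labels for illustration
    clf.partial_fit(X_chunk, y_chunk, classes=classes if i == 0 else None)
```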

See also



  • Hierarchical temporal memory
  • k-nearest neighbor algorithm
  • Lazy learning
  • Learning vector quantization
  • Offline learning, the opposite model
  • Online algorithm
  • Streaming algorithm
  • Perceptron
  • Stochastic gradient descent
  • Supervised learning
  • Online optimization


External links



  • http://onlineprediction.net/, Wiki for On-Line Prediction.
