by using $\mathbf U$, $\mathbf S$, and $\mathbf V$), without recalculating the SVD from scratch? "Fast online SVD revisions for lightweight recommender systems" by Brand is an accessible first paper.
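To make the column-append case concrete, here is a minimal numpy sketch in the spirit of Brand's rank-one updates. The name `svd_append_column` is mine, and this omits the truncation and downdating steps of the full algorithm; it only shows how the new SVD is obtained from an SVD of a small $(k+1)\times(k+1)$ core matrix instead of redecomposing the whole matrix:

```python
import numpy as np

def svd_append_column(U, s, Vt, c):
    """Given a thin SVD A = U @ diag(s) @ Vt, return the thin SVD of [A, c].

    Only the small (k+1)x(k+1) core matrix K is decomposed from scratch.
    """
    m = U.T @ c                      # coordinates of c in the current left space
    p = c - U @ m                    # residual of c orthogonal to range(U)
    p_norm = np.linalg.norm(p)
    k = len(s)

    # Core matrix: [A, c] = [U, p/||p||] @ K @ V_ext.T
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = np.diag(s)
    K[:k, k] = m
    K[k, k] = p_norm
    Up, sp, Vpt = np.linalg.svd(K)   # cheap: (k+1)x(k+1), not n x d

    # Extend U by the normalized residual direction
    P = p / p_norm if p_norm > 1e-12 else np.zeros_like(p)
    U_ext = np.hstack([U, P[:, None]])

    # Extend V by one row/column for the appended matrix column
    d = Vt.shape[1]
    V_ext = np.zeros((d + 1, k + 1))
    V_ext[:d, :k] = Vt.T
    V_ext[d, k] = 1.0

    return U_ext @ Up, sp, (V_ext @ Vpt.T).T
```

Row appends and rank-one modifications follow the same pattern with a slightly different core matrix; Brand's paper covers all the cases.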
Then the Sherman-Morrison formula states that \begin{equation} (\mathbf A + \mathbf u \mathbf v^T)^{-1} = \mathbf A^{-1} - \frac{\mathbf A^{-1} \mathbf u \mathbf v^T \mathbf A^{-1}}{1 + \mathbf v^T \mathbf A^{-1} \mathbf u}. \end{equation}
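A quick numerical check of the formula in numpy (the matrix and vectors here are arbitrary; the diagonal shift just keeps $\mathbf A$ well-conditioned):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)  # well-conditioned A
u = rng.normal(size=4)
v = rng.normal(size=4)

A_inv = np.linalg.inv(A)
# Sherman-Morrison: (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)
denom = 1.0 + v @ A_inv @ u
updated_inv = A_inv - np.outer(A_inv @ u, v @ A_inv) / denom

assert np.allclose(updated_inv, np.linalg.inv(A + np.outer(u, v)))
```

The point is that `updated_inv` costs only matrix-vector products and an outer product, $O(n^2)$, versus $O(n^3)$ for refactorizing the updated matrix.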
A question from the Computational Science SE gives a number of MATLAB and C implementations that you may want to consider; many of the MATLAB implementations are wrappers around C, C++, or FORTRAN code.
+1 to both you and @whuber (and I don't think that "duplicating" any information provided on another SE site is to be avoided!)
The accuracy of the trained classifier depends crucially on these features, and its time complexity on their number.
As the number of available features is immense in most real-world problems, it becomes essential to use meta-heuristics for feature selection and/or feature optimization.
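As an illustration of such a meta-heuristic, here is a minimal stochastic hill-climbing search over feature subsets (one of the simplest options; simulated annealing and genetic algorithms are common alternatives). The function name and the toy objective are mine, and `score` stands in for whatever validation metric you would actually use:

```python
import numpy as np

def hill_climb_features(score, n_features, n_iters=300, seed=0):
    """Stochastic local search: flip one feature in/out per step,
    keep the candidate subset if it does not worsen the score."""
    rng = np.random.default_rng(seed)
    mask = rng.random(n_features) < 0.5          # random initial subset
    best = score(mask)
    for _ in range(n_iters):
        cand = mask.copy()
        cand[rng.integers(n_features)] ^= True   # toggle one feature
        s = score(cand)
        if s >= best:                            # accept non-worsening moves
            mask, best = cand, s
    return mask, best

# Toy objective: recover a known "good" subset (purely illustrative;
# in practice score() would be cross-validated classifier accuracy)
true_mask = np.zeros(8, dtype=bool)
true_mask[[1, 4, 6]] = True
best_mask, best_score = hill_climb_features(
    lambda m: -int(np.sum(m != true_mask)), n_features=8, n_iters=300, seed=1)
```

Each call to `score` retrains or re-evaluates the model, so the iteration budget, not the search logic, dominates the cost.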
you have some decomposition already computed), you can use it to obtain faster and/or more stable estimates. Recommender-system matrices are usually sparse, which makes these algorithms even more efficient.
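To illustrate the sparsity point: for a truncated SVD you never need to form the dense matrix at all. A sketch with scipy's sparse solver (the sizes and density here are made up to mimic a ratings matrix):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# A sparse "ratings-like" matrix: 1000 users x 500 items, ~1% observed
A = sparse_random(1000, 500, density=0.01, random_state=0, format="csr")

# Top-10 singular triplets; A is never densified, and the cost scales
# with the number of nonzeros rather than with 1000 * 500
U, s, Vt = svds(A, k=10)
```

Note that `svds` returns the singular values in ascending order, unlike `numpy.linalg.svd`.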