16. Recommender Systems

Author: 玄语梨落 | Published 2021-01-30 10:06

Recommender Systems

Problem formulation

Content-based recommendations

Optimization objective

To learn \theta^{(j)} (parameter for user j):

\min_{\theta^{(j)}}\frac{1}{2}\sum_{i:r(i,j)=1}((\theta^{(j)})^Tx^{(i)}-y^{(i,j)})^2+\frac{\lambda}{2}\sum_{k=1}^n(\theta_k^{(j)})^2

To learn all (\theta^{(1)},...,\theta^{(n_u)}):

\min_{\theta^{(1)},...,\theta^{(n_u)}}\frac{1}{2}\sum_{j=1}^{n_u}\sum_{i:r(i,j)=1}((\theta^{(j)})^Tx^{(i)}-y^{(i,j)})^2+\frac{\lambda}{2}\sum_{j=1}^{n_u}\sum_{k=1}^n(\theta_k^{(j)})^2
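A minimal NumPy sketch of how this all-users objective could be evaluated (the names `Theta`, `X`, `Y`, `R`, `lam` are illustrative, not from the notes):

```python
import numpy as np

def content_based_cost(Theta, X, Y, R, lam):
    """Regularized cost over all users with the movie features X held fixed.

    X:     (n_m, n) matrix whose rows are the movie feature vectors x^(i)
    Theta: (n_u, n) matrix whose rows are the user parameter vectors theta^(j)
    Y:     (n_m, n_u) ratings y^(i,j); R: (n_m, n_u) indicators r(i,j)
    """
    errors = (X @ Theta.T - Y) * R        # keep only entries with r(i,j) = 1
    # If an intercept feature x_0 = 1 is used, the k = 1..n sum in the notes
    # would exclude the matching column of Theta from the penalty term.
    return 0.5 * np.sum(errors ** 2) + 0.5 * lam * np.sum(Theta ** 2)
```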

Collaborative Filtering

Given \theta^{(1)},...,\theta^{(n_u)}, to learn x^{(i)} (the feature vector for movie i):

\min_{x^{(i)}}\frac{1}{2}\sum_{j:r(i,j)=1}((\theta^{(j)})^Tx^{(i)}-y^{(i,j)})^2+\frac{\lambda}{2}\sum_{k=1}^n(x_k^{(i)})^2

Given \theta^{(1)},...,\theta^{(n_u)}, to learn all x^{(1)},...,x^{(n_m)}:

\min_{x^{(1)},...,x^{(n_m)}}\frac{1}{2}\sum_{i=1}^{n_m}\sum_{j:r(i,j)=1}((\theta^{(j)})^Tx^{(i)}-y^{(i,j)})^2+\frac{\lambda}{2}\sum_{i=1}^{n_m}\sum_{k=1}^n(x_k^{(i)})^2
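Since each x^{(i)} appears only in its own squared-error and penalty terms, this minimization splits into an independent regularized least-squares problem per movie; a minimal sketch under the same illustrative `Theta`/`Y`/`R` layout as above, assuming \lambda > 0:

```python
import numpy as np

def solve_movie_features(Theta, Y, R, lam):
    """Given fixed user parameters Theta, minimize over every x^(i) in closed
    form: each movie's features solve a small ridge regression over the users
    who rated that movie."""
    n_m, n = Y.shape[0], Theta.shape[1]
    X = np.zeros((n_m, n))
    for i in range(n_m):
        rated = R[i, :] == 1                  # users j with r(i,j) = 1
        Theta_i = Theta[rated, :]             # their parameter vectors
        y_i = Y[i, rated]                     # their ratings of movie i
        A = Theta_i.T @ Theta_i + lam * np.eye(n)
        X[i, :] = np.linalg.solve(A, Theta_i.T @ y_i)
    return X
```

Alternating this step with the analogous solve for each \theta^{(j)} (holding X fixed) is one way to minimize both objectives; the next subsection instead folds them into a single joint cost.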

Collaborative filtering algorithm

To minimize over x^{(1)},...,x^{(n_m)} and \theta^{(1)},...,\theta^{(n_u)} simultaneously, combine both objectives into one cost function:

J(x^{(1)},...,x^{(n_m)},\theta^{(1)},...,\theta^{(n_u)}) = \frac{1}{2}\sum_{(i,j):r(i,j)=1}((\theta^{(j)})^Tx^{(i)}-y^{(i,j)})^2+\frac{\lambda}{2}\sum_{i=1}^{n_m}\sum_{k=1}^n(x_k^{(i)})^2+\frac{\lambda}{2}\sum_{j=1}^{n_u}\sum_{k=1}^n(\theta_k^{(j)})^2
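The same quantity as a NumPy sketch (illustrative names as before):

```python
import numpy as np

def cofi_cost(X, Theta, Y, R, lam):
    """Joint cost J over all movie features X and all user parameters Theta."""
    errors = (X @ Theta.T - Y) * R            # zero out entries with r(i,j) = 0
    return (0.5 * np.sum(errors ** 2)
            + 0.5 * lam * np.sum(X ** 2)
            + 0.5 * lam * np.sum(Theta ** 2))
```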

Gradient descent:

x_k^{(i)}:=x_k^{(i)}-\alpha(\sum_{j:r(i,j)=1}((\theta^{(j)})^Tx^{(i)}-y^{(i,j)})\theta_k^{(j)}+\lambda x_k^{(i)})

\theta_k^{(j)}:=\theta_k^{(j)}-\alpha(\sum_{i:r(i,j)=1}((\theta^{(j)})^Tx^{(i)}-y^{(i,j)})x_k^{(i)}+\lambda \theta_k^{(j)})
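One simultaneous update of every x_k^{(i)} and \theta_k^{(j)}, vectorized; a sketch with the same illustrative names (`alpha` is the learning rate):

```python
import numpy as np

def gradient_descent_step(X, Theta, Y, R, lam, alpha):
    """One gradient-descent update applied to X and Theta at the same time."""
    errors = (X @ Theta.T - Y) * R            # (n_m, n_u), zero where r(i,j) = 0
    X_grad = errors @ Theta + lam * X         # row i sums over users j with r(i,j) = 1
    Theta_grad = errors.T @ X + lam * Theta   # row j sums over movies i with r(i,j) = 1
    return X - alpha * X_grad, Theta - alpha * Theta_grad
```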

Vectorization: Low rank matrix factorization

X = \begin{bmatrix} (x^{(1)})^T \\ (x^{(2)})^T \\ \vdots \\ (x^{(n_m)})^T \end{bmatrix} \qquad \Theta=\begin{bmatrix} (\theta^{(1)})^T \\ (\theta^{(2)})^T \\ \vdots \\ (\theta^{(n_u)})^T \end{bmatrix}

Predicted ratings:

X\Theta^T
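Every predicted rating then comes from one matrix product, with entry (i, j) equal to (\theta^{(j)})^Tx^{(i)}; a tiny made-up example (all numbers are illustrative):

```python
import numpy as np

# 3 movies, 2 users, 2 learned features (made-up values).
X = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.5, 0.5]])              # rows are (x^(i))^T
Theta = np.array([[5.0, 0.0],
                  [0.0, 5.0]])          # rows are (theta^(j))^T
predictions = X @ Theta.T               # predictions[i, j] = (theta^(j))^T x^(i)
print(predictions)
# [[4.5 0.5]
#  [1.  4. ]
#  [2.5 2.5]]
```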

Implementational detail: Mean normalization
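The notes stop at this heading; the idea in the corresponding lecture is to subtract each movie's mean rating \mu_i from the rated entries before learning and add it back when predicting, so that a user who has rated nothing is predicted the movie means rather than all zeros. A minimal sketch, assuming the same Y/R layout as above:

```python
import numpy as np

def mean_normalize(Y, R):
    """Subtract each movie's mean rating, computed over rated entries only."""
    counts = np.maximum(R.sum(axis=1), 1)        # avoid dividing by zero
    mu = (Y * R).sum(axis=1) / counts            # (n_m,) per-movie means
    Y_norm = (Y - mu[:, None]) * R               # normalize only where r(i,j) = 1
    return Y_norm, mu

# Train X and Theta on Y_norm, then predict with X @ Theta.T + mu[:, None].
```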
