
[Machine Learning] - Week 3, 2. Hypothesis Representation

Author: Kitty_风花 | Published 2019-11-30 10:50

Hypothesis Representation

We could approach the classification problem ignoring the fact that y is discrete-valued, and use our old linear regression algorithm to try to predict y given x. However, it is easy to construct examples where this method performs very poorly. Intuitively, it also doesn't make sense for hθ(x) to take values larger than 1 or smaller than 0 when we know that y ∈ {0, 1}. To fix this, let's change the form of our hypotheses hθ(x) to satisfy 0 ≤ hθ(x) ≤ 1. This is accomplished by plugging θ^T x into the Logistic Function.

Our new form uses the "Sigmoid Function," also called the "Logistic Function":

hθ(x) = g(θ^T x)
z = θ^T x
g(z) = 1 / (1 + e^(−z))
The sigmoid is an S-shaped curve: g(z) equals 0.5 at z = 0, approaches 1 as z → +∞, and approaches 0 as z → −∞.

The function g(z) maps any real number into the (0, 1) interval, making it useful for transforming an arbitrary-valued function into a function better suited for classification.
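As a minimal sketch of how this could be computed (the helper names sigmoid and hypothesis, and the use of NumPy, are my own choices, not from the course materials):

```python
import numpy as np

def sigmoid(z):
    # Logistic function g(z) = 1 / (1 + e^(-z)); maps any real z into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    # h_theta(x) = g(theta^T x): the logistic-regression hypothesis.
    return sigmoid(np.dot(theta, x))

# g(0) = 0.5; large positive inputs approach 1, large negative inputs approach 0.
print(sigmoid(0.0))    # 0.5
print(sigmoid(10.0))   # ~0.99995
print(sigmoid(-10.0))  # ~0.000045
```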

hθ(x) will give us the probability that our output is 1. For example, hθ(x) = 0.7 gives us a probability of 70% that our output is 1. The probability that our prediction is 0 is just the complement of the probability that it is 1 (e.g. if the probability that it is 1 is 70%, then the probability that it is 0 is 30%).
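Continuing the sketch above with a made-up parameter vector and feature vector (both hypothetical, chosen only for illustration), the probability that y = 0 is simply one minus the hypothesis output:

```python
theta = np.array([0.5, -0.25, 1.0])  # hypothetical parameters
x = np.array([1.0, 2.0, 0.8])        # hypothetical features; x[0] = 1 is the intercept term

p_y1 = hypothesis(theta, x)  # P(y = 1 | x; theta) = g(theta^T x) = g(0.8), about 0.69
p_y0 = 1.0 - p_y1            # P(y = 0 | x; theta), the complement
print(p_y1, p_y0)
```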

Source: Coursera, Stanford University, Andrew Ng, Machine Learning

