Python machine learning: implementing logistic regression with gradient descent

My teacher gave us a CSV dataset and some sample code:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def GradientDescent_sigmoid(x, y):
    alpha = 0.002                       # learning rate
    n = 100                             # maximum number of iterations
    x = np.mat(x)                       # x is m*(d+1), last column all ones (bias)
    y = np.mat(y)                       # y is m*1
    m = x.shape[0]                      # number of samples
    w = np.mat([[1.0], [1.0], [1.0]])   # w is (d+1)*1
    precost = float('inf')              # previous cost, for the convergence test
    for i in range(n):
        f = sigmoid(x * w)              # predicted probabilities, m*1
        error = y - f
        w = w + alpha * x.T * error     # gradient ascent on the log-likelihood
        # average log-likelihood; float() turns the 1*1 matrix into a scalar
        cost = float(y.T * np.log(f) + (1 - y).T * np.log(1 - f)) / m
        if abs(cost - precost) < 0.001:
            break
        precost = cost
    return w
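
For reference, the update w = w + alpha*x.T*error is gradient ascent on the average log-likelihood that the loop tracks as cost. In matrix form, with $f = \sigma(Xw)$:

$$\ell(w) = \frac{1}{m}\bigl(y^\top \log f + (1-y)^\top \log(1-f)\bigr), \qquad \nabla_w \ell(w) = \frac{1}{m} X^\top (y - f)$$

The code omits the 1/m factor in the update step; that just rescales the learning rate alpha.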

Data:

0.697,0.46,1
0.774,0.376,1
0.634,0.264,1
0.608,0.318,1
0.556,0.215,1
0.403,0.237,1
0.481,0.149,1
0.437,0.211,1
0.666,0.091,0
0.243,0.267,0
0.245,0.057,0
0.343,0.099,0
0.639,0.161,0
0.657,0.198,0
0.360,0.370,0
0.593,0.042,0
0.719,0.103,0
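
In case it helps, here is a minimal sketch of how the function could be driven, assuming the rows above are saved as watermelon.csv (that filename is my assumption) and the sigmoid and GradientDescent_sigmoid definitions above are in scope. The first two columns are the features, the third is the label, and a column of ones is appended so that x is m*(d+1):

import numpy as np

data = np.loadtxt('watermelon.csv', delimiter=',')          # hypothetical filename
X = np.hstack([data[:, :2], np.ones((data.shape[0], 1))])   # m*(d+1): two features + bias column
y = data[:, 2:3]                                            # m*1 labels (0/1)

w = GradientDescent_sigmoid(X, y)
pred = (sigmoid(np.mat(X) * w) >= 0.5).astype(int)          # classify at the 0.5 threshold
print('weights:\n', w)
print('training accuracy:', np.mean(pred == y))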
 
