sigmoid always returns 0.5, so my logistic regression doesn't work

I'm implementing logistic regression in PyCharm. In every iteration the sigmoid function returns 0.5 for all samples: z stays near 0, so exp(-z) stays near 1 and sigmoid(z) = 1 / (1 + exp(-z)) never moves off 0.5. How can I fix this?
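For context, sigmoid(z) = 1 / (1 + exp(-z)) returns exactly 0.5 when z = 0, because exp(0) = 1. Since theta is initialized to all zeros below, z = X·θᵀ is 0 for every sample, so 0.5 on the first iteration is expected behavior, not a computation error. A minimal check (the sample rows here are made-up values shaped like the ex2data1 rows):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# With all-zero weights, z = X @ theta.T is 0 for every sample,
# so sigmoid returns exactly 0.5 everywhere.
theta = np.zeros((1, 3))
X = np.array([[1.0, 34.6, 78.0],
              [1.0, 30.3, 43.9]])  # hypothetical rows: intercept, Exam1, Exam2
z = X @ theta.T
print(sigmoid(z))  # every entry is 0.5
```

The real question is therefore why z stays near 0 across iterations, which points at the update step rather than at sigmoid itself.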

Here is the code:

import numpy as np
import pandas as pd

path = r'F:\machinelearningpractice\Machine-Learning-homework-master\Machine-Learning-homework-master\machine-learning-ex2\ex2\ex2data1.txt'
data = pd.read_csv(path, names=['Exam1', 'Exam2', 'Admitted'])

data.insert(0, 'Ones', 1)
columns = data.shape[1]
X = data.iloc[:, 0 : columns-1]
y = data.iloc[:, columns-1 : columns]

X = np.matrix(X)
y = np.matrix(y)
theta = np.matrix(np.zeros([1, 3]))

def sigmoid(z):
    # maps any real z into (0, 1); sigmoid(0) = 0.5
    return 1 / (1 + np.exp(-z))


def computeCost(X, y, theta):
    # cross-entropy cost: -(1/m) * sum(y*log(h) + (1-y)*log(1-h))
    first = np.multiply(y, np.log(sigmoid(X * theta.T)))
    second = np.multiply(1 - y, np.log(1 - sigmoid(X * theta.T)))
    return -1 / len(X) * np.sum(first + second)


def gradientDescent(X, y, theta, alpha, iters):
    temp = np.matrix(np.zeros(theta.shape))
    parameters = int(X.shape[1])
    cost_in_process = np.zeros([iters, 1])

    for i in range(iters):
        difference = sigmoid(X * theta.T) - y

        for j in range(parameters):
            temp[0, j] = theta[0, j] - alpha / len(X) * np.sum(np.multiply(difference, X[:, j]))

        theta = temp.copy()  # copy, not alias: `theta = temp` would make later temp[0, j] writes modify theta mid-loop
        cost_in_process[i, 0] = computeCost(X, y, theta)

    return theta, cost_in_process


alpha = 0.01
iters = 1500
theta_final, cost_in_process = gradientDescent(X, y, theta, alpha, iters)

print(theta_final, cost_in_process)
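A NumPy subtlety worth noting: assigning a matrix to a new name creates an alias, not a copy, so ending the loop body with a plain `theta = temp` would make subsequent `temp[0, j]` writes read back partially updated theta values, breaking the simultaneous update; the loop should assign `temp.copy()` instead. A small demonstration of the aliasing:

```python
import numpy as np

temp = np.matrix(np.zeros((1, 3)))
theta = temp            # alias: both names refer to the same matrix
temp[0, 0] = 5.0
print(theta[0, 0])      # 5.0 — theta changed too

theta2 = temp.copy()    # independent copy
temp[0, 1] = 7.0
print(theta2[0, 1])     # still 0.0 — the copy is unaffected
```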

Hope this helps; please accept the answer if it does.


Adjust your logistic regression's initial weights and increase the learning rate a little. The likely cause is that each parameter update takes a very small step, so z = WX + b barely changes and stays stuck near z ≈ 0, where exp(-z) ≈ 1 and sigmoid(z) ≈ 0.5.
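A minimal sketch of that suggestion: small nonzero initial weights plus a larger learning rate. The synthetic exam-score data is made up for illustration, and the feature standardization is my addition (it is what usually makes the larger learning rate stable on raw 0–100 score features), not part of the answer above:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def gradient_descent(X, y, theta, alpha, iters):
    m = len(X)
    for _ in range(iters):
        diff = sigmoid(X @ theta) - y                # (m, 1) residuals
        theta = theta - alpha / m * (X.T @ diff)     # simultaneous update of all parameters
    return theta

rng = np.random.default_rng(0)
scores = rng.uniform(20, 100, size=(100, 2))                    # hypothetical exam scores
y = (scores.sum(axis=1) > 120).astype(float).reshape(-1, 1)     # hypothetical admission rule

# Standardize features so z = X @ theta moves away from 0 quickly
scores = (scores - scores.mean(axis=0)) / scores.std(axis=0)
X = np.hstack([np.ones((100, 1)), scores])

theta = rng.normal(scale=0.01, size=(3, 1))   # small nonzero init, per the answer
theta = gradient_descent(X, y, theta, alpha=0.1, iters=1000)

preds = sigmoid(X @ theta)
print(preds.min(), preds.max())               # predictions spread away from 0.5
```

With standardized inputs and alpha = 0.1, z leaves the 0.5 plateau within a few iterations; with raw 0–100 features and alpha = 0.01, the updates can oscillate or crawl instead.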