I am training a two-layer neural network. I have waited for 15000 epochs, but the model still doesn't converge.
import random
import numpy as np
import pandas as pd

ans = []
for i in range(1000):
    x1, y1 = random.uniform(-3, 3), random.uniform(-3, 3)
    if x1 * x1 + y1 * y1 < 1:
        ans.append([x1, y1, 0])   # class 0: points inside the unit circle
    elif x1 * x1 + y1 * y1 >= 2 and x1 * x1 + y1 * y1 <= 8:
        ans.append([x1, y1, 1])   # class 1: points in the ring between radius sqrt(2) and sqrt(8)
data = pd.DataFrame(ans)
print(data.shape)
X = np.array(data[[0, 1]])
y = np.array(data[2])
I am generating random points to build the data set; it looks something like this:
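The scatter plot of the data isn't reproduced here; a minimal sketch to visualize the two classes (assuming matplotlib is installed) would be:

import matplotlib.pyplot as plt

plt.scatter(X[y == 0, 0], X[y == 0, 1], s=5, label='class 0 (inner disc)')   # points inside the unit circle
plt.scatter(X[y == 1, 0], X[y == 1, 1], s=5, label='class 1 (outer ring)')   # points in the ring
plt.gca().set_aspect('equal')
plt.legend()
plt.show()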
weights_layer1 = np.random.normal(scale=1 / 10**.5, size=(2, 20))
bias1 = np.zeros((1, 20))
bias2 = np.zeros((1, 1))
weights_layer2 = np.random.normal(scale=1 / 10**.5, size=(20, 1))

for e in range(15000):
    for x, y1 in zip(X, y):
        x = x.reshape(1, 2)
        # forward pass
        layer1 = sigmoid(np.dot(x, weights_layer1) + bias1)
        layer2 = sigmoid(np.dot(layer1, weights_layer2) + bias2)
        # output error term
        dk = (y1 - layer2) * layer2 * (1 - layer2)
        # update for the second-layer weights
        dw2 = learnrate * dk * layer1.T
        dw2 = dw2.reshape(weights_layer2.shape)
        # print(dw2.shape)
        weights_layer2 += dw2
        # bias2 += dk * learnrate
        # hidden-layer error term and first-layer weight update
        dj = weights_layer2.T * layer1 * (1 - layer1) * dk
        dw1 = learnrate * np.dot(x.T, dj)
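sigmoid and learnrate are used above but not defined in the snippet; I'm assuming the standard logistic sigmoid and a small constant learning rate, something like:

def sigmoid(z):
    # standard logistic function
    return 1 / (1 + np.exp(-z))

learnrate = 0.01   # assumed value; the actual learning rate isn't shown above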
I am calculating the loss in this manner:
loss = 0
for x, y1 in zip(X, y):
    layer1 = sigmoid(np.dot(x, weights_layer1))
    layer2 = sigmoid(np.dot(layer1, weights_layer2))
    loss += (layer2 - y1)**2
print(loss)
I can't find what is going wrong; can you see anything? Thanks. I trained the same network with PyTorch and it converges fine.
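The PyTorch script isn't included here; a minimal sketch of an equivalent 2→20→1 sigmoid network trained with SGD on a squared-error loss (assumed setup, not necessarily the exact script) would be:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(2, 20),
    nn.Sigmoid(),
    nn.Linear(20, 1),
    nn.Sigmoid(),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # assumed learning rate
loss_fn = nn.MSELoss()

X_t = torch.tensor(X, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.float32).reshape(-1, 1)

for e in range(15000):
    optimizer.zero_grad()
    out = model(X_t)            # forward pass over the whole data set
    loss = loss_fn(out, y_t)    # mean squared error
    loss.backward()             # backpropagation
    optimizer.step()            # gradient descent update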
The final model looks like this on the training data, but on the test data it is worse.
question from:
https://stackoverflow.com/questions/65936064/my-two-layer-neural-network-model-doesnt-converge