
keras - TensorFlow neural network regression problem

I am very much a beginner to neural networks in Python, and I have been really enjoying it so far.

I have tried to build a simple neural network, but my test predictions seem to be restricted in range, and I was wondering whether anything obvious stands out to an expert.

This is the general shape of my response variable overall: [image: img1, distribution of the response]

After building a simple neural network:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(
    [
        layers.Dense(43, name="layer1", input_shape=[43]),
        layers.Dense(16, activation="relu", name="layer2"),
        layers.Dense(8, activation="relu", name="layer3"),
        layers.Dense(1, activation="linear", name="layer4"),
    ]
)
model.compile(loss="mae")

# Print one dot per epoch (newline every 100 epochs) so long runs show
# progress without the full Keras log output.
class PrintDot(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        if epoch % 100 == 0:
            print('')
        print('.', end='')

history = model.fit(
    normed_train_data, train_labels,
    epochs=100, validation_split=0.2,
    verbose=0, callbacks=[PrintDot()])

the distribution of my test predictions looks like this: [image: Test Predictions]. To me this feels like the network is lacking something quite trivial, perhaps a combination of activation and optimizer choices, but I am not sure what it is pointing to.
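(For reference, a prediction distribution like that would be produced roughly as below; the normed_test_data and test_labels names simply mirror the training arrays above and, like the matplotlib histogram, are illustrative rather than taken from the original post.)

# Illustrative only: compare the predicted distribution with the actual
# test targets, assuming held-out arrays normed_test_data / test_labels.
import matplotlib.pyplot as plt

test_predictions = model.predict(normed_test_data).flatten()

plt.hist(test_predictions, bins=50, alpha=0.5, label="predictions")
plt.hist(test_labels, bins=50, alpha=0.5, label="actual")
plt.xlabel("response value")
plt.legend()
plt.show()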

My loss and validation loss (on an MAE basis) actually look OK, i.e. the model is not overfitting and the validation loss sits slightly above the training loss; however, I would like it to converge around 15-20 rather than 33-34.

My data has 43 inputs, and I have standardised the data rather than normalising it.
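(For concreteness, standardisation here means scaling each of the 43 features to zero mean and unit variance using statistics from the training split, roughly as in the sketch below; the raw_train_data and raw_test_data names are illustrative, and only normed_train_data matches the code above.)

# Illustrative sketch of per-feature standardisation (zero mean, unit variance),
# computed on the training split and reused for the test split.
mean = raw_train_data.mean(axis=0)
std = raw_train_data.std(axis=0)

normed_train_data = (raw_train_data - mean) / std
normed_test_data = (raw_test_data - mean) / std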

If anything obvious jumps out to anyone, please let me know. Thank you!

question from: https://stackoverflow.com/questions/65921877/tensorflow-neural-network-regression-problem

1 Reply


If you are just looking for the model to converge in fewer epochs, I would recommend using exponential learning rate decay.

The learning rate is reduced by a fixed factor (decay_rate) every decay_steps optimizer steps, so it follows initial_learning_rate * decay_rate^(step / decay_steps), with the exponent floored to an integer when staircase=True. This lets you benefit from fast early convergence while still maintaining good final model performance.

Using TensorFlow this is straightforward to implement, as in the following example.

import tensorflow as tf

initial_learning_rate = 0.1

# Multiply the learning rate by 0.96 every 100,000 optimizer steps
# (staircase=True makes the drop discrete rather than continuous).
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=100000,
    decay_rate=0.96,
    staircase=True)

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
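The loss and metric in that snippet come from a classification example; for the regression model in the question, the same schedule can be plugged into the existing MAE setup, for instance (an illustrative sketch; Adam is chosen arbitrarily and SGD works just as well):

# Illustrative adaptation to the regression model from the question:
# keep the MAE loss and attach the decaying learning rate to the optimizer.
# decay_steps should be chosen relative to the number of batches per epoch,
# otherwise the rate barely changes over a short training run.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
              loss="mae")

history = model.fit(
    normed_train_data, train_labels,
    epochs=100, validation_split=0.2, verbose=0)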

Here is a link to more information about exponential decay in TensorFlow's documentation: tf.keras.optimizers.schedules.ExponentialDecay

