Why do the weights only seem to work during training?


Question

After calling the fit function I can see the model converging during training, but when I then call the evaluate method, it behaves as if the model was never fitted. The clearest example is below, where I use the same generator for both training and validation and still get different results.

import tensorflow as tf
from tensorflow.keras.callbacks import ModelCheckpoint

from ImageGenerator import ImageGenerator

if __name__ == "__main__":

    batch_size=64

    train_gen = ImageGenerator('synthetic3/train/open/*.png', 'synthetic3/train/closed/*.png', batch_size=batch_size)

    model = tf.keras.applications.mobilenet_v2.MobileNetV2(weights=None, classes=2, input_shape=(256, 256, 3))

    model.compile(optimizer='adam', 
                loss=tf.keras.losses.CategoricalCrossentropy(),
                metrics=['accuracy'])

    # Deliberately reuse the training generator as the validation data
    history = model.fit(
        train_gen,
        validation_data=train_gen,
        epochs=5,
        verbose=1
    )

    # Evaluating on the very same data behaves as if the model were untrained
    model.evaluate(train_gen)

Results:

Epoch 1/5
19/19 [==============================] - 11s 600ms/step - loss: 0.7707 - accuracy: 0.5016 - val_loss: 0.6932 - val_accuracy: 0.5016
Epoch 2/5
19/19 [==============================] - 10s 533ms/step - loss: 0.6991 - accuracy: 0.5855 - val_loss: 0.6935 - val_accuracy: 0.4975
Epoch 3/5
19/19 [==============================] - 10s 509ms/step - loss: 0.6213 - accuracy: 0.6637 - val_loss: 0.6932 - val_accuracy: 0.4992
Epoch 4/5
19/19 [==============================] - 10s 514ms/step - loss: 0.4407 - accuracy: 0.8158 - val_loss: 0.6934 - val_accuracy: 0.5008
Epoch 5/5
19/19 [==============================] - 10s 504ms/step - loss: 0.3200 - accuracy: 0.8643 - val_loss: 0.6949 - val_accuracy: 0.5000
19/19 [==============================] - 3s 159ms/step - loss: 0.6953 - accuracy: 0.4967

This is problematic because even saving the weights with a checkpoint callback doesn't help if the model never actually ends up fitted.
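
For reference, a minimal sketch of what saving and restoring the weights looks like here with ModelCheckpoint (the file name and options are illustrative assumptions, not taken from the original script):

# Hypothetical checkpoint setup; the path and options are assumptions for illustration
checkpoint = ModelCheckpoint('weights.h5', save_weights_only=True)

model.fit(
    train_gen,
    validation_data=train_gen,
    epochs=5,
    callbacks=[checkpoint],
    verbose=1
)

# Even after reloading the saved weights, evaluate() shows the same behaviour reported above
model.load_weights('weights.h5')
model.evaluate(train_gen)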

keras machine-learning python tensorflow
2021-11-24 04:34:14

The evaluate() function expects a validation dataset as input, to evaluate a model that has already been trained.

It looks like you are using the training dataset (train_gen) as validation_data and also passing that same dataset to model.evaluate().
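
A minimal sketch of what that would normally look like with a separate held-out split (the synthetic3/val/... paths below are assumptions for illustration, not taken from the question):

# Hypothetical held-out validation set; the val/ paths are assumed for illustration
val_gen = ImageGenerator('synthetic3/val/open/*.png', 'synthetic3/val/closed/*.png', batch_size=batch_size)

model.fit(train_gen, validation_data=val_gen, epochs=5, verbose=1)
model.evaluate(val_gen)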

2021-11-24 11:43:27

Yes, I did that on purpose, to show that while the training accuracy improves, the validation accuracy does not, even on the very same dataset.
ac4824

Best answer


Hi everyone, after many days of struggling I finally found the solution to this problem. It is caused by the BatchNormalization layers in the model: during training they normalize with per-batch statistics, but evaluate() uses the moving averages, which with the default momentum of 0.99 barely move over a short training run. The momentum parameter needs to be changed according to your batch size if you plan to train on a custom dataset.

# Lower the BatchNormalization momentum (default 0.99) so the moving statistics adapt faster
for layer in model.layers:
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        # Batch renormalization (renorm=True at construction) can also help with small batch sizes
        layer.momentum = new_momentum
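
As a usage note, this loop is typically applied right after the model is built, before compiling and fitting; a short sketch with an assumed momentum of 0.9 (the appropriate value depends on your batch size and number of training steps):

model = tf.keras.applications.mobilenet_v2.MobileNetV2(weights=None, classes=2, input_shape=(256, 256, 3))

# 0.9 is an assumed example value; tune it for your batch size and number of training steps
for layer in model.layers:
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        layer.momentum = 0.9

model.compile(optimizer='adam',
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])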

Source: https://github.com/tensorflow/tensorflow/issues/36065

2021-12-10 04:30:31
