
tensorflow - indices = 2 is not in [0, 1)

I'm working on a seq2sql project and I have successfully built a model, but I get an error during training. I'm not using any Keras embedding layer.

M=13 #Question Length
d=40 #Dimension of the LSTM
C=12 #number of table Columns 

batch_size=9
inputs1=Input(shape=(M,100),name='question_token')
Hq=Bidirectional(LSTM(d,return_sequences=True),name='QuestionENC')(inputs1) #this is Hq; its shape is (num_samples,13,80)

inputs2=Input(shape=(C,3,100),name='col_token')
col_lstm_layer=Bidirectional(LSTM(d,return_sequences=False),name='ColENC')

def hidd(te):
    t=tf.Variable(initial_value=1,dtype=tf.int32)

    for i in range(batch_size):  
        t=tf.assign(t,i)
        Z = tf.nn.embedding_lookup(te, t)
        print(col_lstm_layer(Z))
        h=tf.reshape(col_lstm_layer(Z),[1,C,d*2])
        if i==0:
#             cols_last_hidden=tf.Variable(initial_value=h)
            cols_last_hidden=tf.stack(h)#this is because it gives an error if we use tf.Variable here
        else:
            cols_last_hidden=tf.concat([cols_last_hidden,h],0)#shape is (num_samples,num_col,80); 80 is the final hidden state of each column
    return cols_last_hidden

cols_last_hidden=Lambda(hidd)(inputs2)

Hq=Dense(d*2,name='QuestionLastEncode')(Hq)

I=tf.Variable(initial_value=1,dtype=tf.int32)
J=tf.Variable(initial_value=1,dtype=tf.int32)

K=1

def get_col_att(tensors):
    global K,all_col_attention
    if K:
        t=tf.Variable(initial_value=1,dtype=tf.int32)

        for i in range(batch_size):
            t=tf.assign(t,i)
            x = tf.nn.embedding_lookup(tensors[0], t)
    #         print("tensors[1]:",tensors[1])
            y = tf.nn.embedding_lookup(tensors[1], t)
    #         print("x shape",x.shape,"y shape",y.shape)
            y=tf.transpose(y)
#             print("x shape",x.shape,"y",y.shape)
            Ecol=tf.reshape(tf.transpose(tf.tensordot(x,y,axes=1)),[1,C,M])

            if i==0: 
#                 all_col_attention=tf.Variable(initial_value=Ecol,name=""+i)
                all_col_attention=tf.stack(Ecol)
            else:
                all_col_attention=tf.concat([all_col_attention,Ecol],0)

    K=0
    print("all_col_attention",all_col_attention)
    return all_col_attention

total_alpha_sel_lambda=Lambda(get_col_att,name="Alpha")([Hq,cols_last_hidden])   
total_alpha_sel=Dense(13,activation="softmax")(total_alpha_sel_lambda)
# print("Hq",Hq," total_alpha_sel_lambda shape",total_alpha_sel_lambda," total_alpha_sel shape",total_alpha_sel.shape)
def get_EQcol(tensors): 
    global K
    if K:
        t=tf.Variable(initial_value=1,dtype=tf.int32)
        global all_Eqcol

        for i in range(batch_size):
            t=tf.assign(t,i)
            x = tf.nn.embedding_lookup(tensors[0], t)
            y = tf.nn.embedding_lookup(tensors[1], t)
            Eqcol=tf.reshape(tf.tensordot(x,y,axes=1),[1,C,d*2])

            if i==0:
#                 all_Eqcol=tf.Variable(initial_value=Eqcol,name=""+i)
                all_Eqcol=tf.stack(Eqcol)
            else:
                all_Eqcol=tf.concat([all_Eqcol,Eqcol],0)

    K=0
    print("all_Eqcol",all_Eqcol)
    return all_Eqcol
K=1
EQcol=Lambda(get_EQcol,name='EQcol')([total_alpha_sel,Hq])#total_alpha_sel(12x13) Hq(13xd*2)
EQcol=Dropout(.2)(EQcol)

L1=Dense(d*2,name='L1')(cols_last_hidden)
L2=Dense(d*2,name='L2')(EQcol)
L1_plus_L2=Add()([L1,L2])
pre=Flatten()(L1_plus_L2)
Psel=Dense(12,activation="softmax")(pre)

model=Model(inputs=[inputs1,inputs2],outputs=Psel)
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy'])
model.summary()

earlyStopping=EarlyStopping(monitor='val_loss', patience=7, verbose=0, mode='auto')

history=model.fit([Equestion,Col_Embeddings],y_train,epochs=50,validation_split=.1,shuffle=False,callbacks=[earlyStopping],batch_size=batch_size)

The shapes of Equestion, Col_Embeddings, and y_train are (10, 12, 3, 100), (10, 13, 100), and (10, 12).

I searched for this error, but in every case I found, an embedding layer had been used incorrectly. Here I get the error even though I'm not using one.

indices = 2 is not in [0, 1)
[[{{node lambda_3/embedding_lookup_2}} = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@col_token_2"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_col_token_2_0_1, lambda_3/Assign_2, lambda_3/embedding_lookup_2/axis)]]
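
To make the message itself concrete: it comes from the GatherV2 node that tf.nn.embedding_lookup compiles to, and it means the requested index lies outside the first dimension of the input, i.e. [0, 1) means that dimension has size 1 while index 2 was asked for. A minimal standalone sketch (TF 1.x, hypothetical shapes, not taken from the model above) that triggers the same complaint:

import numpy as np
import tensorflow as tf

params = tf.constant(np.zeros((1, 12, 3, 100), dtype=np.float32))  # first dimension has size 1
idx = tf.constant(2, dtype=tf.int32)                               # 2 is outside the valid range [0, 1)
lookup = tf.nn.embedding_lookup(params, idx)                       # compiles to a GatherV2 node

with tf.Session() as sess:
    sess.run(lookup)  # InvalidArgumentError: indices = 2 is not in [0, 1)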


1 Reply


The problem here is that the batch size is defined at the graph level. I used batch_size=9 when building the graph, and with validation_split=.1 on the full set of 10 samples, training does get a batch of 9. But that split leaves only one sample for validation, because 10 * .1 is 1.

A batch of size 1 cannot be passed to a graph that expects a batch of size 9; that is why this error appears.

As for the solution, I set batch_size=1 and then it works fine; I also got good accuracy using batch_size=1.
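
A minimal sketch of that fix, reusing the names from the question (Equestion, Col_Embeddings, y_train, earlyStopping) and assuming the model-building code is re-run unchanged with the new value. The key point is that batch_size must be set to 1 before the graph is built, since the Python loops in the Lambda layers bake it into the graph, and then passed to fit as well:

batch_size = 1   # set before building the model: hidd/get_col_att/get_EQcol loop over range(batch_size)

# ... rebuild the model exactly as in the question ...

history = model.fit([Equestion, Col_Embeddings], y_train,
                    epochs=50, validation_split=.1, shuffle=False,
                    callbacks=[earlyStopping], batch_size=batch_size)

With 10 samples and validation_split=.1, training sees 9 samples and validation sees 1; with a batch size of 1, both splits now produce batches the graph accepts.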

Hope this will help someone.

Cheers ..

