Based on the data you added, the fastest way to solve this is to merge the two DataFrames, check where the NaNs appear (they mark the keys that were not found), and then filter those rows out.
Here's how to do it:
import pandas as pd

data1 = {'userid': [1, 2, 5, 5, 7, 10, 10, 10, 15, 15],
         'checkinid': [100, 120, 90, 95, 100, 130, 90, 80, 200, 120]}
data2 = {'checkinid': [100, 120, 90, 95],
         'latitude': [-90, -92, 48, 52],
         'longitude': [42, 54, 51, -27]}
expected_output = {'userid': [1, 2, 5, 5, 7, 10, 15],
                   'checkinid': [100, 120, 90, 95, 100, 90, 120]}
# Create df1
df1 = pd.DataFrame(data1)
df1
# Create df2
df2 = pd.DataFrame(data2)
df2
# Merge both DataFrames on the key checkinid
merged_df = df1.merge(df2, how='left', on='checkinid')
merged_df
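As a side note, `merge` also accepts `indicator=True`, which adds a `_merge` column marking each row as `'both'` or `'left_only'`; filtering on that column is another way to keep only the matched keys. A minimal sketch using the same sample data:

```python
import pandas as pd

df1 = pd.DataFrame({'userid': [1, 2, 5, 5, 7, 10, 10, 10, 15, 15],
                    'checkinid': [100, 120, 90, 95, 100, 130, 90, 80, 200, 120]})
df2 = pd.DataFrame({'checkinid': [100, 120, 90, 95],
                    'latitude': [-90, -92, 48, 52],
                    'longitude': [42, 54, 51, -27]})

# indicator=True adds a '_merge' column: 'both' for matched rows,
# 'left_only' for checkinids missing from df2
merged = df1.merge(df2, how='left', on='checkinid', indicator=True)
matched = merged.loc[merged['_merge'] == 'both', ['userid', 'checkinid']]
```

This avoids relying on NaNs, which matters if df2's own columns could legitimately contain NaN values.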
# Rows where the merge produced NaNs correspond to checkinids missing from df2;
# drop them from the original DataFrame (the indexes align after the left merge)
df1[~merged_df.isna().any(axis=1)]
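If you only need to filter df1 and don't need the latitude/longitude columns from df2, the merge isn't strictly necessary; `Series.isin` gives the same result in one step. A sketch with the same sample data:

```python
import pandas as pd

df1 = pd.DataFrame({'userid': [1, 2, 5, 5, 7, 10, 10, 10, 15, 15],
                    'checkinid': [100, 120, 90, 95, 100, 130, 90, 80, 200, 120]})
df2 = pd.DataFrame({'checkinid': [100, 120, 90, 95],
                    'latitude': [-90, -92, 48, 52],
                    'longitude': [42, 54, 51, -27]})

# Keep only the rows whose checkinid appears in df2
result = df1[df1['checkinid'].isin(df2['checkinid'])]
```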