You should post new plans by editing your question rather than putting them in the comments; that way they can be formatted correctly.
    Buckets: 16384  Batches: 4096  Memory Usage: 842kB
Your work_mem seems to be 1MB. That is low on any modern system, and especially low for this type of query. Because of the low memory, each table is divided into 4096 batches and processed one batch at a time. For a read-only statement that should not be too bad, because all the IO is done sequentially (I have no idea why it took 3 hours though; that does seem pretty long, and I can only guess that your hardware is really quite horrible). But for an UPDATE this is completely devastating, because it can't apply the updates to the batch files; it has to update the original table. It is essentially making 4096 passes over the table being updated.
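As a rough illustration (the 256MB figure below is only a placeholder, not a tuned recommendation; pick something your server's RAM can actually spare), you can raise work_mem for the current session only and see whether the batch count drops:

    SHOW work_mem;             -- check what the session is currently using
    SET work_mem = '256MB';    -- session-local; does not change postgresql.conf
    -- then re-run EXPLAIN (ANALYZE, BUFFERS) on the SELECT and look for
    -- "Batches: 1" in the Hash node, meaning the whole hash table fit in memory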
Increasing work_mem should make both the select and the update faster, but the update may still be unusably slow. Making a new table from the select and then replacing the old table with the new one may be the best option.
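Here is a minimal sketch of that swap, using a hypothetical target table called mytable; the SELECT is just a stand-in for whatever join produces the corrected rows, and you would also need to recreate indexes, constraints, triggers, and grants on the new table:

    BEGIN;
    CREATE TABLE mytable_new AS
        SELECT /* your join here, producing the already-corrected rows */ ...;
    ALTER TABLE mytable RENAME TO mytable_old;
    ALTER TABLE mytable_new RENAME TO mytable;
    COMMIT;
    -- once you have verified the result:
    -- DROP TABLE mytable_old;

Writing a fresh table also avoids the bloat a full-table UPDATE would leave behind, since UPDATE writes a new version of every modified row.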
If you run set enable_hashjoin=off; in your current session, it will force the planner to use some other join method. It might be interesting to see what that method is, but there is not much reason to think it would be hugely faster.
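For example (plain EXPLAIN, without ANALYZE, shows the chosen plan without actually running the three-hour query; reset the setting afterwards so it doesn't affect the rest of the session):

    SET enable_hashjoin = off;
    EXPLAIN SELECT ...;        -- placeholder for your query; shows which join method the planner falls back to
    RESET enable_hashjoin;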