Each process spawned by the multiprocessing module is in a separate address space. All of the physical and virtual memory the original process had is, at least logically, independent of the new processes once they are created, but initially each new process is an exact duplicate (well, see the footnote) of the original. Thus, each will have the same virtual size (16.7 GB) as the original.
Actual underlying physical pages are shared as much as possible, using "copy-on-write". As the various copies run and make changes to their virtual memory, the kernel copies the underlying physical page as needed. Memory that is never written to can be shared between all the copies. So even though each process appears to be chewing up a lot of RAM, they aren't, really. If you write to most of it, though (i.e., if each separate process changes most of the 16 GB of data), then they will all have separate copies and use much more physical RAM.
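Here is a minimal sketch of that behaviour, assuming a Unix-like system where the "fork" start method is available: the child starts out seeing the parent's data without any explicit copying, and anything it writes stays in its own address space.

```python
import multiprocessing as mp

def child(data):
    # The child sees the parent's data; the underlying pages are shared
    # copy-on-write rather than duplicated up front.
    print("child sees:", data[:3])
    data[0] = -1          # writing makes the affected memory private to the child
    print("child after write:", data[:3])

if __name__ == "__main__":
    mp.set_start_method("fork")       # duplicate-then-copy-on-write behaviour
    big = list(range(10_000_000))     # sizeable data living in the parent
    p = mp.Process(target=child, args=(big,))
    p.start()
    p.join()
    print("parent still sees:", big[:3])   # unchanged; the child's write was private
```

The parent's last line still prints the original values: the child's assignment never reaches the original process.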
The multiprocessing module does offer some methods of sharing data (see the "shared memory" section in http://docs.python.org/library/multiprocessing.html) if you want the processes to share modifications (but then think about how the locking works; see the documentation).
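For instance, a sketch using multiprocessing.Value, one of the shared-memory types from that section of the docs (the worker counts and names here are just illustrative): every process updates the same underlying memory, and the value's built-in lock guards each read-modify-write.

```python
import multiprocessing as mp

def bump(counter, n):
    for _ in range(n):
        with counter.get_lock():       # serialize the read-modify-write
            counter.value += 1

if __name__ == "__main__":
    counter = mp.Value("i", 0)         # a shared C int with an internal lock
    workers = [mp.Process(target=bump, args=(counter, 10_000)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)               # 40000: all processes updated the same memory
```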
footnote: There's one tiny difference between the original and the clone, after a fork or clone system call: the original gets back the ID of the clone, and the clone gets back the number zero.
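A sketch of that difference using os.fork directly (POSIX only):

```python
import os

pid = os.fork()
if pid == 0:
    print("in the clone: os.fork() returned", pid)     # zero
    os._exit(0)                                         # leave without running the parent's code
else:
    print("in the original: os.fork() returned", pid)  # the clone's process ID
    os.waitpid(pid, 0)                                  # reap the clone
```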