Yes, this might be the third time you see this code, because I asked two other questions about it (this and this).
The code is fairly simple:
#include <vector>
int main() {
    std::vector<int> v;
}
Then I build and run it with Valgrind on Linux:
g++ test.cc && valgrind ./a.out
==8511== Memcheck, a memory error detector
...
==8511== HEAP SUMMARY:
==8511== in use at exit: 72,704 bytes in 1 blocks
==8511== total heap usage: 1 allocs, 0 frees, 72,704 bytes allocated
==8511==
==8511== LEAK SUMMARY:
==8511== definitely lost: 0 bytes in 0 blocks
==8511== indirectly lost: 0 bytes in 0 blocks
==8511== possibly lost: 0 bytes in 0 blocks
==8511== still reachable: 72,704 bytes in 1 blocks
==8511== suppressed: 0 bytes in 0 blocks
...
==8511== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Here, Valgrind reports no memory leak, even though there is 1 alloc and 0 frees.
The answer here points out that the allocator used by the C++ standard library doesn't necessarily return the memory to the OS - it might keep it in an internal cache.
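To see what that caching can look like in practice, here is a small experiment of my own (not from that answer): allocate a block, free it, then allocate the same size again. On glibc the second request is usually served from the allocator's internal free list, so the same address tends to come back, suggesting the memory never went back to the OS in between:

#include <cstdio>
#include <cstdlib>

int main() {
    // Allocate and free a block; glibc's malloc typically keeps this memory
    // on an internal free list instead of returning it to the OS right away.
    void* first = std::malloc(1024);
    std::printf("first  allocation: %p\n", first);
    std::free(first);

    // A second request of the same size is usually satisfied from that
    // cached block, so the printed address is often identical to the first.
    void* second = std::malloc(1024);
    std::printf("second allocation: %p\n", second);
    std::free(second);
}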
My questions are:
1) Why keep it in an internal cache? If it is for speed, how is it faster? Yes, the OS needs to maintain a data structure to keep track of memory allocations, but the maintainer of this cache also needs to do so (a minimal sketch of what I imagine such a cache looks like follows after question 2).
2) How is this implemented? Since my program a.out has already terminated, there is no other process maintaining this memory cache - or is there one?
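For reference, this is roughly what I imagine such a cache to be - a hypothetical free list layered on top of malloc, so that repeated allocations of one size skip the underlying allocator (and any system call it might make). The names and the block size are made up purely for illustration:

#include <cstdlib>
#include <vector>

// Hypothetical cache: freed blocks of one fixed size are kept in a list
// inside the process instead of being handed back right away.
struct BlockCache {
    static constexpr std::size_t kBlockSize = 1024;
    std::vector<void*> free_blocks;   // the "data structure" the cache maintains

    void* allocate() {
        if (!free_blocks.empty()) {        // fast path: reuse a cached block,
            void* p = free_blocks.back();  // no call into the underlying allocator
            free_blocks.pop_back();
            return p;
        }
        return std::malloc(kBlockSize);    // slow path: actually get new memory
    }

    void deallocate(void* p) {
        free_blocks.push_back(p);          // keep it for later instead of freeing
    }

    ~BlockCache() {                        // the cache dies with the process
        for (void* p : free_blocks) std::free(p);
    }
};

int main() {
    BlockCache cache;
    void* a = cache.allocate();   // goes to malloc
    cache.deallocate(a);          // cached, not released
    void* b = cache.allocate();   // reused: same block, no new allocation
    cache.deallocate(b);
}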
Edit: for question (2) - some answers I've seen suggest "the C++ runtime"; what does that mean? If the "C++ runtime" is the C++ library, then the library is just a bunch of machine code sitting on disk; it is not a running process. That machine code is either linked into my a.out (static library, .a) or loaded at runtime (shared object, .so) into the process of a.out.
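To check where that shared-object code actually lives, here is a small Linux-only sketch of my own: it reads /proc/self/maps and prints the lines mentioning libstdc++, which (on a dynamically linked build) shows the runtime library mapped into the address space of the very process that is running it, not into some separate process:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // /proc/self/maps lists every memory mapping of the current process.
    std::ifstream maps("/proc/self/maps");
    std::string line;
    while (std::getline(maps, line)) {
        // Print only the mappings that belong to the C++ standard library.
        if (line.find("libstdc++") != std::string::npos)
            std::cout << line << '\n';
    }
}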