Heap and memory management is a facility provided by your C library (likely glibc). It maintains the heap and returns chunks of memory to you every time you do a malloc(). It doesn't know the heap size limit: every time you request more memory than is available on the heap, it just goes and asks the kernel for more (using either sbrk() or mmap()).
By default, the kernel will almost always give you more memory when asked. This means that malloc() will always return a valid address. It's only when you refer to an allocated page for the first time that the kernel will actually bother to find a page for you. If it finds that it cannot hand you one, it runs the OOM killer, which, according to a measure called badness (which includes your process's and its children's virtual memory sizes, nice level, overall running time, etc.), selects a victim and sends it a SIGKILL. This memory management technique is called overcommit and is used by the kernel when /proc/sys/vm/overcommit_memory is 0 or 1. See overcommit-accounting in the kernel documentation for details.
By writing 2 into /proc/sys/vm/overcommit_memory you can disable overcommit. If you do that, the kernel will actually check whether it has the memory before promising it. This will result in malloc() returning NULL if no more memory is available.
You can also set a limit on the virtual memory a process can allocate with setrlimit() and RLIMIT_AS or with the ulimit -v command. Regardless of the overcommit setting described above, if the process tries to allocate more memory than the limit, the kernel will refuse it and malloc() will return NULL. Note that in modern Linux kernels (including the entire 2.6.x series) the limit on the resident set size (setrlimit() with RLIMIT_RSS or the ulimit -m command) is ineffective.
The session below was run on kernel 2.6.32 with 4GB RAM and 8GB swap.
$ cat bigmem.c
#include <stdlib.h>
#include <stdio.h>

int main() {
    int i = 0;
    for (; i < 13*1024; i++) {
        void* p = malloc(1024*1024);
        if (p == NULL) {
            fprintf(stderr, "malloc() returned NULL on %dth request\n", i);
            return 1;
        }
    }
    printf("Allocated it all\n");
    return 0;
}
$ cc -o bigmem bigmem.c
$ cat /proc/sys/vm/overcommit_memory
0
$ ./bigmem
Allocated it all
$ sudo bash -c "echo 2 > /proc/sys/vm/overcommit_memory"
$ cat /proc/sys/vm/overcommit_memory
2
$ ./bigmem
malloc() returned NULL on 8519th request
$ sudo bash -c "echo 0 > /proc/sys/vm/overcommit_memory"
$ cat /proc/sys/vm/overcommit_memory
0
$ ./bigmem
Allocated it all
$ ulimit -v $(( 1024*1024 ))
$ ./bigmem
malloc() returned NULL on 1026th request
$
In the example above, swapping or an OOM kill could never occur, but things would change significantly if the process actually tried to touch all the memory it allocated.
To answer your question directly: unless you have a virtual memory limit explicitly set with the ulimit -v command, there is no heap size limit other than the machine's physical resources or the logical limit of your address space (relevant on 32-bit systems). Your glibc will keep allocating memory on the heap and will request more and more from the kernel as your heap grows. Eventually you may end up swapping badly if all physical memory is exhausted. Once the swap space is exhausted too, a victim process will be selected by the kernel's OOM killer (using the badness measure described above) and killed.
Note, however, that memory allocation may fail for many more reasons than lack of free memory, fragmentation, or reaching a configured limit. The sbrk() and mmap() calls used by glibc's allocator have their own failure modes, e.g. the program break reaching another, already allocated address (such as shared memory or a page previously mapped with mmap()), or the process's maximum number of memory mappings being exceeded.