failure in memory allocation of more than ~53GB

I’m benchmarking a program at the moment, and I stumbled across a memory-allocation error. I have 64 GB of RAM installed and nothing else is running on the system:

top:
Tasks: 222 total, 1 running, 221 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 64403M total, 2231M used, 62171M free, 1M buffers
Swap: 2053M total, 26M used, 2027M free, 1498M cached
When I start the program, I can tell it how much memory it may use, in megawords (mw). Reproducibly, it prints the following error with 6907 mw or more, but not with 6906 mw or less:

Failure in attempting memory allocation of 6907230002 words (52697 Mbyte)
This error has been generated by the operating system, and may be the result of insufficient system memory or paging space
I don’t really know how to track down the source of the problem. I don’t think the program uses gigabytes more memory than it is given, but I’d like to check that. Or is there a maximum amount of memory a program can allocate in one piece? I’d be happy about any advice you can give me.

Perhaps test it with something else, in case your program itself is doing something crazy:

Just be careful; you may want to use one of the samples that exit when
they get stuck so you don’t hang your system for a bit.
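If you don’t have a test program handy, here is a minimal allocation probe (my own Python sketch, not from this thread) that requests a given number of MiB in one contiguous block, touches the pages so they are really backed, and exits cleanly on failure instead of hanging:

```python
import sys

def try_alloc(mib):
    """Try to allocate one contiguous block of `mib` MiB; True on success."""
    try:
        buf = bytearray(mib * 1024 * 1024)
        for i in range(0, len(buf), 4096):
            buf[i] = 1  # touch one byte per page so it is really backed
        return True
    except MemoryError:
        return False

if __name__ == "__main__":
    mib = int(sys.argv[1]) if len(sys.argv) > 1 else 64
    ok = try_alloc(mib)
    print(f"{mib} MiB: {'OK' if ok else 'allocation failed'}")
    sys.exit(0 if ok else 1)
```

Run it with increasing sizes to find the point where allocation starts failing.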

You could also possibly use something like valgrind, or even strace, to
see how memory requests are really happening.

Good luck.


Hi Sve_n,

Have you had a look at the effective ulimits for the process? -d, -l and -m might be worth a closer look.
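For reference, the same limits can also be read from inside a process; a sketch using Python’s `resource` module (the RLIMIT_* constants are the C-level counterparts of those ulimit flags — note the values here are in bytes, while `ulimit` prints kbytes for most of them):

```python
import resource

# ulimit flag -> matching RLIMIT constant
LIMITS = {
    "-d (data seg size)":     resource.RLIMIT_DATA,
    "-l (max locked memory)": resource.RLIMIT_MEMLOCK,
    "-m (max memory size)":   resource.RLIMIT_RSS,
    "-v (virtual memory)":    resource.RLIMIT_AS,
}

def fmt(v):
    return "unlimited" if v == resource.RLIM_INFINITY else str(v)

for name, rlim in LIMITS.items():
    soft, hard = resource.getrlimit(rlim)  # returns (soft, hard)
    print(f"{name:24s} soft={fmt(soft)} hard={fmt(hard)}")
```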


Thanks a lot, that did it for me. Originally the values were:

ulimit -aS
core file size          (blocks, -c) 1
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 515134
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) 56056480
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 515134
virtual memory          (kbytes, -v) 54441680
file locks                      (-x) unlimited
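As a sanity check (my own arithmetic, assuming 8-byte words, which matches the “52697 Mbyte” in the error message), these numbers line up with the -v limit:

```python
words = 6907230002            # from the error message
bytes_requested = words * 8   # 8-byte words, as "52697 Mbyte" implies
print(bytes_requested // 2**20, "MiB requested")       # 52697

limit_kb = 54441680           # ulimit -v, in kbytes
print(limit_kb // 1024, "MiB virtual memory allowed")  # 53165
# The single allocation would fit, but together with the rest of the
# process's address space it pushes past the soft -v limit.
```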

By setting the virtual memory limit to 63.2 GB, the program runs without complaints. Some questions remain:
What exactly are:

  • max locked memory
  • stack size (I think I know the difference between stack and heap, when programming, but 8MB seems really small…)
  • virtual memory

Why can I set my own limits without su privileges?
And why are the soft limits not simply equal to the hard limits (or at least much closer, leaving only a few MB for the system core)?

Thanks for your answer - I was thinking about writing an allocation program, but didn’t know exactly how, and I only briefly tried the other two tools; I don’t know enough about either of them yet. (For valgrind I read that Memcheck causes the program to use more memory than usual, but I didn’t spend much time finding out how to use it correctly.)

Can I somehow mark this thread as solved…?

[QUOTE=Sve_n;9407]Some questions remain:
What exactly are:

  • max locked memory
  • stack size (I think I know the difference between stack and heap, when programming, but 8MB seems really small…)[/QUOTE]
For a typical program, 8 MB is really sufficient. We run rather complex programs, with many nested function calls, and have not yet hit that limit.

The overall amount of memory (physical RAM plus swap) that can be allocated by the process.

You cannot… at least not above the hard limits.

That’s left to the sysadmin to decide - those settings can give you some “room to expand”, while regular programs won’t be able to grab too many resources.
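You can watch that behaviour from inside a process; a sketch (my own example, again with Python’s `resource` module, using the stack limit because lowering it is harmless while the stack stays small):

```python
import resource

# (soft, hard) for the stack limit (ulimit -s); values are in bytes here.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

# Pick something clearly below the usual 8 MB soft default.
lowered = 4 * 1024 * 1024
if soft != resource.RLIM_INFINITY:
    lowered = min(soft, lowered)

# An unprivileged process may lower its own soft limit...
resource.setrlimit(resource.RLIMIT_STACK, (lowered, hard))

# ...and raise it again, but only up to the hard limit.
resource.setrlimit(resource.RLIMIT_STACK, (soft, hard))

# Pushing the hard limit itself higher is refused for a normal user.
if hard != resource.RLIM_INFINITY:
    try:
        resource.setrlimit(resource.RLIMIT_STACK, (hard, hard * 2))
        print("raised the hard limit (running privileged?)")
    except (ValueError, OSError):
        print("raising the hard limit was refused, as expected")
```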