Linux build with custom config using all RAM (8GB)?
I am trying to compile the mainline Linux kernel with a custom config. This one!
Running on a 64 bit system.
At the last step, when linking the kernel, the build fails because it runs out of memory (the linker is OOM-killed, exit code 137):
[...]
  DESCEND objtool
  INSTALL libsubcmd_headers
  CALL    scripts/checksyscalls.sh
  LD      vmlinux.o
Killed
make: *** [scripts/Makefile.vmlinux_o:61: vmlinux.o] Error 137
make: *** Deleting file 'vmlinux.o'
[...]
ulimit -a says that per-process memory is unlimited.
I have tried:
- make -j4: no difference whatsoever.
- gcc as the compiler instead of clang: same results.
Does anyone have a freaking clue why the compilation eats up so much RAM? It's getting unaffordable to develop Linux.
System "freezes" or OOM kills are often caused by running too many or too large programs and running out of available memory. Use
free to see whether you have swap space, and read
man mkswap, swapon, fstab, and fallocate to learn how to create some. A swap file must not be sparse (no holes), so create it with
dd rather than fallocate. Traditionally, swap space of 1.5 × RAM has been recommended, but YMMV. If you don't plan to hibernate your system, you can get by with less than 1.0 × RAM.
One uses swap space to control what happens when programs have allocated all the real memory and want more. After all releasable cache has been released (some cached blocks are "in use" and cannot be freed), the system enters the out-of-memory state. In that condition, with swap, some task's memory is written to disk and freed for reuse, then read back into memory when the task runs again. Without swap, the dreaded OOM killer (not a real process, but code hard-wired into the kernel) runs and picks a process to kill in order to free memory. The OOM killer is known for making inconvenient choices.
man mkswap fallocate swapon fstab.
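Putting those man pages together, a minimal recipe for adding a swap file looks like the following (the path /swapfile and the 8 GiB size are illustrative choices, not from the original post; run as root):

```shell
# Create a non-sparse 8 GiB file (dd, not fallocate, so there are no holes)
dd if=/dev/zero of=/swapfile bs=1M count=8192 status=progress
chmod 600 /swapfile          # swapon refuses world-readable swap files
mkswap /swapfile             # write the swap signature
swapon /swapfile             # enable it immediately
# To make it permanent, add a line like this to /etc/fstab:
# /swapfile none swap sw 0 0
free -h                      # verify the new swap shows up
```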
It’s getting unaffordable to develop Linux
I am afraid it has always been.
32 GB of RAM is common on kernel devs' desktops.
And yet some of them have started encountering OOMs when building their allyesconfig-ed kernels.
Lucky you… who are apparently not allyesconfig-ing… you should not need more than 32 GB… 😉
On a side note, since your .config contains
CONFIG_HAVE_OBJTOOL=y, you might benefit from the patches submitted as part of the discussion linked above.
Does anyone have a freaking clue why the compilation eats up so much RAM?
You are probably the only one who could tell precisely, after comparing the sizes of the various *.o files you will find in each top-level directory of the kernel source tree (since compilation itself completed successfully; only the final link failed).
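A quick way to do that comparison is to list the largest intermediate objects in the build tree (built-in.a is the per-directory aggregate that the final vmlinux link consumes; run this from the kernel build directory):

```shell
# Largest per-directory aggregates going into the vmlinux link
find . -name built-in.a -exec du -h {} + | sort -rh | head -20
# Largest individual object files
find . -name '*.o' -exec du -h {} + | sort -rh | head -20
```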
From the information you provide (your .config file), I can only venture a few a priori guesses:
A/ Every component of your kernel will be statically linked
(since I notice that all your selected CONFIG_* options are set to "=y").
There is nothing wrong with this per se, since there can be many good reasons for building everything into the kernel, but it will significantly increase the RAM needed to link it all together.
=> You probably should consider building kernel parts as modules wherever possible.
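For tristate options, this can be done with the kernel's own scripts/config helper, or with a plain edit of .config — sketched below with CONFIG_E1000 as a purely illustrative example; re-run make olddefconfig afterwards so Kconfig resolves dependencies:

```shell
# In the kernel source tree; E1000 is just an illustrative option name
./scripts/config --module E1000        # sets CONFIG_E1000=m
# Equivalent plain-text edit:
sed -i 's/^CONFIG_E1000=y$/CONFIG_E1000=m/' .config
make olddefconfig                      # let Kconfig resolve dependencies
```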
B/ A good number of CONFIG_DEBUG_* options appear to be set.
Once again, there is nothing wrong with that per se, but it is likely to significantly increase the RAM needed to link the different parts — all the more since it implies CONFIG_KALLSYMS_*=y.
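To get a feel for how much debug machinery is enabled, a couple of greps over the config suffice (run from the build directory):

```shell
grep -c '^CONFIG_DEBUG.*=y' .config   # count of enabled debug options
grep '^CONFIG_KALLSYMS' .config       # symbol-table options pulled in
```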
On a side note, considering the debugging features selected, together with CONFIG_HZ_100=y, I assume you are not chasing the best possible latency or performance.
=> I would then consider preferring CONFIG_CC_OPTIMIZE_FOR_SIZE.
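In .config terms, that switch looks like the fragment below (then re-run make olddefconfig so Kconfig settles the dependent options); -Os also tends to shrink the object files the final link has to hold in memory:

```
# CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE is not set
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
```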