On a MacBook Pro (M3 chip, 16 GB of RAM, 500 GB SSD, macOS Sequoia 15.7.1), I am running some python3 code in a conda environment that requires a lot of RAM. Sure enough, once physical memory is almost exhausted, swapfiles of about 1 GB each start being created, which I can see in /System/Volumes/VM. This folder has about 470 GB of available space at the start of the process (visible through Get Info). However, once about 40 swapfiles have been created, for a total of about 40 GB of virtual memory occupied (and thus still plenty of available space in VM), zsh kills the python process responsible for the RAM usage (notably, it does not kill another python process that uses only about 100 MB of RAM). The message "zsh: killed" appears in the tmux pane where the process's logging is printed.
All the documentation I was able to consult says that macOS is designed to use up to all available storage on the startup disk for swapping when physical RAM is insufficient (the startup disk is the one I am using, since I have only one disk, and the available space mentioned above reflects this). Why, then, is the process killed long before the swapping area is exhausted? In contrast, the same process on a Linux machine (with a basic python venv there) just keeps swapping and never gets killed until the swap area is exhausted.
One last note: I do not have administrator rights on this device, so I could not run dmesg to retrieve more precise information; I can only check with df -h how the swap area grows little by little (see the commands below). My employer's IT team confirmed that they do not interfere with memory usage on managed profiles, so macOS is just doing its thing.
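For reference, these are the commands I use to watch the swap grow from the terminal; none of them seem to require elevated privileges on my machine, and the paths match what I see in Finder:

# Report total, used and free swap managed by the VM subsystem
sysctl vm.swapusage

# List the individual swapfiles (about 1 GB each in my case)
ls -lh /System/Volumes/VM

# Show the available space on the VM volume
df -h /System/Volumes/VM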
Thanks for any insight you can share on this issue. Is it a known bug (perhaps related to conda/python environments), or is it expected behaviour? Is there a way to keep the process from being killed?
Just adding a quick follow-up, in case you have some other ideas: I tested many values of vm_compression_limit, from 0 up to 10^12, but the point at which the VMM kills the process is not affected; once the swap size reaches approximately 44 GB, the process gets killed.
I had a chance to play with this today and I think you're just setting it wrong. You need to set this as a boot argument, so the nvram command looks like this:
nvram boot-args="debug=<existing value> vm_compression_limit=4000000000"
With that configuration, I got to ~130GB of memory usage and ~100GB of swap before the process was terminated. You can use "nvram -p" to print the full list of firmware variables, find your existing "boot-args" value, then insert it back into the command above to preserve it.
Note: it's also possible that boot-args doesn't exist at all, or you may simply choose to overwrite the existing value.
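To spell out the sequence (the debug=0x144 value below is only a placeholder for whatever your existing boot-args contains; setting firmware variables requires root, and the change only takes effect after a reboot):

# Print the current firmware variables and note any existing boot-args value
nvram -p | grep boot-args

# Set the new value, preserving whatever was already there
# (replace debug=0x144 with the value reported above, or drop it if boot-args was empty)
sudo nvram boot-args="debug=0x144 vm_compression_limit=4000000000"

After rebooting, you can re-run "nvram -p" to confirm the value stuck.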
__
Kevin Elliott
DTS Engineer, CoreOS/Hardware