Message-ID: <y2kf16bc8301004200640r9485da73p920a2f5c3e2d949e@mail.gmail.com>
Date: Tue, 20 Apr 2010 09:40:27 -0400
From: Dave Wright <wrightd@...il.com>
To: linux-kernel@...r.kernel.org
Subject: Should calculation of vm.overcommit_ratio be changed?

The current calculation of VM overcommit (particularly with the default
vm.overcommit_ratio==50) seems to be a hold-over from the days when we
had more swap than physical memory. For example, 1/2 phys mem + swap
made sense when you had 1GB of memory and 2GB of swap. However, I
recently ran into an issue on a server that had 8GB RAM and 2GB swap:
the OOM killer was getting triggered as VM commit hit 6GB, even though
there was plenty of RAM available. Once I figured out what was going
on, I manually tweaked the ratio to 110%.
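
For anyone checking the arithmetic, here is a rough sketch of where the
6GB comes from. This is just a toy program, not kernel code, and
current_limit() is only my name for the calculation as I read it from
Documentation/vm/overcommit-accounting (ram * overcommit_ratio/100 + swap):

#include <stdio.h>

/* Current limit, roughly: ram * overcommit_ratio / 100 + swap */
static unsigned long long current_limit(unsigned long long ram,
					unsigned long long swap,
					unsigned int ratio)
{
	return ram * ratio / 100 + swap;
}

int main(void)
{
	unsigned long long gb = 1ULL << 30;

	/* my server: 8GB RAM, 2GB swap, default ratio of 50 */
	printf("limit = %lluGB\n", current_limit(8 * gb, 2 * gb, 50) / gb);
	/* prints "limit = 6GB" -- exactly where I started hitting trouble */
	return 0;
}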

It looks like current distro recommendations are still "have as much
swap as you have RAM", in which case the current calculation is fine,
but with SSDs becoming more common as boot drives, I think many users
will end up with less swap than RAM - consider a desktop user who
might have 4GB RAM and 1GB swap. I don't think you can expect desktop
users to understand or tweak overcommit_ratio, but I also don't think
having the distro simply change the default from 50 (to 100 or
something else) would cover all the cases well.

Would it make more sense to have the overcommit formula calculated as:

    max commit = min(swap, ram) * overcommit_ratio/100 + max(swap, ram) ?

When swap >= ram, the formula works exactly the same as it does now, but
when ram >> swap, you are guaranteed to always be able to use your full
RAM (even when swap=0).
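
Here is the same kind of toy sketch for the proposed calculation
(proposed_limit() is again just my name for it; units and ratio handling
as in the earlier example):

#include <stdio.h>

/* Proposed limit: min(swap, ram) * overcommit_ratio / 100 + max(swap, ram) */
static unsigned long long proposed_limit(unsigned long long ram,
					 unsigned long long swap,
					 unsigned int ratio)
{
	unsigned long long lo = swap < ram ? swap : ram;
	unsigned long long hi = swap < ram ? ram : swap;

	return lo * ratio / 100 + hi;
}

int main(void)
{
	unsigned long long gb = 1ULL << 30;

	/* 8GB RAM, 2GB swap, ratio 50: 2*0.5 + 8 = 9GB, full RAM always usable */
	printf("server:  %lluGB\n", proposed_limit(8 * gb, 2 * gb, 50) / gb);
	/* 1GB RAM, 2GB swap, ratio 50: 1*0.5 + 2 = 2.5GB, same as today */
	printf("old box: %.1fGB\n",
	       (double)proposed_limit(1 * gb, 2 * gb, 50) / gb);
	return 0;
}

So the server above would get a 9GB limit instead of 6GB, while a
swap-heavy machine keeps exactly the limit it has today.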
-Dave Wright