Message-ID: <5735AF81.7010803@laposte.net>
Date: Fri, 13 May 2016 12:42:09 +0200
From: Sebastian Frias <sf84@...oste.net>
To: Mason <slash.tmp@...e.fr>, Michal Hocko <mhocko@...nel.org>
CC: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: add config option to select the initial overcommit mode

Hi,
On 05/13/2016 12:18 PM, Mason wrote:
> On 13/05/2016 11:52, Michal Hocko wrote:
>> On Fri 13-05-16 10:44:30, Mason wrote:
>>> On 13/05/2016 10:04, Michal Hocko wrote:
>>>
>>>> On Tue 10-05-16 13:56:30, Sebastian Frias wrote:
>>>> [...]
>>>>> NOTE: I understand that the overcommit mode can be changed dynamically thru
>>>>> sysctl, but on embedded systems, where we know in advance that overcommit
>>>>> will be disabled, there's no reason to postpone such setting.
>>>>
>>>> To be honest I am not particularly happy about yet another config
>>>> option. At least not without a strong reason (the one above doesn't
>>>> sound that way). The config space is really large already.
>>>> So why does a later initialization matter at all? Early userspace shouldn't
>>>> consume too much address space to blow up later, no?
>>>
>>> One thing I'm not quite clear on is: why was the default set
>>> to over-commit on?
>>
>> Because many applications simply rely on large and sparsely used address
>> space, I guess.
>
> What kind of applications are we talking about here?
>
> Server apps? Client apps? Supercomputer apps?
>
> I heard some HPC software uses large sparse matrices, but is it a common
> idiom to request large allocations, only to use a fraction of them?
>
Let's say there are specific applications that require overcommit.
Shouldn't overcommit be enabled only in those specific circumstances?
In other words, why is overcommit=GUESS the default for everybody?
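(For context: the knob in question is the vm.overcommit_memory sysctl, where 0 is the heuristic/GUESS mode, 1 always overcommits and 2 disables overcommit. Below is a minimal sketch, not part of the patch, of flipping it to "never" from userspace, i.e. the equivalent of "sysctl vm.overcommit_memory=2"; this is roughly what an embedded init script has to do today, and what the proposed config option would make the boot-time default.)

/* Minimal sketch, not part of the patch: switch the running system to
 * "never overcommit". Equivalent to "echo 2 > /proc/sys/vm/overcommit_memory"
 * or "sysctl vm.overcommit_memory=2". Modes: 0 = heuristic (GUESS, the
 * current default), 1 = always overcommit, 2 = never overcommit.
 * Needs root. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");

    if (!f) {
        perror("fopen /proc/sys/vm/overcommit_memory");
        return 1;
    }
    if (fputs("2\n", f) == EOF || fclose(f) == EOF) {
        perror("write /proc/sys/vm/overcommit_memory");
        return 1;
    }
    return 0;
}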
> If you'll excuse the slight trolling, I'm sure many applications don't
> expect to be randomly zapped by the OOM killer ;-)
>
>> That's why the default is GUESS where we ignore the cumulative
>> charges and simply check the current state and blow up only when
>> the current request is way too large.
>
> I wouldn't call denying a request "blowing up". Application will
> receive NULL, and is supposed to handle it gracefully.
>
> "Blowing up" is receiving SIGKILL because another process happened
> to allocate too much memory.
I agree.
Furthermore, "blow up only when the current request is way too large" is more complex than it sounds: because of the delay between the allocation and the moment the system realises it cannot honour the promise, there must be a lot of code/heuristics involved there.
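For instance, a toy program like the one below (just a sketch to illustrate the point, with arbitrary sizes assumed to exceed RAM + swap in total) shows how the failure point moves: under the default heuristic mode every small request passes the per-request check and malloc() keeps succeeding, so the failure only shows up later as an OOM kill while the pages are being touched; with overcommit disabled (mode 2) malloc() returns NULL as soon as the commit limit is hit, and the program can degrade gracefully as Mason describes.

/* Toy illustration only, not part of the patch. CHUNK/NCHUNKS are arbitrary;
 * the total (64 GiB here) is assumed to exceed RAM + swap.
 *
 * With vm.overcommit_memory=0 (heuristic) each 1 MiB request passes the
 * per-request check, so malloc() keeps succeeding far beyond what can be
 * backed; the process is then typically OOM-killed in the memset() loop.
 * With vm.overcommit_memory=2, malloc() starts returning NULL once the
 * commit limit is reached and the "graceful" branch below runs instead. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK   (1UL << 20)      /* 1 MiB per allocation */
#define NCHUNKS (64UL * 1024)    /* 64 GiB total */

int main(void)
{
    static char *chunk[NCHUNKS];
    size_t i, n;

    for (n = 0; n < NCHUNKS; n++) {
        chunk[n] = malloc(CHUNK);
        if (!chunk[n]) {
            /* Overcommit disabled: we are told "no" up front. */
            fprintf(stderr, "denied after %zu MiB, degrading gracefully\n", n);
            break;
        }
    }

    for (i = 0; i < n; i++)
        memset(chunk[i], 0xff, CHUNK);   /* overcommit: OOM kill can happen here */

    printf("touched %zu MiB\n", n);
    for (i = 0; i < n; i++)
        free(chunk[i]);
    return 0;
}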
Anyway, it'd be nice to understand the real history behind overcommit (as I stated earlier, my understanding is that in the early days there was no overcommit) and why it is enabled by default if only specific applications require it.
Best regards,
Sebastian