Message-ID: <5edcfeb8-4f53-0fe6-1e5b-c1e485f91d0d@suse.cz>
Date: Thu, 28 Feb 2019 09:06:54 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Oscar Salvador <osalvador@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-api@...r.kernel.org, hughd@...gle.com, kirill@...temov.name,
joel@...lfernandes.org, jglisse@...hat.com,
yang.shi@...ux.alibaba.com, mgorman@...hsingularity.net
Subject: Re: [PATCH] mm,mremap: Bail out earlier in mremap_to under map
pressure
On 2/27/19 10:32 PM, Oscar Salvador wrote:
> On Tue, Feb 26, 2019 at 02:04:28PM -0800, Andrew Morton wrote:
>> How is this going to affect existing userspace which is aware of the
>> current behaviour?
>
> Well, the current behavior is not really predictable.
> Our customer was "surprised" that the call to mremap() failed, but the regions
> were unmapped nevertheless.
> They found out the hard way, getting a segfault when they tried to write to those
> regions during cleanup.
>
> As I said in the changelog, false positives are possible, because we might get
> rid of several VMAs when unmapping, but I do not expect existing userspace
> applications to start failing.
> Should that be the case, we can revert the patch; it does not add much churn.
Hopefully the only program that would start failing would be an LTP test
exercising the current behavior near the limit (if such a test exists). And
that can be adjusted.
>> And how does it affect your existing cleanup code, come to that? Does
>> it work as well or better after this change?
>
> I guess the customer can trust more reliably that the maps were left untouched.
> I still have my reservations, though.
>
> We can still get as far as move_vma(), where copy_vma() can fail with -ENOMEM.
> (Or can it not, due to the "too small to fail" allocations?)
>