Message-ID: <6336c469-c946-c300-7392-87052c990266@suse.cz>
Date: Tue, 11 Apr 2017 08:35:02 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Zi Yan <zi.yan@...rutgers.edu>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Rik van Riel <riel@...hat.com>,
Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm, numa: Fix bad pmd by atomically checking for
 pmd_trans_huge when marking page tables prot_numa
On 04/11/2017 12:28 AM, Zi Yan wrote:
> On 10 Apr 2017, at 17:09, Andrew Morton wrote:
>
>> On Mon, 10 Apr 2017 19:07:14 +0100 Mel Gorman <mgorman@...hsingularity.net> wrote:
>>
>>> On Mon, Apr 10, 2017 at 12:49:40PM -0500, Zi Yan wrote:
>>>> On 10 Apr 2017, at 12:20, Mel Gorman wrote:
>>>>
>>>>> On Mon, Apr 10, 2017 at 11:45:08AM -0500, Zi Yan wrote:
>>>>>>> While this could be fixed with heavy locking, it's only necessary to
>>>>>>> make a copy of the PMD on the stack during change_pmd_range and avoid
>>>>>>> races. A new helper is created for this as the check is quite subtle and the
>>>>>>> existing similar helper is not suitable. This passed 154 hours of testing
>>>>>>> (usually triggers between 20 minutes and 24 hours) without detecting bad
>>>>>>> PMDs or corruption. A basic test of an autonuma-intensive workload showed
>>>>>>> no significant change in behaviour.
>>>>>>>
>>>>>>> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
>>>>>>> Cc: stable@...r.kernel.org
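
For anyone following along, the stack-copy idea looks roughly like the
sketch below. This is modelled on pmd_trans_unstable(); the helper name
and the exact set of checks here are illustrative, not necessarily what
the patch adds.

/*
 * Read the PMD once into a local copy and do every check against
 * that copy, so a concurrent clear-then-set transition cannot
 * change the answer between the checks.
 */
static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
{
	pmd_t pmdval = pmd_read_atomic(pmd);

	if (pmd_none(pmdval))
		return 1;
	if (pmd_trans_huge(pmdval))
		return 0;	/* caller handles the huge case under lock */
	if (unlikely(pmd_bad(pmdval))) {
		pmd_clear_bad(pmd);
		return 1;
	}
	return 0;
}
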
>>>>>>
>>>>>> Does this patch fix the same problem fixed by Kirill's patch here?
>>>>>> https://lkml.org/lkml/2017/3/2/347
>>>>>>
>>>>>
>>>>> I don't think so. The race I'm concerned with is due to locks not being
>>>>> held and is in a different path.
>>>>
>>>> I do not agree. Kirill's patch is fixing the same race problem but in
>>>> zap_pmd_range().
>>>>
>>>> The original autoNUMA code first clears the PMD, then sets it to a protnone entry.
>>>> pmd_trans_huge() does not return TRUE because it saw the cleared PMD, but
>>>> pmd_none_or_clear_bad() later saw the protnone entry and reported it as bad.
>>>> Is this the problem you are trying to solve?
>>>>
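
As an interleaving, IIUC that is (a simplified sketch of the pre-fix
change_huge_pmd() racing with the lockless checks in change_pmd_range();
details may differ from the exact source):

/*
 * CPU A: change_huge_pmd()          CPU B: change_pmd_range()
 *
 * entry = pmdp_huge_get_and_clear_notify(mm, addr, pmd);
 *                                   pmd_trans_huge(*pmd) -> false,
 *                                   the PMD is transiently cleared
 * entry = pmd_modify(entry, newprot);
 * set_pmd_at(mm, addr, pmd, entry);
 *                                   pmd_none_or_clear_bad(pmd) now
 *                                   sees the huge protnone entry and
 *                                   reports a "bad pmd"
 */
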
>>>> Kirill's patch will pmdp_invalidate() the PMD entry, which keeps the _PAGE_PSE
>>>> bit, so pmd_trans_huge() will return TRUE. In this case, it also fixes
>>>> your race problem in change_pmd_range().
>>>>
>>>> Let me know if I missed anything.
>>>>
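
For reference, the shape of that fix is roughly the following (a
simplified sketch from my reading of the patch; dirty-bit preservation
and mmu-notifier details are omitted):

	/*
	 * Instead of clearing the PMD and leaving a window where it
	 * is pmd_none(), invalidate it in place. The invalidated
	 * entry keeps _PAGE_PSE, so pmd_trans_huge() stays true for
	 * any concurrent lockless observer.
	 */
	entry = *pmd;
	pmdp_invalidate(vma, addr, pmd);
	entry = pmd_modify(entry, newprot);
	set_pmd_at(mm, addr, pmd, entry);
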
>>>
>>> Ok, now I see. I think you're correct and I withdraw the patch.
>>
>> I have Kirill's
>>
>> thp-reduce-indentation-level-in-change_huge_pmd.patch
>> thp-fix-madv_dontneed-vs-numa-balancing-race.patch
>> mm-drop-unused-pmdp_huge_get_and_clear_notify.patch
>> thp-fix-madv_dontneed-vs-madv_free-race.patch
>> thp-fix-madv_dontneed-vs-madv_free-race-fix.patch
>> thp-fix-madv_dontneed-vs-clear-soft-dirty-race.patch
>>
>> scheduled for 4.12-rc1. It sounds like
>> thp-fix-madv_dontneed-vs-numa-balancing-race.patch and
>> thp-fix-madv_dontneed-vs-madv_free-race.patch need to be boosted to
>> 4.11 and stable?
>
> thp-fix-madv_dontneed-vs-numa-balancing-race.patch is the fix for
> numa balancing problem reported in this thread.
>
> mm-drop-unused-pmdp_huge_get_and_clear_notify.patch,
> thp-fix-madv_dontneed-vs-madv_free-race.patch,
> thp-fix-madv_dontneed-vs-madv_free-race-fix.patch, and
> thp-fix-madv_dontneed-vs-clear-soft-dirty-race.patch
>
> are the fixes for other potential race problems similar to this one.
>
> I think it is better to have all these patches applied.
Yeah, we should get all such fixes to stable IMHO (after review :). It's
not the first time that a fix for MADV_DONTNEED turned out to also fix a
race in "normal" THP operation that doesn't involve such syscalls.
> --
> Best Regards
> Yan Zi
>