Message-Id: <20170120103843.24587-1-vbabka@suse.cz>
Date:   Fri, 20 Jan 2017 11:38:39 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Mel Gorman <mgorman@...hsingularity.net>,
        Michal Hocko <mhocko@...nel.org>,
        Hillf Danton <hillf.zj@...baba-inc.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Vlastimil Babka <vbabka@...e.cz>
Subject: [PATCH v2 0/4] fix premature OOM regression in 4.7+ due to cpuset races

Changes since v1:
- add/remove comments per Michal Hocko and Hillf Danton
- move no_zone: label in patch 3 so we don't miss part of ac initialization

This is v2 of my attempt to fix the recent report based on the LTP cpuset stress
test [1]. The intention is to go to stable 4.9 LTSS with this, as triggering
repeated OOMs is not nice. That's why the patches try not to be too intrusive.

Unfortunately, while investigating I found that modifying the testcase to use
per-VMA policies instead of per-task policies brings the OOMs back, but that
seems to be a much older and harder-to-fix problem. I have posted an RFC [2],
but I believe that fixing the recent regressions has a higher priority.
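
(For contrast, here is a minimal userspace sketch of the two kinds of policy
mentioned above: set_mempolicy() sets a per-task policy, mbind() a per-VMA one.
The node mask and mapping size are made up for illustration and this is not the
actual LTP testcase; build with -lnuma.)

/* Illustrative only: per-task mempolicy (set_mempolicy) vs. per-VMA
 * mempolicy (mbind). Node mask and size are arbitrary. */
#include <numaif.h>
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        unsigned long nodemask = 1UL << 0;      /* bind to node 0 (illustrative) */
        size_t len = 4UL << 20;                 /* 4 MB */

        /* Per-task policy: applies to all future allocations of this task. */
        if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8))
                perror("set_mempolicy");

        /* Per-VMA policy: applies only to this mapping. */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
                return 1;
        if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0))
                perror("mbind");

        memset(buf, 0, len);    /* fault the pages under the VMA policy */
        return 0;
}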

Longer-term we might try to think about how to fix the cpuset mess in a better
and less error-prone way. I was, for example, very surprised to learn that
cpuset updates change not only task->mems_allowed, but also the nodemask of
mempolicies. Until now I expected the nodemask parameter to
alloc_pages_nodemask() to be stable. I wonder why we then treat cpusets
specially in get_page_from_freelist() and distinguish HARDWALL etc., when
there's an unconditional intersection between mempolicy and cpuset. I would
expect the nodemask adjustment to exist to save overhead in g_p_f(), but that
clearly doesn't happen in the current form. So we have both crazy complexity
and overhead, AFAICS.
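
(To make the double filtering concrete, below is a simplified sketch of the
zone walk, modelled on the 4.9-era get_page_from_freelist(). It is an
illustrative fragment of the flow described above, not a verbatim or
compilable excerpt.)

        /*
         * ac->nodemask comes from the mempolicy and has already been
         * rebound by the cpuset update, yet each zone is still checked
         * against the cpuset when ALLOC_CPUSET is set.
         */
        for_next_zone_zonelist_nodemask(zone, z, ac->zonelist,
                                        ac->high_zoneidx, ac->nodemask) {
                if (cpusets_enabled() &&
                    (alloc_flags & ALLOC_CPUSET) &&
                    !__cpuset_zone_allowed(zone, gfp_mask))
                        continue;       /* HARDWALL handling happens inside */

                /* ... watermark checks and rmqueue() ... */
        }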

[1] https://lkml.kernel.org/r/CAFpQJXUq-JuEP=QPidy4p_=FN0rkH5Z-kfB4qBvsf6jMS87Edg@mail.gmail.com
[2] https://lkml.kernel.org/r/7c459f26-13a6-a817-e508-b65b903a8378@suse.cz

Vlastimil Babka (4):
  mm, page_alloc: fix check for NULL preferred_zone
  mm, page_alloc: fix fast-path race with cpuset update or removal
  mm, page_alloc: move cpuset seqcount checking to slowpath
  mm, page_alloc: fix premature OOM when racing with cpuset mems update

 include/linux/mmzone.h |  6 ++++-
 mm/page_alloc.c        | 68 ++++++++++++++++++++++++++++++++++----------------
 2 files changed, 52 insertions(+), 22 deletions(-)

-- 
2.11.0
