Message-ID: <20171122152832.iayefrlxbugphorp@dhcp22.suse.cz>
Date:   Wed, 22 Nov 2017 16:28:32 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     linux-mm@...ck.org
Cc:     Mike Kravetz <mike.kravetz@...cle.com>,
        Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: hugetlb page migration vs. overcommit

Hi,
is there any reason why we enforce the overcommit limit during hugetlb
page migration? It is in the alloc_huge_page_node->__alloc_buddy_huge_page
path. I am wondering whether this is really intentional behavior.
Page migration allocates a page only temporarily, so we should be
able to go over the overcommit limit for the duration of the migration. The
reason I am asking is that hugetlb pages usually tend to be fully utilized
(otherwise the memory would just be wasted and the pool shrunk), so
the migration simply fails, which breaks memory hotplug and other
migration-dependent functionality. That is quite suboptimal. You can
work around it by increasing the overcommit limit.
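For context, this is roughly what the check looks like (condensed from
mm/hugetlb.c from memory, so take the details with a grain of salt):

static struct page *__alloc_buddy_huge_page(struct hstate *h, gfp_t gfp_mask,
		int nid, nodemask_t *nmask)
{
	struct page *page;

	if (hstate_is_gigantic(h))
		return NULL;

	/*
	 * The check in question: once surplus_huge_pages reaches
	 * nr_overcommit_huge_pages the allocation is refused outright,
	 * even when the caller is the migration code and only needs the
	 * page until the source page can be freed.
	 */
	spin_lock(&hugetlb_lock);
	if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) {
		spin_unlock(&hugetlb_lock);
		return NULL;
	}
	h->nr_huge_pages++;
	h->surplus_huge_pages++;
	spin_unlock(&hugetlb_lock);

	page = __hugetlb_alloc_buddy_huge_page(h, gfp_mask, nid, nmask);
	/* on failure the two counters above are decremented again (omitted) */

	return page;
}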

Why don't we simply migrate as long as we are able to allocate the
target hugetlb page? I have a half-baked patch to remove this
restriction; would there be any opposition to doing something like that?
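
For illustration only (this is not the actual patch, and the helper name
is made up), the direction would be to let the migration path allocate
straight from the buddy allocator without touching the surplus/overcommit
accounting:

/*
 * Hypothetical sketch, not the real patch: a migration-only allocator
 * that skips the surplus_huge_pages vs. nr_overcommit_huge_pages check,
 * because the extra page lives only until migration completes and the
 * source page is freed back.
 */
static struct page *alloc_migrate_huge_page_node(struct hstate *h,
		gfp_t gfp_mask, int nid, nodemask_t *nmask)
{
	struct page *page;

	if (hstate_is_gigantic(h))
		return NULL;

	page = __hugetlb_alloc_buddy_huge_page(h, gfp_mask, nid, nmask);
	if (page) {
		/* keep the global counter coherent while the page exists;
		 * the exact accounting is hand-waved in this sketch */
		spin_lock(&hugetlb_lock);
		h->nr_huge_pages++;
		spin_unlock(&hugetlb_lock);
	}

	return page;
}

alloc_huge_page_node() would then take something like this path for
migration targets instead of the overcommit-limited one.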
-- 
Michal Hocko
SUSE Labs
