Date:   Wed, 10 Jun 2020 12:36:14 +0900
From:   Joonsoo Kim <js1304@...il.com>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, kernel-team@....com,
        Vlastimil Babka <vbabka@...e.cz>,
        Christoph Hellwig <hch@...radead.org>,
        Roman Gushchin <guro@...com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v2 06/12] mm/hugetlb: make hugetlb migration target
 allocation APIs CMA aware

On Tue, Jun 9, 2020 at 10:53 PM, Michal Hocko <mhocko@...nel.org> wrote:
>
> On Wed 27-05-20 15:44:57, Joonsoo Kim wrote:
> > From: Joonsoo Kim <iamjoonsoo.kim@....com>
> >
> > There is a user who does not want to use CMA memory for migration. Until
> > now, this has been implemented on the caller side, but that is not optimal
> > since the caller has limited information. This patch implements it on the
> > callee side to get a better result.
>
> I do not follow this changelog and honestly do not see an improvement.
> skip_cma in the alloc_control sound like a hack to me. I can now see

new_non_cma_page() wants to allocate a new page that is not in the
CMA area. new_non_cma_page() implements this by clearing the
__GFP_MOVABLE mask (or never specifying it) in the gfp flags.

hugetlb page allocation has two steps. The first is dequeuing from the pool;
if no page is available in the pool, the second is allocating from the page
allocator.

new_non_cma_page() can control allocation from the page allocator in hugetlb
via the gfp flags. However, dequeuing cannot be controlled this way, so it
skips dequeuing completely. This is why new_non_cma_page() uses
alloc_migrate_huge_page() instead of alloc_huge_page_nodemask().

My patch makes the hugetlb code CMA aware so that new_non_cma_page()
can also benefit from the hugetlb pool.

> why your earlier patch has started to or the given gfp_mask. If anything
> this should be folded here. But even then I do not like a partial
> gfp_mask (__GFP_NOWARN on its own really has GFP_NOWAIT like semantic).

I will not use a partial gfp_mask.

Thanks.
