Message-ID: <d0d09382-c6ea-bf60-efbd-11a57a09263d@linux.alibaba.com>
Date:   Mon, 11 Oct 2021 10:14:01 +0800
From:   Baolin Wang <baolin.wang@...ux.alibaba.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     mike.kravetz@...cle.com, mhocko@...nel.org, guro@...com,
        corbet@....net, yaozhenguo1@...il.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org
Subject: Re: [PATCH] hugetlb: Support node specified when using cma for
 gigantic hugepages



On 2021/10/11 4:55, Andrew Morton wrote:
> On Sun, 10 Oct 2021 13:24:08 +0800 Baolin Wang <baolin.wang@...ux.alibaba.com> wrote:
> 
>> Now the size of the CMA area for gigantic hugepage runtime allocation is
>> balanced across all online nodes, but we also want to specify the size of
>> CMA per node, or only one node in some cases, which is similar to
> 
> Please describe in full detail why "we want to" do this.  In other
> words, what is the benefit to our users?  What are the use-cases, etc?

Sure. On some multi-node systems, each node's memory can be different, so 
allocating the same size of CMA on every node is not suitable for the 
low-memory nodes. Meanwhile, some workloads, like the DPDK case mentioned 
by Zhenguo, only need hugepages on one node.
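
For example, with the per-node format this patch proposes (the exact syntax 
is described in the patch; the sizes here are only illustrative), the CMA 
reservation could be restricted to node 0:

    hugetlb_cma=0:2G

instead of hugetlb_cma=2G, which would spread the reservation across all 
online nodes.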

On the other hand, we have some machines with multiple types of memory, 
like DRAM and PMEM (persistent memory). On such systems, we may want to 
place all the hugepages on the DRAM node, or specify the proportion between 
the DRAM node and the PMEM node, to tune the performance of the workloads.
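
For instance, on a machine where node 0 is DRAM and node 1 is PMEM (node 
numbers and sizes below are hypothetical), the node format would allow 
something like:

    hugetlb_cma=0:8G,1:2G

so that most of the CMA area sits on the DRAM node while a smaller area is 
kept on the PMEM node.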
