Date:   Tue, 11 May 2021 07:48:07 +0000
From:   "Chu,Kaiping" <chukaiping@...du.com>
To:     Khalid Aziz <khalid.aziz@...cle.com>,
        David Rientjes <rientjes@...gle.com>
CC:     "mcgrof@...nel.org" <mcgrof@...nel.org>,
        "keescook@...omium.org" <keescook@...omium.org>,
        "yzaikin@...gle.com" <yzaikin@...gle.com>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "vbabka@...e.cz" <vbabka@...e.cz>,
        "nigupta@...dia.com" <nigupta@...dia.com>,
        "bhe@...hat.com" <bhe@...hat.com>,
        "iamjoonsoo.kim@....com" <iamjoonsoo.kim@....com>,
        "mateusznosek0@...il.com" <mateusznosek0@...il.com>,
        "sh_def@....com" <sh_def@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH v3] mm/compaction:let proactive compaction order configurable



> -----Original Message-----
> From: Khalid Aziz <khalid.aziz@...cle.com>
> Sent: May 7, 2021 5:27
> To: David Rientjes <rientjes@...gle.com>; Chu,Kaiping
> <chukaiping@...du.com>
> Cc: mcgrof@...nel.org; keescook@...omium.org; yzaikin@...gle.com;
> akpm@...ux-foundation.org; vbabka@...e.cz; nigupta@...dia.com;
> bhe@...hat.com; iamjoonsoo.kim@....com; mateusznosek0@...il.com;
> sh_def@....com; linux-kernel@...r.kernel.org; linux-fsdevel@...r.kernel.org;
> linux-mm@...ck.org
> Subject: Re: [PATCH v3] mm/compaction:let proactive compaction order
> configurable
> 
> On 4/25/21 9:15 PM, David Rientjes wrote:
> > On Sun, 25 Apr 2021, chukaiping wrote:
> >
> >> Currently the proactive compaction order is fixed to
> >> COMPACTION_HPAGE_ORDER (9). That is fine on most machines with plenty
> >> of normal 4KB memory, but it is too high for machines with little
> >> normal memory, for example machines where most memory is configured
> >> as 1GB hugetlbfs huge pages. On such machines the maximum order of
> >> free pages is often below 9, and it stays below 9 even after forced
> >> compaction, so proactive compaction is triggered very frequently. On
> >> these machines we only care about orders of 3 or 4. This patch
> >> exports the order to proc so the user can configure it; the default
> >> value is still COMPACTION_HPAGE_ORDER.
> >>
> >
> > As asked in the review of the v1 of the patch, why is this not a
> > userspace policy decision?  If you are interested in order-3 or
> > order-4 fragmentation, for whatever reason, you could periodically
> > check /proc/buddyinfo and manually invoke compaction on the system.
> >
> > In other words, why does this need to live in the kernel?
> >
> 
> I have struggled with this question. Fragmentation and allocation stalls are
> significant issues on large database systems, which also happen to use memory
> in a similar way (90+% of memory is allocated as hugepages), leaving just
> enough memory to run the rest of the userspace processes. I had originally
> proposed a kernel patch to monitor memory usage, do a trend analysis, and
> take proactive action -
> <https://lore.kernel.org/lkml/20190813014012.30232-1-khalid.aziz@oracle.com/>.
> Based upon feedback, I moved the implementation to userspace -
> <https://github.com/oracle/memoptimizer>. Test results across multiple
> workloads have been very good. Results from one of the workloads are in this
> blog - <https://blogs.oracle.com/linux/anticipating-your-memory-needs>. It
> works well from userspace but it has limited ways to influence reclamation and
> compaction. It uses watermark_scale_factor to boost watermarks and cause
> reclamation to kick in earlier and run longer. It uses
> /sys/devices/system/node/node%d/compact to force compaction on the node
> expected to reach a high level of fragmentation soon. Neither of these is
> very efficient from userspace even though they get the job done. Scaling
> the watermark has a longer-lasting impact than temporarily raising the
> scanning priority in balance_pgdat(). I plan to experiment with
> watermark_boost_factor to see if I can
> use it in place of /sys/devices/system/node/node%d/compact and get the
> same results. Doing all of this in the kernel can be more efficient and lessen
> potential negative impact on the system. On the other hand, it is easier to
> fix and update such policies in userspace, although at the cost of having a
> performance-critical component live outside the kernel and thus not being
> active on the system by default.
>
I have studied your memoptimizer over the past few days, and I agree that moving the implementation into the kernel to work together with the current proactive compaction mechanism would be more efficient.
By the way, I am interested in memoptimizer and would like to test it, but how should I evaluate its effectiveness?
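
One concrete way to measure it would be the approach David suggested above: periodically sample /proc/buddyinfo and watch how many free pages remain in blocks of the order we actually care about (order 3 or 4 on these machines), with and without memoptimizer running. Below is a rough userspace sketch, for illustration only; it assumes the usual /proc/buddyinfo layout, i.e. per-order free block counts following the node and zone name.

/*
 * buddy_frag.c - rough sketch, not part of any patch.
 *
 * Parses /proc/buddyinfo and reports, per node/zone, how many free pages
 * sit in blocks of at least the target order. Sampling this periodically
 * (before/after running memoptimizer, or across a workload) gives a simple
 * view of whether order-3/order-4 blocks stay available.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_ORDERS 16	/* more columns than any kernel actually prints */

int main(int argc, char **argv)
{
	int target = (argc > 1) ? atoi(argv[1]) : 3;	/* order we care about */
	FILE *f = fopen("/proc/buddyinfo", "r");
	char line[512];

	if (!f) {
		perror("/proc/buddyinfo");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		int node, n = 0;
		char zone[32];
		unsigned long count, free_pages = 0, suitable_pages = 0;
		char *p, *end;

		if (sscanf(line, "Node %d, zone %31s%n", &node, zone, &n) < 2)
			continue;

		/* remaining columns are free block counts for order 0, 1, 2, ... */
		p = line + n;
		for (int order = 0; order < MAX_ORDERS; order++) {
			count = strtoul(p, &end, 10);
			if (end == p)
				break;
			p = end;
			free_pages += count << order;
			if (order >= target)
				suitable_pages += count << order;
		}

		printf("node %d zone %-8s %9lu free pages, %9lu (%5.1f%%) in order>=%d blocks\n",
		       node, zone, free_pages, suitable_pages,
		       free_pages ? 100.0 * suitable_pages / free_pages : 0.0,
		       target);
	}
	fclose(f);
	return 0;
}

Logging this every few seconds (e.g. ./buddy_frag 4) should show the fragmentation build-up that proactive compaction, or memoptimizer, is supposed to prevent.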
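
For reference, the mechanics of the two userspace knobs described above, raising /proc/sys/vm/watermark_scale_factor and writing to /sys/devices/system/node/node%d/compact, boil down to something like the following sketch. This is an illustration only, not memoptimizer's actual code; the trend analysis that decides when and how much to adjust them is the interesting part and is not shown here.

/*
 * knobs.c - sketch of the two interfaces mentioned above, not
 * memoptimizer's actual code.
 */
#include <stdio.h>

/* Write a small string to a proc/sysfs file; returns 0 on success. */
static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	if (fputs(val, f) == EOF) {
		perror(path);
		fclose(f);
		return -1;
	}
	return fclose(f) ? -1 : 0;
}

/* Widen the watermark gap so reclaim starts earlier; unit is 1/10000. */
static int set_watermark_scale_factor(int factor)
{
	char buf[16];

	snprintf(buf, sizeof(buf), "%d", factor);
	return write_str("/proc/sys/vm/watermark_scale_factor", buf);
}

/* Force compaction of all zones on one NUMA node. */
static int compact_node(int node)
{
	char path[64];

	snprintf(path, sizeof(path), "/sys/devices/system/node/node%d/compact", node);
	return write_str(path, "1");
}

int main(void)
{
	/* Example values only: set the watermark gap to 2% of memory and
	 * compact node 0. Needs root. */
	if (set_watermark_scale_factor(200))
		return 1;
	return compact_node(0) ? 1 : 0;
}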

