Date:   Mon, 10 May 2021 02:10:46 +0000
From:   "Chu,Kaiping" <chukaiping@...du.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
CC:     "mcgrof@...nel.org" <mcgrof@...nel.org>,
        "keescook@...omium.org" <keescook@...omium.org>,
        "yzaikin@...gle.com" <yzaikin@...gle.com>,
        "vbabka@...e.cz" <vbabka@...e.cz>,
        "nigupta@...dia.com" <nigupta@...dia.com>,
        "bhe@...hat.com" <bhe@...hat.com>,
        "khalid.aziz@...cle.com" <khalid.aziz@...cle.com>,
        "iamjoonsoo.kim@....com" <iamjoonsoo.kim@....com>,
        "mateusznosek0@...il.com" <mateusznosek0@...il.com>,
        "sh_def@....com" <sh_def@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH v4] mm/compaction: let proactive compaction order configurable



-----Original Message-----
From: Andrew Morton <akpm@...ux-foundation.org> 
Sent: Monday, May 10, 2021 8:18
To: Chu,Kaiping <chukaiping@...du.com>
Cc: mcgrof@...nel.org; keescook@...omium.org; yzaikin@...gle.com; vbabka@...e.cz; nigupta@...dia.com; bhe@...hat.com; khalid.aziz@...cle.com; iamjoonsoo.kim@....com; mateusznosek0@...il.com; sh_def@....com; linux-kernel@...r.kernel.org; linux-fsdevel@...r.kernel.org; linux-mm@...ck.org; Mel Gorman <mgorman@...hsingularity.net>; David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH v4] mm/compaction: let proactive compaction order configurable

On Wed, 28 Apr 2021 10:28:21 +0800 chukaiping <chukaiping@...du.com> wrote:

> > Currently the proactive compaction order is fixed to
> > COMPACTION_HPAGE_ORDER (9). This is fine on most machines with plenty
> > of normal 4KB memory, but it is too high for machines with little
> > normal memory, for example machines where most of the memory is
> > configured as 1GB hugetlbfs huge pages. On these machines the maximum
> > order of free pages is often below 9, and it stays below 9 even after
> > hard compaction, which causes proactive compaction to be triggered
> > very frequently. On these machines we only care about orders of 3 or
> > 4. This patch exports the order to proc and makes it configurable by
> > the user; the default value is still COMPACTION_HPAGE_ORDER.
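
If merged, the new knob would be tuned like any other vm sysctl. A hypothetical usage sketch, assuming the patch's /proc/sys/vm/proactive_compaction_order name and a machine that only cares about order-4 allocations:

```shell
# Show the current proactive compaction order (default: COMPACTION_HPAGE_ORDER = 9).
cat /proc/sys/vm/proactive_compaction_order

# Lower it to 4 on a machine whose normal memory rarely holds order-9 free blocks,
# so proactive compaction stops firing for a target it can never reach.
echo 4 > /proc/sys/vm/proactive_compaction_order
```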

> It would be great to do this automatically?  It's quite simple to see when memory is being handed out to hugetlbfs - so can we tune proactive_compaction_order in response to this?  That would be far better than adding a manual tunable.

> But from having read Khalid's comments, that does sound quite involved.
> Is there some partial solution that we can come up with that will get most people out of trouble?

> That being said, this patch is super-super-simple so perhaps we should just merge it just to get one person (and hopefully a few more) out of trouble.  But on the other hand, once we add a /proc tunable we must maintain that tunable for ever (or at least a very long time) even if the internal implementations change a lot.

Currently the fragmentation index of each zone is computed per order; there is no single fragmentation index for the whole system, so we can only use a user-defined order for proactive compaction. I keep thinking about how to calculate an average fragmentation index for the system, but so far I have not worked one out. I think we can just use the proc file to configure the order manually; if we come up with a better solution in the future, we can keep the proc file but change the implementation internally.
