Message-ID: <9e24d3fa3098fc338bf396ffdf82555cc033ae48.camel@nvidia.com>
Date:   Thu, 19 Sep 2019 23:22:19 +0000
From:   Nitin Gupta <nigupta@...dia.com>
To:     "mgorman@...hsingularity.net" <mgorman@...hsingularity.net>
CC:     "keescook@...omium.org" <keescook@...omium.org>,
        "willy@...radead.org" <willy@...radead.org>,
        "aryabinin@...tuozzo.com" <aryabinin@...tuozzo.com>,
        "vbabka@...e.cz" <vbabka@...e.cz>,
        "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
        "hannes@...xchg.org" <hannes@...xchg.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "cai@....pw" <cai@....pw>,
        "arunks@...eaurora.org" <arunks@...eaurora.org>,
        "yuzhao@...gle.com" <yuzhao@...gle.com>,
        "janne.huttunen@...ia.com" <janne.huttunen@...ia.com>,
        "jannh@...gle.com" <jannh@...gle.com>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "mhocko@...e.com" <mhocko@...e.com>, "guro@...com" <guro@...com>,
        "khlebnikov@...dex-team.ru" <khlebnikov@...dex-team.ru>,
        "dan.j.williams@...el.com" <dan.j.williams@...el.com>
Subject: Re: [RFC] mm: Proactive compaction

On Thu, 2019-08-22 at 09:51 +0100, Mel Gorman wrote:
> As unappealing as it sounds, I think it is better to try improve the
> allocation latency itself instead of trying to hide the cost in a kernel
> thread. It's far harder to implement as compaction is not easy but it
> would be more obvious what the savings are by looking at a histogram of
> allocation latencies -- there are other metrics that could be considered
> but that's the obvious one.
> 

Do you mean reducing allocation latency, especially when an allocation hits
the direct compaction path? Do you have any ideas in mind for this? I'm open
to working on them and reporting back latency numbers, while I think more
about less tunable-heavy background (proactive) compaction approaches.
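One coarse way to quantify how often allocations fall into the direct
compaction path is to diff the compact_* counters in /proc/vmstat across a
workload run. A minimal sketch (the counter names are the standard
/proc/vmstat ones; the parsing helper and the sample snapshots are
illustrative, not from this thread):

```python
# Sketch: diff /proc/vmstat compaction counters around a workload run to
# estimate how often allocations entered the direct-compaction path.
# compact_stall / compact_success / compact_fail are standard vmstat
# counters; parse_vmstat() is a small helper written for this sketch.

def parse_vmstat(text):
    """Parse /proc/vmstat-style 'name value' lines into a dict."""
    stats = {}
    for line in text.splitlines():
        name, _, value = line.partition(" ")
        if value.strip():
            stats[name] = int(value)
    return stats

def compaction_delta(before, after):
    """Return the change in each compact_* counter between two snapshots."""
    return {k: after.get(k, 0) - before.get(k, 0)
            for k in after if k.startswith("compact_")}

if __name__ == "__main__":
    # In a real run, take both snapshots with open("/proc/vmstat").read(),
    # one before and one after the workload. Sample values shown here.
    before = parse_vmstat("compact_stall 10\ncompact_success 8\ncompact_fail 2\n")
    after = parse_vmstat("compact_stall 25\ncompact_success 18\ncompact_fail 7\n")
    print(compaction_delta(before, after))
```

This only counts stalls, not their duration; a latency histogram (e.g. via
the compaction tracepoints) would be needed to see the tail behaviour Mel
describes above.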

-Nitin
