Message-ID: <20190917120646.GT29434@bombadil.infradead.org>
Date:   Tue, 17 Sep 2019 05:06:46 -0700
From:   Matthew Wilcox <willy@...radead.org>
To:     Lin Feng <linf@...gsu.com>
Cc:     corbet@....net, mcgrof@...nel.org, akpm@...ux-foundation.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        keescook@...omium.org, mchehab+samsung@...nel.org,
        mgorman@...hsingularity.net, vbabka@...e.cz, mhocko@...e.com,
        ktkhai@...tuozzo.com, hannes@...xchg.org
Subject: Re: [PATCH] [RFC] vmscan.c: add a sysctl entry for controlling
 memory reclaim IO congestion_wait length

On Tue, Sep 17, 2019 at 07:58:24PM +0800, Lin Feng wrote:
> Both the direct and the background (kswapd) page reclaim paths may fall
> into msleep(100), congestion_wait(HZ/10) or wait_iff_congested(HZ/10)
> while under IO pressure. The sleep length is hard-coded, and the latter
> two introduce 100ms of iowait each time.
> 
> So if page reclaim is relatively active in some circumstances, such as
> frequent high-order page reclaims, it is possible to see a lot of iowait
> introduced by congestion_wait(HZ/10) and wait_iff_congested(HZ/10).
> 
> The 100ms sleep length is appropriate if the backing devices are slow,
> like traditional rotational disks. But if the backing devices are
> high-end storage, such as high-IOPS SSDs or even faster devices, the
> high iowait introduced by page reclaim is really misleading, because the
> storage IO utilization seen by iostat is quite low; in this case a
> congestion_wait time of 1ms is likely enough for high-end SSDs.
> 
> Another benefit is that it potentially shortens the time direct reclaim
> is blocked when the kernel falls into the synchronous reclaim path,
> which may improve user application response time.

This is a great description of the problem.
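
(For readers following along, the waits being discussed look roughly like
this in mm/vmscan.c; the following is a paraphrased sketch of the pattern
with a made-up wrapper name, not the exact upstream code:)

    #include <linux/backing-dev.h>	/* congestion_wait(), wait_iff_congested() */

    /*
     * Illustrative sketch: when reclaim decides writeback is congested,
     * it sleeps for a fixed HZ/10 (100ms) regardless of device speed.
     * example_reclaim_throttle() is not a real kernel function.
     */
    static void example_reclaim_throttle(bool direct_reclaim)
    {
    	if (direct_reclaim)
    		/* direct reclaim: wait for in-flight writeback to make progress */
    		congestion_wait(BLK_RW_ASYNC, HZ/10);
    	else
    		/* kswapd path: only sleeps if the bdi is actually congested */
    		wait_iff_congested(BLK_RW_ASYNC, HZ/10);
    }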

> +mm_reclaim_congestion_wait_jiffies
> +==================================
> +
> +This control defines how long the kernel will wait/sleep while system
> +memory is under pressure and memory reclaim is relatively active.
> +Lower values decrease the kernel wait/sleep time.
> +
> +It is suggested to lower this value on a high-end box where the system
> +is under memory pressure but shows low storage IO utilization and high
> +CPU iowait, which may also decrease user application response time in
> +that case.
> +
> +Keep this control at its default if your box does not match the case
> +above.
> +
> +The default value is HZ/10, which equals 100ms regardless of how HZ is
> +defined.

Adding a new tunable is not the right solution.  The right way is
to make Linux auto-tune itself to avoid the problem.  For example,
bdi_writeback contains an estimated write bandwidth (calculated by the
memory management layer).  Given that, we should be able to estimate
how long to wait for the queues to drain.
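
A rough sketch of that idea, for illustration: derive the wait from the
wb's avg_write_bandwidth instead of a fixed HZ/10.  The function name, the
nr_pages parameter and the pages-per-second unit assumption below are
mine, not existing kernel code:

    #include <linux/backing-dev.h>

    /*
     * Illustrative only: scale the reclaim wait to how long the device
     * should need to retire nr_pages of writeback, capped at the old
     * hard-coded 100ms.
     */
    static unsigned long example_reclaim_wait_timeout(struct bdi_writeback *wb,
    						  unsigned long nr_pages)
    {
    	/* smoothed estimate maintained by the writeback code,
    	 * roughly in pages per second */
    	unsigned long bw = READ_ONCE(wb->avg_write_bandwidth);

    	if (!bw)
    		return HZ / 10;	/* fall back to the historical default */

    	/* expected jiffies for the device to write back nr_pages */
    	return clamp(nr_pages * HZ / bw, 1UL, (unsigned long)(HZ / 10));
    }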
