Message-Id: <20220214211050.31049-1-szhai2@cs.rochester.edu>
Date:   Mon, 14 Feb 2022 16:10:50 -0500
From:   Shuang Zhai <szhai2@...rochester.edu>
To:     mgorman@...hsingularity.net
Cc:     akpm@...ux-foundation.org, djwong@...nel.org, efault@....de,
        hakavlad@...ox.lv, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org, mhocko@...e.com,
        regressions@...ts.linux.dev, riel@...riel.com, vbabka@...e.cz
Subject: [PATCH v4 1/1] mm: vmscan: Reduce throttling due to a failure to make progress

Hi Mel,

Mel Gorman wrote:
>
> Mike Galbraith, Alexey Avramov and Darrick Wong all reported similar
> problems due to reclaim throttling for excessive lengths of time.
> In Alexey's case, a memory hog that should go OOM quickly instead
> stalls for several minutes before the OOM killer finally kicks in.
> In Mike and Darrick's cases, a small memcg environment stalled
> excessively even though the system had enough memory overall.
>

I recently found a regression when I tested MGLRU with fio on Linux
5.16-rc6 [1]. After this patch was applied, I re-ran the test, but the
regression is still present; the table below compares 5.15, 5.16, and
5.17-rc3.

The workload has fio perform random reads on files using buffered IO.
The total file size is 2x the memory size, and the files are stored on
pmem. For each configuration, I ran fio 10 times and report the average
and the standard deviation below.

Fio command
===========

$ numactl --cpubind=0 --membind=0 fio --name=randread \
  --directory=/mnt/pmem/ --size={10G, 5G} --io_size=1000TB \
  --time_based --numjobs={40, 80} --ioengine=io_uring \
  --ramp_time=20m --runtime=10m --iodepth=128 \
  --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
  --rw=randread --random_distribution=random \
  --direct=0 --norandommap --group_reporting
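
(For reference, the 10 runs per configuration can be driven and
aggregated with something like the loop below, shown for the
40-job/10G case. The JSON output and the jq/awk step that computes the
mean and stdev in MB/s are only a sketch of the idea, not necessarily
the exact script I used, and the randread-*.json file names are just
placeholders.)

$ for run in $(seq 1 10); do
    numactl --cpubind=0 --membind=0 fio --name=randread \
      --directory=/mnt/pmem/ --size=10G --io_size=1000TB \
      --time_based --numjobs=40 --ioengine=io_uring \
      --ramp_time=20m --runtime=10m --iodepth=128 \
      --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
      --rw=randread --random_distribution=random \
      --direct=0 --norandommap --group_reporting \
      --output-format=json --output=randread-$run.json
  done
$ # With --group_reporting there is a single jobs[] entry per run;
$ # read.bw in fio's JSON output is in KiB/s, so convert to MB/s.
$ jq '.jobs[0].read.bw' randread-*.json | \
    awk '{x += $1; y += $1 * $1; n++}
         END {m = x / n; s = sqrt(y / n - m * m);
              printf "mean %.0f MB/s, stdev %.0f MB/s\n",
                     m * 1024 / 1e6, s * 1024 / 1e6}'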

Results in throughput (MB/s):
=============================

+------------+------+-------+------+-------+----------+-------+
| Jobs / CPU | 5.15 | stdev | 5.16 | stdev | 5.17-rc3 | stdev |
+------------+------+-------+------+-------+----------+-------+
| 1          | 8411 | 75    | 7459 | 38    | 7331     | 36    |
+------------+------+-------+------+-------+----------+-------+
| 2          | 8417 | 54    | 7491 | 41    | 7383     | 15    |
+------------+------+-------+------+-------+----------+-------+
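
Relative to 5.15, that is roughly an 11% throughput drop on 5.16 and a
12-13% drop on 5.17-rc3 in both configurations.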

[1] https://lore.kernel.org/linux-mm/20220105024423.26409-1-szhai2@cs.rochester.edu/

Thanks!

Shuang
