Message-ID: <YmJKBaq1yj6/iBJ3@ziqianlu-desk1>
Date:   Fri, 22 Apr 2022 14:24:05 +0800
From:   Aaron Lu <aaron.lu@...el.com>
To:     "ying.huang@...el.com" <ying.huang@...el.com>
CC:     Yang Shi <shy828301@...il.com>, Michal Hocko <mhocko@...e.com>,
        "Andrew Morton" <akpm@...ux-foundation.org>,
        Linux MM <linux-mm@...ck.org>,
        "Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: swap: determine swap device by using page nid

On Thu, Apr 21, 2022 at 04:34:09PM +0800, ying.huang@...el.com wrote:
> On Thu, 2022-04-21 at 16:17 +0800, Aaron Lu wrote:
> > On Thu, Apr 21, 2022 at 03:49:21PM +0800, ying.huang@...el.com wrote:

... ...

> > > For swap-in latency, we can use pmbench, which can output latency
> > > information.
> > > 
> > 
> > OK, I'll give pmbench a run, thanks for the suggestion.
> 
> Better to construct a scenario with more swapin than swapout.  For
> example, start a memory eater, then kill it later.

What about vm-scalability/case-swapin?
https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-swapin

I think you are pretty familiar with it, but to recap (a rough
invocation sketch follows the steps):
1) it starts $nr_task processes, each of which mmaps a $size/$nr_task
   area and then touches that memory; after this, each process waits
   for a signal;
2) it starts another process that consumes $size memory, pushing the
   memory from step 1) out to the swap device;
3) it signals the processes from step 1) to access their memory again,
   thus triggering swapins. The metric of this testcase is swapin
   throughput.
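
Roughly how I plan to drive it (a sketch only; the nr_task/size values
below are placeholders, and whether the run script picks them up from
the environment or from hw_vars is an assumption on my part):

  # from a checkout of vm-scalability
  cd vm-scalability
  # placeholder knobs: 16 tasks, 128G total anonymous memory
  nr_task=16 size=$((128 << 30)) ./run case-swapin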

I plan to restrict the test cgroup's memory limit to $size.
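
Roughly like this (cgroup v2; the group name is just an example):

  mkdir /sys/fs/cgroup/swapin-test
  echo $size > /sys/fs/cgroup/swapin-test/memory.max
  # run the test from a shell placed in this group
  echo $$ > /sys/fs/cgroup/swapin-test/cgroup.procs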

Since there is only one NVMe drive and it is attached to node 0, I will
run the test as described before (numactl sketch after the list):
1) bind processes to run on node 0 and allocate on node 1, to test the
   performance when the reclaimer's node id is the same as the swap
   device's;
2) bind processes to run on node 1 and allocate on node 0, to test the
   performance when the page's node id is the same as the swap device's.
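
Concretely, something like the below for the two configurations
(assuming the bindings are inherited by the processes the run script
spawns):

  # 1) reclaim runs on node 0 (same node as the NVMe swap device),
  #    anonymous pages are allocated on node 1
  numactl --cpunodebind=0 --membind=1 ./run case-swapin

  # 2) pages are allocated on node 0 (same node as the swap device),
  #    processes run on node 1
  numactl --cpunodebind=1 --membind=0 ./run case-swapin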

Ying and Yang,

Please let me know what you think about the test case and the way the
test will be conducted.
