lists.openwall.net - Open Source and information security mailing list archives
 
Message-ID: <20170802102255.4tc7jpyt52kjj7tq@suse.de>
Date:   Wed, 2 Aug 2017 11:22:55 +0100
From:   Mel Gorman <mgorman@...e.de>
To:     riel@...hat.com
Cc:     linux-kernel@...r.kernel.org, peterz@...radead.org,
        mingo@...nel.org, jhladky@...hat.com, lvenanci@...hat.com
Subject: Re: [RHEL-ALT-7.4 PATCH 2/2] sched,numa: scale scan period with
 tasks in group and shared/private

On Mon, Jul 31, 2017 at 03:28:47PM -0400, Rik van Riel wrote:
> From: Rik van Riel <riel@...hat.com>
> 
> Running 80 tasks in the same group, or as threads of the same process,
> results in the memory getting scanned 80x as fast as it would be if a
> single task were using the memory.
> 
> This really hurts some workloads.
> 

It would be nice to specify which workloads in particular and what sort
of machine, because I'm willing to bet it has a bigger impact on machines
with 4+ nodes, particularly if they are not fully connected topologies.
Furthermore, I'm willing to bet that there would be small regressions on
2-socket machines, but with less time spent scanning and processing
faults even if remote accesses are marginally increased.

Still, on balance, this is preferred behaviour.

> Scale the scan period by the number of tasks in the numa group, and
> the shared / private ratio, so the average rate at which memory in
> the group is scanned corresponds roughly to the rate at which a single
> task would scan its memory.
> 
> Signed-off-by: Rik van Riel <riel@...hat.com>

Acked-by: Mel Gorman <mgorman@...e.de>
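
For readers following along, the scaling described in the changelog can be
sketched roughly as below. This is an illustrative standalone version, not
the actual patch: the function and parameter names, the "+1" bias that
avoids division by zero, and the clamp to the base period are all
assumptions made for the sketch.

```c
/*
 * Sketch of the scan-period scaling described above: stretch the
 * per-task base scan period by the number of tasks in the numa group
 * and by the shared fraction of the group's faults, so the group as a
 * whole scans memory at roughly the rate a single task would.
 * All names and the +1 divide-by-zero bias are illustrative.
 */
static unsigned long scaled_scan_period(unsigned long base_period,
					unsigned int nr_tasks_in_group,
					unsigned long shared_faults,
					unsigned long private_faults)
{
	unsigned long period = base_period;

	/* One task's worth of scanning spread across the whole group. */
	period *= nr_tasks_in_group;

	/* Mostly-private groups can afford to scan less aggressively. */
	period *= shared_faults + 1;
	period /= private_faults + shared_faults + 1;

	/* Never scan faster than a lone task would. */
	if (period < base_period)
		period = base_period;

	return period;
}
```

With 80 tasks whose faults are all shared, the period grows by roughly the
group size (matching the "80x as fast" observation above); with a lone task
it stays at the base period.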

-- 
Mel Gorman
SUSE Labs
