Open Source and information security mailing list archives
Message-ID: <20160415095950.GB32386@dhcp22.suse.cz>
Date:	Fri, 15 Apr 2016 11:59:50 +0200
From:	Michal Hocko <mhocko@...nel.org>
To:	Colum Paget <colum.paget@...omgb.com>
Cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: Terrible disk performance when files cached > 4GB

On Fri 15-04-16 10:20:33, Colum Paget wrote:
> Hi all,
> 
> I suspect that many people will have reported this, but I thought I'd drop you 
> a line just in case everyone figures someone else has reported it. It's 
> possible we're just doing something wrong and so encountering this problem, 
> but I can't find anyone saying they've found a solution, and the problem 
> doesn't seem to be present in 3.x kernels, which makes us think it could be a 
> bug.
> 
> We are seeing a problem in 4.4.5 and 4.4.6 32-bit 'hugemem' kernels running on 
> machines with > 4GB ram.

I would generally discourage using much more than 4G on a 32-bit
system. Lowmem pressure is a real problem which is inherent to
highmem kernels.

> The problem results in disk performance dropping 
> from 120 MB/s to 1MB/s or even less. 3.18.x 32-bit kernels do not seem to 
> exhibit this behaviour, or at least we can't make it happen reliably. We've 
> tried 3.14.65 and 3.14.65 and they don't exhibit the same degree of problem.

I would expect this is due to dirty memory throttling. Highmem is not
considered dirtyable by default (see global_dirtyable_memory), so all
writers get throttled earlier. Basically, any change to how much memory
can be dirtied in lowmem will change the balance for you.
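One rough way to observe this throttling in practice (standard procfs paths, not specific to this report) is to watch the dirty-memory counters against the configured ratios while the workload runs:

```shell
# Current dirty-throttling limits, as a percent of dirtyable memory
cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio

# Pages currently dirty or under writeback; if nr_dirty stays pinned
# near the computed threshold, writers are being throttled
grep -E '^nr_(dirty|writeback) ' /proc/vmstat
```

With highmem excluded from the dirtyable pool, those percentages apply to a much smaller base than total RAM, which is why throttling can kick in far earlier than the raw memory size suggests.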

> We've not yet been able to test 64 bit kernels, it will be a while before we 
> can. We've been able to reproduce the problem on multiple machines with 
> different hardware configs, and with different kernel configs as regards
> SMP, NUMA support and transparent hugepages.
> 
> This problem can be reproduced thusly:

Have you tried
echo 1 > /proc/sys/vm/highmem_is_dirtyable

Please note that this might help, but it is a double-edged sword because
it might cause premature OOM kills under certain loads. 32-bit is simply
not that great with a lot of memory.
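A minimal sketch for checking the knob before flipping it (the sysctl only exists on kernels built with CONFIG_HIGHMEM, hence the guard):

```shell
# Check whether highmem is currently treated as dirtyable (0 = no, 1 = yes).
# On a 64-bit kernel there is no highmem, so the file will be absent.
if [ -r /proc/sys/vm/highmem_is_dirtyable ]; then
    cat /proc/sys/vm/highmem_is_dirtyable
else
    echo "vm.highmem_is_dirtyable not present (not a highmem kernel)"
fi
```

To keep the setting across reboots, the equivalent line `vm.highmem_is_dirtyable = 1` can go in /etc/sysctl.conf instead of echoing into procfs by hand.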

HTH
-- 
Michal Hocko
SUSE Labs
