Date:   Fri, 15 Sep 2017 10:28:30 -0700
From:   Taras Kondratiuk <takondra@...co.com>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     linux-mm@...ck.org, xe-linux-external@...co.com,
        Ruslan Ruslichenko <rruslich@...co.com>,
        linux-kernel@...r.kernel.org
Subject: Re: Detecting page cache thrashing state

Quoting Michal Hocko (2017-09-15 07:36:19)
> On Thu 14-09-17 17:16:27, Taras Kondratiuk wrote:
> > Hi
> > 
> > Under low memory conditions our devices often get into a thrashing
> > state in which the system spends most of its time re-reading pages
> > of .text sections from the file system (squashfs in our case). The
> > working set doesn't fit into the available page cache, so this is
> > expected. The issue is that the OOM killer doesn't get triggered,
> > because there is still memory left to reclaim. The system may be
> > stuck in this state for quite some time and usually dies when a
> > watchdog fires.
> > 
> > We are trying to detect such a thrashing state early so we can take
> > preventive action. It should be a pretty common issue, but so far we
> > haven't found any existing VM/IO statistic that can reliably detect
> > this state.
> > 
> > Most metrics provide absolute values: the number/rate of page
> > faults, the rate of IO operations, the number of stolen pages, etc.
> > For a specific device configuration we can determine threshold
> > values for those parameters that detect a thrashing state, but doing
> > so for hundreds of device configurations is not feasible.
> > 
> > We are looking for a relative metric like "percent of CPU time spent
> > handling major page faults". With such a relative metric we could
> > use a common threshold across all devices. For now we have added
> > such a metric to /proc/stat in our kernel, but we would like to find
> > a mechanism available in the upstream kernel.
> > 
> > Has somebody faced a similar issue? How are you solving it?
> 
> Yes, this has been a pain point for a _long_ time, and we still do not
> have a good answer upstream. Johannes has been playing in this area
> [1]. The main problem is that our OOM detection logic is based on the
> ability to reclaim memory in order to allocate new memory, and that
> remains true for the page cache while you are thrashing. So we do not
> know that basically the whole time is spent refaulting memory back and
> forth. We do have some refault stats for the page cache, but they are
> not integrated into the OOM detection logic, because doing that
> without triggering early OOM killer invocations is a really
> non-trivial problem.
> 
> [1] http://lkml.kernel.org/r/20170727153010.23347-1-hannes@cmpxchg.org
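
Michal's point about reclaim-based detection can be made concrete with
the pressure ratio that mm/vmpressure.c derives from reclaim scan
results. Below is a minimal user-space illustration of that ratio; the
sample numbers are made up:

/* Illustration of the vmpressure-style ratio from mm/vmpressure.c:
 *   pressure = 100 - (reclaimed * 100 / scanned)
 * While thrashing on clean page cache, reclaim almost always succeeds,
 * so the computed pressure stays near zero even though every reclaimed
 * page is refaulted shortly afterwards.
 */
#include <stdio.h>

static unsigned long pressure(unsigned long scanned, unsigned long reclaimed)
{
	if (!scanned)
		return 0;
	return 100 - (reclaimed * 100 / scanned);
}

int main(void)
{
	/* Thrashing on clean .text pages: almost everything scanned is
	 * reclaimable, so the signal looks calm. */
	printf("thrashing: %lu%%\n", pressure(1000, 990)); /* -> 1% */
	/* A genuine shortage of reclaimable memory looks dramatic. */
	printf("oom-bound: %lu%%\n", pressure(1000, 50));  /* -> 95% */
	return 0;
}

This is exactly why "there is still memory left to reclaim" keeps both
vmpressure and the OOM killer quiet during page cache thrashing.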

Thanks, Michal. memdelay looks promising; we will check it.
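
In the meantime, here is a minimal sketch of the kind of relative,
device-independent signal discussed above, built only from counters
already exported in /proc/vmstat: the share of reclaimed pages that
come back as workingset refaults. The counter names exist in current
upstream kernels; the 1-second interval and the 50% threshold are
illustrative assumptions, not tuned values.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Return the value of one /proc/vmstat counter, or 0 if not found. */
static unsigned long vmstat(const char *key)
{
	char name[64];
	unsigned long val = 0, v;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 0;
	while (fscanf(f, "%63s %lu", name, &v) == 2)
		if (!strcmp(name, key))
			val = v;
	fclose(f);
	return val;
}

int main(void)
{
	unsigned long refault = vmstat("workingset_refault");
	unsigned long steal = vmstat("pgsteal_kswapd") +
			      vmstat("pgsteal_direct");

	for (;;) {
		unsigned long r, s, dr, ds;

		sleep(1);	/* illustrative sampling interval */
		r = vmstat("workingset_refault");
		s = vmstat("pgsteal_kswapd") + vmstat("pgsteal_direct");
		dr = r - refault;
		ds = s - steal;
		refault = r;
		steal = s;
		/* A high refault share of reclaimed pages suggests the
		 * reclaimed memory is still part of the working set. */
		if (ds && dr * 100 / ds > 50)	/* illustrative threshold */
			fprintf(stderr,
				"possible thrashing: %lu%% of reclaimed pages refaulted\n",
				dr * 100 / ds);
	}
	return 0;
}

Because the signal is a ratio, a common threshold can be shared across
device configurations, unlike the absolute rates mentioned above.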
