Message-ID: <acbf4417-4ded-fa03-7b8d-34dc0803027c@cisco.com>
Date: Thu, 28 Sep 2017 18:49:07 +0300
From: "Ruslan Ruslichenko -X (rruslich - GLOBALLOGIC INC at Cisco)"
<rruslich@...co.com>
To: Johannes Weiner <hannes@...xchg.org>,
Taras Kondratiuk <takondra@...co.com>
Cc: Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
xe-linux-external@...co.com, linux-kernel@...r.kernel.org
Subject: Re: Detecting page cache trashing state
Hi Johannes,
I have hopefully managed to rebase the patch on top of v4.9.26 (the latest
version we currently support) and have tested it a bit.
The overall idea definitely looks promising, although I have one question
about usage. Will it be able to account for the time which processes spend
handling major page faults (including filesystem and iowait time) for
refaulting pages?
We have one big application whose code occupies a large amount of space in
the page cache. When the system is under heavy memory pressure and reclaims
some of it, the application starts thrashing constantly. Since its code is
stored on squashfs, the CPU spends all of its time decompressing the pages,
and the memdelay counters do not seem to detect this situation.
Here are some counters to indicate this:
19:02:44     CPU   %user   %nice  %system  %iowait  %steal   %idle
19:02:45     all    0.00    0.00   100.00     0.00    0.00    0.00

19:02:44   pgpgin/s  pgpgout/s   fault/s  majflt/s   pgfree/s  pgscank/s  pgscand/s  pgsteal/s  %vmeff
19:02:45   15284.00       0.00    428.00    352.00   19990.00       0.00       0.00   15802.00    0.00
And since nobody is actively allocating memory anymore, it looks like the
memdelay counters are not being incremented:
[:~]$ cat /proc/memdelay
268035776
6.13 5.43 3.58
1.90 1.89 1.26
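
As a side note, here is a rough userspace sketch, not taken from any of the
attached patches, which only samples the kernel's existing pgmajfault and
workingset_refault counters from /proc/vmstat once per second; the output
format is arbitrary, and it is meant just to show which existing counters do
keep moving while the application thrashes:

/*
 * Minimal sketch: print per-second deltas of the major-fault and
 * workingset-refault counters from /proc/vmstat.  Build with e.g.
 * "gcc -O2 vmstat-watch.c -o vmstat-watch" (file name is just an example).
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static long read_vmstat(const char *field)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[64];
	long val;

	if (!f)
		return -1;
	while (fscanf(f, "%63s %ld", name, &val) == 2) {
		if (!strcmp(name, field)) {
			fclose(f);
			return val;
		}
	}
	fclose(f);
	return -1;
}

int main(void)
{
	long majflt = read_vmstat("pgmajfault");
	long refault = read_vmstat("workingset_refault");

	for (;;) {
		long m, r;

		sleep(1);
		m = read_vmstat("pgmajfault");
		r = read_vmstat("workingset_refault");
		printf("majflt/s %ld  refault/s %ld\n", m - majflt, r - refault);
		majflt = m;
		refault = r;
	}
	return 0;
}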
Just in case, I have attached the patches rebased onto v4.9.26.
I have also attached the patch with our current solution. In its current
implementation it mostly fits the squashfs-only thrashing case; in the
general case iowait time would be the major part of page fault handling,
so it needs to be accounted for as well.
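
To make the question above more concrete, here is a rough userspace
approximation of what we would like the kernel to account for: it times the
first touch of file-backed pages that were just dropped from the page cache,
so the measured interval includes both the I/O wait and the squashfs
decompression time. The file path is just an example, and nothing below
comes from the attached patch:

/*
 * Map a file, drop its cached pages, then touch every page and report
 * how many major faults were taken and how long the refaults took.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

static double now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/bin/busybox"; /* example */
	long pagesz = sysconf(_SC_PAGESIZE);
	struct rusage before, after;
	struct stat st;
	volatile char sum = 0;
	double t0, t1;
	char *map;
	off_t off;
	int fd;

	fd = open(path, O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(path);
		return 1;
	}
	map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Drop the clean cached pages so the next touch refaults from disk. */
	posix_fadvise(fd, 0, st.st_size, POSIX_FADV_DONTNEED);

	getrusage(RUSAGE_SELF, &before);
	t0 = now();
	for (off = 0; off < st.st_size; off += pagesz)
		sum += map[off];
	t1 = now();
	getrusage(RUSAGE_SELF, &after);

	printf("major faults: %ld, refault time: %.3f s\n",
	       after.ru_majflt - before.ru_majflt, t1 - t0);
	return 0;
}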
Thanks,
Ruslan
On 09/18/2017 07:34 PM, Johannes Weiner wrote:
> Hi Taras,
>
> On Fri, Sep 15, 2017 at 10:28:30AM -0700, Taras Kondratiuk wrote:
>> Quoting Michal Hocko (2017-09-15 07:36:19)
>>> On Thu 14-09-17 17:16:27, Taras Kondratiuk wrote:
>>>> Has anybody faced a similar issue? How are you solving it?
>>> Yes, this has been a pain point for a _long_ time, and we still do not have
>>> a good answer upstream. Johannes has been playing in this area [1].
>>> The main problem is that our OOM detection logic is based on the ability
>>> to reclaim memory in order to allocate new memory. And that is pretty much
>>> true for the page cache when you are thrashing. So we do not know that
>>> basically the whole time is spent refaulting the memory back and forth.
>>> We do have some refault stats for the page cache, but they are not
>>> integrated into the OOM detection logic, because this is really a
>>> non-trivial problem to solve without triggering early OOM killer
>>> invocations.
>>>
>>> [1] http://lkml.kernel.org/r/20170727153010.23347-1-hannes@cmpxchg.org
>> Thanks Michal. memdelay looks promising. We will check it.
> Great, I'm obviously interested in more users of it :) Please find
> attached the latest version of the patch series based on v4.13.
>
> It needs a bit more refactoring in the scheduler bits before
> resubmission, but it already contains a couple of fixes and
> improvements since the first version I sent out.
>
> Let me know if you need help rebasing to a different kernel version.
Attachments:
  0002-mm-sched-memdelay-memory-health-interface-for-system.patch (text/x-patch, 36379 bytes)
  0001-mm-workingset-tell-cache-transitions-from-workingset.patch (text/x-patch, 14452 bytes)
  0001-proc-stat-add-major-page-faults-time-accounting.patch (text/x-patch, 4083 bytes)