Date:   Wed, 18 Jul 2018 17:27:37 +0200
From:   Bruce Merry <bmerry@....ac.za>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Johannes Weiner <hannes@...xchg.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>
Subject: Re: Showing /sys/fs/cgroup/memory/memory.stat very slow on some machines

On 18 July 2018 at 16:47, Michal Hocko <mhocko@...nel.org> wrote:
>> Thanks for looking into this. I'm not familiar with ftrace. Can you
>> give me a specific command line to run? Based on "perf record cat
>> /sys/fs/cgroup/memory/memory.stat"/"perf report", I see the following:
>>
>>   42.09%  cat      [kernel.kallsyms]  [k] memcg_stat_show
>>   29.19%  cat      [kernel.kallsyms]  [k] memcg_sum_events.isra.22
>>   12.41%  cat      [kernel.kallsyms]  [k] mem_cgroup_iter
>>    5.42%  cat      [kernel.kallsyms]  [k] _find_next_bit
>>    4.14%  cat      [kernel.kallsyms]  [k] css_next_descendant_pre
>>    3.44%  cat      [kernel.kallsyms]  [k] find_next_bit
>>    2.84%  cat      [kernel.kallsyms]  [k] mem_cgroup_node_nr_lru_pages
>
> I would just use perf record as you did. How long did the call take?
> Also, is the excessive time an outlier or a more consistent thing? If
> the former, does perf record show any difference?

I didn't note the exact time for that particular run, but it's pretty
consistently 372-377ms on the machine that has that perf report. The
times differ between machines showing the symptom (anywhere from
200-500ms), but are consistent (within a few ms) in back-to-back runs
on each machine.
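
(The timings are from something like

  time cat /sys/fs/cgroup/memory/memory.stat > /dev/null

repeated back-to-back on each machine.)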

>> Ubuntu 16.04 with kernel 4.13.0-41-generic (so presumably includes
>> some Ubuntu special sauce).
>
> Do you see the same when running with the vanilla kernel?

We don't currently have any boxes running vanilla kernels. While I
could install a test box with a vanilla kernel, I don't know how to
reproduce the problem, what piece of our production environment is
triggering it, or even why some machines are unaffected, so if the
problem didn't recur on the test box I wouldn't be able to conclude
anything useful.

Do you have suggestions on things I could try that might trigger
this? E.g., are there cases where a cgroup no longer shows up in the
filesystem but lingers while waiting for its refcount to hit zero?
Does every child cgroup contribute to the stat_show cost of its
parent, or only children that differ from it in some non-trivial
way?
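
If lingering cgroups are a plausible cause, my naive guess at a way
to manufacture one (assuming our cgroup v1 layout; paths and names
below are just illustrative) would be to charge some page cache to a
child group and then remove the group, along the lines of

  # guess at a reproducer: leave page cache charged to a removed group
  mkdir /sys/fs/cgroup/memory/zombie-test
  echo $$ > /sys/fs/cgroup/memory/zombie-test/cgroup.procs
  dd if=/dev/zero of=/tmp/zombie-file bs=1M count=64
  echo $$ > /sys/fs/cgroup/memory/cgroup.procs
  rmdir /sys/fs/cgroup/memory/zombie-test

on the theory that the child memcg stays around until its charged
page cache is reclaimed.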

Thanks
Bruce
-- 
Bruce Merry
Senior Science Processing Developer
SKA South Africa
