Date:   Tue, 5 Jan 2021 16:47:33 -0800
From:   Shakeel Butt <shakeelb@...gle.com>
To:     Feng Tang <feng.tang@...el.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...e.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Linux MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, andi.kleen@...el.com,
        "Chen, Tim C" <tim.c.chen@...el.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Huang Ying <ying.huang@...el.com>, Roman Gushchin <guro@...com>
Subject: Re: [PATCH 2/2] mm: memcg: add a new MEMCG_UPDATE_BATCH

On Tue, Dec 29, 2020 at 6:35 AM Feng Tang <feng.tang@...el.com> wrote:
>
> When profiling benchmarks that involve memory cgroups, stats updates
> sometimes take quite a few CPU cycles. The current MEMCG_CHARGE_BATCH
> is used for both charging and statistics/events updating, and is
> set to 32, which may be good for the accuracy of memcg charging, but
> is too small for stats updates, causing concurrent access to global
> stats data instead of per-cpu data.
>
> So handle them differently, by adding a new, bigger batch number
> for stats updating, while keeping the existing value for charging
> (though comments in memcontrol.h suggest considering a bigger value too).
>
> The new batch is set to 512, chosen with 2MB huge pages (512
> pages) in mind, as the check logic is mostly:
>
>     if (x > BATCH), then skip updating global data
>
> so it saves 50% of the global data updates for 2MB pages.
>
> Following is some performance data with the patch, against
> v5.11-rc1, on several generations of Xeon platforms. Each category
> below has several subcases run on different platforms, and only the
> worst and best scores are listed:
>
> fio:                             +2.0% ~  +6.8%
> will-it-scale/malloc:            -0.9% ~  +6.2%
> will-it-scale/page_fault1:       no change
> will-it-scale/page_fault2:      +13.7% ~ +26.2%
>
> One thought is that the batch could be calculated dynamically from
> the memcg limit and the number of CPUs; another is to add periodic
> syncing of the data for accuracy, similar to vmstat, as
> suggested by Ying.
>
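
The batching check quoted above can be sketched as a minimal, self-contained C model. This is an illustration of the per-cpu batching idea only, not the actual memcontrol.c code: `mod_stat`, the single per-cpu slot, and `UPDATE_BATCH` are hypothetical names, and the real kernel uses proper per-cpu variables and its own atomics.

```c
#include <stdatomic.h>

/* Proposed stats batch: 512, matching a 2MB huge page (512 x 4KB pages).
 * Illustrative constant, not the kernel's definition. */
#define UPDATE_BATCH 512

static atomic_long global_stat; /* shared, contended counter */
static long percpu_stat;        /* one slot per CPU in the real kernel */

/* Accumulate a delta per-cpu; only fold it into the global counter
 * once the local magnitude exceeds the batch threshold. */
static void mod_stat(long val)
{
	long x = percpu_stat + val;

	/* "if (x > BATCH), then skip updating global data" -- here the
	 * inverse: flush to global only when the batch is exceeded. */
	if (x > UPDATE_BATCH || x < -UPDATE_BATCH) {
		atomic_fetch_add(&global_stat, x);
		x = 0;
	}
	percpu_stat = x;
}
```

A 2MB huge-page charge of 512 pages thus triggers one global update instead of many, which is where the claimed contention savings come from; the trade-off, as noted below, is that readers of the global counter can be off by up to `UPDATE_BATCH` per CPU.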

I am going to push back on this change. On a large system where jobs
can run on any available cpu, this will totally mess up the stats
(which is actually what happens on our production servers). These
stats are used for multiple purposes, like debugging, understanding
the memory usage of a job, or doing data analysis.
