Open Source and information security mailing list archives
Date:   Thu, 18 Aug 2022 12:04:46 +0200
From:   Michal Koutný <mkoutny@...e.com>
To:     Shakeel Butt <shakeelb@...gle.com>
Cc:     Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Muchun Song <songmuchun@...edance.com>,
        David Hildenbrand <david@...hat.com>,
        Yosry Ahmed <yosryahmed@...gle.com>,
        Greg Thelen <gthelen@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        cgroups@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH] Revert "memcg: cleanup racy sum avoidance code"

On Wed, Aug 17, 2022 at 05:21:39PM +0000, Shakeel Butt <shakeelb@...gle.com> wrote:
> $ grep "sock " /mnt/memory/job/memory.stat
> sock 253952
> total_sock 18446744073708724224
> 
> Re-run after couple of seconds
> 
> $ grep "sock " /mnt/memory/job/memory.stat
> sock 253952
> total_sock 53248
> 
> For now we are only seeing this issue on large machines (256 CPUs) and
> only with the 'sock' stat. I think the networking stack increases the
> stat on one CPU and decreases it on another CPU much more often than
> other users. So, this negative sock value is due to the rstat flusher
> flushing the stats on the CPU that has seen the decrements of sock
> while missing the CPU that has the increments. A typical race
> condition.

This theory adds up :-) (Given the numbers provided.)

> For an easy stable backport, the revert is the simplest solution.

Sounds reasonable.

> For a long term solution, I am thinking of two directions. The first is
> to just reduce the race window by optimizing the rstat flusher. The
> second is, if the reader sees a negative stat value, to force a flush
> and restart the stat collection. Basically retry, but limited.

Or just stick with the revert since it already reduces the observed
error by rounding to zero in a simple way.

(Or, if eliminating the imprecision were worth the extra storage, use
two-stage flushing: accumulate across (cpus x cgroups) first, then
assign, in two separate steps.)

Thanks,
Michal
