Message-ID: <Z-WIDWP1o4g-N5mg@google.com>
Date: Thu, 27 Mar 2025 17:17:01 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: Greg Thelen <gthelen@...gle.com>, Tejun Heo <tj@...nel.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Koutný <mkoutny@...e.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Eric Dumazet <edumazet@...gle.com>, cgroups@...r.kernel.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] cgroup/rstat: avoid disabling irqs for O(num_cpu)

On Thu, Mar 27, 2025 at 03:38:50PM +0100, Mateusz Guzik wrote:
> On Wed, Mar 19, 2025 at 05:18:05PM +0000, Yosry Ahmed wrote:
> > On Wed, Mar 19, 2025 at 11:47:32AM +0100, Mateusz Guzik wrote:
> > > Isn't this going a little too far?
> > > 
> > > The lock + IRQ trip is quite expensive in its own right, and it is
> > > now going to be paid for each CPU, so the total time spent executing
> > > cgroup_rstat_flush_locked() is going to go up.
> > > 
> > > Would your problem go away if this were toggled every -- say -- 8 CPUs?
> > 
> > I was concerned about this too, and about more lock bouncing, but
> > testing suggests that this actually improves the overall latency of
> > cgroup_rstat_flush_locked() (at least on the tested HW).
> > 
> > So I don't think we need to do something like this unless a regression
> > is observed.
> > 
> 
> To my reading it reduces the max time spent with IRQs disabled, which
> of course it does -- after all, it toggles them for every CPU.
> 
> Per my other e-mail in the thread, the IRQ + lock trips are still not
> cheap, at least on Sapphire Rapids.
> 
> In my testing, outlined below, I see an 11% increase in total execution
> time with the IRQ + lock trip for every CPU in a 24-way VM.
> 
> So I stand by doing this every n CPUs instead -- call it 8 or whatever.
> 
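For concreteness, here is a rough sketch of what that every-N-CPUs
variant could look like. FLUSH_BATCH and the cgroup_rstat_cpu_flush()
helper below are illustrative stand-ins, not the actual code in
kernel/cgroup/rstat.c:

	#define FLUSH_BATCH	8	/* re-enable IRQs every 8 CPUs */

	static void flush_batched(struct cgroup *cgrp, spinlock_t *lock)
	{
		int cpu, n = 0;

		spin_lock_irq(lock);
		for_each_possible_cpu(cpu) {
			/* hypothetical per-CPU flush work */
			cgroup_rstat_cpu_flush(cgrp, cpu);
			if (++n % FLUSH_BATCH == 0) {
				/*
				 * Bound the IRQ-off window without
				 * paying the lock + IRQ round trip
				 * for every single CPU.
				 */
				spin_unlock_irq(lock);
				spin_lock_irq(lock);
			}
		}
		spin_unlock_irq(lock);
	}

This caps the worst-case IRQ-off window at FLUSH_BATCH CPUs' worth of
work while cutting the number of lock + IRQ trips by the same factor.
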
> How to repro:
> 
> I employed a poor-man's profiler like so:
> 
> bpftrace -e 'kprobe:cgroup_rstat_flush_locked { @start[tid] = nsecs; } kretprobe:cgroup_rstat_flush_locked /@start[tid]/ { print(nsecs - @start[tid]); delete(@start[tid]); } interval:s:60 { exit(); }'
> 
> With or without this patch, execution time varies wildly even while
> the box is idle.
> 
> The above runs for a minute, collecting 23 samples (you may get
> "lucky" and get one extra; in that case, remove it for comparison).
> 
> A sysctl was added to toggle the new behavior vs the old one. Patch at
> the end.
> 
> "enabled"(1) means new behavior, "disabled"(0) means the old one.
> 
> Sum of nsecs (results piped to: awk '{ sum += $1 } END { print sum }'):
> disabled:	903610
> enabled:	1006833 (+11.4%)

IIUC this measures the elapsed time between entry and return, not
necessarily the function's own execution time. Is it possible that the
increase is due to more interrupts arriving during the function's
execution (which is what we want), rather than to more time being spent
disabling/enabling IRQs?
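
One way to tease those two apart, as a sketch only: bracket the flush
with both a wall-clock read and an IRQ-time read. this_cpu_irq_time_ns()
below is a hypothetical stand-in for whatever IRQ-time accounting is
available (e.g. CONFIG_IRQ_TIME_ACCOUNTING), and this assumes the task
does not migrate between the reads:

	u64 t0, t1, irq0, irq1;

	t0 = ktime_get_ns();
	irq0 = this_cpu_irq_time_ns();	/* hypothetical helper */

	cgroup_rstat_flush(cgrp);	/* function under test */

	irq1 = this_cpu_irq_time_ns();
	t1 = ktime_get_ns();

	/* elapsed window vs. time actually spent servicing IRQs */
	pr_info("elapsed=%llu irq=%llu own=%llu\n",
		t1 - t0, irq1 - irq0, (t1 - t0) - (irq1 - irq0));

If the "own" number stays flat between enabled and disabled while
"elapsed" grows, the extra time is interrupts being serviced sooner,
not the toggling itself.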
