Date:   Fri, 16 Dec 2022 23:47:19 +0100
From:   Frederic Weisbecker <frederic@...nel.org>
To:     Marcelo Tosatti <mtosatti@...hat.com>
Cc:     atomlin@...hat.com, cl@...ux.com, tglx@...utronix.de,
        mingo@...nel.org, peterz@...radead.org, pauld@...hat.com,
        neelx@...hat.com, oleksandr@...alenko.name,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v9 3/5] mm/vmstat: manage per-CPU stats from CPU context
 when NOHZ full

On Fri, Dec 16, 2022 at 01:16:09PM -0300, Marcelo Tosatti wrote:
> On Wed, Dec 14, 2022 at 02:33:02PM +0100, Frederic Weisbecker wrote:
> > On Tue, Dec 06, 2022 at 01:18:29PM -0300, Marcelo Tosatti wrote:
> > >  static inline void vmstat_mark_dirty(void)
> > >  {
> > > +	int cpu = smp_processor_id();
> > > +
> > > +	if (tick_nohz_full_cpu(cpu) && !this_cpu_read(vmstat_dirty)) {
> > > +		struct delayed_work *dw;
> > > +
> > > +		dw = &per_cpu(vmstat_work, cpu);
> > > +		if (!delayed_work_pending(dw)) {
> > > +			unsigned long delay;
> > > +
> > > +			delay = round_jiffies_relative(sysctl_stat_interval);
> > > +			queue_delayed_work_on(cpu, mm_percpu_wq, dw, delay);
> > 
> > Currently the vmstat_work is flushed on cpu_hotplug (CPUHP_AP_ONLINE_DYN).
> > vmstat_shepherd makes sure not to rearm it afterward. But now it looks
> > possible for the above to make that mistake?
> 
> Don't think the mistake is an issue. In case of a
> queue_delayed_work_on being called after cancel_delayed_work_sync,
> either vmstat_update executes on the local CPU, or on a
> different CPU (after the bound kworkers have been moved).

But after the CPU goes offline, its workqueue pool becomes UNBOUND, which means
that the vmstat_update() queued from the offline CPU can then execute partly on
CPU 0, get preempted and continue halfway on CPU 1, get preempted again, and so on...

Having a quick look at refresh_cpu_vm_stats(), it doesn't look ready for that...

Thanks.

> 
> Each case is fine (see vmstat_update).
> 
> > > +		}
> > > +	}
> > >  	this_cpu_write(vmstat_dirty, true);
> > >  }
> > > @@ -2009,6 +2028,10 @@ static void vmstat_shepherd(struct work_
> > >  	for_each_online_cpu(cpu) {
> > >  		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
> > >  
> > > +		/* NOHZ full CPUs manage their own vmstat flushing */
> > > +		if (tick_nohz_full_cpu(smp_processor_id()))
> > 
> > It should be the remote CPU instead of the current one.
> 
> Fixed.
> 
