Message-ID: <78c60d531c8ec862a0fe5e6ef4a40c1f65fe6544.camel@redhat.com>
Date: Mon, 21 Jul 2025 17:23:12 +0200
From: Gabriele Monaco <gmonaco@...hat.com>
To: Nam Cao <namcao@...utronix.de>
Cc: linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>, 
 Masami Hiramatsu <mhiramat@...nel.org>, linux-trace-kernel@...r.kernel.org,
 Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
 Tomas Glozar <tglozar@...hat.com>, Juri Lelli <jlelli@...hat.com>,  Clark
 Williams <williams@...hat.com>, John Kacur <jkacur@...hat.com>
Subject: Re: [PATCH v4 10/14] rv: Retry when da monitor detects race
 conditions



On Mon, 2025-07-21 at 17:01 +0200, Nam Cao wrote:
> On Mon, Jul 21, 2025 at 10:23:20AM +0200, Gabriele Monaco wrote:
> > A DA monitor can be accessed from multiple cores simultaneously. This
> > is likely, for instance, when dealing with per-task monitors reacting
> > to events that do not always occur on the CPU where the task is
> > running. This can cause race conditions where two events change the
> > next state and we see inconsistent values. E.g.:
> > 
> >   [62] event_srs: 27: sleepable x sched_wakeup -> running (final)
> >   [63] event_srs: 27: sleepable x sched_set_state_sleepable -> sleepable
> >   [63] error_srs: 27: event sched_switch_suspend not expected in the state running
> > 
> > In this case the monitor fails because the event on CPU 62 wins
> > against the one on CPU 63, although the correct state should have
> > been sleepable, since the task gets suspended.
> > 
> > Detect whether the current state was modified by using try_cmpxchg
> > while storing the next value. If it was, read the current state again
> > and retry. After a maximum number of failed retries, react by calling
> > a special tracepoint, printing on the console and resetting the
> > monitor.
> > 
> > Remove the functions da_monitor_curr_state() and da_monitor_set_state()
> > as they only hide the underlying implementation in this case.
> > 
> > Monitors where this type of condition can occur must be able to
> > account for racing events in any possible order, as we cannot know
> > the winner.
> > 
> > Cc: Ingo Molnar <mingo@...hat.com>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Signed-off-by: Gabriele Monaco <gmonaco@...hat.com>
> > ---
> > 
> >  static inline bool									\
> >  da_event_##name(struct da_monitor *da_mon, enum events_##name event)			\
> >  {											\
> > -	type curr_state = da_monitor_curr_state_##name(da_mon);				\
> > -	type next_state = model_get_next_state_##name(curr_state, event);		\
> > -											\
> > -	if (next_state != INVALID_STATE) {						\
> > -		da_monitor_set_state_##name(da_mon, next_state);			\
> > -											\
> > -		trace_event_##name(model_get_state_name_##name(curr_state),		\
> > -				   model_get_event_name_##name(event),			\
> > -				   model_get_state_name_##name(next_state),		\
> > -				   model_is_final_state_##name(next_state));		\
> > -											\
> > -		return true;								\
> > +	enum states_##name curr_state, next_state;					\
> > +											\
> > +	curr_state = READ_ONCE(da_mon->curr_state);					\
> > +	for (int i = 0; i < MAX_DA_RETRY_RACING_EVENTS; i++) {				\
> > +		next_state = model_get_next_state_##name(curr_state, event);		\
> > +		if (next_state == INVALID_STATE) {					\
> > +			cond_react_##name(curr_state, event);				\
> > +			trace_error_##name(model_get_state_name_##name(curr_state),	\
> > +					   model_get_event_name_##name(event));		\
> > +			return false;							\
> > +		}									\
> > +		if (likely(try_cmpxchg(&da_mon->curr_state, &curr_state, next_state))) { \
> > +			trace_event_##name(model_get_state_name_##name(curr_state),	\
> > +					   model_get_event_name_##name(event),		\
> > +					   model_get_state_name_##name(next_state),	\
> > +					   model_is_final_state_##name(next_state));	\
> > +			return true;							\
> > +		}									\
> >  	}										\
> >  											\
> > -	cond_react_##name(curr_state, event);						\
> > -											\
> > -	trace_error_##name(model_get_state_name_##name(curr_state),			\
> > -			   model_get_event_name_##name(event));				\
> > -											\
> > +	trace_rv_retries_error(#name, smp_processor_id());				\
> > +	pr_warn("rv: " __stringify(MAX_DA_RETRY_RACING_EVENTS)				\
> > +		" retries reached, resetting monitor %s", #name);			\
> 
> smp_processor_id() requires preemption to be disabled.
> 
> At the moment, the tracepoint handler is called with preemption
> disabled, so we are fine. But there is a plan to change that:
> https://lore.kernel.org/lkml/20241206120709.736f943e@gandalf.local.home/T/#u
> 
> Perhaps use get_cpu() and put_cpu() instead?

Mmh, then I'd need to execute them only if the tracepoint is enabled,
and I'm not sure it's worth the effort...
I wanted to avoid creating two different tracepoints (implicit and id),
but I might have to. The CPU is rarely needed there since (for now)
per-cpu monitors assume the event CPU and the monitor CPU are the same.

I'll give it some thought, thanks for pointing it out!

Gabriele

