Date: Tue, 06 Feb 2024 12:36:55 +0100
From: Anna-Maria Behnsen <anna-maria@...utronix.de>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
 John Stultz <jstultz@...gle.com>, Thomas Gleixner <tglx@...utronix.de>,
 Eric Dumazet <edumazet@...gle.com>, "Rafael J . Wysocki"
 <rafael.j.wysocki@...el.com>, Arjan van de Ven <arjan@...radead.org>,
 "Paul E . McKenney" <paulmck@...nel.org>, Rik van Riel <riel@...riel.com>,
 Steven Rostedt <rostedt@...dmis.org>, Sebastian Siewior
 <bigeasy@...utronix.de>, Giovanni Gherdovich <ggherdovich@...e.cz>, Lukasz
 Luba <lukasz.luba@....com>, "Gautham R . Shenoy" <gautham.shenoy@....com>,
 Srinivas Pandruvada <srinivas.pandruvada@...el.com>, K Prateek Nayak
 <kprateek.nayak@....com>
Subject: Re: [PATCH v10 18/20] timers: Implement the hierarchical pull model

Frederic Weisbecker <frederic@...nel.org> writes:

> On Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen wrote:
>> +/*
>> + * Returns true if there is nothing to be propagated to the next level
>> + *
>> + * @data->firstexp is set to the expiry of the first global event of the (top
>> + * level of the) hierarchy, but only when the hierarchy is completely idle.
>> + *
>> + * This is the only place where the group event expiry value is set.
>> + */
>> +static
>> +bool tmigr_update_events(struct tmigr_group *group, struct tmigr_group *child,
>> +			 struct tmigr_walk *data, union tmigr_state childstate,
>> +			 union tmigr_state groupstate)
>> +{
>> +	struct tmigr_event *evt, *first_childevt;
>> +	bool walk_done, remote = data->remote;
>> +	bool leftmost_change = false;
>> +	u64 nextexp;
>> +
>> +	if (child) {
>> +		raw_spin_lock(&child->lock);
>> +		raw_spin_lock_nested(&group->lock, SINGLE_DEPTH_NESTING);
>> +
>> +		if (childstate.active) {
>> +			walk_done = true;
>> +			goto unlock;
>> +		}
>> +
>> +		first_childevt = tmigr_next_groupevt(child);
>> +		nextexp = child->next_expiry;
>> +		evt = &child->groupevt;
>> +	} else {
>> +		nextexp = data->nextexp;
>> +
>> +		first_childevt = evt = data->evt;
>> +
>> +		/*
>> +		 * Walking the hierarchy is required in any case when a
>> +		 * remote expiry was done before. This ensures that already
>> +		 * queued events in non active groups are not lost (see section
>> +		 * "Required event and timerqueue update after a remote
>> +		 * expiry" in the documentation at the top).
>> +		 *
>> +		 * The two call sites which are executed without a preceding
>> +		 * remote expiry are not prevented from propagating changes
>> +		 * through the hierarchy by the return:
>> +		 *  - When entering this path by tmigr_new_timer(), @evt->ignore
>> +		 *    is never set.
>> +		 *  - tmigr_inactive_up() takes care of the propagation by
>> +		 *    itself and ignores the return value. But an immediate
>> +		 *    return is required because nothing has to be done in this
>> +		 *    level as the event could be ignored.
>> +		 */
>> +		if (evt->ignore && !remote)
>> +			return true;
>> +
>> +		raw_spin_lock(&group->lock);
>> +	}
>> +
>> +	if (nextexp == KTIME_MAX) {
>> +		evt->ignore = true;
>> +
>> +		/*
>> +		 * When the next child event could be ignored (nextexp is
>> +		 * KTIME_MAX) and there was no remote timer handling before or
>> +		 * the group is already active, there is no need to walk the
>> +		 * hierarchy even if there is a parent group.
>> +		 *
>> +		 * The other way round: even if the event could be ignored, when
>> +		 * a remote timer handling was executed before and the group is
>> +		 * not active, walking the hierarchy is required to not miss
>> +		 * an enqueued timer in the non active group. The enqueued timer
>> +		 * of the group needs to be propagated to a higher level to
>> +		 * ensure it is handled.
>> +		 */
>> +		if (!remote || groupstate.active) {
>> +			walk_done = true;
>> +			goto unlock;
>
> So if the current tmc going inactive was the migrator for the whole hierarchy
> and it is reaching the top level here, this assumes that if none of this tmc's
> groups have a timer, then it can just return. But what if the top level has
> timers from other children? Who is going to handle them then?
>
> Should this be "goto check_toplvl" instead?
>

Simply replacing this goto will not work. We could then end up with
'data->firstexp' set even though we do not want it to be set (when
remote is not set).
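
To make that concrete, here is the check_toplvl part from further down
in the hunk, with comments added only for illustration. With "goto
check_toplvl", the early return case would reach it with remote == false
and leftmost_change still false:

check_toplvl:
	if (walk_done && (groupstate.migrator == TMIGR_NONE)) {
		/* remote == false, so this check does not bail out */
		if (remote && !leftmost_change)
			goto unlock;

		/* firstexp is written although remote is not set */
		data->firstexp = tmigr_next_groupevt_expires(group);
	}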

There is another issue here: When the event could be ignored but it is
propagated anyway, e.g. because of a remote timer handling, the
timerqueue dance is done nevertheless. It's not a big problem (as the
ignore flag is set and the event is removed from the queue when the
timer queue is revisited), but it's obviously more work than required.
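
For reference, this is the sequence I mean, as a simplified excerpt of
the hunk (assuming remote == true, the group is not active, nextexp is
KTIME_MAX and the event is already queued with an older expiry):

	evt->ignore = true;

	/* !remote || groupstate.active is false: no early return */

	walk_done = !group->parent;

	/*
	 * Redundant part: the event is already flagged ignore and will be
	 * dropped by tmigr_next_groupevt() when the queue is revisited,
	 * but it is still removed and queued again with a KTIME_MAX expiry.
	 */
	leftmost_change = timerqueue_getnext(&group->events) == &evt->nextevt;
	if (!timerqueue_del(&group->events, &evt->nextevt))
		WRITE_ONCE(group->next_expiry, KTIME_MAX);

	evt->nextevt.expires = nextexp;

	if (timerqueue_add(&group->events, &evt->nextevt)) {
		leftmost_change = true;
		WRITE_ONCE(group->next_expiry, nextexp);
	}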

Thanks

>> +		}
>> +	} else {
>> +		/*
>> +		 * An update of @evt->cpu and @evt->ignore flag is required only
>> +		 * when @child is set (the child is equal or higher than lvl0),
>> +		 * but it doesn't matter if it is written once more to the per
>> +		 * CPU event; make the update unconditional.
>> +		 */
>> +		evt->cpu = first_childevt->cpu;
>> +		evt->ignore = false;
>> +	}
>> +
>> +	walk_done = !group->parent;
>> +
>> +	/*
>> +	 * If the child event is already queued in the group, remove it from
>> +	 * the queue only when the expiry time has changed.
>> +	 */
>> +	if (timerqueue_node_queued(&evt->nextevt)) {
>> +		if (evt->nextevt.expires == nextexp)
>> +			goto check_toplvl;
>> +
>> +		leftmost_change = timerqueue_getnext(&group->events) == &evt->nextevt;
>> +		if (!timerqueue_del(&group->events, &evt->nextevt))
>> +			WRITE_ONCE(group->next_expiry, KTIME_MAX);
>> +	}
>> +
>> +	evt->nextevt.expires = nextexp;
>> +
>> +	if (timerqueue_add(&group->events, &evt->nextevt)) {
>> +		leftmost_change = true;
>> +		WRITE_ONCE(group->next_expiry, nextexp);
>> +	}
>> +
>> +check_toplvl:
>> +	if (walk_done && (groupstate.migrator == TMIGR_NONE)) {
>> +		/*
>> +		 * Nothing to do when the first event didn't change and the
>> +		 * update was done during remote timer handling.
>> +		 */
>> +		if (remote && !leftmost_change)
>> +			goto unlock;
>> +		/*
>> +		 * The top level group is idle and it has to be ensured that
>> +		 * global timers are handled in time. (This could be optimized
>> +		 * by keeping track of the last global scheduled event and only
>> +		 * arming it on the CPU if the new event is earlier. Not sure if
>> +		 * it's worth the complexity.)
>> +		 */
>> +		data->firstexp = tmigr_next_groupevt_expires(group);
>> +	}
>> +
>> +unlock:
>> +	raw_spin_unlock(&group->lock);
>> +
>> +	if (child)
>> +		raw_spin_unlock(&child->lock);
>> +
>> +	return walk_done;
>> +}
