Message-ID: <20240731115418.GD33588@noisy.programming.kicks-ass.net>
Date: Wed, 31 Jul 2024 13:54:18 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: oe-lkp@...ts.linux.dev, lkp@...el.com, linux-kernel@...r.kernel.org,
	aubrey.li@...ux.intel.com, yu.c.chen@...el.com,
	kernel test robot <oliver.sang@...el.com>
Subject: Re: [peterz-queue:sched/prep] [sched/fair] 124c8f4374:
 WARNING:at_kernel/sched/sched.h:#update_load_avg

On Wed, Jul 31, 2024 at 11:46:48AM +0530, K Prateek Nayak wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index cd4a6bf14828..c437b408d29b 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -13297,10 +13297,34 @@ void unregister_fair_sched_group(struct task_group *tg)
>  			if (se->sched_delayed) {
>  				guard(rq_lock_irqsave)(rq);
>  				if (se->sched_delayed) {
> +					/*
> +					 * We can reach here when processing RCU_SOFTIRQ on exit path from
> +					 * a reschedule IPI. wakeup_preempt() may have set RQCF_REQ_SKIP to
> +					 * skip a close clock update in schedule(), however, in presence of
> +					 * a delayed entity, this trips the check in rq_clock_pelt() which
> +					 * now believes the clock value is stale and needs updating. To
> +					 * prevent such situation, cancel any pending skip updates, and
> +					 * update the rq clock.
> +					 */
> +					rq_clock_cancel_skipupdate(rq);
> +
> +					/*
> +					 * XXX: Will this trip WARN_DOUBLE_CLOCK? In which case, can
> +					 * rq_clock_cancel_skipupdate() be made to return a bool if
> +					 * RQCF_REQ_SKIP is set and we avoid this update?
> +					 */
>  					update_rq_clock(rq);
> +
>  					dequeue_entities(rq, se, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
> +
> +					/* Avoid updating the clock again if a schedule() is pending */
> +					if (task_on_rq_queued(rq->curr) &&
> +					    test_tsk_need_resched(rq->curr))
> +						rq_clock_skip_update(rq);
>  				}
>  				list_del_leaf_cfs_rq(cfs_rq);
> +
> +
>  			}
>  			remove_entity_load_avg(se);
>  		}

So I did update this to simply add update_rq_clock() before the
dequeue_entity(SLEEP|DELAYED). I initially had these, then confused
myself between deactivate_task() and dequeue_entity(), where the former
updates the clock but the latter does not, and removed them. Then Mike
complained, and I restored it for the regular exit path but forgot the
cgroup exit path.

But now they should both be doing update_rq_clock() here.

  https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git/commit/?h=sched/eevdf&id=5b3a132d4dd5c91f26beb3e8973c03cdb77d7873
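
To spell out what that amounts to in the hunk you quoted, a minimal
sketch (reconstructed from your quote, not the exact committed code;
see the link above for that):

	if (se->sched_delayed) {
		guard(rq_lock_irqsave)(rq);
		if (se->sched_delayed) {
			/*
			 * dequeue_entities() does not update the rq clock
			 * itself, so refresh it here before dequeueing the
			 * delayed entity.
			 */
			update_rq_clock(rq);
			dequeue_entities(rq, se, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
		}
		list_del_leaf_cfs_rq(cfs_rq);
	}
	remove_entity_load_avg(se);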

Since this is all with our own rq->lock held, I don't think skip would
be relevant here.
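
For reference, the skip machinery in question is just a flag in
rq->clock_update_flags that is set and cleared under the rq lock;
roughly (paraphrasing kernel/sched/sched.h from memory, the exact
lockdep asserts may differ between versions):

	static inline void rq_clock_skip_update(struct rq *rq)
	{
		lockdep_assert_rq_held(rq);
		rq->clock_update_flags |= RQCF_REQ_SKIP;
	}

	static inline void rq_clock_cancel_skipupdate(struct rq *rq)
	{
		lockdep_assert_rq_held(rq);
		rq->clock_update_flags &= ~RQCF_REQ_SKIP;
	}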
