Message-ID: <92F18B23-0B35-486F-BF69-17A73FFB10FD@oracle.com>
Date: Wed, 13 Nov 2024 17:40:54 +0000
From: Prakash Sangappa <prakash.sangappa@...cle.com>
To: K Prateek Nayak <kprateek.nayak@....com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
"peterz@...radead.org"
<peterz@...radead.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
Daniel
Jordan <daniel.m.jordan@...cle.com>
Subject: Re: [RFC PATCH 2/4] Scheduler time extention
> On Nov 12, 2024, at 7:57 PM, K Prateek Nayak <kprateek.nayak@....com> wrote:
>
> Hello Prakash,
>
> Full disclaimer: I haven't looked closely at the complete series but ...
>
> On 11/13/2024 5:31 AM, Prakash Sangappa wrote:
>> [..snip..]
>> @@ -99,8 +100,12 @@ __always_inline unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
>> local_irq_enable_exit_to_user(ti_work);
>> - if (ti_work & _TIF_NEED_RESCHED)
>> - schedule();
>> + if (ti_work & _TIF_NEED_RESCHED) {
>> + if (irq && taskshrd_delay_resched())
>> + clear_tsk_need_resched(current);
>
> Suppose the current task had requested for a delayed resched but an RT
> task's wakeup sets the TIF_NEED_RESCHED flag via an IPI, doesn't this
> clear the flag indiscriminately and allow the task to run for an
> extended amount of time? Am I missing something?
If the scheduler extension delay has already been granted when the IPI from the wakeup arrives, taskshrd_delay_resched() returns false (t->taskshrd_sched_delay is already set), so the TIF_NEED_RESCHED flag is not cleared again and the task will preempt as usual.
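
To illustrate, the effective flow in exit_to_user_mode_loop() ends up roughly like this (a sketch based on the hunk above; the else/schedule() branch is assumed from the removed lines):

	if (ti_work & _TIF_NEED_RESCHED) {
		/*
		 * taskshrd_delay_resched() grants the extension at most
		 * once: it returns false when t->taskshrd_sched_delay is
		 * already set, so a second TIF_NEED_RESCHED (e.g. from an
		 * RT wakeup IPI) is not cleared and schedule() runs.
		 */
		if (irq && taskshrd_delay_resched())
			clear_tsk_need_resched(current);
		else
			schedule();
	}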
[...]
>
>> }
>> @@ -830,6 +831,8 @@ static enum hrtimer_restart hrtick(struct hrtimer *timer)
>> WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
>> + taskshrd_delay_resched_tick();
>> +
>> rq_lock(rq, &rf);
>> update_rq_clock(rq);
>> rq->curr->sched_class->task_tick(rq, rq->curr, 1);
>> @@ -903,6 +906,16 @@ void hrtick_start(struct rq *rq, u64 delay)
>> #endif /* CONFIG_SMP */
>> +void hrtick_local_start(u64 delay)
>> +{
>> + struct rq *rq = this_rq();
>> + struct rq_flags rf;
>> +
>> + rq_lock(rq, &rf);
>
> You can use guard(rq_lock)(rq) and avoid declaring rf.
Will take a look and address it.
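Something along these lines, I suppose (untested sketch, assuming the rq_lock guard defined in kernel/sched/sched.h; the timer start itself kept as in the patch):

	void hrtick_local_start(u64 delay)
	{
		struct rq *rq = this_rq();

		/* the guard drops the rq lock automatically on return */
		guard(rq_lock)(rq);
		/* ... arm the hrtick timer for 'delay' ns as in the patch ... */
	}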
[...]
>
>> +bool taskshrd_delay_resched(void)
>> +{
>> + struct task_struct *t = current;
>> + struct task_ushrd_struct *shrdp = t->task_ushrd;
>> +
>> + if (!IS_ENABLED(CONFIG_SCHED_HRTICK))
>> + return false;
>> +
>> + if(shrdp == NULL || shrdp->kaddr == NULL)
>> + return false;
>> +
>> + if (t->taskshrd_sched_delay)
>> + return false;
>> +
>> + if (!(shrdp->kaddr->ts.sched_delay))
>> + return false;
>> +
>> + shrdp->kaddr->ts.sched_delay = 0;
>> + t->taskshrd_sched_delay = 1;
>> +
>> + return true;
>
> Perhaps this needs to also check
> "rq->nr_running == rq->cfs.h_nr_running" since I believe it only makes
> sense for fair tasks to request that extra slice?
From the discussion in
https://lore.kernel.org/lkml/20231025054219.1acaa3dd@gandalf.local.home/
it was considered acceptable to have this behavior for all tasks. It could be changed to apply only to fair tasks if there is agreement.
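
If we go that route, a check along these lines could be added to taskshrd_delay_resched() (untested sketch; the rq access would need to be made preemption-safe at this point in the exit path):

	struct rq *rq = this_rq();

	/* Only grant the extension when this CPU is running fair tasks only. */
	if (rq->nr_running != rq->cfs.h_nr_running)
		return false;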
Thanks,
-Prakash
>
> --
> Thanks and Regards,
> Prateek
>
>> +}
>> +
>> +void taskshrd_delay_resched_fini(void)
>> +{
>> +#ifdef CONFIG_SCHED_HRTICK
>> + struct task_struct *t = current;
>> + /*
>> + * IRQs off, guaranteed to return to userspace, start timer on this CPU
>> + * to limit the resched-overdraft.
>> + *
>> + * If your critical section is longer than 50 us you get to keep the
>> + * pieces.
>> + */
>> + if (t->taskshrd_sched_delay)
>> + hrtick_local_start(50 * NSEC_PER_USEC);
>> +#endif
>> +}
>> +
>> +void taskshrd_delay_resched_tick(void)
>> +{
>> +#ifdef CONFIG_SCHED_HRTICK
>> + struct task_struct *t = current;
>> +
>> + if (t->taskshrd_sched_delay) {
>> + set_tsk_need_resched(t);
>> + }
>> +#endif
>> +}
>> +
>> /*
>> * Get Task Shared structure, allocate if needed and return mapped user address.
>