Message-ID: <20250515050159.3dbba5f5@batman.local.home>
Date: Thu, 15 May 2025 05:01:59 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Prakash Sangappa <prakash.sangappa@...cle.com>
Cc: Madadi Vineeth Reddy <vineethr@...ux.ibm.com>, "peterz@...radead.org"
<peterz@...radead.org>, "mathieu.desnoyers@...icios.com"
<mathieu.desnoyers@...icios.com>, "tglx@...utronix.de"
<tglx@...utronix.de>, "bigeasy@...utronix.de" <bigeasy@...utronix.de>,
"kprateek.nayak@....com" <kprateek.nayak@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH V4 1/6] Sched: Scheduler time slice extension
On Wed, 14 May 2025 23:12:26 +0000
Prakash Sangappa <prakash.sangappa@...cle.com> wrote:
> > As mentioned in previous versions, does this not change the semantics for
> > sched_yield()? Why is this necessary to immediately call schedule() and skip
> > going through do_sched_yield()?
>
> The expectation is that the user thread/application yields the cpu once it is done executing
> any critical section in the extra time granted. The question was which system
> call it should call, and yield seems appropriate. It could actually call any system call.
>
> Since the thread is just yielding the cpu, it should retain its position in the queue. So it does
> not have to go thru do_sched_yield(), as that would put the task at the end of the queue.
If it was granted an extension, from the POV of user space, it actually
shouldn't keep its place in the queue, because its place is currently
"promoted": according to the scheduler, it shouldn't be running in
the first place. But in the kernel, we are just dealing with
implementation details. Going back to user space should cause it to be
scheduled out; otherwise it shouldn't have been extended in the first place.
-- Steve