Message-ID: <CA+zs-xGTnL-g=poPxeF3yLwyHD_usZw+GAP0CQmOagCdgkgkRQ@mail.gmail.com>
Date: Wed, 14 Oct 2020 14:20:44 +0800
From: Qi Zheng <arch0.zheng@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, juri.lelli@...hat.com,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
rostedt@...dmis.org, Benjamin Segall <bsegall@...gle.com>,
mgorman@...e.de, bristot@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/deadline: Replace rq_of_dl_rq(dl_rq_of_se(dl_se)) with task_rq(dl_task_of(dl_se))
On 2020/10/13 11:48 PM, Peter Zijlstra wrote:
> On Tue, Oct 13, 2020 at 10:31:40PM +0800, Qi Zheng wrote:
>> The rq is already obtained in the dl_rq_of_se() function:
>> struct task_struct *p = dl_task_of(dl_se);
>> struct rq *rq = task_rq(p);
>> So there is no need to do extra conversion.
>>
>> Signed-off-by: Qi Zheng <arch0.zheng@...il.com>
>> ---
>> kernel/sched/deadline.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>> index 6d93f4518734..f64e577f6aba 100644
>> --- a/kernel/sched/deadline.c
>> +++ b/kernel/sched/deadline.c
>> @@ -1152,7 +1152,7 @@ void init_dl_task_timer(struct sched_dl_entity *dl_se)
>> static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
>> {
>> struct task_struct *p = dl_task_of(dl_se);
>> - struct rq *rq = rq_of_dl_rq(dl_rq_of_se(dl_se));
>> + struct rq *rq = task_rq(p);
>>
>> if (dl_time_before(dl_se->deadline, rq_clock(rq)) &&
>> dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
>> @@ -1498,7 +1498,7 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se,
>> replenish_dl_entity(dl_se, pi_se);
>> } else if ((flags & ENQUEUE_RESTORE) &&
>> dl_time_before(dl_se->deadline,
>> - rq_clock(rq_of_dl_rq(dl_rq_of_se(dl_se))))) {
>> + rq_clock(task_rq(dl_task_of(dl_se))))) {
>> setup_new_dl_entity(dl_se);
>> }
>
> Consider where we're going:
>
> https://lkml.kernel.org/r/20200807095051.385985-1-juri.lelli@redhat.com
>
> then a dl_entity is no longer immediately a task and the above is no
> longer true.
>
Thanks for your reply. I see in the patch below that dl_rq_of_se() has
been changed to rq_of_dl_se(), so the change above is no longer needed:
[RFC PATCH v2 4/6] sched/deadline: Introduce deadline servers
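I have not gone through the series line by line yet, but I imagine the
new rq_of_dl_se() does something along these lines (this is only my own
sketch, not a quote from the RFC -- the dl_server() helper and the
dl_se->rq field are guesses on my part):

static inline struct rq *rq_of_dl_se(struct sched_dl_entity *dl_se)
{
        /* A server entity would carry its rq directly... */
        struct rq *rq = dl_se->rq;

        /* ...while a plain task still reaches it via its owning task. */
        if (!dl_server(dl_se))
                rq = task_rq(dl_task_of(dl_se));

        return rq;
}

i.e. dl_task_of() is only used when the entity really is a task, which
covers exactly the case you pointed out above.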
In addition, when is the SCHED_DEADLINE server infrastructure expected
to be integrated into mainline? It looks great!
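For completeness, the redundancy my patch was targeting comes from the
way these helpers are defined near the top of kernel/sched/deadline.c
today (quoted from memory, so please read it as a sketch rather than an
exact copy):

static inline struct task_struct *dl_task_of(struct sched_dl_entity *dl_se)
{
        /* A task's dl_se is embedded in its task_struct. */
        return container_of(dl_se, struct task_struct, dl);
}

static inline struct rq *rq_of_dl_rq(struct dl_rq *dl_rq)
{
        /* Likewise, each dl_rq is embedded in its rq. */
        return container_of(dl_rq, struct rq, dl);
}

static inline struct dl_rq *dl_rq_of_se(struct sched_dl_entity *dl_se)
{
        struct task_struct *p = dl_task_of(dl_se);
        struct rq *rq = task_rq(p);     /* the rq is already in hand here */

        return &rq->dl;
}

So rq_of_dl_rq(dl_rq_of_se(dl_se)) computes the rq, converts it to a
dl_rq and immediately converts it back, which is what the patch tried
to short-circuit. I see now that once a dl_se can belong to something
other than a task, that shortcut no longer holds.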