Message-ID: <4d05f8c0-250b-bc1e-360f-18dfa197064c@linux.ibm.com>
Date: Tue, 12 May 2020 13:21:59 +0530
From: Parth Shah <parth@...ux.ibm.com>
To: Pavan Kondeti <pkondeti@...eaurora.org>
Cc: linux-kernel@...r.kernel.org, peterz@...radead.org,
mingo@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, qais.yousef@....com,
chris.hyser@...cle.com, valentin.schneider@....com,
rjw@...ysocki.net
Subject: Re: [RFC 2/4] sched/core: Set nr_lat_sensitive counter at various
scheduler entry/exit points
On 5/9/20 8:09 AM, Pavan Kondeti wrote:
> On Fri, May 08, 2020 at 04:45:16PM +0530, Parth Shah wrote:
>> Hi Pavan,
>>
>> Thanks for going through this patch-set.
>>
>> On 5/8/20 2:03 PM, Pavan Kondeti wrote:
>>> Hi Parth,
>>>
>>> On Thu, May 07, 2020 at 07:07:21PM +0530, Parth Shah wrote:
>>>> Monitor tasks at:
>>>> 1. wake_up_new_task() - forked tasks
>>>>
>>>> 2. set_task_cpu() - task migrations, Load balancer
>>>>
>>>> 3. __sched_setscheduler() - set/unset latency_nice value
>>>> Increment the nr_lat_sensitive count on the CPU when a task is marked with
>>>> latency_nice == -20.
>>>> Similarly, decrement the nr_lat_sensitive counter when the task is re-marked
>>>> with a latency_nice value > -20.
>>>>
>>>> 4. finish_task_switch() - dying task
>>>>
>>>
>>>
>>>> Signed-off-by: Parth Shah <parth@...ux.ibm.com>
>>>> ---
>>>> kernel/sched/core.c | 30 ++++++++++++++++++++++++++++--
>>>> kernel/sched/sched.h | 5 +++++
>>>> 2 files changed, 33 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>>>> index 2d8b76f41d61..ad396c36eba6 100644
>>>> --- a/kernel/sched/core.c
>>>> +++ b/kernel/sched/core.c
>>>> @@ -1744,6 +1744,11 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
>>>> trace_sched_migrate_task(p, new_cpu);
>>>>
>>>> if (task_cpu(p) != new_cpu) {
>>>> + if (task_is_lat_sensitive(p)) {
>>>> + per_cpu(nr_lat_sensitive, task_cpu(p))--;
>>>> + per_cpu(nr_lat_sensitive, new_cpu)++;
>>>> + }
>>>> +
>>>
>>> Since we can get here without holding the rq locks, there is a possibility
>>> of a race and incorrect updates can happen. Since the counters
>>> are used to prevent C-states, we don't want that to happen.
>>
>> I did try using task_lock(p) wherever we change the refcount and where the
>> latency_nice value is set. There I was using nr_lat_sensitive with atomic_t
>> type.
>>
>> After a lot of thought about optimizing it, and given that we anyway hold the
>> rq lock in these paths, I decided not to use any lock here and to see if the
>> scheduler community has a well-known solution for this :-)
>>
>> But in brief, using an atomic_t nr_lat_sensitive and task_lock(p) when changing
>> the refcount should solve the problem, right?
>>
>> If you or anyone else has a solution for this kind of pattern, that would
>> surely be helpful.
>>
> I am not sure if task_lock() can help here, because we are operating on the
> counter on a per-CPU basis. Maybe a cmpxchg-based loop works here to make
> sure that the increment/decrement operation happens atomically.
>
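Just to make that suggestion concrete for myself, such a cmpxchg-based update
could look roughly like the sketch below (nr_lat_sensitive_add() is only a
made-up helper name, and this assumes nr_lat_sensitive stays a plain per-CPU
int; it is not part of the posted series):

static inline void nr_lat_sensitive_add(int cpu, int delta)
{
	int *ptr = &per_cpu(nr_lat_sensitive, cpu);
	int old;

	/* retry until no concurrent updater raced between the read and cmpxchg */
	do {
		old = READ_ONCE(*ptr);
	} while (cmpxchg(ptr, old, old + delta) != old);
}

The callers in set_task_cpu() and __setscheduler_params() would then use
nr_lat_sensitive_add(cpu, 1) / nr_lat_sensitive_add(cpu, -1) instead of the
plain ++/--.
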
>>>
>>>> if (p->sched_class->migrate_task_rq)
>>>> p->sched_class->migrate_task_rq(p, new_cpu);
>>>> p->se.nr_migrations++;
>
> [...]
>
>>>> @@ -4732,8 +4749,17 @@ static void __setscheduler_params(struct task_struct *p,
>>>> p->normal_prio = normal_prio(p);
>>>> set_load_weight(p, true);
>>>>
>>>> - if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
>>>> + if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
>>>> + if (p->state != TASK_DEAD &&
>>>> + attr->sched_latency_nice != p->latency_nice) {
>>>> + if (attr->sched_latency_nice == MIN_LATENCY_NICE)
>>>> + per_cpu(nr_lat_sensitive, task_cpu(p))++;
>>>> + else if (task_is_lat_sensitive(p))
>>>> + per_cpu(nr_lat_sensitive, task_cpu(p))--;
>>>> + }
>>>> +
>>>> p->latency_nice = attr->sched_latency_nice;
>>>> + }
>>>> }
>>>
>>> There is a potential race here which can mess up the refcount.
>>>
>>> - A latency sensitive task is marked TASK_DEAD
>>> <snip>
>>> - sched_setattr() is called on the task to clear the latency nice. Since
>>> we check the task state here, we skip the decrement.
>>> - The task is finally context switched out and we skip the decrement again
>>> since it is not a latency sensitive task.
>>
>> If the task is already marked TASK_DEAD, then we should have already decremented
>> its refcount in finish_task_switch().
>> Am I missing something?
>
> There is a window (context switch and dropping the rq lock) between
> marking a task DEAD (in do_task_dead()) and dropping the ref counter
> (in finish_task_switch()) during which we can get here and skip
> the decrement because the task is marked as DEAD.
>
Yeah, the TASK_DEAD case does look like a genuine race condition. At every other
place we hold task_rq_lock(), except when the task is dying. There is a window
between do_task_dead() and finish_task_switch() which may create a race
condition, so if marking TASK_DEAD were protected under task_rq_lock() then
this could be prevented. I will have to look at the code more thoroughly and
figure out a way to protect the refcount under such circumstances.
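One direction that comes to mind, building on the atomic_t variant I mentioned
above, is roughly the following (only a sketch, not tested; the helper names
are made up):

/* kernel/sched/sched.h: switch the per-CPU counter to atomic_t */
DECLARE_PER_CPU(atomic_t, nr_lat_sensitive);

static inline void inc_nr_lat_sensitive(int cpu)
{
	/* atomic RMW, so concurrent updaters cannot lose an update */
	atomic_inc(&per_cpu(nr_lat_sensitive, cpu));
}

static inline void dec_nr_lat_sensitive(int cpu)
{
	atomic_dec(&per_cpu(nr_lat_sensitive, cpu));
}

The atomic_t only guarantees that increments/decrements are not lost; closing
the TASK_DEAD window above still needs the state change and the counter update
to be serialized, e.g. under the rq lock, which is what I want to look into.
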
Thanks,
Parth