Message-ID: <1c9f2383-ec9f-f819-d7be-23aed2bf121a@bytedance.com>
Date: Tue, 22 Mar 2022 23:28:41 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, acme@...nel.org, mark.rutland@....com,
alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
namhyung@...nel.org, eranian@...gle.com,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
duanxiongchun@...edance.com, songmuchun@...edance.com
Subject: Re: [External] Re: [PATCH v2 1/6] perf/core: Fix inconsistency between
cgroup sched_out and sched_in
On 2022/3/22 11:16 PM, Chengming Zhou wrote:
> Hi Peter,
>
> On 2022/3/22 10:54 PM, Peter Zijlstra wrote:
>> On Tue, Mar 22, 2022 at 09:38:21PM +0800, Chengming Zhou wrote:
>>>> On 2022/3/22 8:59 PM, Peter Zijlstra wrote:
>>>> On Tue, Mar 22, 2022 at 08:08:29PM +0800, Chengming Zhou wrote:
>>>>> There is a race problem that can trigger WARN_ON_ONCE(cpuctx->cgrp)
>>>>> in perf_cgroup_switch().
>>>>>
>>>>>   CPU1                                    CPU2
>>>>>   (in context_switch)                     (attach running task)
>>>>>   perf_cgroup_sched_out(prev, next)
>>>>>           cgrp1 == cgrp2 is True
>>>>>                                           next->cgroups = cgrp3
>>>>>                                           perf_cgroup_attach()
>>>>>   perf_cgroup_sched_in(prev, next)
>>>>>           cgrp1 == cgrp3 is False
I see, you must have been misled by my wrong drawing above ;-)
I'm sorry, perf_cgroup_attach() on the right should be put at the bottom.
        CPU1                                    CPU2
        (in context_switch)                     (attach running task)
        perf_cgroup_sched_out(prev, next)
                cgrp1 == cgrp2 is True
                                                next->cgroups = cgrp3
        perf_cgroup_sched_in(prev, next)
                cgrp1 == cgrp3 is False
                                                perf_cgroup_attach()
                                                        __perf_cgroup_move()
Thanks.
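
To make the race window clearer, here is a rough sketch of the check being
discussed (paraphrased from my reading of kernel/events/core.c, not a
verbatim copy; sched_in does the mirror-image comparison):

	static void perf_cgroup_sched_out(struct task_struct *task,
					  struct task_struct *next)
	{
		struct perf_cgroup *cgrp1, *cgrp2;

		rcu_read_lock();
		/* re-read both tasks' perf_cgroup under RCU */
		cgrp1 = perf_cgroup_from_task(task, NULL);
		cgrp2 = perf_cgroup_from_task(next, NULL);

		/* only switch cgroup events when prev and next differ */
		if (cgrp1 != cgrp2)
			perf_cgroup_switch(task, PERF_CGROUP_SWOUT);
		rcu_read_unlock();
	}

If task->cgroups changes between the sched_out and the sched_in comparison,
sched_out skips the switch while sched_in performs it, and
perf_cgroup_switch() then trips WARN_ON_ONCE(cpuctx->cgrp) because
cpuctx->cgrp was never cleared on the way out.
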
>>>>>
>>>>> Commit a8d757ef076f ("perf events: Fix slow and broken cgroup
>>>>> context switch code") skips the cpuctx switch out/in when the
>>>>> perf_cgroup of "prev" and "next" are the same.
>>>>>
>>>>> But the perf_cgroup of a task can change concurrently with context_switch.
>>>>
>>>> Can you clarify? IIRC, when a task changes cgroup it goes through the
>>>> whole ->attach() dance, and that serializes against the context switch
>>>> code.
>>>>
>>>
>>> task->cgroups is changed before perf_cgroup_attach() runs, and that change
>>> is not serialized against the context switch, since task->cgroups can be
>>> updated without the rq lock held (cgroup v1, or cgroup v2 with PSI disabled).
>>>
>>> So the perf_cgroup comparison in perf_cgroup_sched_out()/perf_cgroup_sched_in()
>>> may see either the old or the new perf_cgroup when doing the context switch.
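
[ Side note: a minimal sketch of the pattern, with made-up names, just to
  illustrate why nothing orders the reader against the writer here: ]

	struct css_set;

	struct task_sketch {			/* made-up type for illustration */
		struct css_set __rcu *cgroups;
	};

	/* reader: context-switch path, runs under rcu_read_lock() */
	static struct css_set *task_cgroups_read(struct task_sketch *t)
	{
		return rcu_dereference(t->cgroups);	/* old or new pointer */
	}

	/* writer: cgroup migration; no rq lock taken in the cases above */
	static void task_cgroups_publish(struct task_sketch *t, struct css_set *to)
	{
		rcu_assign_pointer(t->cgroups, to);
	}
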
>>
>> __schedule()
>>   local_irq_disable();			<--- IRQ disable
>>   rq_lock();
>>
>>   ...
>>
>>   context_switch()
>>     prepare_task_switch()
>>       perf_event_task_sched_out()
>>         __perf_event_task_sched_out()
>>           perf_cgroup_sched_out();
>
> here perf_cgroup_sched_out() compares perf_cgroup_from_task(prev) with
> perf_cgroup_from_task(next)
>
>>
>>     switch_to()
>>     finish_task_switch()
>>       perf_event_task_sched_in()
>>         __perf_event_task_sched_in()
>>           perf_cgroup_sched_in();
>
> here perf_cgroup_sched_in() compares perf_cgroup_from_task(prev) with
> perf_cgroup_from_task(next)
>
>>       finish_lock_switch()
>>         raw_spin_rq_unlock_irq();		<--- IRQ enable
>>
>>
>> vs
>>
>
> rcu_assign_pointer(p->cgroups, to) <--- task perf_cgroup changed
>
> task->cgroups has already changed before the IPI below is sent
>
>> perf_event_cgrp_subsys.attach = perf_cgroup_attach()
>>   cgroup_taskset_for_each()
>>     task_function_call(task, __perf_cgroup_move)	<--- sends IPI
>>
>>
>> Please explain how this can interleave.
>
> __perf_cgroup_move() in the IPI is of course serialized against the context
> switch, but task->cgroups has already changed before that, without the rq
> lock held. So perf_cgroup_from_task() may see either the old or the new
> perf_cgroup.
>
> Thanks.
>
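
[ For completeness, a rough sketch of the attach-side ordering quoted above
  (the function name is made up; the real code lives in cgroup core and
  kernel/events/core.c): ]

	static void cgroup_attach_running_task_sketch(struct task_struct *task,
						      struct css_set *new_cset)
	{
		/*
		 * Step 1: publish the new css_set.  On cgroup v1, or cgroup v2
		 * with PSI disabled, this happens without the task's rq lock,
		 * so it is not ordered against a concurrent context switch.
		 */
		rcu_assign_pointer(task->cgroups, new_cset);

		/*
		 * A context switch on the task's CPU can interleave here;
		 * perf_cgroup_sched_out()/sched_in() already observe new_cset
		 * through perf_cgroup_from_task().
		 */

		/*
		 * Step 2: the perf ->attach() callback sends an IPI.  Only
		 * __perf_cgroup_move() is serialized with the context switch,
		 * because it runs with IRQs disabled on the target CPU.
		 */
		task_function_call(task, __perf_cgroup_move, task);
	}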