Message-Id: <be45b190-d96c-1893-3ef0-f574eb595256@de.ibm.com>
Date: Mon, 2 Mar 2020 12:16:26 +0100
From: Christian Borntraeger <borntraeger@...ibm.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380
enqueue_task_fair+0x328/0x440
On 28.02.20 17:35, Vincent Guittot wrote:
> On Friday, 28 Feb 2020 at 16:42:27 (+0100), Christian Borntraeger wrote:
>>
>>
>> On 28.02.20 16:37, Vincent Guittot wrote:
>>> On Fri, 28 Feb 2020 at 16:08, Christian Borntraeger
>>> <borntraeger@...ibm.com> wrote:
>>>>
>>>> Also happened with 5.4:
>>>> It seems that I just happen to have an interesting test workload / system-size
>>>> interaction on a newly installed system that triggers this.
>>>
>>> You can probably go back as far as 5.1, which is the version where we put
>>> back the deletion of unused cfs_rq from the list; that deletion is what
>>> can trigger the warning:
>>> commit 039ae8bcf7a5 ("Fix O(nr_cgroups) in the load balancing path")
>>>
>>> AFAICT, we haven't changed this since then.
>>
>> So do you know what the problem is? If not, is there a debug option or
>> patch that I could apply to give you more information?
>
> No, I don't know what is happening yet. Your test probably goes through an
> unexpected path.
>
> Would it be difficult for me to reproduce your test env?
Not sure. It's a 32-CPU (SMT2 -> 64) host, and I have about 10 KVM guests
running, doing different things.
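FWIW, the warning that fires here is the list consistency check at the end of
enqueue_task_fair(). From my reading of the 5.6-rc3 source (abridged, line
number approximate, so take this as my interpretation):

	/* kernel/sched/fair.c, around line 380 in 5.6-rc3 */
	static inline void assert_list_leaf_cfs_rq(struct rq *rq)
	{
		SCHED_WARN_ON(rq->tmp_alone_branch != &rq->leaf_cfs_rq_list);
	}

i.e. it triggers when an enqueue completes while rq->tmp_alone_branch still
points into a half-connected branch of the leaf cfs_rq list instead of back
at the list head.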
>
> There is an optimization in the code which could cause a problem if its
> assumption does not hold. Could you try the patch below?
>
> ---
> kernel/sched/fair.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 3c8a379c357e..beb773c23e7d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4035,8 +4035,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> __enqueue_entity(cfs_rq, se);
> se->on_rq = 1;
>
> + list_add_leaf_cfs_rq(cfs_rq);
> if (cfs_rq->nr_running == 1) {
> - list_add_leaf_cfs_rq(cfs_rq);
> check_enqueue_throttle(cfs_rq);
> }
> }
It has now been running for 3 hours, and I have not seen the issue yet. I can
tell you tomorrow whether this fixes the issue.
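For what it's worth, my understanding of the change (correct me if I am
wrong): the unpatched code assumes a cfs_rq only needs to be (re-)linked into
the leaf list when its nr_running goes 0 -> 1, while the patch makes the call
unconditional on every enqueue. That should be safe, because as far as I can
see list_add_leaf_cfs_rq() bails out early for a cfs_rq that is already on
the list (abridged from my reading of the 5.6 source):

	/* kernel/sched/fair.c, 5.6-rc3, abridged */
	static inline bool list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
	{
		struct rq *rq = rq_of(cfs_rq);

		/* already linked: just report whether the branch is fully connected */
		if (cfs_rq->on_list)
			return rq->tmp_alone_branch == &rq->leaf_cfs_rq_list;
		...
	}

So if the warning stays away with the patch applied, the 0 -> 1 assumption is
what does not hold for my workload.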