Message-ID: <69e2eea0-51de-bdcd-cdda-ce5cd841786d@linux.intel.com>
Date: Fri, 22 Mar 2019 16:28:39 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Subhra Mazumdar <subhra.mazumdar@...cle.com>,
Julien Desfossez <jdesfossez@...italocean.com>,
Peter Zijlstra <peterz@...radead.org>, mingo@...nel.org,
tglx@...utronix.de, pjt@...gle.com, torvalds@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org, fweisbec@...il.com,
keescook@...omium.org, kerrnel@...gle.com,
Vineeth Pillai <vpillai@...italocean.com>,
Nishanth Aravamudan <naravamudan@...italocean.com>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Aubrey <aubrey.li@...el.com>
Subject: Re: [RFC][PATCH 03/16] sched: Wrap rq::lock access
On 3/19/19 7:29 PM, Subhra Mazumdar wrote:
>
> On 3/18/19 8:41 AM, Julien Desfossez wrote:
>> The case where we try to acquire the lock on 2 runqueues belonging to 2
>> different cores requires the rq_lockp wrapper as well otherwise we
>> frequently deadlock in there.
>>
>> This fixes the crash reported in
>> 1552577311-8218-1-git-send-email-jdesfossez@...italocean.com
>>
>> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
>> index 76fee56..71bb71f 100644
>> --- a/kernel/sched/sched.h
>> +++ b/kernel/sched/sched.h
>> @@ -2078,7 +2078,7 @@ static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
>>          raw_spin_lock(rq_lockp(rq1));
>>          __acquire(rq2->lock); /* Fake it out ;) */
>>      } else {
>> -        if (rq1 < rq2) {
>> +        if (rq_lockp(rq1) < rq_lockp(rq2)) {
>>              raw_spin_lock(rq_lockp(rq1));
>>              raw_spin_lock_nested(rq_lockp(rq2), SINGLE_DEPTH_NESTING);
>>          } else {
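
On the quoted hunk: ordering on rq_lockp() instead of on the raw rq pointers
looks right to me. With core scheduling, two different rq's can share the same
underlying lock, so comparing the rq pointers can let two CPUs take the same
pair of locks in opposite orders, which presumably is the deadlock Julien hit.
Roughly the intent (just a sketch, not the exact code in the series;
double_rq_lock_sketch is a made-up name):

        /*
         * Sketch: order the nested locking on the lock addresses the rqs
         * actually use, so all CPUs agree on the order even when rq::lock
         * is shared between siblings.
         */
        static inline void double_rq_lock_sketch(struct rq *rq1, struct rq *rq2)
        {
                if (rq_lockp(rq1) == rq_lockp(rq2)) {
                        /* Same underlying lock (e.g. SMT siblings): take it once. */
                        raw_spin_lock(rq_lockp(rq1));
                } else if (rq_lockp(rq1) < rq_lockp(rq2)) {
                        raw_spin_lock(rq_lockp(rq1));
                        raw_spin_lock_nested(rq_lockp(rq2), SINGLE_DEPTH_NESTING);
                } else {
                        raw_spin_lock(rq_lockp(rq2));
                        raw_spin_lock_nested(rq_lockp(rq1), SINGLE_DEPTH_NESTING);
                }
        }
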
Pawan was seeing occasional crashes and lockups that are avoided by the change below.
We're adding some more tracing to dig into why pick_next_entity() is returning NULL
in the first place.

Tim
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5349ebedc645..4c7f353b8900 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7031,6 +7031,8 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
                 }
 
                 se = pick_next_entity(cfs_rq, curr);
+                if (!se)
+                        return NULL;
                 cfs_rq = group_cfs_rq(se);
         } while (cfs_rq);
 
@@ -7070,6 +7072,8 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
         do {
                 se = pick_next_entity(cfs_rq, NULL);
+                if (!se)
+                        return NULL;
                 set_next_entity(cfs_rq, se);
                 cfs_rq = group_cfs_rq(se);
         } while (cfs_rq);
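
For reference, with the guard in place the simple-path loop reads roughly as
below. Without it, a NULL se coming back from pick_next_entity() gets
dereferenced straight away in set_next_entity()/group_cfs_rq(), which would fit
the crashes Pawan is seeing; the guard just bails out of the pick instead.
(Sketch only, against the fair.c in this series.)

        /* Simple path of pick_next_task_fair() with the NULL check applied. */
        do {
                se = pick_next_entity(cfs_rq, NULL);
                if (!se)
                        /* Nothing pickable in this cfs_rq; bail out. */
                        return NULL;
                set_next_entity(cfs_rq, se);    /* dereferences se, hence the check above */
                cfs_rq = group_cfs_rq(se);      /* likewise (se->my_q) */
        } while (cfs_rq);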