Message-ID: <20170516091053.5f0868b5@gandalf.local.home>
Date: Tue, 16 May 2017 09:10:53 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Juri Lelli <juri.lelli@....com>
Cc: Byungchul Park <byungchul.park@....com>, peterz@...radead.org,
mingo@...nel.org, linux-kernel@...r.kernel.org,
juri.lelli@...il.com, bristot@...hat.com, kernel-team@....com
Subject: Re: [PATCH v4 1/5] sched/deadline: Refer to cpudl.elements atomically

On Tue, 16 May 2017 11:32:41 +0100
Juri Lelli <juri.lelli@....com> wrote:

> Not sure, but if we are going to retry a lot it might be better off to
> put proper locking instead? We could also simply bail out when we notice

Actually, locking can make it much worse. I've been playing with RT on
boxes with 240 cores (with HT turned off!), and *any* locks in the
scheduler can cause huge contention.
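
To make the alternative concrete, here is a rough, untested userspace
sketch of a seqcount-style "read, then retry if a writer raced with us"
scheme, the kind of lockless approach being weighed against a lock here.
All names are made up for illustration; this is not the cpudl code from
the patch:

#include <stdatomic.h>
#include <stdio.h>

/* The (cpu, deadline) pair that must be read consistently. */
static _Atomic int elem_cpu;
static _Atomic long long elem_dl;

/* Even: no writer active. Odd: a write is in progress. */
static atomic_uint seq;

/* Writer: bracket the update with two increments of seq. */
static void write_pair(int cpu, long long dl)
{
        unsigned int s = atomic_load_explicit(&seq, memory_order_relaxed);

        atomic_store_explicit(&seq, s + 1, memory_order_relaxed); /* odd */
        atomic_thread_fence(memory_order_release);

        atomic_store_explicit(&elem_cpu, cpu, memory_order_relaxed);
        atomic_store_explicit(&elem_dl, dl, memory_order_relaxed);

        atomic_store_explicit(&seq, s + 2, memory_order_release); /* even */
}

/* Reader: takes no lock; retries if seq moved while it was reading. */
static void read_pair(int *cpu, long long *dl)
{
        unsigned int s0, s1;

        do {
                s0 = atomic_load_explicit(&seq, memory_order_acquire);
                *cpu = atomic_load_explicit(&elem_cpu, memory_order_relaxed);
                *dl = atomic_load_explicit(&elem_dl, memory_order_relaxed);
                atomic_thread_fence(memory_order_acquire);
                s1 = atomic_load_explicit(&seq, memory_order_relaxed);
        } while ((s0 & 1) || s0 != s1); /* writer active or raced: retry */
}

int main(void)
{
        int cpu;
        long long dl;

        write_pair(3, 1000000);
        read_pair(&cpu, &dl);
        printf("cpu=%d dl=%lld\n", cpu, dl);
        return 0;
}

The write side above assumes a single writer; with multiple writers you
would still serialize them (the kernel's seqlock does exactly that with
a spinlock), but readers never block, they only retry.
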
> that something is changed under our feet. I'd say (again :) we might
> first want to understand (with some numbers) how bad the problem is and
> then fix it. I guess numbers might also help us to better understand
> what the best fix is?

Exactly. I haven't seen any numbers. Yes, it is not perfect, but we
don't know how imperfect it is. Numbers will help us know whether there
is even a problem with the current solution.
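
As for getting those numbers: a trivial extension of the sketch above
(again hypothetical, reusing elem_cpu/elem_dl/seq from it) would count
how often a lookup actually has to retry:

static unsigned long read_pair_counted(int *cpu, long long *dl)
{
        unsigned int s0, s1;
        unsigned long retries = 0;

        for (;;) {
                s0 = atomic_load_explicit(&seq, memory_order_acquire);
                *cpu = atomic_load_explicit(&elem_cpu, memory_order_relaxed);
                *dl = atomic_load_explicit(&elem_dl, memory_order_relaxed);
                atomic_thread_fence(memory_order_acquire);
                s1 = atomic_load_explicit(&seq, memory_order_relaxed);
                if (!(s0 & 1) && s0 == s1)
                        return retries;
                retries++;
        }
}

Summing that across CPUs under a representative load would show whether
retries happen at all in practice, and how often.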

-- 
Steve