Message-ID: <20170523011123.GH28017@X58A-UD3R>
Date: Tue, 23 May 2017 10:11:23 +0900
From: Byungchul Park <byungchul.park@....com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Juri Lelli <juri.lelli@....com>, peterz@...radead.org,
mingo@...nel.org, linux-kernel@...r.kernel.org,
juri.lelli@...il.com, bristot@...hat.com, kernel-team@....com
Subject: Re: [PATCH v4 1/5] sched/deadline: Refer to cpudl.elements atomically
On Tue, May 16, 2017 at 09:10:53AM -0400, Steven Rostedt wrote:
> On Tue, 16 May 2017 11:32:41 +0100
> Juri Lelli <juri.lelli@....com> wrote:
>
>
> > Not sure, but if we are going to retry a lot it might be better off to
> > put proper locking instead? We could also simply bail out when we notice
>
> Actually, locking can make it much worse. I've been playing with RT on
> boxes with 240 cores (with HT turned off!), and *any* locks in the
> scheduler can cause huge contention.
OK. I will drop the patches that add locks here. Thank you for explaining
why you did not write the code the way I did.
> > that something is changed under our feet. I'd say (again :) we might
> > first want to understand (with some numbers) how bad the problem is and
> > then fix it. I guess numbers might also help us to better understand
> > what the best fix is?
>
> Exactly. I haven't seen any numbers. Yes, it is not perfect, but we
> don't know how imperfect it is. Numbers will help to know if there is
> even a problem or not with the current solution.
Yes. I am also curious about the numbers and would like to measure them,
but that is not easy for me since I don't have access to such a big
machine. For now, the only thing I can do is drop the patch.
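
For what it's worth, the bail-out direction Juri mentioned, that is,
noticing that cpudl.elements changed under our feet and giving up rather
than taking cp->lock, would look roughly like the sketch below. Please
read it only as an illustration: the seqcount is hypothetical (struct
cpudl has no such field today; writers would have to bump it under
cp->lock in the cpudl update paths), and the helper name and -EAGAIN
convention are made up.

#include <linux/compiler.h>	/* READ_ONCE() */
#include <linux/seqlock.h>
#include <linux/errno.h>
#include "cpudeadline.h"	/* struct cpudl, struct cpudl_item */

/*
 * Hypothetical: in a real patch this would live in struct cpudl and be
 * wrapped with write_seqcount_begin()/write_seqcount_end() under
 * cp->lock wherever the heap is modified.
 */
static seqcount_t cpudl_seq = SEQCNT_ZERO(cpudl_seq);

static int cpudl_peek_root(struct cpudl *cp, u64 *dl_out)
{
	unsigned int seq;
	int cpu;

	seq = read_seqcount_begin(&cpudl_seq);
	cpu = READ_ONCE(cp->elements[0].cpu);
	*dl_out = READ_ONCE(cp->elements[0].dl);

	/*
	 * A writer touched the heap while we were reading, so the
	 * (cpu, dl) pair may be torn.  Bail out with -EAGAIN and let
	 * the caller fall back, rather than spinning on cp->lock or
	 * looping until the reads are stable.
	 */
	if (read_seqcount_retry(&cpudl_seq, seq))
		return -EAGAIN;

	return cpu;
}

Whether an occasional -EAGAIN fallback would be acceptable in
cpudl_find() is exactly the kind of thing the numbers would have to tell
us, so I agree that measuring comes first.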
Thank you very much for all of your feedback.