[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAOGi=dOkVK3p9CEJ=7MTb_gkWDpxXEQSNnGEpoLxSzQCtaJDpQ@mail.gmail.com>
Date: Sat, 9 Jan 2016 06:56:19 +0800
From: Ling Ma <ling.ma.program@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Waiman Long <waiman.long@....com>, mingo@...hat.com,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
Ling <ling.ml@...baba-inc.com>
Subject: Re: [RFC PATCH] alispinlock: acceleration from lock integration on
multi-core platform
> So I have a whole bunch of problems with this thing.. For one I object
> to this being called a lock. Its much more like an async work queue like
> thing.
Ok, I will fix it.
> It suffers the typical problems all those constructs do; namely it
> wrecks accountability.
Ok, I will fix it.
> But here that is compounded by the fact that you inject other people's
> work into 'your' lock region, thereby bloating lock hold times. Worse,
> afaict (from a quick reading) there really isn't a bound on the amount
> of work you inject.
>
> This will completely wreck scheduling latency. At the very least the
> callback loop should have a need_resched() test on, but even that will
> not work if this has IRQs disabled.
>
>
> And while its a cute collapse of an MCS lock and lockless list style
> work queue (MCS after all is a lockless list), saving a few cycles from
> the naive spinlock+llist implementation of the same thing, I really
> do not see enough justification for any of this.
we can fix it if we really don't need it.
Thanks
Ling
Powered by blists - more mailing lists