Message-ID: <20150108103708.GE29390@twins.programming.kicks-ass.net>
Date: Thu, 8 Jan 2015 11:37:08 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Huang Ying <ying.huang@...ux.intel.com>,
LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: Re: [LKP] [mm] c8c06efa8b5: -7.6% unixbench.score
On Thu, Jan 08, 2015 at 12:59:59AM -0800, Davidlohr Bueso wrote:
> > > > 721721 ± 1% +303.6% 2913110 ± 3% unixbench.time.voluntary_context_switches
> > > > 11767 ± 0% -7.6% 10867 ± 1% unixbench.score
> heh I was actually looking at the reader code. We really do:
>
> 	/* wait until we successfully acquire the lock */
> 	set_current_state(TASK_UNINTERRUPTIBLE);
> 	while (true) {
> 		if (rwsem_try_write_lock(count, sem))
> 			break;
> 		raw_spin_unlock_irq(&sem->wait_lock);
>
> 		/* Block until there are no active lockers. */
> 		do {
> 			schedule();
> 			set_current_state(TASK_UNINTERRUPTIBLE);
> 		} while ((count = sem->count) & RWSEM_ACTIVE_MASK);
>
> 		raw_spin_lock_irq(&sem->wait_lock);
> 	}
>
>
> Which still has similar issues even with two barriers, I guess for both
> the rwsem_try_write_lock call (less severe) and the count checks. Anyway...
So it's actually scheduling a lot more; this could also mean the
optimistic spinning isn't working as well (I've no real idea what the
workload is).
One thing I noticed is that we set sem->owner very late in comparison
with the mutex code, this could cause us to break out of the spin loop
prematurely.