Message-ID: <EA929A9653AAE14F841771FB1DE5A1365FE4FCD4BC@rrsmsx501.amr.corp.intel.com>
Date: Fri, 9 Apr 2010 17:48:23 -0600
From: "Chen, Tim C" <tim.c.chen@...el.com>
To: "tytso@....edu" <tytso@....edu>, Andi Kleen <andi@...stfloor.org>
CC: john stultz <johnstul@...ibm.com>,
"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
Mingming Cao <cmm@...ibm.com>,
keith maanthey <kmannth@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Darren Hart <dvhltc@...ibm.com>
Subject: RE: ext4 dbench performance with CONFIG_PREEMPT_RT
tytso@....edu wrote:
>
>Yeah, I'm very much aware of that. What worries me is that locking
>problems in the jbd2 layer could be very hard to debug, so we need to
>make sure we have some really good testing as we make any changes.
>
>Not taking the j_state_lock spinlock in jbd2_journal_stop() was
>relatively easy to prove to be safe, but I'm really worried about
>start_this_handle(); the locking around that is going to be subtle,
>and it's not just the specific fields in the transaction and journal
>handle.
>
>And even with the jbd2_journal_stop() change, I'd really prefer some
>pretty exhaustive testing, including power-fail testing, just to make
>sure we're safe in practice when/if we make more subtle or more
>invasive changes to the jbd2 layer...
>
>So I'm not waving the red flag, but the yellow flag (as they would say
>in auto racing circles).
>
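For anyone following along, the change being discussed is roughly the
following pattern, as I understand it (a simplified sketch with made-up
type and function names, not the actual patch): make the transaction's
update count an atomic, so the handle-stop path can drop its reference
without taking j_state_lock.

#include <linux/atomic.h>
#include <linux/wait.h>

/*
 * Illustrative sketch only -- "txn" and "txn_stop_handle" are
 * hypothetical names, not the real jbd2 structures.
 */
struct txn {
	atomic_t		t_updates;	/* live handles in this transaction */
	wait_queue_head_t	t_wait;		/* commit thread waits here */
};

static void txn_stop_handle(struct txn *t)
{
	/* Lockless decrement; no j_state_lock needed on this path. */
	if (atomic_dec_and_test(&t->t_updates))
		wake_up(&t->t_wait);	/* last updater: commit may proceed */
}
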
Your patch did remove the contention on j_state_lock for dbench
in my testing with 64 threads. The contention point now moves to
the dcache_lock, which is another tricky bottleneck.
In our other testing with FFSB, which creates/renames/removes a lot of
directories, we found that journal->j_revoke_lock was also heavily contended.
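If j_revoke_lock keeps showing up, one direction that might be worth
prototyping (purely illustrative -- the names below are hypothetical,
not the current jbd2 structures) would be per-bucket locks on the
revoke hash table, so unrelated revoke records stop serializing on a
single spinlock:

#include <linux/list.h>
#include <linux/spinlock.h>

#define REVOKE_HASH_SIZE 256

struct revoke_rec {
	struct hlist_node	node;
	unsigned long long	blocknr;	/* revoked block number */
};

struct revoke_table {
	struct hlist_head	bucket[REVOKE_HASH_SIZE];
	spinlock_t		lock[REVOKE_HASH_SIZE];	/* one lock per bucket */
};

static void revoke_insert(struct revoke_table *rt, struct revoke_rec *rec,
			  unsigned int hash)
{
	unsigned int b = hash % REVOKE_HASH_SIZE;

	/* Only this bucket is serialized; other buckets stay uncontended. */
	spin_lock(&rt->lock[b]);
	hlist_add_head(&rec->node, &rt->bucket[b]);
	spin_unlock(&rt->lock[b]);
}

Whether the extra lock footprint pays off would of course need the same
kind of careful testing you describe above.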
Tim