Message-ID: <20100408211054.GB1849@thunk.org>
Date: Thu, 8 Apr 2010 17:10:54 -0400
From: tytso@....edu
To: john stultz <johnstul@...ibm.com>
Cc: linux-ext4@...r.kernel.org, Mingming Cao <cmm@...ibm.com>,
keith maanthey <kmannth@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Darren Hart <dvhltc@...ibm.com>
Subject: Re: ext4 dbench performance with CONFIG_PREEMPT_RT

On Thu, Apr 08, 2010 at 01:41:57PM -0700, john stultz wrote:
>
> I'll continue to play with your patch and see if I can con some
> folks with more interesting storage setups to do some testing as well.

You might want to ask djwong to play with it on his nice big
machine. (We don't need a big file system, but we do want as many
CPUs as possible, and to use his "mailserver" workload to really
stress the journal. I'd recommend using barrier=0 for additional
journal lock-level stress testing, then trying some forced sysrq-b
reboots and making sure that the filesystem is consistent after the
journal replay.)
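
Roughly, the test cycle I have in mind looks something like the
following (the device name, mount point, and dbench invocation are
just placeholders; substitute the real storage and the "mailserver"
workload):

  # mount with barriers disabled to put extra pressure on the
  # journal locking
  mount -o barrier=0 /dev/sdX /mnt/test

  # run the workload for a while (dbench is only a stand-in here)
  dbench -D /mnt/test 64 &

  # force an immediate reboot without a clean unmount
  echo 1 > /proc/sys/kernel/sysrq
  echo b > /proc/sysrq-trigger

  # after the machine comes back up, mount once so the journal is
  # replayed, then unmount and run a full consistency check
  mount /dev/sdX /mnt/test && umount /mnt/test
  e2fsck -f /dev/sdX
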
I've since done basic two-CPU testing using xfstests under KVM, but
that's really not going to test any locking issues.

> Any thoughts for ways to rework the state_lock in start_this_handle?
> (Now that it's at the top of the contention logs? :)

That's going to be much harder. We're going to have to take
j_state_lock at some point inside start_this_handle. We might be able
to decrease the amount of code which is run while the spinlock is
taken, but I very much doubt it's possible to eliminate that spinlock
entirely.

Do you have detailed lockstat information showing the hold-time and
wait-time of j_state_lock (especially in start_this_handle)?
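
(Assuming a kernel built with CONFIG_LOCK_STAT, something along these
lines should pull out the per-lock holdtime-* and waittime-* columns;
the grep pattern is just an example:)

  # clear any previously accumulated statistics
  echo 0 > /proc/lock_stat
  # turn on lock statistics collection
  echo 1 > /proc/sys/kernel/lock_stat
  # ... run the dbench workload ...
  # dump the j_state_lock entries from the lockstat report
  grep -A 5 "j_state_lock" /proc/lock_stat
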
- Ted