Message-ID: <1280803949.3966.86.camel@localhost.localdomain>
Date:	Mon, 02 Aug 2010 19:52:29 -0700
From:	john stultz <johnstul@...ibm.com>
To:	"Ted Ts'o" <tytso@....edu>
Cc:	Ext4 Developers List <linux-ext4@...r.kernel.org>,
	Keith Mannthey <kmannth@...ibm.com>,
	Eric Whitney <eric.whitney@...com>
Subject: Re: [PATCH] jbd2: Use atomic variables to avoid taking
 t_handle_lock in jbd2_journal_stop

On Mon, 2010-08-02 at 17:53 -0700, john stultz wrote:
> On Mon, 2010-08-02 at 20:06 -0400, Ted Ts'o wrote:
> > On Mon, Aug 02, 2010 at 04:02:32PM -0700, john stultz wrote:
> > > From these numbers, it looks like the atomic variables are a minor
> > > improvement for -rt, but the improvement isn't as drastic as the earlier
> > > j_state lock change, or the vfs scalability patchset.
> > 
> > Thanks for doing this quick test run!  I was expecting to see a more
> > dramatic difference, since the j_state_lock patch removed one of the
> > two global locks in jbd2_journal_stop, and the t_handle_lock patch
> > removed the second of the two global locks.  But I guess the
> > j_state_lock contention in start_this_handle() is still the dominating factor.
> > 
> > It's interesting that apparently the latest t_handle_lock patch
> > doesn't seem to make much difference unless the VFS scalability patch
> > is also applied.  I'm not sure why that makes a difference, but it's
> > nice to know that with the VFS scalability patch it does seem to help,
> > even if it doesn't help as much as I had hoped.
> 
> Well, it's likely that with the -rt kernel and without the
> vfs-scalability changes, we're just burning way more time on vfs lock
> contention than we are on anything in the ext4 code. Just a theory, but
> I can try to verify with perf logs if you'd like.

I went ahead and generated perf data for the PREEMPT_RT cases that you
can find here:
http://sr71.net/~jstultz/dbench-scalability/perflogs/2.6.33-rt-ext4-atomic/
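
In case it's useful, the profiles were gathered with something along
these lines (going from memory, so the exact flags may have differed
slightly):

	# system-wide profile with call chains while dbench runs
	perf record -a -g -o perf.data -- dbench 8
	# then the text report, which is what the logs above contain
	perf report -i perf.data > report.txt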

As a reminder, the 2.6.33.5-rt23 kernel includes both the vfs-scalability
patches and the j_state_lock change. The 2.6.33.6-rt26 kernel does not
include those changes.

From those logs you can see that the atomic change on top of the vfs
patch (i.e., comparing the two 2.6.33.5-rt23 logs) pulls start_this_handle
down quite a bit.
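
(For anyone following along without the patch in front of them, the
change being measured is roughly the following shape. This is a
simplified sketch, not the literal diff; the real patch also touches
start_this_handle() and the field definitions:

	/* before: jbd2_journal_stop() serialized handle teardown
	 * with t_handle_lock */
	spin_lock(&transaction->t_handle_lock);
	transaction->t_outstanding_credits -= handle->h_buffer_credits;
	transaction->t_updates--;
	if (!transaction->t_updates)
		wake_up(&journal->j_wait_updates);
	spin_unlock(&transaction->t_handle_lock);

	/* after: t_updates and t_outstanding_credits become atomic_t,
	 * so the common exit path never takes the lock */
	atomic_sub(handle->h_buffer_credits,
		   &transaction->t_outstanding_credits);
	if (atomic_dec_and_test(&transaction->t_updates))
		wake_up(&journal->j_wait_updates);

The win is just turning a lock/unlock round trip into a couple of
atomic ops on the exit path.)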

With the kernels that don't have the vfs-scalability patches, we see
that the j_state_lock and atomic changes pull start_this_handle out of
the top contention spots, but there is still quite a large amount of
contention on the dput paths.

So yea, the change does help, but it's just not the top cause of
contention when we aren't using the vfs patches, so we don't see as much
benefit at this point.

thanks
-john

