Message-ID: <20120120022649.GA12463@gmail.com>
Date:	Fri, 20 Jan 2012 10:26:49 +0800
From:	Zheng Liu <gnehzuil.liu@...il.com>
To:	Frank Mayhar <fmayhar@...gle.com>
Cc:	Allison Henderson <achender@...ux.vnet.ibm.com>,
	Dave Chinner <david@...morbit.com>,
	Lukas Czerner <lczerner@...hat.com>,
	Ext4 Developers List <linux-ext4@...r.kernel.org>,
	Tao Ma <tm@....ma>, xfs@....sgi.com
Subject: Re: working on extent locks for i_mutex

On Thu, Jan 19, 2012 at 01:16:10PM -0800, Frank Mayhar wrote:
> On Wed, 2012-01-18 at 20:02 +0800, Zheng Liu wrote:
> > Do you have a schedule for this project, and would you be willing to share
> > it? This lock contention heavily impacts direct IO performance in our
> > production environment, so we hope to improve it as soon as possible.
> > 
> > I have run some direct IO benchmarks with fio to compare ext4 and xfs on an
> > Intel SSD. The results show that, for direct IO, xfs outperforms both plain
> > ext4 and ext4 with dioread_nolock.
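
(In case it is useful, the fio jobs were along the following lines; the
parameters here are illustrative rather than my exact settings:)

# Illustrative direct IO fio job, not my exact settings.
[global]
ioengine=libaio
direct=1
size=1g
runtime=60
time_based
directory=/mnt/test

[dio-randwrite]
rw=randwrite
bs=4k
iodepth=32
numjobs=4
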
> > 
> > To understand the effect of the lock contention, I defined a new
> > ext4_file_aio_write() that calls __generic_file_aio_write() without acquiring
> > i_mutex. I also removed the DIO_LOCKING flag from the __blockdev_direct_IO()
> > call and ran the same benchmarks. With these changes, ext4's performance is
> > almost identical to xfs's, which confirms that i_mutex heavily impacts the
> > performance. Hopefully these results are useful for you. :-)
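
(To be concrete, the hack I tested was roughly the following. This is a
minimal sketch for benchmarking only: the function name below is my own,
the error handling is abbreviated, and it is of course not safe for real
use, since concurrent writers can now race on the same range:)

#include <linux/fs.h>
#include <linux/aio.h>

/*
 * Like generic_file_aio_write(), but without taking i_mutex.
 * Benchmarking hack only.
 */
static ssize_t
ext4_file_aio_write_nolock(struct kiocb *iocb, const struct iovec *iov,
			   unsigned long nr_segs, loff_t pos)
{
	struct file *file = iocb->ki_filp;
	ssize_t ret;

	BUG_ON(iocb->ki_pos != pos);

	/* generic_file_aio_write() would wrap this call in i_mutex. */
	ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos);

	if (ret > 0) {
		ssize_t err = generic_write_sync(file, pos, ret);
		if (err < 0)
			ret = err;
	}
	return ret;
}

/*
 * In addition, DIO_LOCKING is dropped from ext4's __blockdev_direct_IO()
 * call, so the generic direct IO code does not take i_mutex either.
 */
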
> 
> For the record, I have a patchset that, while not affecting i_mutex (or
> locking in general), does allow AIO append writes to actually be done
> asynchronously.  (Currently they're forced to be done synchronously.)
> It makes a big difference in performance for that particular case, even
> for spinning media.  Performance roughly doubled when testing with fio
> against a regular two-terabyte drive; the performance improvement
> against SSD would have to be much greater.
> 
> One day soon I'll accumulate enough spare time to port the patchset
> forward to the latest kernel and submit it here.
Interesting. I think it might help us with this issue as well. Could you
please post your test case and results in detail? Thank you. :-)
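
To make sure we end up measuring the same case, I assume you mean O_DIRECT
appends submitted through io_submit(), along the lines of this minimal
libaio test (the file name, block size, and alignment here are arbitrary):

#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* One O_DIRECT append write via kernel AIO; link with -laio. */
int main(void)
{
	io_context_t ctx = 0;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	struct stat st;
	void *buf;
	int fd;

	fd = open("/mnt/ext4/testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0 || fstat(fd, &st) < 0)
		return 1;

	/* O_DIRECT needs an aligned buffer. */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 'a', 4096);

	if (io_setup(1, &ctx))
		return 1;

	/* The write starts at the current end of file, i.e. an append,
	 * which is the case that is currently forced synchronous. */
	io_prep_pwrite(&cb, fd, buf, 4096, st.st_size);

	if (io_submit(ctx, 1, cbs) != 1)
		return 1;
	io_getevents(ctx, 1, 1, &ev, NULL);

	io_destroy(ctx);
	close(fd);
	free(buf);
	return 0;
}

If your test is different (buffered appends, higher queue depths, multiple
jobs, etc.), the exact fio job file would be very helpful.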

Regards,
Zheng
