Date:   Tue, 20 Aug 2019 23:34:43 -0400
From:   "Theodore Y. Ts'o" <tytso@....edu>
To:     Joseph Qi <joseph.qi@...ux.alibaba.com>
Cc:     Jan Kara <jack@...e.cz>, Joseph Qi <jiangqi903@...il.com>,
        Dave Chinner <david@...morbit.com>,
        Andreas Dilger <adilger@...ger.ca>,
        Ext4 Developers List <linux-ext4@...r.kernel.org>,
        Xiaoguang Wang <xiaoguang.wang@...ux.alibaba.com>,
        Liu Bo <bo.liu@...ux.alibaba.com>
Subject: Re: [RFC] performance regression with "ext4: Allow parallel DIO
 reads"

On Wed, Aug 21, 2019 at 09:04:57AM +0800, Joseph Qi wrote:
> On 19/8/21 00:08, Theodore Y. Ts'o wrote:
> > On Tue, Aug 20, 2019 at 11:00:39AM +0800, Joseph Qi wrote:
> >>
> >> I've tested parallel dio reads with dioread_nolock; it doesn't show
> >> a significant performance improvement and is still poor compared
> >> with reverting parallel dio reads. IMO, this is because with
> >> parallel dio reads, it takes the inode shared lock at the very
> >> beginning in ext4_direct_IO_read().
> > 
> > Why is that a problem?  It's a shared lock, so parallel threads should
> > be able to issue reads without getting serialized?
> > 
> The above just reports the result: even when mounting with
> dioread_nolock, parallel dio reads still perform worse than before
> (w/o parallel dio reads).
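
For context, the path Joseph refers to does take the lock up front.  A
rough sketch, simplified from the post-"parallel DIO reads"
fs/ext4/inode.c (exact arguments and helpers vary by kernel version):

static ssize_t ext4_direct_IO_read(struct kiocb *iocb, struct iov_iter *iter)
{
	struct address_space *mapping = iocb->ki_filp->f_mapping;
	struct inode *inode = mapping->host;
	size_t count = iov_iter_count(iter);
	ssize_t ret;

	/*
	 * Shared lock: concurrent readers may all enter, but they still
	 * bounce the same rw_semaphore cacheline, and any exclusive
	 * (write) holder serializes them.
	 */
	inode_lock_shared(inode);
	ret = filemap_write_and_wait_range(mapping, iocb->ki_pos,
					   iocb->ki_pos + count - 1);
	if (ret)
		goto out_unlock;
	ret = __blockdev_direct_IO(iocb, inode, inode->i_sb->s_bdev,
				   iter, ext4_dio_get_block, NULL, NULL, 0);
out_unlock:
	inode_unlock_shared(inode);
	return ret;
}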

Right, but you were asserting that the performance hit was *because* of
the shared lock.  I'm asking what led you to that opinion.  The fact
that parallel dio reads are slower doesn't necessarily mean that it was
because of that particular shared lock.  It could be due to any number
of other things.  Have you looked at /proc/lock_stat (enabled via
CONFIG_LOCK_STAT) to see where the locking bottlenecks might be?
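
For reference, a run looks roughly like this (interface as described in
Documentation/locking/lockstat.rst; the inode rwsem shows up under its
lockdep class name, e.g. &sb->s_type->i_mutex_key, so the grep pattern
below is just illustrative):

echo 1 > /proc/sys/kernel/lock_stat       # start collecting
echo 0 > /proc/lock_stat                  # clear old statistics
<run the parallel dio read workload>
grep -A 5 "i_mutex_key" /proc/lock_stat   # contention on the inode rwsem
echo 0 > /proc/sys/kernel/lock_stat       # stop collecting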

						- Ted
