Message-ID: <20190823101623.GV7777@dread.disaster.area>
Date:   Fri, 23 Aug 2019 20:16:23 +1000
From:   Dave Chinner <david@...morbit.com>
To:     Joseph Qi <joseph.qi@...ux.alibaba.com>
Cc:     "Theodore Y. Ts'o" <tytso@....edu>, Jan Kara <jack@...e.cz>,
        Joseph Qi <jiangqi903@...il.com>,
        Andreas Dilger <adilger@...ger.ca>,
        Ext4 Developers List <linux-ext4@...r.kernel.org>,
        Xiaoguang Wang <xiaoguang.wang@...ux.alibaba.com>,
        Liu Bo <bo.liu@...ux.alibaba.com>
Subject: Re: [RFC] performance regression with "ext4: Allow parallel DIO
 reads"

On Fri, Aug 23, 2019 at 03:57:02PM +0800, Joseph Qi wrote:
> Hi Dave,
> 
> On 19/8/22 13:40, Dave Chinner wrote:
> > On Wed, Aug 21, 2019 at 09:04:57AM +0800, Joseph Qi wrote:
> >> Hi Ted,
> >>
> >> On 19/8/21 00:08, Theodore Y. Ts'o wrote:
> >>> On Tue, Aug 20, 2019 at 11:00:39AM +0800, Joseph Qi wrote:
> >>>>
> >>>> I've tested parallel dio reads with dioread_nolock; it doesn't show a
> >>>> significant performance improvement and is still poor compared with
> >>>> reverting parallel dio reads. IMO, this is because with parallel dio
> >>>> reads, it takes the inode shared lock at the very beginning in
> >>>> ext4_direct_IO_read().
> >>>
> >>> Why is that a problem?  It's a shared lock, so parallel threads should
> >>> be able to issue reads without getting serialized?
> >>>
> >> The above just reports the result: even when mounting with dioread_nolock,
> >> parallel dio reads still performs worse than before (w/o parallel
> >> dio reads).
> >>
> >>> Are you using sufficiently fast storage devices that you're worried
> >>> about cache line bouncing of the shared lock?  Or do you have some
> >>> other concern, such as some other thread taking an exclusive lock?
> >>>
> >> The test case is the random read/write workload described in my first
> >> mail. And
> > 
> > Regardless of dioread_nolock, ext4_direct_IO_read() is taking
> > inode_lock_shared() across the direct IO call.  And writes in ext4
> > _always_ take the inode_lock() in ext4_file_write_iter(), even
> > though it gets dropped quite early when overwrite && dioread_nolock
> > is set.  But just taking the lock exclusively in the write path for a short
> > while is enough to kill all shared locking concurrency...
> > 
> >> from my preliminary investigation, the shared lock accounts for more of
> >> the overhead in such a scenario.
> > 
> > If the write lock is also shared, then there should not be a
> > scalability issue. The shared dio locking is only half-done in ext4,
> > so perhaps comparing your workload against XFS would be an
> > informative exercise... 
> 
> I've run the same test workload on XFS; it behaves the same as ext4
> after reverting parallel dio reads and mounting with dioread_lock.

Ok, so the problem is not shared locking scalability ('cause that's
what XFS does and it scaled fine); the problem is almost certainly
that ext4 is using exclusive locking during writes...
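
Roughly, the asymmetry looks like this (a simplified sketch of the
pattern, not the actual ext4 source; do_dio() is just a placeholder
for the real DIO submission path):

#include <linux/fs.h>
#include <linux/uio.h>

/* Stand-in for the real direct IO submission. */
static ssize_t do_dio(struct kiocb *iocb, struct iov_iter *iter);

/* Read side: shared lock held across the whole direct IO call, so
 * concurrent readers can overlap with each other. */
static ssize_t ext4_direct_IO_read(struct kiocb *iocb, struct iov_iter *iter)
{
        struct inode *inode = file_inode(iocb->ki_filp);
        ssize_t ret;

        inode_lock_shared(inode);
        ret = do_dio(iocb, iter);
        inode_unlock_shared(inode);
        return ret;
}

/* Write side: exclusive lock taken up front.  Even when it is dropped
 * early for the overwrite && dioread_nolock case, the brief exclusive
 * hold serializes against every shared reader in flight. */
static ssize_t ext4_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
        struct inode *inode = file_inode(iocb->ki_filp);
        ssize_t ret;

        inode_lock(inode);
        /* ... checks; may drop the lock early if overwrite && dioread_nolock ... */
        ret = do_dio(iocb, from);
        inode_unlock(inode);
        return ret;
}

The shared lock on the read side is cheap; it's the exclusive hold in
the write path that forces all the readers to queue up behind it.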

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
