Date: Thu, 18 Jun 2020 14:46:03 +0200
From: Andreas Gruenbacher <agruenba@...hat.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Andreas Grünbacher <andreas.gruenbacher@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-xfs <linux-xfs@...r.kernel.org>,
	Junxiao Bi <junxiao.bi@...cle.com>,
	William Kucharski <william.kucharski@...cle.com>,
	Joseph Qi <joseph.qi@...ux.alibaba.com>,
	John Hubbard <jhubbard@...dia.com>,
	LKML <linux-kernel@...r.kernel.org>,
	linux-f2fs-devel@...ts.sourceforge.net,
	cluster-devel <cluster-devel@...hat.com>,
	Linux-MM <linux-mm@...ck.org>,
	ocfs2-devel@....oracle.com,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	linux-ext4 <linux-ext4@...r.kernel.org>,
	linux-erofs@...ts.ozlabs.org,
	Christoph Hellwig <hch@....de>,
	linux-btrfs@...r.kernel.org,
	Steven Whitehouse <swhiteho@...hat.com>,
	Bob Peterson <rpeterso@...hat.com>
Subject: Re: [Cluster-devel] [PATCH v11 16/25] fs: Convert mpage_readpages to mpage_readahead

On Wed, Jun 17, 2020 at 4:22 AM Matthew Wilcox <willy@...radead.org> wrote:
> On Wed, Jun 17, 2020 at 02:57:14AM +0200, Andreas Grünbacher wrote:
> > On Wed, Jun 17, 2020 at 02:33 Matthew Wilcox <willy@...radead.org> wrote:
> > > On Wed, Jun 17, 2020 at 12:36:13AM +0200, Andreas Gruenbacher wrote:
> > > > On Wed, Apr 15, 2020 at 23:39 Matthew Wilcox <willy@...radead.org> wrote:
> > > > > From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
> > > > >
> > > > > Implement the new readahead aop and convert all callers (block_dev,
> > > > > exfat, ext2, fat, gfs2, hpfs, isofs, jfs, nilfs2, ocfs2, omfs, qnx6,
> > > > > reiserfs & udf). The callers are all trivial except for GFS2 & OCFS2.
> > > >
> > > > This patch leads to an ABBA deadlock in xfstest generic/095 on gfs2.
> > > >
> > > > Our lock hierarchy is such that the inode cluster lock ("inode glock")
> > > > for an inode needs to be taken before any page locks in that inode's
> > > > address space.
> > >
> > > How does that work for ...
> > >
> > > writepage: yes, unlocks (see below)
> > > readpage: yes, unlocks
> > > invalidatepage: yes
> > > releasepage: yes
> > > freepage: yes
> > > isolate_page: yes
> > > migratepage: yes (both)
> > > putback_page: yes
> > > launder_page: yes
> > > is_partially_uptodate: yes
> > > error_remove_page: yes
> > >
> > > Is there a reason that you don't take the glock in the higher level
> > > ops which are called before readahead gets called? I'm looking at XFS,
> > > and it takes the xfs_ilock SHARED in xfs_file_buffered_aio_read()
> > > (called from xfs_file_read_iter).
> >
> > Right, the approach from the following thread might fix this:
> >
> > https://lore.kernel.org/linux-fsdevel/20191122235324.17245-1-agruenba@redhat.com/T/#t
>
> In general, I think this is a sound approach.
>
> Specifically, I think FAULT_FLAG_CACHED can go away. map_pages()
> will bring in the pages which are in the page cache, so when we get to
> gfs2_fault(), we know there's a reason to acquire the glock.

We'd still be grabbing a glock while holding a dependent page lock.
Another process could be holding the glock and could try to grab the
same page lock (i.e., a concurrent writer), leading to the same kind
of deadlock.

Andreas
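
To make the conflicting lock orders concrete, here is a minimal userspace
sketch (plain C with pthreads, not actual gfs2 or kernel code; the mutex
and thread names are stand-ins invented for the example). One thread
models the readahead/fault path that reaches for the glock while already
holding a page lock; the other follows the documented hierarchy of glock
before page lock. Run together, the two block on each other in the same
ABBA fashion as described in the thread above.

/*
 * Illustrative userspace model of the ABBA ordering discussed above.
 * "glock" stands in for the gfs2 inode glock and "page_lock" for a page
 * lock in the same inode's address space; neither is real kernel code.
 * Build with: cc -pthread abba.c
 */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t glock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Readahead-style path: the page is already locked when the filesystem
 * callback runs, and the callback then wants the glock
 * (order: page_lock -> glock).
 */
static void *reader(void *arg)
{
	pthread_mutex_lock(&page_lock);
	sleep(1);			/* widen the race window */
	pthread_mutex_lock(&glock);	/* blocks: writer holds glock */
	pthread_mutex_unlock(&glock);
	pthread_mutex_unlock(&page_lock);
	return NULL;
}

/*
 * Writer-style path: the documented hierarchy, glock before any page
 * lock in that inode's address space (order: glock -> page_lock).
 */
static void *writer(void *arg)
{
	pthread_mutex_lock(&glock);
	sleep(1);
	pthread_mutex_lock(&page_lock);	/* blocks: reader holds page_lock */
	pthread_mutex_unlock(&page_lock);
	pthread_mutex_unlock(&glock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, reader, NULL);
	pthread_create(&b, NULL, writer, NULL);
	pthread_join(a, NULL);		/* never returns: ABBA deadlock */
	pthread_join(b, NULL);
	return 0;
}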