Message-ID: <20200822042059.1805541-1-jhubbard@nvidia.com>
Date: Fri, 21 Aug 2020 21:20:54 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Alexander Viro <viro@...iv.linux.org.uk>,
Christoph Hellwig <hch@...radead.org>,
Ilya Dryomov <idryomov@...il.com>,
Jens Axboe <axboe@...nel.dk>, Jeff Layton <jlayton@...nel.org>,
<linux-xfs@...r.kernel.org>, <linux-fsdevel@...r.kernel.org>,
<linux-block@...r.kernel.org>, <ceph-devel@...r.kernel.org>,
<linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
John Hubbard <jhubbard@...dia.com>
Subject: [PATCH 0/5] bio: Direct IO: convert to pin_user_pages_fast()
Hi,
This converts the Direct IO block/bio layer over to use FOLL_PIN pages
(those acquired via pin_user_pages*()). This effectively converts
several file systems (ext4, for example) that use the common Direct IO
routines. See "Remaining work", below, for a bit more detail.
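For background, the core of the conversion is just a matter of swapping
acquire/release pairs: FOLL_PIN pages must be released via the unpin_*
routines rather than put_page(). A minimal sketch of the before/after
pairing (illustrative only; the variable names are made up, and this is
not code from this series):

    /* Before: FOLL_GET pages */
    nr = get_user_pages_fast(user_addr, nr_pages, FOLL_WRITE, pages);
    /* ... do the I/O ... */
    for (i = 0; i < nr; i++)
            put_page(pages[i]);             /* drop the FOLL_GET reference */

    /* After: FOLL_PIN pages */
    nr = pin_user_pages_fast(user_addr, nr_pages, FOLL_WRITE, pages);
    /* ... do the I/O ... */
    unpin_user_pages_dirty_lock(pages, nr, true);   /* drop the FOLL_PIN pins */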
Quite a few approaches have been considered over the years. This one is
inspired by Christoph Hellwig's July, 2019 observation that there are
only 5 ITER_ types, and we can simplify handling of them for Direct IO
[1]. After working through how bio submission and completion works, I
became convinced that this is the simplest and cleanest approach to
conversion.
Not content to let well enough alone, I then continued on to the
unthinkable: adding a new flag to struct bio, whose "short int" flags
field was full, thus triggering an expansion of the field from 16 to
32 bits. This allows for a nice assertion in bio_release_pages(), that
the bio page release mechanism matches the page acquisition mechanism.
This is especially welcome for a change that affects a lot of callers
and could really make a mess if there is a bug somewhere.
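To make that concrete, the release path ends up with roughly the
following shape. This is only a sketch of the idea, not a copy of
patch 4; the new parts are the BIO_FOLL_PIN check and the switch from
put_page() to unpin_user_page():

    void bio_release_pages(struct bio *bio, bool mark_dirty)
    {
            struct bvec_iter_all iter_all;
            struct bio_vec *bvec;

            if (bio_flagged(bio, BIO_NO_PAGE_REF))
                    return;

            /* Releasing via unpin: the pages must have been FOLL_PIN'ed */
            WARN_ON_ONCE(!bio_flagged(bio, BIO_FOLL_PIN));

            bio_for_each_segment_all(bvec, bio, iter_all) {
                    if (mark_dirty && !PageCompound(bvec->bv_page))
                            set_page_dirty_lock(bvec->bv_page);
                    unpin_user_page(bvec->bv_page);
            }
    }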
I'm unable to spot any performance implications, either theoretically or
via (rather light) performance testing, from enlarging bio.bi_flags, but
I suspect there may still be valid reasons for keeping such a small
bio.bi_flags field; I just have no idea what they are. (Hardware
that knows the size of a bio? No, because there would be obvious
build-time assertions, and comments about such a constraint.) Anyway, I
can drop that patch if it seems like too much cost for too little
benefit.
And finally, as long as we're all staring at the iov_iter code, I'm
including a nice, easy ceph patch that removes one more caller of
iov_iter_get_pages().
Design notes
============
This whole approach depends on certain concepts:
1) Each struct bio instance must not mix different types of pages:
FOLL_PIN and non-FOLL_PIN pages. (By FOLL_PIN I'm referring to pages
that were acquired and pinned via pin_user_page*() routines.)
Fortunately, this is already an enforced constraint for bio's, as
evidenced by the existence and use of BIO_NO_PAGE_REF.
2) Christoph Hellwig's July, 2019 observation that there are
only 5 ITER_ types, and we can simplify handling of them for Direct IO
[1]. Accordingly, this series implements the following pseudocode:
Direct IO behavior:

    ITER_IOVEC:
        pin_user_pages_fast();
        break;

    ITER_KVEC:    // already elevated page refcount, leave alone
    ITER_BVEC:    // already elevated page refcount, leave alone
    ITER_PIPE:    // just, no :)
    ITER_DISCARD: // discard
        return -EFAULT or -EINVAL;
...which works for callers that have already sorted out which case they
are in, such as Direct IO in the block/bio layers.
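In (roughly) real code, that dispatch could look like the sketch below.
This is illustrative only, not taken from the patches: the helper names
dio_get_pages() and use_existing_pages() are made up, and
iov_iter_pin_user_pages() (patch 1) is assumed to mirror the signature
of iov_iter_get_pages():

    static ssize_t dio_get_pages(struct bio *bio, struct iov_iter *iter,
                                 struct page **pages, unsigned int maxpages,
                                 size_t *offset)
    {
            /* ITER_IOVEC: user memory, so acquire FOLL_PIN references */
            if (iter_is_iovec(iter))
                    return iov_iter_pin_user_pages(iter, pages, LONG_MAX,
                                                   maxpages, offset);

            /* ITER_KVEC, ITER_BVEC: refcounts already elevated, leave alone */
            if (iov_iter_is_kvec(iter) || iov_iter_is_bvec(iter))
                    return use_existing_pages(bio, iter);

            /* ITER_PIPE, ITER_DISCARD: not valid for Direct IO */
            return -EINVAL;
    }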
Now, this does leave ITER_KVEC and ITER_BVEC unconverted, but on the
other hand, it's not clear that these are actually affected in the real
world by the get_user_pages()+filesystem interaction problems of [2].
If it turns out to matter, then those can be handled too, but it's just
more refactoring and surgery to do so.
Testing
=======
Performance: no obvious regressions from running fio (direct=1: Direct
IO) on both SSD and NVMe drives.
Functionality: selected non-destructive bare metal xfstests on xfs,
ext4, btrfs, orangefs filesystems, plus LTP tests.
Note that I have only a single x86 64-bit test machine, though.
Remaining work
==============
Non-converted call sites for iov_iter_get_pages*() at the
moment include: net, crypto, cifs, ceph, vhost, fuse, nfs/direct,
vhost/scsi.
About-to-be-converted sites (in a subsequent patch) are: Direct IO for
filesystems that use the generic read/write functions.
[1] https://lore.kernel.org/kvm/20190724061750.GA19397@infradead.org/
[2] "Explicit pinning of user-space pages":
https://lwn.net/Articles/807108/
John Hubbard (5):
iov_iter: introduce iov_iter_pin_user_pages*() routines
mm/gup: introduce pin_user_page()
bio: convert get_user_pages_fast() --> pin_user_pages_fast()
bio: introduce BIO_FOLL_PIN flag
fs/ceph: use pipe_get_pages_alloc() for pipe
block/bio.c | 29 +++++++------
block/blk-map.c | 7 +--
fs/ceph/file.c | 3 +-
fs/direct-io.c | 30 ++++++-------
fs/iomap/direct-io.c | 2 +-
include/linux/blk_types.h | 5 ++-
include/linux/mm.h | 2 +
include/linux/uio.h | 9 +++-
lib/iov_iter.c | 91 +++++++++++++++++++++++++++++++++++++--
mm/gup.c | 30 +++++++++++++
10 files changed, 169 insertions(+), 39 deletions(-)
--
2.28.0