Message-ID: <639bd030acc938dc3ef1d11fe630c03e3effd24d.camel@ibm.com>
Date: Tue, 18 Mar 2025 19:38:02 +0000
From: Viacheslav Dubeyko <Slava.Dubeyko@....com>
To: Alex Markuze <amarkuze@...hat.com>,
	"slava@...eyko.com" <slava@...eyko.com>,
	David Howells <dhowells@...hat.com>
CC: "dongsheng.yang@...ystack.cn" <dongsheng.yang@...ystack.cn>,
	Xiubo Li <xiubli@...hat.com>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	"ceph-devel@...r.kernel.org" <ceph-devel@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"jlayton@...nel.org" <jlayton@...nel.org>,
	"idryomov@...il.com" <idryomov@...il.com>,
	"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>
Subject: Re: [RFC PATCH 13/35] rbd: Switch from using bvec_iter to iov_iter
On Thu, 2025-03-13 at 23:33 +0000, David Howells wrote:
> Switch from using a ceph_bio_iter/ceph_bvec_iter for iterating over the
> bio_vecs attached to the request to using a ceph_databuf with the bio_vecs
> transcribed from the bio list. This allows the entire bio bvec[] set to
> be passed down to the socket (if unencrypted).
>
> Signed-off-by: David Howells <dhowells@...hat.com>
> cc: Viacheslav Dubeyko <slava@...eyko.com>
> cc: Alex Markuze <amarkuze@...hat.com>
> cc: Ilya Dryomov <idryomov@...il.com>
> cc: Xiubo Li <xiubli@...hat.com>
> cc: linux-fsdevel@...r.kernel.org
> ---
> drivers/block/rbd.c | 642 ++++++++++++++---------------------
> include/linux/ceph/databuf.h | 22 ++
> include/linux/ceph/striper.h | 58 +++-
> net/ceph/striper.c | 53 ---
> 4 files changed, 331 insertions(+), 444 deletions(-)
>
>
<skipped>
> +
> #endif /* __FS_CEPH_DATABUF_H */
> diff --git a/include/linux/ceph/striper.h b/include/linux/ceph/striper.h
> index 3486636c0e6e..50bc1b88c5c4 100644
> --- a/include/linux/ceph/striper.h
> +++ b/include/linux/ceph/striper.h
> @@ -4,6 +4,7 @@
>
> #include <linux/list.h>
> #include <linux/types.h>
> +#include <linux/bug.h>
>
> struct ceph_file_layout;
>
> @@ -39,10 +40,6 @@ int ceph_file_to_extents(struct ceph_file_layout *l, u64 off, u64 len,
> void *alloc_arg,
> ceph_object_extent_fn_t action_fn,
> void *action_arg);
> -int ceph_iterate_extents(struct ceph_file_layout *l, u64 off, u64 len,
> - struct list_head *object_extents,
> - ceph_object_extent_fn_t action_fn,
> - void *action_arg);
>
> struct ceph_file_extent {
> u64 fe_off;
> @@ -68,4 +65,57 @@ int ceph_extent_to_file(struct ceph_file_layout *l,
>
> u64 ceph_get_num_objects(struct ceph_file_layout *l, u64 size);
>
> +static __always_inline
> +struct ceph_object_extent *ceph_lookup_containing(struct list_head *object_extents,
> + u64 objno, u64 objoff, u32 xlen)
> +{
> + struct ceph_object_extent *ex;
> +
> + list_for_each_entry(ex, object_extents, oe_item) {
> + if (ex->oe_objno == objno &&
OK. I see the point that objno should be the same.
> + ex->oe_off <= objoff &&
But why could ex->oe_off be less than objoff? Could objoff not be exactly
the same here?
> + ex->oe_off + ex->oe_len >= objoff + xlen) /* paranoia */
Do we really need this comment? :)
I am still trying to understand why ex->oe_off + ex->oe_len could be bigger
than objoff + xlen. Is it possible for the object size or offset to be
bigger? (See the standalone sketch after the quoted hunk below.)
Thanks,
Slava.
> + return ex;
> +
> + if (ex->oe_objno > objno)
> + break;
> + }
> +
> + return NULL;
> +}
> +
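To make the containment question above concrete, here is a minimal
standalone sketch of how I read the checks in ceph_lookup_containing().
The struct extent type, field names, and extent_contains() helper are
simplified stand-ins of my own, not the real struct ceph_object_extent /
list_head machinery. As I read it, the lookup accepts any extent whose
[oe_off, oe_off + oe_len) range fully covers the requested
[objoff, objoff + xlen) range, not only an exact match:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for struct ceph_object_extent (hypothetical). */
struct extent {
	uint64_t objno;	/* object number */
	uint64_t off;	/* offset of the extent within the object */
	uint64_t len;	/* length of the extent */
};

/*
 * Mirror of the three conditions in ceph_lookup_containing(): the
 * extent matches if it lives in the same object and its byte range
 * fully contains the requested [objoff, objoff + xlen) range.
 */
static bool extent_contains(const struct extent *ex,
			    uint64_t objno, uint64_t objoff, uint32_t xlen)
{
	return ex->objno == objno &&
	       ex->off <= objoff &&
	       ex->off + ex->len >= objoff + xlen;
}

int main(void)
{
	struct extent ex = { .objno = 1, .off = 4096, .len = 8192 };

	/* Exact match: [4096, 12288) is contained. */
	printf("%d\n", extent_contains(&ex, 1, 4096, 8192));	/* 1 */
	/* Strict sub-range: [8192, 8704) is also contained. */
	printf("%d\n", extent_contains(&ex, 1, 8192, 512));	/* 1 */
	/* Overlapping but not contained: [2048, 6144) is rejected. */
	printf("%d\n", extent_contains(&ex, 1, 2048, 4096));	/* 0 */
	return 0;
}

If that reading is right, the comparison is a containment test rather
than an equality test, which would explain both the <= and the >=
above; my question is why a caller would ever look up a strict
sub-range of a previously mapped extent.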