Message-ID: <ZeCcV9Jo3mTRPsME@bfoster>
Date: Thu, 29 Feb 2024 10:01:43 -0500
From: Brian Foster <bfoster@...hat.com>
To: Hou Tao <houtao@...weicloud.com>
Cc: linux-fsdevel@...r.kernel.org, Miklos Szeredi <miklos@...redi.hu>,
	Vivek Goyal <vgoyal@...hat.com>,
	Stefan Hajnoczi <stefanha@...hat.com>,
	Bernd Schubert <bernd.schubert@...tmail.fm>,
	"Michael S . Tsirkin" <mst@...hat.com>,
	Matthew Wilcox <willy@...radead.org>,
	Benjamin Coddington <bcodding@...hat.com>,
	linux-kernel@...r.kernel.org, virtualization@...ts.linux.dev,
	houtao1@...wei.com
Subject: Re: [PATCH v2 4/6] virtiofs: support bounce buffer backed by
 scattered pages

On Wed, Feb 28, 2024 at 10:41:24PM +0800, Hou Tao wrote:
> From: Hou Tao <houtao1@...wei.com>
> 
> When reading a file kept in virtiofs from the kernel (e.g., when
> insmod-ing a kernel module), if the virtiofs cache is disabled, the read
> buffer is passed to virtiofs through out_args[0].value instead of
> through pages. Because virtiofs can't get the pages for the read buffer,
> virtio_fs_argbuf_new() creates a bounce buffer for it with kmalloc() and
> copies the read buffer into the bounce buffer. If the read buffer is
> large (e.g., 1MB), the allocation puts significant stress on the memory
> subsystem.
> 
> So instead of allocating the bounce buffer with kmalloc(), allocate a
> bounce buffer backed by scattered pages. The original idea was to use
> vmap(), but GFP_ATOMIC allocation is not possible with vmap(). To
> simplify the copy operations in the bounce buffer, use a bio_vec flex
> array to represent the argbuf. Also add an is_flat field to struct
> virtio_fs_argbuf to distinguish between kmalloc-ed and scattered bounce
> buffers.
> 
> Signed-off-by: Hou Tao <houtao1@...wei.com>
> ---
>  fs/fuse/virtio_fs.c | 163 ++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 149 insertions(+), 14 deletions(-)
> 
> diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
> index f10fff7f23a0f..ffea684bd100d 100644
> --- a/fs/fuse/virtio_fs.c
> +++ b/fs/fuse/virtio_fs.c
..
> @@ -408,42 +425,143 @@ static void virtio_fs_request_dispatch_work(struct work_struct *work)
>  	}
>  }
>  
..  
>  static void virtio_fs_argbuf_copy_from_in_arg(struct virtio_fs_argbuf *argbuf,
>  					      unsigned int offset,
>  					      const void *src, unsigned int len)
>  {
> -	memcpy(argbuf->buf + offset, src, len);
> +	struct iov_iter iter;
> +	unsigned int copied;
> +
> +	if (argbuf->is_flat) {
> +		memcpy(argbuf->f.buf + offset, src, len);
> +		return;
> +	}
> +
> +	iov_iter_bvec(&iter, ITER_DEST, argbuf->s.bvec,
> +		      argbuf->s.nr, argbuf->s.size);
> +	iov_iter_advance(&iter, offset);

Hi Hou,

Just a random comment, but it seems a little inefficient to reinit and
re-advance the iter like this on every copy call. It looks like offset is
already incremented in the callers of the argbuf copy helpers. Perhaps
the iov_iter could be lifted into the callers and passed down, or even
just included in the argbuf structure and initialized at alloc time?
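
Something like the below is what I have in mind for the alloc-time
variant. Completely untested sketch, and the in_iter/consumed names are
just placeholders; it also assumes the in args are always copied in
increasing offset order, which seems to be the case given the callers:

	struct virtio_fs_argbuf {
		bool is_flat;
		struct iov_iter in_iter;	/* ITER_DEST, set up at alloc time */
		unsigned int consumed;		/* offset in_iter has advanced past */
		/* ... existing f/s members as before ... */
	};

	/* in virtio_fs_argbuf_new(), once the bvec array is filled: */
	iov_iter_bvec(&argbuf->in_iter, ITER_DEST, argbuf->s.bvec,
		      argbuf->s.nr, argbuf->s.size);
	argbuf->consumed = 0;

	static void virtio_fs_argbuf_copy_from_in_arg(struct virtio_fs_argbuf *argbuf,
						      unsigned int offset,
						      const void *src, unsigned int len)
	{
		unsigned int copied;

		if (argbuf->is_flat) {
			memcpy(argbuf->f.buf + offset, src, len);
			return;
		}

		/* skip any gap since the previous copy instead of
		 * rewalking the bvec array from the start */
		iov_iter_advance(&argbuf->in_iter, offset - argbuf->consumed);
		copied = _copy_to_iter(src, len, &argbuf->in_iter);
		WARN_ON_ONCE(copied != len);
		argbuf->consumed = offset + len;
	}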

Brian

> +
> +	copied = _copy_to_iter(src, len, &iter);
> +	WARN_ON_ONCE(copied != len);
>  }
>  
>  static unsigned int
> @@ -451,15 +569,32 @@ virtio_fs_argbuf_out_args_offset(struct virtio_fs_argbuf *argbuf,
>  				 const struct fuse_args *args)
>  {
>  	unsigned int num_in = args->in_numargs - args->in_pages;
> +	unsigned int offset = fuse_len_args(num_in,
> +					    (struct fuse_arg *)args->in_args);
>  
> -	return fuse_len_args(num_in, (struct fuse_arg *)args->in_args);
> +	if (argbuf->is_flat)
> +		return offset;
> +	return round_up(offset, PAGE_SIZE);
>  }
>  
>  static void virtio_fs_argbuf_copy_to_out_arg(struct virtio_fs_argbuf *argbuf,
>  					     unsigned int offset, void *dst,
>  					     unsigned int len)
>  {
> -	memcpy(dst, argbuf->buf + offset, len);
> +	struct iov_iter iter;
> +	unsigned int copied;
> +
> +	if (argbuf->is_flat) {
> +		memcpy(dst, argbuf->f.buf + offset, len);
> +		return;
> +	}
> +
> +	iov_iter_bvec(&iter, ITER_SOURCE, argbuf->s.bvec,
> +		      argbuf->s.nr, argbuf->s.size);
> +	iov_iter_advance(&iter, offset);
> +
> +	copied = _copy_from_iter(dst, len, &iter);
> +	WARN_ON_ONCE(copied != len);
>  }
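
Same idea here, of course - e.g., with a second, ITER_SOURCE iter cached
in the argbuf, since the copy direction differs from the in args side
(again untested, and out_iter/out_consumed are just placeholders):

	/* skip any gap since the previous copy */
	iov_iter_advance(&argbuf->out_iter, offset - argbuf->out_consumed);
	copied = _copy_from_iter(dst, len, &argbuf->out_iter);
	WARN_ON_ONCE(copied != len);
	argbuf->out_consumed = offset + len;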
>  
>  /*
> @@ -1154,7 +1289,7 @@ static unsigned int sg_init_fuse_args(struct scatterlist *sg,
>  	len = fuse_len_args(numargs - argpages, args);
>  	if (len)
>  		total_sgs += virtio_fs_argbuf_setup_sg(req->argbuf, *len_used,
> -						       len, &sg[total_sgs]);
> +						       &len, &sg[total_sgs]);
>  
>  	if (argpages)
>  		total_sgs += sg_init_fuse_pages(&sg[total_sgs],
> @@ -1199,7 +1334,7 @@ static int virtio_fs_enqueue_req(struct virtio_fs_vq *fsvq,
>  	}
>  
>  	/* Use a bounce buffer since stack args cannot be mapped */
> -	req->argbuf = virtio_fs_argbuf_new(args, GFP_ATOMIC);
> +	req->argbuf = virtio_fs_argbuf_new(args, GFP_ATOMIC, true);
>  	if (!req->argbuf) {
>  		ret = -ENOMEM;
>  		goto out;
> -- 
> 2.29.2
> 
> 

