Message-ID: <20240424055108-mutt-send-email-mst@kernel.org>
Date: Wed, 24 Apr 2024 05:51:25 -0400
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Jason Wang <jasowang@...hat.com>
Cc: Cindy Lu <lulu@...hat.com>, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH v5 3/5] vduse: Add function to get/free the pages for
reconnection
On Wed, Apr 24, 2024 at 08:44:10AM +0800, Jason Wang wrote:
> On Tue, Apr 23, 2024 at 4:42 PM Michael S. Tsirkin <mst@...hat.com> wrote:
> >
> > On Tue, Apr 23, 2024 at 11:09:59AM +0800, Jason Wang wrote:
> > > On Tue, Apr 23, 2024 at 4:05 AM Michael S. Tsirkin <mst@...hat.com> wrote:
> > > >
> > > > On Thu, Apr 18, 2024 at 08:57:51AM +0800, Jason Wang wrote:
> > > > > On Wed, Apr 17, 2024 at 5:29 PM Michael S. Tsirkin <mst@...hat.com> wrote:
> > > > > >
> > > > > > On Fri, Apr 12, 2024 at 09:28:23PM +0800, Cindy Lu wrote:
> > > > > > > Add the functions vduse_alloc_reconnnect_info_mem
> > > > > > > and vduse_free_reconnnect_info_mem.
> > > > > > > These functions allow VDUSE to allocate and free memory for
> > > > > > > reconnection information. The amount of memory allocated is
> > > > > > > vq_num pages; each vq will map its own page, where its
> > > > > > > reconnection information will be saved.
> > > > > > >
> > > > > > > Signed-off-by: Cindy Lu <lulu@...hat.com>
> > > > > > > ---
> > > > > > > drivers/vdpa/vdpa_user/vduse_dev.c | 40 ++++++++++++++++++++++++++++++
> > > > > > > 1 file changed, 40 insertions(+)
> > > > > > >
> > > > > > > diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > > > > index ef3c9681941e..2da659d5f4a8 100644
> > > > > > > --- a/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > > > > +++ b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > > > > @@ -65,6 +65,7 @@ struct vduse_virtqueue {
> > > > > > > int irq_effective_cpu;
> > > > > > > struct cpumask irq_affinity;
> > > > > > > struct kobject kobj;
> > > > > > > + unsigned long vdpa_reconnect_vaddr;
> > > > > > > };
> > > > > > >
> > > > > > > struct vduse_dev;
> > > > > > > @@ -1105,6 +1106,38 @@ static void vduse_vq_update_effective_cpu(struct vduse_virtqueue *vq)
> > > > > > >
> > > > > > > vq->irq_effective_cpu = curr_cpu;
> > > > > > > }
> > > > > > > +static int vduse_alloc_reconnnect_info_mem(struct vduse_dev *dev)
> > > > > > > +{
> > > > > > > + unsigned long vaddr = 0;
> > > > > > > + struct vduse_virtqueue *vq;
> > > > > > > +
> > > > > > > + for (int i = 0; i < dev->vq_num; i++) {
> > > > > > > + /* pages 0..vq_num-1 hold the reconnect info, one page per vq */
> > > > > > > + vq = dev->vqs[i];
> > > > > > > + vaddr = get_zeroed_page(GFP_KERNEL);
> > > > > >
> > > > > >
> > > > > > I don't get why you insist on stealing kernel memory for something
> > > > > > that is just used by userspace to store data for its own use.
> > > > > > Userspace does not lack ways to persist data, for example,
> > > > > > create a regular file anywhere in the filesystem.
> > > > >
> > > > > Good point. So the motivation here is to:
> > > > >
> > > > > 1) be self-contained, with no dependency on high-speed persistent
> > > > > data storage like tmpfs
> > > >
> > > > No idea what this means.
> > >
> > > I mean a regular file may slow down datapath performance, so the
> > > application will usually use tmpfs or something similar, which then
> > > becomes a dependency for implementing reconnection.
> >
> > Are we worried about systems without tmpfs now?
>
> Yes.
Why? Who ships these?
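
Either way the portable option is the one I mentioned: create a
regular file and mmap() it. Since the mapping is backed by the page
cache, the datapath cost is plain loads/stores, same as with a
kernel-allocated page; only the occasional writeback touches disk.
A minimal sketch to illustrate the idea only - the path, names and
per-vq layout here are made up, not the real uAPI:

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical per-vq reconnect state, one page per vq. */
struct vq_reconnect_info {
	uint16_t last_avail_idx;
	uint16_t last_used_idx;
};

/* Map vq_num pages backed by a regular file; vq i's state lives at
 * base + i * page_size. The file survives a daemon crash, so the
 * state is still there on reconnect. */
static void *map_reconnect_file(int vq_num)
{
	long psz = sysconf(_SC_PAGESIZE);
	int fd = open("/var/lib/vduse/reconnect", O_RDWR | O_CREAT, 0600);
	void *base;

	if (fd < 0)
		return NULL;
	if (ftruncate(fd, (off_t)vq_num * psz) < 0) {
		close(fd);
		return NULL;
	}
	base = mmap(NULL, vq_num * psz, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, 0);
	close(fd);
	return base == MAP_FAILED ? NULL : base;
}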
> >
> >
> > > >
> > > > > 2) standardize the format in uAPI, which would allow reconnection
> > > > > from arbitrary userspace; unfortunately, that effort was removed
> > > > > in newer versions of this series
> > > >
> > > > And I don't see why that has to live in the kernel tree either.
> > >
> > > I can't find a better place, any idea?
> > >
> > > Thanks
> >
> >
> > Well, anywhere on GitHub really. With libvhost-user, maybe?
> > It's harmless enough in Documentation if you like, but that ties
> > you to the kernel release cycle in a way that is completely
> > unnecessary.
>
> Ok.
>
> Thanks
>
> >
> > > >
> > > > > If the above doesn't make sense, we don't need to offer those pages by VDUSE.
> > > > >
> > > > > Thanks
> > > > >
> > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > > + if (vaddr == 0)
> > > > > > > + return -ENOMEM;
> > > > > > > +
> > > > > > > + vq->vdpa_reconnect_vaddr = vaddr;
> > > > > > > + }
> > > > > > > +
> > > > > > > + return 0;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int vduse_free_reconnnect_info_mem(struct vduse_dev *dev)
> > > > > > > +{
> > > > > > > + struct vduse_virtqueue *vq;
> > > > > > > +
> > > > > > > + for (int i = 0; i < dev->vq_num; i++) {
> > > > > > > + vq = dev->vqs[i];
> > > > > > > +
> > > > > > > + if (vq->vdpa_reconnect_vaddr)
> > > > > > > + free_page(vq->vdpa_reconnect_vaddr);
> > > > > > > + vq->vdpa_reconnect_vaddr = 0;
> > > > > > > + }
> > > > > > > +
> > > > > > > + return 0;
> > > > > > > +}
> > > > > > >
> > > > > > > static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
> > > > > > > unsigned long arg)
> > > > > > > @@ -1672,6 +1705,8 @@ static int vduse_destroy_dev(char *name)
> > > > > > > mutex_unlock(&dev->lock);
> > > > > > > return -EBUSY;
> > > > > > > }
> > > > > > > + vduse_free_reconnnect_info_mem(dev);
> > > > > > > +
> > > > > > > dev->connected = true;
> > > > > > > mutex_unlock(&dev->lock);
> > > > > > >
> > > > > > > @@ -1855,12 +1890,17 @@ static int vduse_create_dev(struct vduse_dev_config *config,
> > > > > > > ret = vduse_dev_init_vqs(dev, config->vq_align, config->vq_num);
> > > > > > > if (ret)
> > > > > > > goto err_vqs;
> > > > > > > + ret = vduse_alloc_reconnnect_info_mem(dev);
> > > > > > > + if (ret < 0)
> > > > > > > + goto err_mem;
> > > > > > >
> > > > > > > __module_get(THIS_MODULE);
> > > > > > >
> > > > > > > return 0;
> > > > > > > +err_mem:
> > > > > > > + vduse_free_reconnnect_info_mem(dev);
> > > > > > > err_vqs:
> > > > > > > device_destroy(&vduse_class, MKDEV(MAJOR(vduse_major), dev->minor));
> > > > > > > err_dev:
> > > > > > > idr_remove(&vduse_idr, dev->minor);
> > > > > > > err_idr:
> > > > > > > --
> > > > > > > 2.43.0
> > > > > >
> > > >
> >