Message-ID: <3324a580-8436-424b-bff1-5d7df046b938@oracle.com>
Date: Wed, 17 Jan 2024 15:34:43 -0500
From: Steven Sistare <steven.sistare@...cle.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: virtualization@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
	Jason Wang <jasowang@...hat.com>, Si-Wei Liu <si-wei.liu@...cle.com>,
	Eugenio Perez Martin <eperezma@...hat.com>,
	Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
	Dragos Tatulea <dtatulea@...dia.com>, Eli Cohen <elic@...dia.com>,
	Xie Yongji <xieyongji@...edance.com>
Subject: Re: [RFC V1 01/13] vhost-vdpa: count pinned memory
On 1/10/2024 5:24 PM, Michael S. Tsirkin wrote:
> On Wed, Jan 10, 2024 at 12:40:03PM -0800, Steve Sistare wrote:
>> Remember the count of pinned memory for the device.
>>
>> Signed-off-by: Steve Sistare <steven.sistare@...cle.com>
>
> Can we have iommufd support in vdpa so we do not keep extending these hacks?
I assume this is rhetorical and not aimed specifically at me, but live update
interfaces for iommufd are on my todo list.
- Steve
>> ---
>> drivers/vhost/vdpa.c | 7 +++++--
>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
>> index da7ec77cdaff..10fb95bcca1a 100644
>> --- a/drivers/vhost/vdpa.c
>> +++ b/drivers/vhost/vdpa.c
>> @@ -59,6 +59,7 @@ struct vhost_vdpa {
>> int in_batch;
>> struct vdpa_iova_range range;
>> u32 batch_asid;
>> + long pinned_vm;
>> };
>>
>> static DEFINE_IDA(vhost_vdpa_ida);
>> @@ -893,6 +894,7 @@ static void vhost_vdpa_pa_unmap(struct vhost_vdpa *v, struct vhost_iotlb *iotlb,
>> unpin_user_page(page);
>> }
>> atomic64_sub(PFN_DOWN(map->size), &dev->mm->pinned_vm);
>> + v->pinned_vm -= PFN_DOWN(map->size);
>> vhost_vdpa_general_unmap(v, map, asid);
>> vhost_iotlb_map_free(iotlb, map);
>> }
>> @@ -975,9 +977,10 @@ static int vhost_vdpa_map(struct vhost_vdpa *v, struct vhost_iotlb *iotlb,
>> return r;
>> }
>>
>> - if (!vdpa->use_va)
>> + if (!vdpa->use_va) {
>> atomic64_add(PFN_DOWN(size), &dev->mm->pinned_vm);
>> -
>> + v->pinned_vm += PFN_DOWN(size);
>> + }
>> return 0;
>> }
>>
>> --
>> 2.39.3
>