Message-ID: <40e6c307-b3e0-6bc1-05a1-804500b6fe3f@redhat.com>
Date:   Mon, 1 Feb 2021 14:31:12 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Stefano Garzarella <sgarzare@...hat.com>
Cc:     virtualization@...ts.linux-foundation.org,
        Xie Yongji <xieyongji@...edance.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Laurent Vivier <lvivier@...hat.com>,
        Stefan Hajnoczi <stefanha@...hat.com>,
        linux-kernel@...r.kernel.org, Max Gurtovoy <mgurtovoy@...dia.com>,
        kvm@...r.kernel.org
Subject: Re: [PATCH RFC v2 02/10] vringh: add 'iotlb_lock' to synchronize
 iotlb accesses


On 2021/1/29 5:18 PM, Stefano Garzarella wrote:
> On Fri, Jan 29, 2021 at 03:43:40PM +0800, Jason Wang wrote:
>>
>> On 2021/1/28 10:41 PM, Stefano Garzarella wrote:
>>> Usually iotlb accesses are synchronized with a spinlock.
>>> Let's request it as a new parameter in vringh_set_iotlb() and
>>> hold it when we navigate the iotlb in iotlb_translate() to avoid
>>> race conditions with any new additions/deletions of ranges from
>>> the iotlb.
>>
>>
>> The patch looks fine, but I wonder if this is the best approach
>> compared to doing the locking in the caller.
>
> Initially I tried to hold the lock in vdpasim_blk_work(): since we
> have a lot of different vringh functions, I opted to take the lock at
> the beginning and release it at the end, also because I repeatedly had
> to go and check whether a given call used iotlb_translate() or not.
>
> This could be a problem, for example, if we have multiple workers
> handling multiple queues.
>
> Also, some functions are quite long (e.g. vringh_getdesc_iotlb()) and
> holding the lock for their whole duration could reduce parallelism.
>
> For these reasons I thought it was better to hide everything from the
> caller, who then doesn't have to worry about which functions call
> iotlb_translate() and thus need the lock held.


Fine with me.

Acked-by: Jason Wang <jasowang@...hat.com>

Thanks


>
> Thanks,
> Stefano
>
>>
>> Thanks
>>
>>
>>>
>>> Signed-off-by: Stefano Garzarella <sgarzare@...hat.com>
>>> ---
>>>  include/linux/vringh.h           | 6 +++++-
>>>  drivers/vdpa/vdpa_sim/vdpa_sim.c | 3 ++-
>>>  drivers/vhost/vringh.c           | 9 ++++++++-
>>>  3 files changed, 15 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/include/linux/vringh.h b/include/linux/vringh.h
>>> index 59bd50f99291..9c077863c8f6 100644
>>> --- a/include/linux/vringh.h
>>> +++ b/include/linux/vringh.h
>>> @@ -46,6 +46,9 @@ struct vringh {
>>>      /* IOTLB for this vring */
>>>      struct vhost_iotlb *iotlb;
>>> +    /* spinlock to synchronize IOTLB accesses */
>>> +    spinlock_t *iotlb_lock;
>>> +
>>>      /* The function to call to notify the guest about added buffers */
>>>      void (*notify)(struct vringh *);
>>>  };
>>> @@ -258,7 +261,8 @@ static inline __virtio64 cpu_to_vringh64(const struct vringh *vrh, u64 val)
>>>  #if IS_REACHABLE(CONFIG_VHOST_IOTLB)
>>> -void vringh_set_iotlb(struct vringh *vrh, struct vhost_iotlb *iotlb);
>>> +void vringh_set_iotlb(struct vringh *vrh, struct vhost_iotlb *iotlb,
>>> +              spinlock_t *iotlb_lock);
>>>  int vringh_init_iotlb(struct vringh *vrh, u64 features,
>>>                unsigned int num, bool weak_barriers,
>>> diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
>>> index 2183a833fcf4..53238989713d 100644
>>> --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
>>> +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
>>> @@ -284,7 +284,8 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
>>>          goto err_iommu;
>>>      for (i = 0; i < dev_attr->nvqs; i++)
>>> -        vringh_set_iotlb(&vdpasim->vqs[i].vring, vdpasim->iommu);
>>> +        vringh_set_iotlb(&vdpasim->vqs[i].vring, vdpasim->iommu,
>>> +                 &vdpasim->iommu_lock);
>>>      ret = iova_cache_get();
>>>      if (ret)
>>> diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
>>> index 85d85faba058..f68122705719 100644
>>> --- a/drivers/vhost/vringh.c
>>> +++ b/drivers/vhost/vringh.c
>>> @@ -1074,6 +1074,8 @@ static int iotlb_translate(const struct vringh *vrh,
>>>      int ret = 0;
>>>      u64 s = 0;
>>> +    spin_lock(vrh->iotlb_lock);
>>> +
>>>      while (len > s) {
>>>          u64 size, pa, pfn;
>>> @@ -1103,6 +1105,8 @@ static int iotlb_translate(const struct vringh *vrh,
>>>          ++ret;
>>>      }
>>> +    spin_unlock(vrh->iotlb_lock);
>>> +
>>>      return ret;
>>>  }
>>> @@ -1262,10 +1266,13 @@ EXPORT_SYMBOL(vringh_init_iotlb);
>>>   * vringh_set_iotlb - initialize a vringh for a ring with IOTLB.
>>>   * @vrh: the vring
>>>   * @iotlb: iotlb associated with this vring
>>> + * @iotlb_lock: spinlock to synchronize the iotlb accesses
>>>   */
>>> -void vringh_set_iotlb(struct vringh *vrh, struct vhost_iotlb *iotlb)
>>> +void vringh_set_iotlb(struct vringh *vrh, struct vhost_iotlb *iotlb,
>>> +              spinlock_t *iotlb_lock)
>>>  {
>>>      vrh->iotlb = iotlb;
>>> +    vrh->iotlb_lock = iotlb_lock;
>>>  }
>>>  EXPORT_SYMBOL(vringh_set_iotlb);
>>
>
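
As a rough sketch of the two approaches compared above (the caller
holding the lock around every vringh call, versus this patch taking the
lock inside iotlb_translate()), the simplified C below is illustrative
only: the *_sketch names and the reduced struct are hypothetical, and
the real change is the diff quoted above.

/* Illustrative sketch, not part of the patch: simplified types and
 * hypothetical *_sketch names.
 */
#include <linux/spinlock.h>
#include <linux/types.h>

struct vhost_iotlb;                     /* opaque here */

struct vringh_sketch {
        struct vhost_iotlb *iotlb;
        spinlock_t *iotlb_lock;         /* the lock this patch adds */
};

/* (a) As in this patch: the lock is taken inside the translation helper,
 * so callers never need to know which vringh functions touch the iotlb.
 */
static int iotlb_translate_sketch(struct vringh_sketch *vrh, u64 addr, u64 len)
{
        int ret = 0;

        spin_lock(vrh->iotlb_lock);
        /* walk vrh->iotlb and translate [addr, addr + len) here */
        spin_unlock(vrh->iotlb_lock);

        return ret;
}

/* (b) The alternative discussed above: the caller (e.g. a vdpasim worker)
 * holds the lock across whole vringh operations, which serializes long
 * calls such as vringh_getdesc_iotlb() and hurts parallelism once there
 * are multiple workers handling multiple queues.
 */
static void blk_work_caller_locked_sketch(struct vringh_sketch *vrh)
{
        spin_lock(vrh->iotlb_lock);
        /* vringh_getdesc_iotlb(), vringh_iov_pull_iotlb(), ... */
        spin_unlock(vrh->iotlb_lock);
}

With (a) the critical section covers only each iotlb walk, so concurrent
workers can make progress between translations; with (b) it covers the
whole request, which is what the thread argues against.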
