Message-ID: <ab6cee88-b4a2-5e37-0741-7012ca502ee2@oracle.com>
Date: Fri, 2 Sep 2022 18:53:43 +0100
From: Joao Martins <joao.m.martins@...cle.com>
To: Jason Gunthorpe <jgg@...dia.com>,
Alex Williamson <alex.williamson@...hat.com>
Cc: Yishai Hadas <yishaih@...dia.com>, saeedm@...dia.com,
kvm@...r.kernel.org, netdev@...r.kernel.org, kuba@...nel.org,
kevin.tian@...el.com, leonro@...dia.com, maorg@...dia.com,
cohuck@...hat.com
Subject: Re: [PATCH V5 vfio 04/10] vfio: Add an IOVA bitmap support
On 02/09/2022 00:16, Jason Gunthorpe wrote:
> On Thu, Sep 01, 2022 at 02:36:25PM -0600, Alex Williamson wrote:
>
>>> Many of the bitmap helpers don't check that the offset is within the range
>>> of the passed ulong array. So I followed the same thinking and the
>>> caller is /provided/ with the range that the IOVA bitmap covers. The intention
>>> was minimizing the number of operations given that this function sits on the
>>> hot path. I can add this extra check.
>>
>> Maybe Jason can quote a standard here, audit the callers vs sanitize
>> the input. It'd certainly be fair even if the test were a BUG_ON since
>> it violates the defined calling conventions and we're not taking
>> arbitrary input, but it could also pretty easily and quietly go into
>> the weeds if we do nothing. Thanks,
>
> Nope, no consensus I know of
>
> But generally people avoid sanity checks on hot paths
>
OK, I'm not staging the check for now, unless folks think I really should.
__bitmap_set() skips it, much like iova_bitmap_set() does.
The caller can sanity check, and has the necessary info for that,
as the iterator knows the exact range the mapped bitmap covers.
The diff that I just tested is below anyhow, in case I am advised that
such a check should be included after all.
> Linus will reject your merge request if you put a BUG_ON :)
>
> Turn on a check if kasan is on or something if you think it is really
> important?
I am not sure about CONFIG_KASAN/kasan_enabled(): I wouldn't be using any of
the kasan helpers, though the check is still sort of sanitizing future memory
accesses. I have no other ideas aside from DEBUG_KERNEL.
FWIW, it would look sort of like this (in addition to all the other comments I
got here in v5). Caching iova_bitmap_mapped_length() into bitmap::mapped->length
would make it a bit cheaper/cleaner, should we go this route.
diff --git a/drivers/vfio/iova_bitmap.c b/drivers/vfio/iova_bitmap.c
index fd0f8f0482f7..6aba02f03316 100644
--- a/drivers/vfio/iova_bitmap.c
+++ b/drivers/vfio/iova_bitmap.c
@@ -406,13 +406,21 @@ int iova_bitmap_for_each(struct iova_bitmap *bitmap, void *opaque,
void iova_bitmap_set(struct iova_bitmap *bitmap,
unsigned long iova, size_t length)
{
+ unsigned long page_offset, page_idx, offset, nbits;
struct iova_bitmap_map *mapped = &bitmap->mapped;
- unsigned long offset = (iova - mapped->iova) >> mapped->pgshift;
- unsigned long nbits = max(1UL, length >> mapped->pgshift);
- unsigned long page_idx = offset / BITS_PER_PAGE;
- unsigned long page_offset = mapped->pgoff;
void *kaddr;
+#ifdef CONFIG_KASAN
+ unsigned long mapped_length = iova_bitmap_mapped_length(bitmap);
+ if (unlikely(WARN_ON_ONCE(iova < mapped->iova ||
+ iova + length - 1 > mapped->iova + mapped_length - 1)))
+ return;
+#endif
+
+ offset = (iova - mapped->iova) >> mapped->pgshift;
+ nbits = max(1UL, length >> mapped->pgshift);
+ page_idx = offset / BITS_PER_PAGE;
+ page_offset = mapped->pgoff;
offset = offset % BITS_PER_PAGE;
do {