Message-ID: <f8ce24507b836170e1ca5df8586a3478c164386d.camel@amazon.co.uk>
Date: Tue, 27 Aug 2024 09:56:54 +0000
From: "Stamatis, Ilias" <ilstam@...zon.co.uk>
To: "pbonzini@...hat.com" <pbonzini@...hat.com>, "seanjc@...gle.com"
	<seanjc@...gle.com>
CC: "maz@...nel.org" <maz@...nel.org>, "kvm@...r.kernel.org"
	<kvm@...r.kernel.org>, "Stamatis, Ilias" <ilstam@...zon.co.uk>,
	"anup@...infault.org" <anup@...infault.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "oliver.upton@...ux.dev"
	<oliver.upton@...ux.dev>, "paul@....org" <paul@....org>
Subject: Re: [PATCH 2/2] KVM: Clean up coalesced MMIO ring full check

On Fri, 2024-08-23 at 12:13 -0700, Sean Christopherson wrote:
> Fold coalesced_mmio_has_room() into its sole caller, coalesced_mmio_write(),
> as it's really just a single line of code, has a goofy return value, and
> is unnecessarily brittle.
> 
> E.g. if coalesced_mmio_has_room() were to check ring->last directly, or
> the caller failed to use READ_ONCE(), KVM would be susceptible to TOCTOU
> attacks from userspace.
> 
> Opportunistically add a comment explaining why on earth KVM leaves one
> entry free, which may not be obvious to readers that aren't famailiar with

s/famailiar/familiar/

> ring buffers.
> 
> No functional change intended.
> 
> Cc: Ilias Stamatis <ilstam@...zon.com>
> Cc: Paul Durrant <paul@....org>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
>  virt/kvm/coalesced_mmio.c | 29 ++++++++---------------------
>  1 file changed, 8 insertions(+), 21 deletions(-)
> 
> diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
> index 184c5c40c9c1..375d6285475e 100644
> --- a/virt/kvm/coalesced_mmio.c
> +++ b/virt/kvm/coalesced_mmio.c
> @@ -40,25 +40,6 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
>         return 1;
>  }
> 
> -static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev, u32 last)
> -{
> -       struct kvm_coalesced_mmio_ring *ring;
> -
> -       /* Are we able to batch it ? */
> -
> -       /* last is the first free entry
> -        * check if we don't meet the first used entry
> -        * there is always one unused entry in the buffer
> -        */
> -       ring = dev->kvm->coalesced_mmio_ring;
> -       if ((last + 1) % KVM_COALESCED_MMIO_MAX == READ_ONCE(ring->first)) {
> -               /* full */
> -               return 0;
> -       }
> -
> -       return 1;
> -}
> -
>  static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
>                                 struct kvm_io_device *this, gpa_t addr,
>                                 int len, const void *val)
> @@ -72,9 +53,15 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
> 
>         spin_lock(&dev->kvm->ring_lock);
> 
> +       /*
> +        * last is the index of the entry to fill.  Verify userspace hasn't
> +        * set last to be out of range, and that there is room in the ring.
> +        * Leave one entry free in the ring so that userspace can differentiate
> +        * between an empty ring and a full ring.
> +        */
>         insert = READ_ONCE(ring->last);
> -       if (!coalesced_mmio_has_room(dev, insert) ||
> -           insert >= KVM_COALESCED_MMIO_MAX) {
> +       if (insert >= KVM_COALESCED_MMIO_MAX ||
> +           (insert + 1) % KVM_COALESCED_MMIO_MAX == READ_ONCE(ring->first)) {
>                 spin_unlock(&dev->kvm->ring_lock);
>                 return -EOPNOTSUPP;
>         }
> --
> 2.46.0.295.g3b9ea8a38a-goog
> 

Reviewed-by: Ilias Stamatis <ilstam@...zon.com>
