Message-ID: <CABgObfYrxwdy-LqcWiCSfHhOihi9qJT2a3PzhSRHzkFgiJurNQ@mail.gmail.com>
Date: Wed, 24 Apr 2024 18:05:57 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Binbin Wu <binbin.wu@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, 
	isaku.yamahata@...el.com, xiaoyao.li@...el.com, seanjc@...gle.com, 
	rick.p.edgecombe@...el.com
Subject: Re: [PATCH 2/6] KVM: Add KVM_PRE_FAULT_MEMORY vcpu ioctl to
 pre-populate guest memory

On Mon, Apr 22, 2024 at 7:39 AM Binbin Wu <binbin.wu@...ux.intel.com> wrote:
> The range->size == 0 case is already covered by "range->gpa + range->size <=
> range->gpa".
>
> If we want to return success when size is 0 (though I am not sure that's
> needed), we need to use "range->gpa + range->size < range->gpa" instead.

I think it's not needed, because silently succeeding on a zero-size
range could cause an infinite loop in (buggy) userspace. Better to
return -EINVAL.

Paolo
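To illustrate, the check being discussed can be modeled with a small
helper (pre_fault_range_invalid is a hypothetical name, not from the
patch): with "<=", a zero-length range is rejected together with
ranges that wrap the 64-bit GPA space, whereas "<" would silently
accept size == 0.

```c
#include <stdint.h>

/* Hypothetical stand-in for the range validation under discussion.
 * Unsigned overflow is well defined in C, so gpa + size wraps; the
 * "<=" comparison therefore catches both size == 0 (gpa + 0 == gpa)
 * and ranges whose end wraps around past the start. */
static int pre_fault_range_invalid(uint64_t gpa, uint64_t size)
{
	return gpa + size <= gpa;
}
```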

>
> > +
> > +     vcpu_load(vcpu);
> > +     idx = srcu_read_lock(&vcpu->kvm->srcu);
> > +
> > +     full_size = range->size;
> > +     do {
> > +             if (signal_pending(current)) {
> > +                     r = -EINTR;
> > +                     break;
> > +             }
> > +
> > +             r = kvm_arch_vcpu_pre_fault_memory(vcpu, range);
> > +             if (r < 0)
> > +                     break;
> > +
> > +             if (WARN_ON_ONCE(r == 0))
> > +                     break;
> > +
> > +             range->size -= r;
> > +             range->gpa += r;
> > +             cond_resched();
> > +     } while (range->size);
> > +
> > +     srcu_read_unlock(&vcpu->kvm->srcu, idx);
> > +     vcpu_put(vcpu);
> > +
> > +     /* Return success if at least one page was mapped successfully.  */
> > +     return full_size == range->size ? r : 0;
> > +}
> > +#endif
> > +
> >   static long kvm_vcpu_ioctl(struct file *filp,
> >                          unsigned int ioctl, unsigned long arg)
> >   {
> > @@ -4580,6 +4629,20 @@ static long kvm_vcpu_ioctl(struct file *filp,
> >               r = kvm_vcpu_ioctl_get_stats_fd(vcpu);
> >               break;
> >       }
> > +#ifdef CONFIG_KVM_GENERIC_PRE_FAULT_MEMORY
> > +     case KVM_PRE_FAULT_MEMORY: {
> > +             struct kvm_pre_fault_memory range;
> > +
> > +             r = -EFAULT;
> > +             if (copy_from_user(&range, argp, sizeof(range)))
> > +                     break;
> > +             r = kvm_vcpu_pre_fault_memory(vcpu, &range);
> > +             /* Pass back leftover range. */
> > +             if (copy_to_user(argp, &range, sizeof(range)))
> > +                     r = -EFAULT;
> > +             break;
> > +     }
> > +#endif
> >       default:
> >               r = kvm_arch_vcpu_ioctl(filp, ioctl, arg);
> >       }
>
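For what it's worth, the partial-success semantics of the loop quoted
above (return 0 if at least one page was mapped, otherwise the last
error) can be modeled outside the kernel. This is a simplified sketch,
not the kernel code: fake_pre_fault, the 4 KiB page granularity, and
the -14 (-EFAULT) constant are stand-ins for illustration only.

```c
#include <stdint.h>

struct range { uint64_t gpa; uint64_t size; };

/* Stand-in for kvm_arch_vcpu_pre_fault_memory(): pretend each call
 * maps at most one 4 KiB page, failing with -14 once the GPA reaches
 * a hypothetical limit, to exercise the partial-success path. */
static long fake_pre_fault(struct range *r, uint64_t fail_at)
{
	if (r->gpa >= fail_at)
		return -14;
	return r->size < 4096 ? (long)r->size : 4096;
}

/* Simplified model of the quoted do/while loop: consume the range
 * page by page, advancing gpa and shrinking size on each success.
 * Report 0 if any progress was made, else the last return value. */
static long pre_fault_loop(struct range *r, uint64_t fail_at)
{
	uint64_t full_size = r->size;
	long ret = 0;

	do {
		ret = fake_pre_fault(r, fail_at);
		if (ret <= 0)
			break;
		r->size -= (uint64_t)ret;
		r->gpa += (uint64_t)ret;
	} while (r->size);

	return full_size == r->size ? ret : 0;
}
```

Because the (shrunken) range is copied back to userspace by the ioctl
handler, a caller that gets 0 with a non-zero leftover size can simply
reissue the call to continue where it left off.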

