Date:	Mon, 26 Mar 2012 12:08:29 +0200
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Avi Kivity <avi@...hat.com>
Cc:	Joerg Roedel <joerg.roedel@....com>,
	Marcelo Tosatti <mtosatti@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC dontapply] kvm_para: add mmio word store hypercall

On Mon, Mar 26, 2012 at 11:21:58AM +0200, Avi Kivity wrote:
> On 03/26/2012 12:05 AM, Michael S. Tsirkin wrote:
> > We face a dilemma: the I/O port space is a legacy
> > resource, so, for example, PCI Express bridges waste 4K
> > of it per link, which, out of the 64K total, in effect
> > limits us to 16 devices using this space.
> >
> > MMIO is supposed to replace it, but MMIO exits are much
> > slower than PIO exits because of the need for instruction
> > emulation and page walks.
> >
> > As a solution, this patch adds an MMIO hypercall with
> > the guest physical address + data.
> >
> > I did test that this works, but I haven't benchmarked it yet.
> >
> > TODOs:
> > This only implements a 2-byte write, since that is the
> > minimum required for virtio, but we'll probably need at
> > least 1-byte reads (for the ISR read).
> > We can support up to 8-byte reads/writes for 64-bit guests
> > and up to 4 bytes for 32-bit ones - is it better to limit
> > everyone to 4 bytes for consistency, or to support the
> > maximum that we can?
> 
> Let's support the maximum we can.
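
OK.  Then maybe something like the below - completely untested
sketch, and everything beyond what is already in the patch (the
size checks, the error value, the final hypercall name) is just
a placeholder.  It would slot into the same switch in
kvm_emulate_hypercall(), with the access size passed in a3:

	case KVM_HC_MMIO_STORE:	/* name TBD, see below */
		/* a0 = data, a1/a2 = gpa, a3 = access size in bytes */
		gpa = hc_gpa(vcpu, a1, a2);
		if (a3 != 1 && a3 != 2 && a3 != 4 &&
		    !(a3 == 8 && is_long_mode(vcpu))) {
			ret = -KVM_ENOSYS;	/* or a better error code */
			break;
		}
		if (!write_mmio(vcpu, gpa, a3, &a0) && run) {
			run->exit_reason = KVM_EXIT_MMIO;
			run->mmio.phys_addr = gpa;
			memcpy(run->mmio.data, &a0, a3);
			run->mmio.len = a3;
			run->mmio.is_write = 1;
			r = 0;
		}
		goto noret;
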
> 
> >  
> >  static int handle_invd(struct kvm_vcpu *vcpu)
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 9cbfc06..7bc00ae 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -4915,7 +4915,9 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
> >  
> >  int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> >  {
> > +	struct kvm_run *run = vcpu->run;
> >  	unsigned long nr, a0, a1, a2, a3, ret;
> > +	gpa_t gpa;
> >  	int r = 1;
> >  
> >  	if (kvm_hv_hypercall_enabled(vcpu->kvm))
> > @@ -4946,12 +4948,24 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> >  	case KVM_HC_VAPIC_POLL_IRQ:
> >  		ret = 0;
> >  		break;
> > +	case KVM_HC_MMIO_STORE_WORD:
> 
> HC_MEMORY_WRITE

Do we really want guests to access random memory this way though?
Even though it technically can, how about HC_PCI_MEMORY_WRITE,
to stress the intended usage?
See also discussion below.
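
For completeness, here's roughly what the guest side would look
like - again just a sketch, and the split of the gpa between a1
and a2 has to match whatever hc_gpa() expects (I'm assuming
a1 = low 32 bits, a2 = high 32 bits here):

	#include <linux/types.h>
	#include <asm/kvm_para.h>

	/* 2-byte MMIO store via the hypercall; kvm_hypercall3()
	 * is the existing guest-side helper, and the hypercall
	 * number is whatever name we settle on above.
	 */
	static void hc_mmio_store_word(phys_addr_t gpa, u16 val)
	{
		kvm_hypercall3(KVM_HC_MMIO_STORE_WORD, val,
			       (u32)gpa, (u32)((u64)gpa >> 32));
	}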

> > +		gpa = hc_gpa(vcpu, a1, a2);
> > +		if (!write_mmio(vcpu, gpa, 2, &a0) && run) {
> 
> What's this && run thing?

I'm not sure - I copied this from another place in the emulation code:
arch/x86/kvm/x86.c:4953:                if (!write_mmio(vcpu, gpa, 2, &a0) && run)

I assumed there's some way to trigger emulation while the VCPU is
not running.  No?

> 
> > +			run->exit_reason = KVM_EXIT_MMIO;
> > +			run->mmio.phys_addr = gpa;
> > +			memcpy(run->mmio.data, &a0, 2);
> > +			run->mmio.len = 2;
> > +			run->mmio.is_write = 1;
> > +			r = 0;
> > +		}
> > +		goto noret;
> 
> What if the address is in RAM?
> Note the guest can't tell if a piece of memory is direct mapped or
> implemented as mmio.

True, but doing hypercalls for memory that can be mapped directly
is bad for performance - it's the reverse of what we are trying
to do here.

The intent is to use this for virtio where we can explicitly let the
guest know whether using a hypercall is safe.
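
For example, something like this on the virtio side - purely
illustrative, VIRTIO_F_NOTIFY_HYPERCALL and notify_gpa are
made-up names, and hc_mmio_store_word() is the guest helper
sketched earlier.  The host offers the feature bit only when the
notify address really is MMIO, so the guest never hypercalls
into plain RAM:

	#include <linux/types.h>
	#include <linux/io.h>
	#include <linux/virtio_config.h>

	/* Notify the device about queue 'queue_index': use the
	 * hypercall only if the device explicitly said it's safe,
	 * otherwise fall back to the normal MMIO/PIO write.
	 */
	static void notify_queue(struct virtio_device *vdev,
				 phys_addr_t notify_gpa,
				 void __iomem *notify_addr,
				 u16 queue_index)
	{
		if (virtio_has_feature(vdev, VIRTIO_F_NOTIFY_HYPERCALL))
			hc_mmio_store_word(notify_gpa, queue_index);
		else
			iowrite16(queue_index, notify_addr);
	}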

Acceptable?  What do you suggest?

> 
> -- 
> error compiling committee.c: too many arguments to function
