Message-ID: <4A03451E.9020304@gmail.com>
Date:	Thu, 07 May 2009 16:31:26 -0400
From:	Gregory Haskins <gregory.haskins@...il.com>
To:	Arnd Bergmann <arnd@...db.de>
CC:	Avi Kivity <avi@...hat.com>, Gregory Haskins <ghaskins@...ell.com>,
	Chris Wright <chrisw@...s-sol.org>,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	Anthony Liguori <anthony@...emonkey.ws>
Subject: Re: [RFC PATCH 0/3] generic hypercall support

Arnd Bergmann wrote:
> On Thursday 07 May 2009, Gregory Haskins wrote:
>   
>> I guess technically MMIO can just be a simple access of the page,
>> which would be problematic to trap locally without a PF.  However,
>> it seems that most MMIO passes through an ioread()/iowrite() call,
>> so this is perhaps the hook point.  If we put a stake in the ground
>> that MMIOs which go through some other mechanism, like PFs, just
>> hit the "slow path" and are an acceptable casualty, I think we can
>> make that work.
>>
>> Thoughts?
>>     
>
> An mmio that goes through a PF is a bug; it's certainly broken on
> a number of platforms, so performance should not be an issue there.
>   

This may be my own ignorance, but I thought a VMEXIT of type "PF" was
how MMIO worked in VT/SVM.  I didn't mean to imply that either the
guest or the host takes a traditional PF exception through its
respective IDT, if that is what you thought I meant here.  Rather, the
MMIO region is left unmapped in the guest MMU, an access causes a
VMEXIT to host-side KVM of type PF, and the host-side code then
consults the guest page table to see whether it is an MMIO access or
not.  I could very well be mistaken, as I have only a cursory
understanding of what happens in KVM today on this path.
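
To make sure we are talking about the same flow, here is a purely
illustrative sketch of the host-side decision I had in mind.  None of
the structures or names below are actual KVM code; they exist only to
show the "is this GPA backed by a RAM slot, or is it MMIO?" check.

/* Illustrative only -- hypothetical types, not KVM internals. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct memslot {                 /* hypothetical guest RAM region */
	uint64_t gpa_start;
	uint64_t size;
};

static const struct memslot guest_ram[] = {
	{ .gpa_start = 0x00000000ULL, .size = 0x80000000ULL },  /* 2 GB RAM */
};

static bool gpa_is_ram(uint64_t gpa)
{
	size_t i;

	for (i = 0; i < sizeof(guest_ram) / sizeof(guest_ram[0]); i++)
		if (gpa >= guest_ram[i].gpa_start &&
		    gpa <  guest_ram[i].gpa_start + guest_ram[i].size)
			return true;
	return false;
}

/* Called on a "PF"-type VMEXIT with the faulting guest-physical addr. */
static void handle_pf_exit(uint64_t fault_gpa)
{
	if (gpa_is_ram(fault_gpa))
		printf("gpa 0x%llx: ordinary fault, fix up the mapping\n",
		       (unsigned long long)fault_gpa);
	else
		printf("gpa 0x%llx: no backing slot -> emulate as MMIO\n",
		       (unsigned long long)fault_gpa);
}

int main(void)
{
	handle_pf_exit(0x00100000ULL);   /* inside guest RAM   */
	handle_pf_exit(0xfee00000ULL);   /* typical MMIO range */
	return 0;
}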

After posting my numbers today, what I *can* tell you definitively is
that it's significantly slower to VMEXIT via MMIO.  I guess I do not
really know the reason for sure. :)

> Note that there are four commonly used interface classes for PIO/MMIO:
>
> 1. readl/writel: little-endian MMIO
> 2. inl/outl: little-endian PIO
> 3. ioread32/iowrite32: converged little-endian PIO/MMIO
> 4. __raw_readl/__raw_writel: native-endian MMIO without checks
>
> You don't need to worry about the __raw_* stuff, as this should never
> be used in device drivers.
>
> As a simplification, you could mandate that all drivers that want to
> use this get converted to the ioread/iowrite class of interfaces and
> leave the others slow.
>   

I guess the problem, as was later pointed out, is that we cannot
discern which devices might be pass-through and therefore should not
be revectored through an HC.  But I am even less knowledgeable about
how pass-through works than I am about the MMIO traps, so I might be
completely off here.
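
For concreteness, the kind of conversion I understand you to be
suggesting might look roughly like the sketch below.  The device, the
register offsets, and the demo_* function names are all invented for
the example; only readl()/writel(), ioread32()/iowrite32() and
pci_iomap() are real interfaces.

/* Hypothetical driver fragment -- illustration only. */
#include <linux/io.h>
#include <linux/pci.h>

#define DEMO_REG_CTRL    0x00    /* made-up control register */
#define DEMO_REG_STATUS  0x04    /* made-up status register  */

/* Before: MMIO-only accessors on an ioremap()'d BAR. */
static u32 demo_read_status_old(void __iomem *mmio)
{
	writel(0x1, mmio + DEMO_REG_CTRL);
	return readl(mmio + DEMO_REG_STATUS);
}

/* After: ioread32()/iowrite32() work on both PIO and MMIO cookies,
 * so this would be the one place a hypercall fast path could hook.
 */
static u32 demo_read_status_new(void __iomem *regs)
{
	iowrite32(0x1, regs + DEMO_REG_CTRL);
	return ioread32(regs + DEMO_REG_STATUS);
}

/* In the probe path the region would then come from pci_iomap(), e.g.
 *
 *	void __iomem *regs = pci_iomap(pdev, 0, 0);
 *	if (!regs)
 *		return -ENODEV;
 */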

In any case, thank you kindly for the suggestions.

Regards,
-Greg

