Message-ID: <20150520154601.GA2176@potion.brq.redhat.com>
Date:	Wed, 20 May 2015 17:46:05 +0200
From:	Radim Krčmář <rkrcmar@...hat.com>
To:	Paolo Bonzini <pbonzini@...hat.com>
Cc:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	Xiao Guangrong <guangrong.xiao@...ux.intel.com>,
	bdas@...hat.com
Subject: Re: [PATCH 08/11] KVM: implement multiple address spaces

2015-05-20 09:07+0200, Paolo Bonzini:
> On 19/05/2015 20:28, Radim Krčmář wrote:
>>> The regular and SMM address spaces are not hierarchical.  As soon as you
>>> put a PCI resource underneath SMRAM---which is exactly what happens for
>>> legacy VRAM at 0xa0000---they can be completely different.  Note that
>>> QEMU can map legacy VRAM as a KVM memslot when using the VGA 320x200x256
>>> color mode (this mapping is not correct from the VGA point of view, but
>>> it cannot be changed in QEMU without breaking migration).
>>
>> How is a PCI resource under SMRAM accessed?
>> I thought that outside SMM, a PCI resource under SMRAM works
>> normally, but that it is overshadowed, and made inaccessible, in SMM.
>
> Yes, it is.  (There is some chipset magic to make instruction fetches
> retrieve SMRAM and data fetches retrieve PCI resources.  I guess you
> could use execute-only EPT permissions, but needless to say, we don't care.)

Interesting, so that part of SMRAM is going to be useless for SMM data?
(Even worse, will SMM data accesses read and write the PCI resource?)
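
(For reference, "execute-only" here means an EPT entry with only the X
permission bit set.  A rough sketch, assuming the usual VMX EPT bit
layout and a CPU that advertises execute-only support; this is not
actual KVM code:

    /* Sketch only: VMX EPT PTE permission bits are R=bit 0, W=bit 1,
     * X=bit 2; an execute-only mapping (X set, R/W clear) is legal
     * only when IA32_VMX_EPT_VPID_CAP advertises execute-only
     * support. */
    #define EPT_READ   (1ULL << 0)
    #define EPT_WRITE  (1ULL << 1)
    #define EPT_EXEC   (1ULL << 2)

    u64 smram_hpa = 0;  /* host-physical SMRAM page (placeholder) */

    /* Instruction fetches hit SMRAM; data accesses take an EPT
     * violation, which the exit handler could redirect to the PCI
     * resource: */
    u64 spte = (smram_hpa & ~0xfffULL) | EPT_EXEC;
)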

>> I'm not sure if we mean the same hierarchy.  I meant hierarchy in the
>> sense that one address space is considered before the other.
>> (Maybe layers would be a better word.)
>> The SMM address space could have just one slot and sit above the
>> regular one; we'd then decide how to handle overlapping.
>
> Ah, now I understand.  That would be doable.
>
> But as they say, "All programming is an exercise in caching."  In this
> case, the caching is done by userspace.

(It's not caching if we wanted a different result ;])
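
(To make the layering idea concrete, a hypothetical sketch; is_smm()
comes from this series, but smm_layer() and regular_slots() are made up
for illustration:

    /* Hypothetical: consult the SMM layer first while in SMM, fall
     * back to the regular address space on a miss.  smm_layer() and
     * regular_slots() do not exist; search_memslots() is the
     * existing gfn lookup. */
    static struct kvm_memory_slot *
    layered_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn)
    {
            struct kvm_memory_slot *slot = NULL;

            if (is_smm(vcpu))
                    slot = search_memslots(smm_layer(vcpu->kvm), gfn);
            if (!slot)
                    slot = search_memslots(regular_slots(vcpu->kvm), gfn);

            return slot;
    }
)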

> QEMU implements the SMM address space exactly by overlaying SMRAM over
> normal memory:
| [...]
> The caching consists simply in resolving the overlaps beforehand, thus
> giving KVM the complete address space.
>
> Since slots do not change often, the simpler code is not worth the
> potentially more expensive KVM_SET_USER_MEMORY_REGION (it _is_ more
> expensive, if only because it has to be called twice per slot change).
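
(For the archives, "calling it twice" from userspace would look roughly
like this; a sketch that assumes the slot id encoding from this series,
with the address space id in bits 16-31:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* vm_fd, slot_id, gpa, size and hva come from the caller. */
    struct kvm_userspace_memory_region mem = {
            .slot            = slot_id,         /* as_id 0: regular */
            .guest_phys_addr = gpa,
            .memory_size     = size,
            .userspace_addr  = (__u64)hva,
    };

    ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);
    mem.slot |= 1 << 16;                        /* as_id 1: SMM */
    ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);
)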

I am a bit worried about the explosion in the number of slots that
would happen if we wanted, for example, per-VCPU address spaces;  SMM
would double their number.

My main issue (orthogonal to layering) is that we don't provide a way
for userspace to tell us that some slots in different address spaces
are the same slot.  We're losing information that could be useful in
the future (right now I can only think of fewer slot queries for the
dirty log).
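
(For example, with duplicated slots the dirty bitmap for the same guest
pages has to be fetched once per address space; same slot id encoding
assumed as in the sketch above:

    /* The same guest pages get queried twice, once per address
     * space.  bitmap is a userspace buffer sized for the slot. */
    struct kvm_dirty_log log = {
            .slot         = slot_id,            /* as_id 0: regular */
            .dirty_bitmap = bitmap,
    };

    ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
    log.slot |= 1 << 16;                        /* as_id 1: SMM */
    ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);      /* same pages again */
)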

What I like about your solution is that it fits the existing code
really well, is easily modified if needs change, and already exists.
All my ideas would require more code in the kernel, which really
doesn't seem worth it for the benefits it would bring to the SMM use
case ...

I'm ok with this approach,

Thanks.