Message-ID: <20150506162437.GA27205@potion.brq.redhat.com>
Date:	Wed, 6 May 2015 18:24:41 +0200
From:	Radim Krčmář <rkrcmar@...hat.com>
To:	Paolo Bonzini <pbonzini@...hat.com>
Cc:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org, bsd@...hat.com,
	guangrong.xiao@...ux.intel.com,
	Yang Zhang <yang.z.zhang@...el.com>, wanpeng.li@...ux.intel.com
Subject: Re: [PATCH 12/13] KVM: x86: add KVM_MEM_X86_SMRAM memory slot flag

2015-05-06 11:47+0200, Paolo Bonzini:
> On 05/05/2015 19:17, Radim Krčmář wrote:
>> 2015-04-30 13:36+0200, Paolo Bonzini:
>>>  struct kvm_memory_slot *x86_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn)
>>>  {
>>> -	struct kvm_memory_slot *slot = gfn_to_memslot(vcpu->kvm, gfn);
>>> +	bool found;
>>> +	struct kvm_memslots *memslots = kvm_memslots(vcpu->kvm);
>>> +	struct kvm_memory_slot *slot = search_memslots(memslots, gfn, &found);
>>> +
>>> +	if (found && unlikely(slot->flags & KVM_MEM_X86_SMRAM) && !is_smm(vcpu))
>>> +		return NULL;
>> 
>> Patch [10/13] made me sad and IIUIC, the line above is the only reason
>> for it ...
> 
> Yes, all the differences trickle down to using x86_gfn_to_memslot.
> 
> On the other hand, there are already cut-and-pasted loops for guest 
> memory access, see kvm_write_guest_virt_system or 
> kvm_read_guest_virt_helper.

(Yeah ... not introducing a new problem is a good first step toward fixing
 the existing one.  I can accept that both are okay -- the definition is up
 to us -- but not that we are adding an abomination on purpose.)

> We could add __-prefixed macros like
> 
> #define __kvm_write_guest(fn_page, gpa, data, len, args...)	\
> 	({							\
> 		gpa_t _gpa = (gpa);				\
> 		void *_data = (data);				\
> 		int _len = (len);				\
> 		gfn_t _gfn = _gpa >> PAGE_SHIFT;		\
> 		int _offset = offset_in_page(_gpa);		\
> 		int _seg, _ret = 0;				\
> 		while ((_seg = next_segment(_len, _offset)) != 0) { \
> 			_ret = (fn_page)(args, _gfn, _data, _offset, _seg); \
> 			if (_ret < 0)				\
> 				break;				\
> 			_offset = 0;				\
> 			_len -= _seg;				\
> 			_data += _seg;				\
> 			++_gfn;					\
> 		}						\
> 		_ret;						\
> 	})
> 
> ...
> 
> int x86_write_guest(struct kvm_vcpu *vcpu, gpa_t gpa, const void *data,
>                     unsigned long len)
> {
> 	return __kvm_write_guest(x86_write_guest_page, gpa, data, len, vcpu);
> }
> 
> but frankly it seems worse than the disease.

Well, it's a good approach, but the C language makes it awkward.
(I like first-class functions.)
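
The closest C gets is a function pointer plus an opaque context, at the
cost of an indirect call per page -- a rough, untested sketch (the helper
name and the ctx-first signature are invented):

  typedef int (*guest_page_fn)(void *ctx, gfn_t gfn, void *data,
                               int offset, int len);

  /* One loop body for all variants; 'ctx' carries the vcpu or kvm
   * pointer through to fn_page. */
  static int __kvm_guest_op(guest_page_fn fn_page, void *ctx, gpa_t gpa,
                            void *data, unsigned long len)
  {
          gfn_t gfn = gpa >> PAGE_SHIFT;
          int offset = offset_in_page(gpa);
          int seg, ret = 0;

          while ((seg = next_segment(len, offset)) != 0) {
                  ret = fn_page(ctx, gfn, data, offset, seg);
                  if (ret < 0)
                          break;
                  offset = 0;
                  len -= seg;
                  data += seg;
                  ++gfn;
          }
          return ret;
  }

The indirection prevents inlining of fn_page, which is exactly what your
macro avoids, so we trade object code for readability either way.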

>> What about renaming and changing the kvm_* memory functions to
>> vcpu_* and create
>>   bool kvm_arch_vcpu_can_access_slot(vcpu, slot)
>> which could also be inline in arch/*/include/asm/kvm_host.h thanks to
>> the way we build.
>> We could be passing both kvm and vcpu in internal memslot operations and
>> not checking if vcpu is NULL.  This should allow all possible operations
>> with little code duplication and the compiler could also optimize the
>> case where vcpu is NULL.
> 
> That would be a huge patch, and most architectures do not (yet) need it.

Not that huge ... a trivial extension to pass an extra argument around,
add a few wrappers to keep compatibility, and then a bunch of
  static inline bool .*(vcpu, slot) { return true; }
for the remaining arches.  (We could have a default unless an arch #defines
KVM_ARCH_VCPU_SLOT_CHECKING or some other hack to anger programmers.)
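
Concretely, something like this (untested; the guard macro is the hack
I mean -- an arch would #define it next to its own inline version):

  /* virt/kvm default: every access is allowed. */
  #ifndef KVM_ARCH_VCPU_SLOT_CHECKING
  static inline bool kvm_arch_vcpu_can_access_slot(struct kvm_vcpu *vcpu,
                                                   struct kvm_memory_slot *slot)
  {
          return true;
  }
  #endif

  /* arch/x86/include/asm/kvm_host.h override: */
  #define KVM_ARCH_VCPU_SLOT_CHECKING
  static inline bool kvm_arch_vcpu_can_access_slot(struct kvm_vcpu *vcpu,
                                                   struct kvm_memory_slot *slot)
  {
          if (unlikely(slot->flags & KVM_MEM_X86_SMRAM))
                  return vcpu && is_smm(vcpu);
          return true;
  }

(A NULL vcpu then fails the SMRAM case on its own, so kvm-only paths
would not need any special casing.)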

The hard part is getting the same object code and the added flexibility in C.

> I can change the functions to kvm_vcpu_read_* and when a second architecture
> needs it, we move it from arch/x86/kvm/ to virt/kvm.  I named it x86_ just
> because it was the same length as kvm_ and thus hardly needed reindentation.

That doesn't address the main issue, so the x86_ prefix is good.

>> Another option is adding something like "vcpu kvm_arch_fake_vcpu(kvm)"
>> for cases where the access doesn't have an associated vcpu, so it would
>> always succeed.  (Might not be generic enough.)
> 
> That's ugly...

Yes.  (And I still prefer it.)
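
(All names below are invented, just to illustrate: a per-VM dummy vcpu
that the slot check recognizes and always lets through.

  /* Assumed to be initialized by arch code at VM creation time. */
  static inline struct kvm_vcpu *kvm_arch_fake_vcpu(struct kvm *kvm)
  {
          return &kvm->arch.fake_vcpu;
  }

  static inline bool vcpu_is_fake(struct kvm_vcpu *vcpu)
  {
          return vcpu == &vcpu->kvm->arch.fake_vcpu;
  }

kvm-only accesses would pass kvm_arch_fake_vcpu(kvm) instead of NULL,
and the slot check would test vcpu_is_fake() first.)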

> The question is also how often the copied code is changed, and the answer is
> that most of it was never changed since it was introduced in 2007
> (commit 195aefde9cc2, "KVM: Add general accessors to read and write guest
> memory").  Before then, KVM used kmap_atomic directly!
> 
> Only the cache code is more recent, but that also has only been changed a
> couple of times after introducing it in 2010 (commit 49c7754ce570, "KVM:
> Add memory slot versioning and use it to provide fast guest write interface").
> It is very stable code.

We have different views on code duplication :)

The feature you wanted exposed a flaw in the code, so an extension was
needed.  Copying code is the last resort, after all options for
abstracting have been exhausted ... I may be forcing common paths even
where writing it twice takes less brain power, but 200 lines of
structurally identical code seem far past that point.
Reworking stable code is simpler, too, as we can just cover the features
needed now and skip the hard thinking about future extensions.
(For me, stable code is the first candidate for generalization ...
 and I wouldn't copy it, even though copying is mostly fine in practice.)

It's all nice in theory; I'll prepare a patch we can discuss.
(And maybe I'll end up agreeing with this one after understanding all the
challenges.)
