Message-ID: <87mujxro70.fsf@vitty.brq.redhat.com>
Date:   Wed, 08 May 2019 10:54:59 -0400
From:   Vitaly Kuznetsov <vkuznets@...hat.com>
To:     Maya Nakamura <m.maya.nakamura@...il.com>
Cc:     mikelley@...rosoft.com, kys@...rosoft.com, haiyangz@...rosoft.com,
        sthemmin@...rosoft.com, sashal@...nel.org, x86@...nel.org,
        linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/6] x86: hv: hv_init.c: Replace alloc_page() with kmem_cache_alloc()

Maya Nakamura <m.maya.nakamura@...il.com> writes:

> On Fri, Apr 12, 2019 at 09:52:47AM +0200, Vitaly Kuznetsov wrote:
>> Maya Nakamura <m.maya.nakamura@...il.com> writes:
>> 
>> > On Fri, Apr 05, 2019 at 01:31:02PM +0200, Vitaly Kuznetsov wrote:
>> >> Maya Nakamura <m.maya.nakamura@...il.com> writes:
>> >> 
>> >> > @@ -98,18 +99,20 @@ EXPORT_SYMBOL_GPL(hyperv_pcpu_input_arg);
>> >> >  u32 hv_max_vp_index;
>> >> >  EXPORT_SYMBOL_GPL(hv_max_vp_index);
>> >> >  
>> >> > +struct kmem_cache *cachep;
>> >> > +EXPORT_SYMBOL_GPL(cachep);
>> >> > +
>> >> >  static int hv_cpu_init(unsigned int cpu)
>> >> >  {
>> >> >  	u64 msr_vp_index;
>> >> >  	struct hv_vp_assist_page **hvp = &hv_vp_assist_page[smp_processor_id()];
>> >> >  	void **input_arg;
>> >> > -	struct page *pg;
>> >> >  
>> >> >  	input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
>> >> > -	pg = alloc_page(GFP_KERNEL);
>> >> > -	if (unlikely(!pg))
>> >> > +	*input_arg = kmem_cache_alloc(cachep, GFP_KERNEL);
>> >> 
>> >> I'm not sure use of kmem_cache is justified here: pages we allocate are
>> >> not cache-line and all these allocations are supposed to persist for the
>> >> lifetime of the guest. In case you think that even on x86 it will be
>> >> possible to see PAGE_SIZE != HV_HYP_PAGE_SIZE you can use alloc_pages()
>> >> instead.
>> >> 
>> > Thank you for your feedback, Vitaly!
>> >
>> > Will you please tell me how cache-line relates to kmem_cache?
>> >
>> > I understand that alloc_pages() would work when PAGE_SIZE <=
>> > HV_HYP_PAGE_SIZE, but I think that it would not work if PAGE_SIZE >
>> > HV_HYP_PAGE_SIZE.
>> 
>> Sorry, my bad: I meant to say "not cache-like" (these allocations are
>> not 'cache') but the typo made it completely incomprehensible. 
>  
> No worries! Thank you for sharing your thoughts with me, Vitaly.
>
> Do you know of any alternatives to kmem_cache that can allocate memory
> of a specified size (different from the guest page size) with alignment?
> Memory allocated by alloc_page() is aligned but limited to the guest
> page size, and kmalloc() can allocate a particular size but it seems
> that it does not guarantee alignment. I am asking this while considering
> the changes for architecture-independent code.
>

I think we can consider these allocations to be DMA-like (because the
hypervisor accesses this memory too), so you can probably take a look at
dma_pool_create()/dma_pool_alloc() and friends.
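
Something like this, perhaps (a rough, completely untested sketch: 'hv_dev',
the function names and the pool name are made up for illustration, and the
real code would need an actual struct device to associate the pool with):

#include <linux/dmapool.h>

static struct dma_pool *hv_input_pool;

/* Create a pool of HV_HYP_PAGE_SIZE buffers aligned to HV_HYP_PAGE_SIZE,
 * independent of the guest PAGE_SIZE.
 */
static int hv_input_pool_create(struct device *hv_dev)
{
	hv_input_pool = dma_pool_create("hv_hypercall_input", hv_dev,
					HV_HYP_PAGE_SIZE, HV_HYP_PAGE_SIZE, 0);
	return hv_input_pool ? 0 : -ENOMEM;
}

/* Per-cpu allocation replacing the alloc_page()/kmem_cache_alloc() call. */
static void *hv_alloc_hyp_page(void)
{
	dma_addr_t dma;

	return dma_pool_alloc(hv_input_pool, GFP_KERNEL, &dma);
}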

>> >> Also, in case the idea is to generalize stuff, what will happen if
>> >> PAGE_SIZE > HV_HYP_PAGE_SIZE? Who will guarantee proper alignment?
>> >> 
>> >> I think we can leave hypercall arguments, vp_assist and similar pages
>> >> alone for now: the code is not going to be shared among architectures
>> >> anyways.
>> >> 
>> > About the alignment, kmem_cache_create() aligns memory with its third
>> > parameter, align.
>> 
>> Yes, I know, I was trying to think about a (hypothetical) situation when
>> page sizes differ: what would be the memory alignment requirements from
>> the hypervisor for e.g. hypercall arguments? In case it's always
>> HV_HYP_PAGE_SIZE we're good but could it be PAGE_SIZE (for e.g. TLB
>> flush hypercall)? I don't know. For x86 this discussion probably makes
>> no sense. I'm, however, struggling to understand what benefit we will
>> get from the change. Maybe just leave it as-is for now and fix the
>> arch-independent code only? And later, if we decide to generalize this
>> code, take another approach? (Not insisting, just a suggestion)
>
> Thank you for the suggestion, Vitaly!
>
> The introduction of HV_HYP_PAGE_SIZE builds in an assumption about the
> future page size: based on the general trend it can get bigger, not
> smaller, which is a reasonable assumption, I think.
>

Let's spell it out (as BUILD_BUG_ON(HV_HYP_PAGE_SIZE < PAGE_SIZE) or
something like that) then to make it clear.
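
E.g. something along these lines (placement purely illustrative, it could go
into hv_init() or wherever the size assumption is first relied on):

	/*
	 * Fail the build if the guest PAGE_SIZE ever grows beyond
	 * HV_HYP_PAGE_SIZE, making the size assumption explicit.
	 */
	BUILD_BUG_ON(HV_HYP_PAGE_SIZE < PAGE_SIZE);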

-- 
Vitaly
