Message-ID: <87wqchcp6n.fsf@linux.vnet.ibm.com>
Date: Mon, 16 Jun 2014 12:32:24 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Michal Nazarewicz <mina86@...a86.com>,
Minchan Kim <minchan@...nel.org>,
Russell King - ARM Linux <linux@....linux.org.uk>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Gleb Natapov <gleb@...nel.org>, Alexander Graf <agraf@...e.de>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
kvm@...r.kernel.org, kvm-ppc@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH v2 07/10] PPC, KVM, CMA: use general CMA reserved area management framework

Joonsoo Kim <iamjoonsoo.kim@....com> writes:
> On Sat, Jun 14, 2014 at 02:23:59PM +0530, Aneesh Kumar K.V wrote:
>> Joonsoo Kim <iamjoonsoo.kim@....com> writes:
>>
>> > Now, we have general CMA reserved area management framework,
>> > so use it for future maintainability. There is no functional change.
>> >
>> > Acked-by: Michal Nazarewicz <mina86@...a86.com>
>> > Acked-by: Paolo Bonzini <pbonzini@...hat.com>
>> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
>>
>> Need this. We may want to keep the VM_BUG_ON by moving
>> KVM_CMA_CHUNK_ORDER around.
>>
>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> index 8056107..1932e0e 100644
>> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> @@ -37,8 +37,6 @@
>> #include <asm/ppc-opcode.h>
>> #include <asm/cputable.h>
>>
>> -#include "book3s_hv_cma.h"
>> -
>> /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
>> #define MAX_LPID_970 63
>>
>> @@ -64,7 +62,6 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
>> }
>>
>> kvm->arch.hpt_cma_alloc = 0;
>> - VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
>> page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
>> if (page) {
>> hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
>>
>>
>>
>> -aneesh
>
> Okay.
> So do you also want this?
>
> @@ -131,16 +135,18 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
> {
> unsigned long align_pages = HPT_ALIGN_PAGES;
>
> + VM_BUG_ON(get_order(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
> +
> /* Old CPUs require HPT aligned on a multiple of its size */
> if (!cpu_has_feature(CPU_FTR_ARCH_206))
> align_pages = nr_pages;
> - return kvm_alloc_cma(nr_pages, align_pages);
> + return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
> }
That would also work.
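
For anyone following the thread, here is a minimal stand-alone sketch of the
relationship that the moved VM_BUG_ON enforces. This is user-space C, not
kernel code: the PAGE_SHIFT and KVM_CMA_CHUNK_ORDER values and the
page_order() helper below are illustrative assumptions, not taken from the
patch itself.

#include <stdio.h>

/* Illustrative values only -- see the kernel sources for the real ones. */
#define PAGE_SHIFT          12   /* assumed 4K base pages */
#define KVM_CMA_CHUNK_ORDER 18   /* assumed 256K CMA chunk size */

/* Hypothetical helper: smallest order such that (1 << order) >= nr_pages. */
static int page_order(unsigned long nr_pages)
{
	int order = 0;

	while ((1UL << order) < nr_pages)
		order++;
	return order;
}

int main(void)
{
	/* e.g. a 16MB HPT expressed as a page count */
	unsigned long nr_pages = (16UL << 20) >> PAGE_SHIFT;
	int min_order = KVM_CMA_CHUNK_ORDER - PAGE_SHIFT;

	/* The moved assertion: every HPT allocation must span at least one
	 * CMA chunk, i.e. its order in pages must reach the chunk order
	 * expressed in pages. */
	if (page_order(nr_pages) < min_order)
		printf("order %d < %d: VM_BUG_ON would trigger\n",
		       page_order(nr_pages), min_order);
	else
		printf("order %d >= %d: allocation covers at least one chunk\n",
		       page_order(nr_pages), min_order);
	return 0;
}
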
Thanks
-aneesh