Date:	Mon, 22 Mar 2010 10:33:58 +0800
From:	graff yang <graff.yang@...il.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	dhowells@...hat.com, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org, uclinux-dist-devel@...ckfin.uclinux.org
Subject: Re: [PATCH] mm/nommu.c:Dynamic alloc/free percpu area for nommu

On Sat, Mar 20, 2010 at 12:06 PM, Tejun Heo <tj@...nel.org> wrote:
> Hello,
>
> On 03/19/2010 06:02 PM, graff.yang@...il.com wrote:
>>
>> From: Graff Yang <graff.yang@...il.com>
>>
>> This patch supports dynamic alloc/free of the percpu area on nommu
>> arches like blackfin.
>> It allocates contiguous pages in the function pcpu_get_vm_areas(),
>> instead of getting non-contiguous pages and then vmap()ing them as on
>> mmu arches.
>> As we cannot get the real page structure through vmalloc_to_page(), it
>> also modifies the nommu versions of vmalloc_to_page()/vmalloc_to_pfn().
>>
>> Signed-off-by: Graff Yang <graff.yang@...il.com>
>
> Heh heh... I've never imagined there would be an SMP architecture w/o
> mmu.  That's pretty interesting.  I mean, there is real estate for
> multiple cores but not for mmu?

Yes, we ported SMP to the Blackfin dual-core processor BF561.

>
>> diff --git a/mm/nommu.c b/mm/nommu.c
>> index 605ace8..98bbdf4 100644
>> --- a/mm/nommu.c
>> +++ b/mm/nommu.c
>> @@ -255,13 +255,15 @@ EXPORT_SYMBOL(vmalloc_user);
>>
>>  struct page *vmalloc_to_page(const void *addr)
>>  {
>> -       return virt_to_page(addr);
>> +       return (struct page *)
>> +                       (virt_to_page(addr)->index) ? : virt_to_page(addr);
>
> Nothing major but isn't it more usual to write ?: without the
> intervening space?
>
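Right, will make it a single statement without the intervening space, e.g.
(same logic, only the spacing changes):

        return (struct page *)(virt_to_page(addr)->index) ?: virt_to_page(addr);
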
>> +#ifdef CONFIG_SMP
>> +int map_kernel_range_noflush(unsigned long addr, unsigned long size,
>> +                                       pgprot_t prot, struct page **pages)
>> +{
>
> More nitpicks.
>
>> +       int i, nr_page = size>>  PAGE_SHIFT;
>
>               nr_pages = size >> PAGE_SHIFT;
>
>> +       for (i = 0; i<  nr_page; i++, addr += PAGE_SIZE)
>
>                    i < nr_pages
>
>> +               virt_to_page(addr)->index = (pgoff_t)pages[i];
>> +       return size>>  PAGE_SHIFT;
>
>        return size >> PAGE_SHIFT;
>
> I think checkpatch would whine about these too.

OK.
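Will respin with the spacing fixed and nr_page renamed to nr_pages, roughly
like this (untested, same logic as in the patch); unmap_kernel_range_noflush()
below gets the same treatment:

int map_kernel_range_noflush(unsigned long addr, unsigned long size,
                             pgprot_t prot, struct page **pages)
{
        int i, nr_pages = size >> PAGE_SHIFT;

        /* record the real page behind each identity-mapped address */
        for (i = 0; i < nr_pages; i++, addr += PAGE_SIZE)
                virt_to_page(addr)->index = (pgoff_t)pages[i];
        return size >> PAGE_SHIFT;
}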

>
>> +void unmap_kernel_range_noflush(unsigned long addr, unsigned long size)
>> +{
>> +       int i, nr_page = size>>  PAGE_SHIFT;
>> +       for (i = 0; i<  nr_page; i++, addr += PAGE_SIZE)
>> +               virt_to_page(addr)->index = 0;
>> +}
>> +
>> +struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
>> +                                       const size_t *sizes, int nr_vms,
>> +                                               size_t align, gfp_t gfp_mask)
>
> Hmmm... in general, one of the reasons the percpu allocator is
> complex is to avoid contiguous allocations while also avoiding additional
> TLB / NUMA overhead on machines with rather complex memory
> configurations (which is pretty common these days).  If the memory has
> to be allocated contiguously anyway, it probably would be much simpler
> to hook in at a higher level and simply allocate each chunk contiguously.
> I'll look into it.
I understand the complexity of the percpu allocation code. On a nommu arch,
we have to allocate the memory in one contiguous block up front, and in my
implementation many pages end up wasted.
It would be better if the percpu allocation code provided some hooks for us.
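
For example, just to illustrate the idea (pcpu_arch_alloc_chunk() and
pcpu_arch_free_chunk() are made-up names, not existing kernel API): the
percpu core could call an arch-provided per-chunk alloc/free pair, which on
nommu could simply be backed by alloc_pages_exact():

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical per-chunk hooks, made-up names, sketch only. */
void *pcpu_arch_alloc_chunk(size_t size, gfp_t gfp)
{
        /* nommu: the whole chunk must be physically contiguous */
        return alloc_pages_exact(size, gfp);
}

void pcpu_arch_free_chunk(void *ptr, size_t size)
{
        free_pages_exact(ptr, size);
}

The mmu build would keep its current vmalloc-based chunk mapping and simply
not use such hooks.
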
Thanks for your feedback.

-- 
-Graff
