Message-ID: <49A389F7.9030002@kernel.org>
Date:	Tue, 24 Feb 2009 14:47:35 +0900
From:	Tejun Heo <tj@...nel.org>
To:	Rusty Russell <rusty@...tcorp.com.au>
CC:	tglx@...utronix.de, x86@...nel.org, linux-kernel@...r.kernel.org,
	hpa@...or.com, jeremy@...p.org, cpw@....com, mingo@...e.hu,
	tony.luck@...el.com
Subject: Re: [PATCH 09/10] percpu: implement new dynamic percpu allocator

Hello, Rusty.

Rusty Russell wrote:
> On Friday 20 February 2009 13:31:21 Tejun Heo wrote:
>>>    One question.  Are you thinking that to be defined by every SMP arch
>>> long-term?
>> Yeap, definitely.
> 
> Excellent.  That opens some really nice stuff.

Yeap, I think it'll be pretty interesting.

>>> Because there are benefits in having &<percpuvar> == valid
>>> percpuptr, such as passing them around as parameters.  If so, IA64
>>> will want a dedicated per-cpu area for statics (tho it can probably
>>> just map it somehow, but it has to be 64k).
>> Hmmm...  Don't have much idea about ia64 and its magic 64k.  Can it
>> somehow be used for the first chunk?
> 
> Yes, but I think that chunk must not be handed out for dynamic allocations
> but kept in reserve for modules.
> 
> IA64 uses a pinned TLB entry to map this cpu's 64k at __phys_per_cpu_start.
> See __ia64_per_cpu_var() in arch/ia64/include/asm/percpu.h.  This means they
> can also optimize cpu_local_* and read_cpuvar (or whatever it's called now).
> IIUC IA64 needs this region internally, using it for percpu vars is a bonus.

I'll take a look.

>>> These pseudo-constants seem like a really weird thing to do to me.
>> I explained this in the reply to Andrew's comment.  It's a
>> non-really-constant-but-should-be-considered-so-by-users thing.  Is it
>> too weird?  Even if I add a comment explaining it?
> 
> It's weird; I'd make them __read_mostly and be done with it.

Already dropped.  It seems I was the only one who liked it.

>> Hmmm... the reverse mapping can be piggy backed on vmalloc by adding a
>> private pointer to the vm_struct but rbtree isn't too difficult to use
>> so I just did it directly.  Nick, what do you think about adding
>> private field to vm_struct and providing a reverse map function?
> 
> Naah, just walk the arrays to do the mapping.  Cuts a heap of code, and
> we can optimize when someone complains :)
> 
> Walking arrays is cache friendly, too.

It won't make much difference cache-line-wise here as it needs to
dereference anyway.  Removing it would cut less than a hundred lines of
code, comments included.  Given the not-so-large reduction in
complexity, I'm a little reluctant to cut the code, but please feel
free to submit a patch to kill it if you think it's really necessary.

>> As for the sl*b allocation thing, can you please explain in more
>> detail or point me to the patches / threads?
> 
> lkml from 2008-05-30:
> 
> Message-Id: <20080530040021.800522644@....com>:
> Subject: [patch 32/41] cpu alloc: Use in slub
> And:
> Subject: [patch 33/41] cpu alloc: Remove slub fields
> Subject: [patch 34/41] cpu alloc: Page allocator conversion

I'll read them.  Thanks.

>> Thanks.  :-)
> 
> Don't thank me: you're doing all the work!
> Rusty.

Heh... I'm just being a coward.  I keep the thanks around so that I can
remove it when I wanna curse.  :-P

-- 
tejun