Date:	Wed, 21 Aug 2013 17:31:44 -0400
From:	Tejun Heo <tj@...nel.org>
To:	Kent Overstreet <kmo@...erainc.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	"Nicholas A. Bellinger" <nab@...ux-iscsi.org>,
	Christoph Lameter <cl@...two.org>,
	linux-kernel@...r.kernel.org, Oleg Nesterov <oleg@...hat.com>,
	Ingo Molnar <mingo@...hat.com>,
	Andi Kleen <andi@...stfloor.org>, Jens Axboe <axboe@...nel.dk>
Subject: Re: [PATCH] idr: Use this_cpu_ptr() for percpu_ida

Hello, Kent.

On Wed, Aug 21, 2013 at 02:24:42PM -0700, Kent Overstreet wrote:
> With single page allocations:
> 
> 1 << 15 bits per page
> 
> 1 << 9 pointers per page
> 
> So two layers of pointers does get us to 1 << 33 bits, which is what we
> need.

And a single layer - 1 << 15 - would cover most of the use cases, right?
With 1 << (9 + 15) probably covering everyone else except the cyclic ones
doing the full circle.
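
Just to spell the arithmetic out for anyone following along - a
standalone sketch assuming 4K pages and 8-byte pointers, not code from
either patch:

#include <stdio.h>

int main(void)
{
	unsigned long long page_bytes = 4096;
	unsigned long long bits_per_leaf = page_bytes * 8;	/* 1 << 15 */
	unsigned long long ptrs_per_node = page_bytes / 8;	/* 1 << 9  */

	printf("depth 0: %llu ids (1 << 15)\n", bits_per_leaf);
	printf("depth 1: %llu ids (1 << 24)\n",
	       ptrs_per_node * bits_per_leaf);
	printf("depth 2: %llu ids (1 << 33)\n",
	       ptrs_per_node * ptrs_per_node * bits_per_leaf);
	return 0;
}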

> But now, since we need two layers of pointers instead of one, we need
> either another pointer deref for a node lookup - _always_, even when
> we've got 8 bytes of bits - or we need to branch on the depth of the
> tree, which is something we don't have now.

A likely() branch which is almost always hit is *extremely* cheap.
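
For concreteness, this is the kind of fast path I have in mind - the
struct and the names below are made up for illustration, not taken from
either implementation:

#define likely(x)	__builtin_expect(!!(x), 1)

struct my_ida {
	unsigned int	depth;	/* 0: bits is a single leaf page */
	unsigned long	*bits;	/* leaf bitmap at depth 0, root node otherwise */
};

unsigned long *my_ida_find_leaf(struct my_ida *ida, unsigned int id)
{
	/* Fast path: a single leaf page covers ids 0 .. (1 << 15) - 1. */
	if (likely(ida->depth == 0))
		return ida->bits;

	/*
	 * Slow path: one extra pointer dereference per tree level to
	 * reach the right leaf page (walk omitted in this sketch).
	 */
	(void)id;
	return NULL;
}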

> This is extra overhead _no matter the size of the ida_, over my current
> approach.
> I'm assuming the common case is < one page of bits; based on the usage
> I've seen throughout the kernel, that's probably way conservative.
> 
> In that case, your approach is going to be slower than mine, and there's
> no difference in the size of the allocations.

By a single likely() branch.  I'm not even sure that'd be measurable in
most cases.  I'd take that over a custom radix tree implementation which
needs high-order allocations.
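
To put numbers on the allocation-order point - this is only a
standalone sketch of why a contiguous bitmap past one page turns into a
high-order allocation while a page-per-node tree stays at order 0, not
a description of either patch:

#include <stdio.h>

int main(void)
{
	unsigned long long ids = 1ULL << 24;
	unsigned long long flat_bytes = ids / 8;	/* 2 MB, contiguous */
	unsigned long long page_bytes = 4096;

	printf("flat bitmap for 1 << 24 ids: one %llu-byte buffer "
	       "(%llu contiguous pages, i.e. an order-9 allocation)\n",
	       flat_bytes, flat_bytes / page_bytes);
	printf("page-per-node tree: never allocates more than %llu bytes "
	       "at a time\n", page_bytes);
	return 0;
}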

> I've already shown massive performance gains over the existing radix
> tree approach, you're the one claiming a different approach would be
> better.

So?  What difference does that make?  You should be able to justify
your custom thing.  If you do something unusual, of course someone is
gonna ask you to justify it, and that justification is *your*
responsibility.

Thanks.

-- 
tejun
