Message-ID: <53630686.2020604@kernel.dk>
Date:	Thu, 01 May 2014 20:44:22 -0600
From:	Jens Axboe <axboe@...nel.dk>
To:	Kent Overstreet <kmo@...erainc.com>
CC:	Ming Lei <tom.leiming@...il.com>,
	Alexander Gordeev <agordeev@...hat.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Shaohua Li <shli@...nel.org>,
	Nicholas Bellinger <nab@...ux-iscsi.org>,
	Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH RFC 0/2] percpu_ida: Take into account CPU topology when
 stealing tags

On 2014-05-01 20:38, Kent Overstreet wrote:
> On Thu, May 01, 2014 at 08:19:39PM -0600, Jens Axboe wrote:
>> On 2014-05-01 16:47, Kent Overstreet wrote:
>>> On Tue, Apr 29, 2014 at 03:13:38PM -0600, Jens Axboe wrote:
>>>> On 04/29/2014 05:35 AM, Ming Lei wrote:
>>>>> On Sat, Apr 26, 2014 at 10:03 AM, Jens Axboe <axboe@...nel.dk> wrote:
>>>>>> On 2014-04-25 18:01, Ming Lei wrote:
>>>>>>>
>>>>>>> Hi Jens,
>>>>>>>
>>>>>>> On Sat, Apr 26, 2014 at 5:23 AM, Jens Axboe <axboe@...nel.dk> wrote:
>>>>>>>>
>>>>>>>> On 04/25/2014 03:10 AM, Ming Lei wrote:
>>>>>>>>
>>>>>>>> Sorry, I did run it the other day. It has little to no effect here, but
>>>>>>>> that's mostly because there's so much other crap going on in there. The
>>>>>>>> most effective way to currently make it work better is just to ensure
>>>>>>>> the caching pool is of a sane size.
>>>>>>>
>>>>>>>
>>>>>>> Yes, that is just what the patch is doing :-)
>>>>>>
>>>>>>
>>>>>> But it's not enough.
>>>>>
>>>>> Yes, the patch is only for the case of multiple hw queues with
>>>>> offline CPUs present.
>>>>>
>>>>>> For instance, in my test case it's 255 tags and 64 CPUs.
>>>>>> We end up in cross-cpu spinlock nightmare mode.
>>>>>
>>>>> IMO, the scaling problem in the above case might be
>>>>> caused by either the current percpu_ida design or by
>>>>> blk-mq's usage of it.
>>>>
>>>> That is pretty much my claim, yes. Basically I don't think per-cpu tag
>>>> caching is ever going to be the best solution for the combination of
>>>> modern machines and the hardware that is out there (limited tags).
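
To put rough numbers on that test case: with T = 255 tags spread over
C = 64 CPUs, an even percpu split leaves each CPU with only

    floor(T / C) = floor(255 / 64) = 3

tags in its local cache, so at any realistic queue depth most
allocations miss locally and have to go steal from other CPUs.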
>>>
>>> Sorry for not being more active in the discussion earlier, but anyways - I'm in
>>> 100% agreement with this.
>>>
>>> Percpu freelists are _fundamentally_ only _useful_ when you don't need to be
>>> using all your available tags, because percpu sharding requires wasting your tag
>>> space. I could write a mathematical proof of this if I cared enough.
>>>
>>> Otherwise what happens is on alloc failure you're touching all the other
>>> cachelines every single time and now you're bouncing _more_ cachelines than if
>>> you just had a single global freelist.
>>>
>>> So yeah, for small tag spaces just use a single simple bit vector on a single
>>> cacheline.
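
To make that last point concrete, here is a minimal userspace sketch of
the "single bit vector on a single cacheline" idea - not the blk-mq or
kernel code, just an illustration assuming <= 64 tags and GCC/Clang
atomic builtins:

#include <stdint.h>
#include <errno.h>

/* All tags live in one 64-bit word, comfortably within one cacheline. */
struct tiny_tags {
	uint64_t bitmap;        /* bit set == tag in use */
	unsigned int nr_tags;   /* number of usable tags, <= 64 */
};

static int tiny_tag_get(struct tiny_tags *t)
{
	uint64_t old, newmap;
	int bit;

	do {
		old = __atomic_load_n(&t->bitmap, __ATOMIC_RELAXED);
		bit = __builtin_ffsll(~old) - 1;        /* lowest clear bit */
		if (bit < 0 || bit >= (int)t->nr_tags)
			return -EBUSY;                  /* tag space exhausted */
		newmap = old | (1ULL << bit);
	} while (!__atomic_compare_exchange_n(&t->bitmap, &old, newmap, 1,
					      __ATOMIC_ACQUIRE, __ATOMIC_RELAXED));
	return bit;
}

static void tiny_tag_put(struct tiny_tags *t, int tag)
{
	__atomic_fetch_and(&t->bitmap, ~(1ULL << tag), __ATOMIC_RELEASE);
}

When the tag space is nearly full, contention lands on a cacheline
either way - but here it is only ever that one line, and no tags sit
stranded in other CPUs' caches.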
>>
>> Following on from this, I've implemented another tagging scheme that blk-mq
>> will use if it deems that percpu_ida isn't going to be effective for the
>> device being initialized. But I really hate to have both of them in there.
>> Unfortunately I have no devices available with a tag space that would
>> justify using percpu_ida, so comparisons are a bit hard at the moment.
>> NVMe should change that, though, so the decision will have to be deferred
>> until that is tested.
>
> Yeah, I agree that is annoying. I've thought about the issue too though and I
> haven't been able to come up with any better ideas, myself.

I have failed in that area, too, and it's not for lack of thinking about 
it and experimenting. Hence a new solution was thought up, based on a lot 
of userland prototyping and testing. All things considered, two solutions 
are better than no solution.

> A given driver probably should be able to always use one or the other though, so
> we shouldn't _need_ a runtime conditional because of this, though structuring
> the code that way might be more trouble than it's worth from my vague
> recollection of what blk-mq looks like...

It's completely a runtime conditional at this point, and I'm not sure how 
to avoid that. It is transparent to drivers; they should not have to care 
which tagging scheme is in use. If we expose that to them, then we have 
failed. The runtime conditional is still better than a function pointer, 
though, so it'll likely stay that way for now.

So the entry points now all look like this:

if (use_new_scheme)
   ret = new_foo();
else
   ret = foo();

At least it should branch predict really well :-)
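
Spelled out as a self-contained toy, with made-up names rather than the 
actual blk-mq entry points, the pattern is roughly:

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the two schemes; not real kernel functions. */
static int percpu_scheme_get(void)  { return 1; }
static int bitmap_scheme_get(void)  { return 2; }

struct toy_tags {
	bool use_new_scheme;    /* decided once, at queue init time */
};

static int toy_tag_get(struct toy_tags *t)
{
	/* The same branch is taken on every call, so it predicts well. */
	if (t->use_new_scheme)
		return bitmap_scheme_get();
	return percpu_scheme_get();
}

int main(void)
{
	struct toy_tags t = { .use_new_scheme = true };
	printf("tag %d\n", toy_tag_get(&t));
	return 0;
}

Since use_new_scheme is fixed when the queue is set up, callers never 
see which scheme sits behind toy_tag_get().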

> (I've actually been fighting with unrelated issues at a very similar layer of
> abstraction; it's quite annoying.)
>
>>> BTW, Shaohua Li's patch d835502f3dacad1638d516ab156d66f0ba377cf5, which changed
>>> when steal_tags() runs, was fundamentally wrong and broken in this respect and
>>> should be reverted; whatever usage expected to be able to allocate the entire
>>> tag space was the real problem.
>>
>> It's hard to blame Shaohua, and I helped push that. It was an attempt at
>> making percpu_ida actually useful for what blk-mq needs it for, and with
>> blk-mq being the primary user, it was definitely worth doing. A tagging
>> scheme that requires the tag space to be effectively sparse/huge to be fast
>> is not a good generic tagging algorithm.
>
> Yeah it was definitely not an unreasonable attempt and it's probably my own
> fault for not speaking up louder at the time (I can't remember how much I
> commented at the time). Ah well, irritating problem space :)

Not a problem. I think the main failure here is that we have been coming 
at this with clashing expectations of what the requirements are. A 
further issue, on my end, was wanting to cling to the percpu_ida tags, 
thinking they could be made to work.

-- 
Jens Axboe

