Date:	Sat, 8 Mar 2014 13:13:10 -0500
From:	Mike Snitzer <snitzer@...il.com>
To:	Hannes Reinecke <hare@...e.de>
Cc:	Christoph Hellwig <hch@...radead.org>,
	Jeff Moyer <jmoyer@...hat.com>, Jens Axboe <axboe@...nel.dk>,
	Shaohua Li <shli@...ionio.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	msnitzer <msnitzer@...hat.com>
Subject: Re: [PATCH 1/1] block: rework flush sequencing for blk-mq

On Sat, Mar 8, 2014 at 2:51 PM, Hannes Reinecke <hare@...e.de> wrote:
> On 03/08/2014 06:33 PM, Mike Snitzer wrote:
>>
>> On Sat, Mar 8, 2014 at 10:52 AM, Christoph Hellwig <hch@...radead.org>
>> wrote:
>>>
>>> On Fri, Mar 07, 2014 at 03:45:09PM -0500, Jeff Moyer wrote:
>>>>
>>>> Hi, Christoph,
>>>>
>>>> Did you mean to switch from list_add to list_add_tail?  That seems like
>>>> a change that warrants mention.
>>>
>>>
>>> No, that wasn't intentional and should be fixed.  Btw, there was another
>>> issue with that commit, in that dm-multipath also needs to allocate
>>> ->flush_rq.  I saw a patch from Hannes fixing it in the SuSE tree, and
>>> would really love to see him submit that for mainline as well.
>>
>>
>> Ugh, rq-based DM calls blk_init_allocated_queue.. (ah, looks like it'd
>> be best to move the q->flush_rq allocation from blk_init_queue_node to
>> blk_alloc_queue_node?).

The above suggestion would work.  But we'd lose the side-effect
benefit of bio-based DM not needing q->flush_rq allocated at all.

But I'm not immediately seeing a clean way to get that benefit (of not
allocating for bio-based request_queue) while always allocating it for
request-based queues.

Actually, how about moving the flush_rq allocation to
blk_init_allocated_queue()?  That would accomplish both (request-based
queues always allocate it, bio-based queues never do).  What do others
think of that?
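
Roughly, and purely as an untested sketch of the idea (against current
blk-core.c; error unwinding and the rest of the function elided):

	struct request_queue *
	blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
				 spinlock_t *lock)
	{
		if (!q)
			return NULL;

		/*
		 * Allocate flush_rq here instead of in blk_init_queue_node,
		 * so only request-based queues pay for it; bio-based DM
		 * queues only go through blk_alloc_queue_node and would
		 * never allocate a flush_rq they don't use.
		 */
		q->flush_rq = kzalloc(sizeof(struct request), GFP_KERNEL);
		if (!q->flush_rq)
			return NULL;

		/* ... rest of blk_init_allocated_queue unchanged ... */
	}

(blk-mq queues would presumably keep allocating their flush_rq from
blk_mq_init_queue, as they do now.)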

>>> Unfortunately SuSE seems to have lots of block and dm fixes and even
>>> features that they don't submit upstream.
>>
>>
>> Yeah, it is certainly disturbing.  No excuse for sitting on fixes like
>> this.
>>
>> Hannes, _please_ get this dm-mpath flush_rq fix for 3.14 posted ASAP.
>> Jens or I will need to get it to Linus next week.
>>
> Hey, calm down.

I'm calm.. I was just a bit frustrated.  But this isn't a big deal.
I'll make an effort to reach out to the relevant people sooner when
similar stuff is reported against recently upstreamed code.  Would be
cool if you did the same.  I can relate to needing to wear the distro
vendor hat (first having to answer "is this issue specific to our
hacked distro kernel?", etc).

> I made the fix just two days ago, and was quite surprised that I was the
> first to hit it; it should've crashed for everybody using dm-multipath.

Yeah, it is surprising that we haven't had any upstream reports.

> And given the pushback I've gotten recently on my patches, I would have
> thought it worked for most users; surely the author would've done due
> diligence on the original patchset ...
> Plus, I got the reports from S/390, so I put it down to mainframe
> weirdness.
>
> BTW, it's not _my_ decision to sit on tons of SUSE-specific patches.

I'll take the bait.. this isn't a SUSE-specific patch we're talking about. ;)

> I really try to get things upstream.  But I cannot do more than send
> patches upstream, patiently answer any questions, and redo the patchset.
> Which I did, frequently.  But, alas, it's up to the maintainer to apply
> them, and I can only ask and hope.  The usual story...

Guess I'm missing context for which patches you're saying people are
sitting on.  (I doubt you're referring to your dm-mpath patchset on
dm-devel.. since it has gone through 9ish iterations.  Now that
everything is sorted out I'll be getting it reviewed and staged for
3.15 next week.)

> I'll be sending the patch soon, Monday at the latest.

OK, looking forward to seeing it.  Would appreciate your feedback on
the questions/suggestions I posed above.

Thanks,
Mike
