Message-ID: <c71255da-c142-8ea1-2dc4-840a6581689b@kernel.dk>
Date: Fri, 18 Jan 2019 10:36:07 -0700
From: Jens Axboe <axboe@...nel.dk>
To: Paolo Valente <paolo.valente@...aro.org>
Cc: linux-block <linux-block@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Ulf Hansson <ulf.hansson@...aro.org>,
Linus Walleij <linus.walleij@...aro.org>,
Mark Brown <broonie@...nel.org>,
'Paolo Valente' via bfq-iosched
<bfq-iosched@...glegroups.com>, oleksandr@...alenko.name,
hurikhan77+bko@...il.com
Subject: Re: [PATCH BUGFIX RFC 0/2] reverting two commits causing freezes
On 1/18/19 10:24 AM, Paolo Valente wrote:
>
>
>> Il giorno 18 gen 2019, alle ore 14:35, Jens Axboe <axboe@...nel.dk> ha scritto:
>>
>> On 1/18/19 4:52 AM, Paolo Valente wrote:
>>> Hi Jens,
>>> a user reported a warning, followed by freezes, in case he increases
>>> nr_requests to more than 64 [1]. After reproducing the issues, I
>>> reverted the commit f0635b8a416e ("bfq: calculate shallow depths at
>>> init time"), plus the related commit bd7d4ef6a4c9 ("bfq-iosched:
>>> remove unused variable"). The problem went away.
>>
>> For reverts, please put the justification into the actual revert
>> commit. With this series, if applied as-is, we'd have two patches
>> in the tree that just says "revert X" without any hint as to why
>> that was done.
>>
>
> I forgot to say explicitly that these patches were meant only to give
> you and anybody else something concrete to test and check.
>
> With me you're as safe as houses, in terms of the amount of comments
> in the final patches :)
It's almost a classic case of "if you want a real solution to a
problem, post a knowingly bad and half-assed solution". That always
gets people out of the woodwork :-)
>>> Maybe the assumption in commit f0635b8a416e ("bfq: calculate shallow
>>> depths at init time") does not hold true?
>>
>> It apparently doesn't! But let's try and figure this out instead of
>> blindly reverting it.
>
> Totally agree.
>
>> OK, I think I see it. For the sched_tags
>> case, when we grow the requests, we allocate a new set. Hence any
>> old cache would be stale at that point.
>>
>
> ok
>
>> How about something like this? It still keeps the code of having
>> to update this out of the hot IO path, and only calls it when we
>> actually change the depths.
>>
>
> Looks rather clean and efficient.
>
>> Totally untested...
>>
>
> It seems to work here too.
OK good, I've posted it "officially" now.
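For anyone following along, the pattern under discussion can be sketched
roughly as below. This is an illustrative userspace mock-up, not the actual
bfq-iosched code: all names (struct tags, sched_data, update_depths,
depth_updated) and the particular fraction used for the limit are made up.
The point is only the invalidation scheme: the shallow-depth limit is derived
from the tag-set depth and cached outside the hot I/O path, so when
nr_requests changes and a new tag set is allocated, the cache must be
recomputed from the new set rather than left stale.

```c
/* Hypothetical sketch of the caching scheme discussed above.
 * Not the real bfq code; names and numbers are illustrative. */

struct tags {
	unsigned int depth;		/* total number of tags in the set */
};

struct sched_data {
	struct tags *tags;		/* current tag set */
	unsigned int cached_depth;	/* depth the limit was computed for */
	unsigned int shallow_limit;	/* cached derived limit */
};

/* Recompute the cached limit from the current tag set.
 * Kept out of the hot I/O path: called only at init and on resize. */
static void update_depths(struct sched_data *sd)
{
	sd->cached_depth = sd->tags->depth;
	/* e.g. restrict some request classes to a fraction of the tags */
	sd->shallow_limit = sd->tags->depth / 4 ? sd->tags->depth / 4 : 1;
}

/* Called when nr_requests changes: a new tag set is allocated,
 * so any limit derived from the old set is stale and must be
 * recomputed against the new depth. */
static void depth_updated(struct sched_data *sd, struct tags *new_tags)
{
	sd->tags = new_tags;
	update_depths(sd);
}
```

With this shape, growing nr_requests past the old depth simply triggers one
recomputation at resize time, instead of leaving a limit that was sized for
the old, smaller tag set.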
--
Jens Axboe