Message-ID: <87mvr9xsrn.fsf@notabene.neil.brown.name>
Date: Wed, 10 Feb 2016 12:23:40 +1100
From: NeilBrown <neilb@...e.com>
To: Trond Myklebust <trond.myklebust@...marydata.com>
Cc: Anna Schumaker <anna.schumaker@...app.com>,
Linux NFS Mailing List <linux-nfs@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] SUNRPC: restore fair scheduling to priority queues.

On Sun, Dec 27 2015, Trond Myklebust wrote:
> On Tue, Dec 15, 2015 at 10:10 PM, NeilBrown <neilb@...e.com> wrote:
>> On Wed, Dec 16 2015, Trond Myklebust wrote:
>>
>>> On Tue, Dec 15, 2015 at 6:44 PM, NeilBrown <neilb@...e.com> wrote:
>>>>
>>>> Commit: c05eecf63610 ("SUNRPC: Don't allow low priority tasks to pre-empt higher priority ones")
>>>>
>>>> removed the 'fair scheduling' feature from SUNRPC priority queues.
>>>> This feature caused problems for some queues (send queue and session slot queue)
>>>> but is still needed for others, particularly the tcp slot queue.
>>>>
>>>> Without fairness, reads (priority 1) can starve background writes
>>>> (priority 0) so a streaming read can cause writeback to block
>>>> indefinitely. This is not easy to measure with default settings as
>>>> the current slot table size is much larger than the read-ahead size.
>>>> However if the slot-table size is reduced (seen when backporting to
>>>> older kernels with a limited size) the problem is easily demonstrated.
>>>>
>>>> This patch conditionally restores fair scheduling. It is now the
>>>> default unless rpc_sleep_on_priority() is called directly. Then the
>>>> queue switches to strict priority observance.
>>>>
>>>> As that function is called for both the send queue and the session
>>>> slot queue and not for any others, this has exactly the desired
>>>> effect.
>>>>
>>>> The "count" field that was removed by the previous patch is restored.
>>>> A value of '255' means "strict priority queuing, no fair queuing".
>>>> Any other value is a count of owners to be processed before switching
>>>> to a different priority level, just like before (illustrated in the
>>>> sketch below the quoted thread).
> <snip>
>>> Are we sure there is value in keeping FLUSH_LOWPRI for background writes?
>>
>> There is currently also FLUSH_HIGHPRI for "for_reclaim" writes.
>> Should they be allowed to starve reads?
>>
>> If you treated all reads and writes the same, then I can't see value in
>> restoring fair scheduling. If there is any difference, then I suspect
>> we do need the fairness.
>
> I disagree. Reclaiming memory should always be able to pre-empt
> "interactive" features such as read. Everything goes down the toilet
> when we force the kernel into situations where it needs to swap.
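
To make the "count" behaviour concrete, here is a toy model in plain C.
This is illustrative only -- the struct and names are invented for the
demo, not the actual SUNRPC code.  With two levels (0 = background
writes, 1 = reads), strict mode always services the higher level first,
while a fairness budget rotates between levels:

/*
 * mode == 255 means strict priority queuing (always serve the highest
 * non-empty level); any other value is a budget of tasks serviced at
 * the current level before rotating to a different one.
 */
#include <stdio.h>

#define NPRIO 2			/* 0 = background writes, 1 = reads */
#define STRICT 255

struct fake_queue {
	int nr_queued[NPRIO];	/* tasks waiting at each priority level */
	unsigned char mode;	/* 255 = strict, else fairness budget */
	unsigned char budget;	/* services remaining at current level */
	int level;		/* level currently being served */
};

static int pick_level(struct fake_queue *q)
{
	int lvl;

	if (q->mode != STRICT) {
		if (q->budget == 0) {
			/* Budget exhausted: rotate down one level. */
			q->level = q->level ? q->level - 1 : NPRIO - 1;
			q->budget = q->mode;
		}
		if (q->nr_queued[q->level]) {
			q->budget--;
			return q->level;
		}
	}
	/* Strict mode, or chosen level empty: highest non-empty level. */
	for (lvl = NPRIO - 1; lvl >= 0; lvl--)
		if (q->nr_queued[lvl])
			return lvl;
	return -1;
}

static void run(const char *name, unsigned char mode)
{
	/* 3 background writes queued alongside 6 reads. */
	struct fake_queue q = {
		.nr_queued = {3, 6}, .mode = mode, .budget = mode, .level = 1,
	};
	int lvl;

	printf("%s:", name);
	while ((lvl = pick_level(&q)) >= 0) {
		q.nr_queued[lvl]--;
		printf(" %d", lvl);	/* which priority got serviced */
	}
	printf("\n");
}

int main(void)
{
	run("strict", STRICT);		/* 1 1 1 1 1 1 0 0 0 */
	run("fair, budget 2", 2);	/* 1 1 0 0 1 1 0 1 1 */
	return 0;
}

With a continuous stream of incoming reads, the strict run would never
reach level 0 at all, which is exactly the writeback starvation
described above.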

That's your call I guess.  I certainly agree that memory-reclaim writes
should get some priority (e.g. two writes serviced for every read).
Whether they should be allowed to completely block reads I'm less sure
of.  But it is probably purely academic: if the system is busy
reclaiming, you are unlikely to have any reads to send anyway.

My problem would be solved (I think) by treating reads and non-reclaim
writes as equals.  I'll make a patch, see if I can test it, and let you
know.
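
To make that concrete ahead of an actual patch, a hypothetical sketch:
the FLUSH_* names echo the constants discussed above, but the values,
the struct and the helper are simplified stand-ins rather than the real
NFS client code.

#include <stdio.h>

#define FLUSH_NORMAL	0	/* same RPC priority as reads */
#define FLUSH_HIGHPRI	1	/* reserved for memory-reclaim writeback */

struct wbc_like {
	int for_reclaim;	/* writeback driven by memory reclaim */
	int for_background;	/* ordinary background writeback */
};

static int write_priority(const struct wbc_like *wbc)
{
	if (wbc->for_reclaim)
		return FLUSH_HIGHPRI;
	/*
	 * Background (and kupdate) writes no longer drop to a lower
	 * priority than reads, so a streaming read cannot starve them.
	 */
	return FLUSH_NORMAL;
}

int main(void)
{
	struct wbc_like bg = { .for_background = 1 };
	struct wbc_like rec = { .for_reclaim = 1 };

	printf("background write -> %d, reclaim write -> %d\n",
	       write_priority(&bg), write_priority(&rec));
	return 0;
}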

Thanks,
NeilBrown