Message-ID: <47037DAA.9070107@aitel.hist.no>
Date: Wed, 03 Oct 2007 13:31:54 +0200
From: Helge Hafting <helge.hafting@...el.hist.no>
To: davids@...master.com
CC: Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org
Subject: Re: Network slowdown due to CFS
David Schwartz wrote:
>> * Jarek Poplawski <jarkao2@...pl> wrote:
>>
>>> BTW, it looks risky to criticise sched_yield too much: some
>>> people may misinterpret such discussions and stop using it altogether,
>>> even where it's the right choice.
>>>
>
>> Really, I have never seen a _single_ mainstream app where the use of
>> sched_yield() was the right choice.
>>
>
> It can occasionally be an optimization. You may have a case where you can
> do something very efficiently if a lock is not held, but you cannot afford
> to wait for the lock to be released. So you check the lock; if it's held,
> you yield and then check again. If that fails, you do it the less optimal
> way (for example, dispatching it to a thread that *can* afford to wait).
>
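In C with POSIX threads, that pattern might look roughly like this (a
minimal sketch, not anyone's actual code; do_work_fast() and
dispatch_to_worker() are hypothetical stand-ins for the fast path and
the hand-off):

#include <pthread.h>
#include <sched.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

extern void do_work_fast(void);         /* the cheap, lock-held work */
extern void dispatch_to_worker(void);   /* hand off to a thread that can wait */

void fast_path_or_dispatch(void)
{
        if (pthread_mutex_trylock(&lock) == 0)
                goto got_it;
        sched_yield();                  /* give the holder a chance to finish */
        if (pthread_mutex_trylock(&lock) == 0)
                goto got_it;
        dispatch_to_worker();           /* still held: do it the slower way */
        return;
got_it:
        do_work_fast();
        pthread_mutex_unlock(&lock);
}
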
How about:
Check the lock. If it is held, sleep for an interval that is shorter
than the acceptable waiting time. If it is still held, sleep for twice
as long. Loop until you either get the lock and do the work, or reach
the limit on how long you can wait at this point, and then dispatch the
work to a thread instead.

This approach should be portable, doesn't wake up too often,
and doesn't waste CPU. (And the CPU won't go idle either; whoever
holds the lock will be running.)
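As a rough sketch (again assuming POSIX; the numbers are made up, and
sleep_us(), do_work_fast() and dispatch_to_worker() are hypothetical
helpers):

#include <pthread.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

extern void do_work_fast(void);         /* as in the sketch above */
extern void dispatch_to_worker(void);

static void sleep_us(long us)           /* sleep for 'us' microseconds */
{
        struct timespec ts = { us / 1000000, (us % 1000000) * 1000 };

        nanosleep(&ts, NULL);
}

void work_or_dispatch(void)
{
        long slept = 0;
        long interval = 50;             /* first sleep, well below the limit */

        while (pthread_mutex_trylock(&lock) != 0) {
                if (slept >= 10000) {   /* waited as long as we can here */
                        dispatch_to_worker();
                        return;
                }
                sleep_us(interval);
                slept += interval;
                interval *= 2;          /* sleep twice as long next time */
        }
        do_work_fast();                 /* got the lock */
        pthread_mutex_unlock(&lock);
}

The doubling keeps the number of wakeups logarithmic in the total wait,
which is why it doesn't wake up too often.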
Helge Hafting
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/