Message-ID: <6E3BC7F7C9A4BF4286DD4C043110F30B4BBA01D83D@shsmsx502.ccr.corp.intel.com>
Date: Wed, 6 Apr 2011 21:28:38 +0800
From: "Shi, Alex" <alex.shi@...el.com>
To: Mike Galbraith <efault@....de>
CC: Rik van Riel <riel@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"a.p.zijlstra@...llo.nl" <a.p.zijlstra@...llo.nl>,
"mingo@...e.hu" <mingo@...e.hu>,
"Chen, Tim C" <tim.c.chen@...el.com>,
"Li, Shaohua" <shaohua.li@...el.com>
Subject: RE: [PATCH] sched: recover sched_yield task running time increase
>> > NACK
>> >
>> > This was switched off by default and under
>> > the sysctl sched_compat_yield for a reason.
>> >
>> > Reintroducing it under that sysctl option
>> > may be acceptable, but by default it would
>> > be doing the wrong thing for other workloads.
>>
>> I can implement this as a sysctl option. But when I checked the
>> sched_yield man page again, I had some concerns about it.
>>
>> ----
>> int sched_yield(void);
>>
>> DESCRIPTION
>> A process can relinquish the processor voluntarily without blocking by calling sched_yield().
>> The process will then be moved to the end of the queue for its static priority and a new process
>> gets to run.
>> ----
>>
>> If an application calls the sched_yield system call, most of the time it
>> does not want to be scheduled again right away. That is why the man page
>> says "the process will then be moved to the _end_ of the queue..."
>
>Moving a yielding nice 0 task behind a SCHED_IDLE (or nice 19) task
>could be incredibly painful.
Good reminder! Do you have a more detailed idea on this?
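
For reference, a minimal user-space sketch of the pattern under discussion:
two CPU-bound tasks pinned to one CPU, one of them calling sched_yield() on
every iteration. This is only an illustration, not part of the patch; the
iteration count and the choice of CPU 0 are arbitrary. How far behind its
competitor the yielder ends up on the runqueue is exactly what
sched_compat_yield (and this thread) is about.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t set;
	long i;

	CPU_ZERO(&set);
	CPU_SET(0, &set);		/* pin both tasks to CPU 0 so they compete */
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");

	if (fork() == 0) {		/* child: plain CPU hog */
		for (i = 0; i < 200000000L; i++)
			;
		_exit(0);
	}

	for (i = 0; i < 200000000L; i++)
		sched_yield();		/* parent: yields on every iteration */

	wait(NULL);
	printf("done\n");
	return 0;
}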
--