Open Source and information security mailing list archives
Date: Mon, 01 Apr 2013 12:15:50 +0530
From: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
CC: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, Mike Galbraith <efault@....de>,
	Paul Turner <pjt@...gle.com>, Alex Shi <alex.shi@...el.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Namhyung Kim <namhyung@...nel.org>
Subject: Re: [PATCH 5/5] sched: limit sched_slice if it is more than sysctl_sched_latency

Hi Joonsoo,

On 04/01/2013 10:39 AM, Joonsoo Kim wrote:
> Hello Preeti.
> So we should limit this possible weird situation.
>>>
>>> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index e232421..6ceffbc 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -645,6 +645,9 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
>>>  	}
>>>  	slice = calc_delta_mine(slice, se->load.weight, load);
>>>
>>> +	if (unlikely(slice > sysctl_sched_latency))
>>> +		slice = sysctl_sched_latency;
>>
>> Then in this case the highest priority thread would get
>> 20ms (sysctl_sched_latency), and the rest would get
>> sysctl_sched_min_granularity * 10 * (1024/97977), which would be 0.4ms.
>> Then all tasks would get scheduled at least once within 20ms + (0.4*9) ms
>> = 23.7ms, while your scheduling latency period was extended to 40ms, just
>> so that these tasks don't have their sched_slices shrunk due to the
>> large number of tasks.
>
> I am not sure I understand your question correctly.
> I will do my best to answer your comment. :)
>
> With this patch, I just limit the maximum slice at one time. Scheduling is
> controlled through the vruntime. So, in this case, the task with nice -20
> will be scheduled twice.
>
> 20 + (0.4 * 9) + 20 = 43.9 ms
>
> And after 43.9 ms, this process is repeated.
>
> So I can tell you that the scheduling period is preserved as before.
>
> If we give a long period to a task at one go, it can cause
> a latency problem. So IMHO, limiting this is meaningful.

Thank you very much for the explanation. Just one question. What is the
reason behind your choosing sysctl_sched_latency as the upper bound here?

Regards
Preeti U Murthy
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/