Message-ID: <20071003080224.GB1726@ff.dom.local>
Date:	Wed, 3 Oct 2007 10:02:24 +0200
From:	Jarek Poplawski <jarkao2@...pl>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	David Schwartz <davids@...master.com>, linux-kernel@...r.kernel.org
Subject: Re: Network slowdown due to CFS

On 02-10-2007 08:06, Ingo Molnar wrote:
> * David Schwartz <davids@...master.com> wrote:
...
>> I'm not familiar enough with CFS' internals to help much on the 
>> implementation, but there may be some simple compromise yield that 
>> might work well enough. How about simply acting as if the task used up 
>> its timeslice and scheduling the next one? (Possibly with a slight 
>> reduction in penalty or reward for not really using all the time, if 
>> possible?)
> 
> firstly, there's no notion of "timeslices" in CFS. (in CFS tasks "earn" 
> a right to the CPU, and that "right" is not sliced in the traditional 
> sense) But we tried a conceptually similar thing [...]
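For illustration, that "earned right" model boils down to roughly the following toy sketch (a hypothetical user-space simplification, not anything from the kernel sources): each task accumulates virtual runtime, and the scheduler always runs whoever has received the least so far, with no slice handed out up front.

/*
 * Toy illustration (not kernel code) of the "earned right to the CPU"
 * idea: tasks accumulate virtual runtime, and the scheduler picks the
 * task that has received the least so far.  No fixed slice exists.
 */
#include <stdio.h>

struct toy_task {
	const char *name;
	unsigned long long vruntime;	/* nanoseconds of weighted CPU time */
};

static struct toy_task *pick_next(struct toy_task *tasks, int n)
{
	struct toy_task *best = &tasks[0];
	for (int i = 1; i < n; i++)
		if (tasks[i].vruntime < best->vruntime)
			best = &tasks[i];
	return best;
}

int main(void)
{
	struct toy_task tasks[] = {
		{ "A", 4000000 }, { "B", 1000000 }, { "C", 2500000 },
	};

	/* B has the smallest vruntime, so it has "earned" the CPU next. */
	printf("next: %s\n", pick_next(tasks, 3)->name);
	return 0;
}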

From kernel/sched_fair.c:

"/*
 * Targeted preemption latency for CPU-bound tasks:
 * (default: 20ms, units: nanoseconds)
 *
 * NOTE: this latency value is not the same as the concept of
 * 'timeslice length' - timeslices in CFS are of variable length.
 * (to see the precise effective timeslice length of your workload,
 *  run vmstat and monitor the context-switches field)
..."

So, no notion of something which nevertheless is(!) of variable length, and
whose precise effective timeslice length can be observed in nanoseconds?
(Just don't call it a timeslice!)

Well, I'm starting to think this new scheduler might still be a bit too simple...
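To be fair, the "variable length" the comment talks about is easy enough to
picture: the targeted latency gets split among the runnable tasks in
proportion to their load weight, so the effective slice shrinks as the run
queue grows. A back-of-the-envelope sketch (not the actual kernel
calculation):

/*
 * Rough sketch (not kernel code): the targeted latency is divided among
 * runnable tasks by load weight, which is where the "variable length"
 * effective timeslice comes from.
 */
#include <stdio.h>

#define TARGET_LATENCY_NS 20000000ULL	/* default: 20ms */

static unsigned long long effective_slice(unsigned long task_weight,
					  unsigned long total_weight)
{
	return TARGET_LATENCY_NS * task_weight / total_weight;
}

int main(void)
{
	/* Two equal-weight tasks: ~10ms each; five tasks: ~4ms each. */
	printf("2 tasks: %llu ns\n", effective_slice(1024, 2 * 1024));
	printf("5 tasks: %llu ns\n", effective_slice(1024, 5 * 1024));
	return 0;
}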


> [...] [ and this is driven by compatibility 
> goals - regardless of how broken we consider yield use. The ideal 
> solution is of course to almost never use yield. Fortunately 99%+ of 
> Linux apps follow that ideal solution ;-) ]

Nevertheless, it seems this 1% is important enough to boast about a little:

  "( another detail: due to nanosec accounting and timeline sorting,
     sched_yield() support is very simple under CFS, and in fact under
     CFS sched_yield() behaves much better than under any other
     scheduler i have tested so far. )"
				[Documentation/sched-design-CFS.txt]
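For what it's worth, one can picture why yield becomes "very simple" once
tasks are kept sorted on a nanosecond timeline: the caller is just pushed
back behind the other runnable tasks and the normal leftmost pick runs. A
toy model of that idea follows (again, not what kernel/sched_fair.c
actually does):

/*
 * Toy model (not the kernel's implementation): with a time-ordered run
 * queue, "yielding" is just moving the caller's timeline position behind
 * every other runnable task, then re-running the usual leftmost pick.
 */
#include <stdio.h>

struct toy_task {
	const char *name;
	unsigned long long vruntime;
};

static struct toy_task *leftmost(struct toy_task *t, int n)
{
	struct toy_task *best = &t[0];
	for (int i = 1; i < n; i++)
		if (t[i].vruntime < best->vruntime)
			best = &t[i];
	return best;
}

static void toy_yield(struct toy_task *self, struct toy_task *t, int n)
{
	/* Place the caller just behind the latest other runnable task. */
	for (int i = 0; i < n; i++)
		if (&t[i] != self && t[i].vruntime + 1 > self->vruntime)
			self->vruntime = t[i].vruntime + 1;
}

int main(void)
{
	struct toy_task t[] = { { "A", 1000 }, { "B", 1500 }, { "C", 1200 } };

	printf("before yield: %s\n", leftmost(t, 3)->name);	/* A */
	toy_yield(&t[0], t, 3);					/* A yields  */
	printf("after yield:  %s\n", leftmost(t, 3)->name);	/* C */
	return 0;
}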

Cheers,
Jarek P.
