Message-ID: <46F058EE.1080408@redhat.com>
Date:	Tue, 18 Sep 2007 19:02:06 -0400
From:	Chuck Ebbert <cebbert@...hat.com>
To:	Ingo Molnar <mingo@...e.hu>
CC:	Antoine Martin <antoine@...afix.co.uk>,
	Satyam Sharma <satyam.sharma@...il.com>,
	Linux Kernel Development <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: CFS: some bad numbers with Java/database threading [FIXED]

On 09/18/2007 06:46 PM, Ingo Molnar wrote:
>>> We need a (tested) 
>>> solution for 2.6.23 and the CFS-devel patches are not for 2.6.23. I've 
>>> attached below the latest version of the -rc6 yield patch - the switch 
>>> no longer depends on SCHED_DEBUG and is always available.
>>>
>> Is this going to be merged? And will you be making the default == 1 or 
>> just leaving it at 0, which forces people who want the older behavior 
>> to modify the default?
> 
> not at the moment - Antoine suggested that the workload is probably fine 
> and the patch against -rc6 would have no clear effect anyway, so we have 
> nothing to merge right now. (Note that there's no "older behavior" 
> possible, unless we want to emulate all of the O(1) scheduler's 
> behavior.) That said, we could still merge something like that patch, but a 
> clearer testcase is needed. The JVMs I have access to work fine.
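
If that yield switch does get merged as a runtime-tunable sysctl, I would
expect flipping it to look something like the lines below. To be clear,
"sched_compat_yield" is only my guess at a name, not something taken from
the patch itself:

  # hypothetical knob: ask for the more aggressive, O(1)-style yield
  echo 1 > /proc/sys/kernel/sched_compat_yield
  # or equivalently
  sysctl -w kernel.sched_compat_yield=1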

I just got a bug report today:

https://bugzilla.redhat.com/show_bug.cgi?id=295071

==================================================

Description of problem:

The CFS scheduler does not seem to implement sched_yield correctly. If one
program loops on sched_yield while another program prints timing information
in a loop, and both are bound with taskset to the same core, the timing stats
are twice as long as when the programs run on different cores. This problem
was not present in 2.6.21-1.3194 but showed up in 2.6.22.4-65 and continues
in the newest released kernel, 2.6.22.5-76. 

Version-Release number of selected component (if applicable):

2.6.22.4-65 through 2.6.22.5-76

How reproducible:

Very

Steps to Reproduce:
compile task1
#include <sched.h>

int main(void) {
        /* Burn the CPU doing nothing but yielding it back. */
        while (1) {
            sched_yield();
        }
        return 0;
}
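
A plain compile should be all task1 needs, something along these lines:

gcc -o task1 task1.c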

and compile task2

#include <stdio.h>
#include <sys/time.h>

int main(void) {
    while (1) {
        /* volatile so the compiler cannot optimize the busy loop away */
        volatile int i;
        struct timeval t0, t1;
        double usec;

        gettimeofday(&t0, 0);
        for (i = 0; i < 100000000; ++i)
            ;
        gettimeofday(&t1, 0);

        /* elapsed wall-clock time for the busy loop, in microseconds */
        usec = (t1.tv_sec * 1e6 + t1.tv_usec) - (t0.tv_sec * 1e6 + t0.tv_usec);
        printf("%8.0f\n", usec);
    }
    return 0;
}
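
task2 builds the same way; -O0 is the safe bet so the busy loop is not
optimized out (the volatile counter above guards against that as well):

gcc -O0 -o task2 task2.c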

Then run, in two separate terminals:
"taskset -c 0 ./task1"
"taskset -c 0 ./task2"

You will see that both tasks use 50% of the CPU. 
Then kill task2 and run:
"taskset -c 1 ./task2"

Now task2 will run twice as fast, verifying that this is not some anomaly in
the way top calculates CPU usage with sched_yield.
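
To watch the CPU split directly once both tasks are running, something like
this should work (assuming pidof is available):

top -p "$(pidof task1)" -p "$(pidof task2)"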
  
Actual results:
Tasks calling sched_yield do not yield the CPU the way they are supposed to.

Expected results:
The sched_yield task's CPU usage should go to near 0% when another task is on
the same CPU.
