Message-ID: <20110617060533.GA2746@localhost.localdomain>
Date:	Fri, 17 Jun 2011 14:05:33 +0800
From:	Hu Tao <hutao@...fujitsu.com>
To:	Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>
Cc:	Paul Turner <pjt@...gle.com>, linux-kernel@...r.kernel.org,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Bharata B Rao <bharata@...ux.vnet.ibm.com>,
	Dhaval Giani <dhaval.giani@...il.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>,
	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
	Srivatsa Vaddagiri <vatsa@...ibm.co>
Subject: Re: [patch 00/15] CFS Bandwidth Control V6

On Fri, Jun 17, 2011 at 10:22:51AM +0900, Hidetoshi Seto wrote:
> (2011/06/16 18:45), Hu Tao wrote:
> > On Thu, Jun 16, 2011 at 09:57:09AM +0900, Hidetoshi Seto wrote:
> >> (2011/06/15 17:37), Hu Tao wrote:
> >>> On Tue, Jun 14, 2011 at 04:29:49PM +0900, Hidetoshi Seto wrote:
> >>>> (2011/06/14 15:58), Hu Tao wrote:
> >>>>> Hi,
> >>>>>
> >>>>> I've run several tests including hackbench, unixbench, massive-intr
> >>>>> and kernel building, on an Intel(R) Xeon(R) X3430 @ 2.40GHz
> >>>>> (4 cores) with 4G of memory.
> >>>>>
> >>>>> Most of the time the results differ little, but there are problems:
> >>>>>
> >>>>> 1. unixbench: execl throughput has about a 5% drop.
> >>>>> 2. unixbench: process creation has about a 5% drop.
> >>>>> 3. massive-intr: when running 200 processes for 5mins, the number
> >>>>>    of loops each process runs varies more than before cfs-bandwidth-v6.
> >>>>>
> >>>>> The results are attached.
> >>>>
> >>>> I know the unixbench score is not so stable, so those problems might
> >>>> just be noise ... but the massive-intr result is interesting.
> >>>> Could you try to find which piece (xx/15) of the series causes
> >>>> the problems?
> >>>
> >>> After more tests, I found the massive-intr data is not stable, either.
> >>> Results are attached. The third number in each file name indicates
> >>> which patches are applied, with 0 meaning no patch applied. plot.sh
> >>> makes it easy to generate the png files.
> >>
> >> (Though I don't know what the 16th patch of this series is, anyway)
> 
> I see.  It will be replaced by Paul's update.
> 
> > the 16th patch is this: https://lkml.org/lkml/2011/5/23/503
> > 
> >> I see that the results of 15, 15-1 and 15-2 are very different and that
> >> 15-2 is similar to without-patch.
> >>
> >> One concern is whether this instability of the data is really caused by
> >> the nature of your test (hardware, massive-intr itself, something running
> >> in the background, etc.) or by a hidden piece in the bandwidth patch set.
> >> Did you see "not stable" data when none of the patches was applied?
> > 
> > Yes. 
> > 
> > But taken as a whole, the five runs seem 'stable' (both before and after
> > the patches). I've also run the tests in single mode; results are attached.
> 
> (It would be greatly appreciated if you could provide not only raw results
> but also your current observations/speculation.)

Sorry I didn't make myself clear.

> 
> Well, (to wrap it up,) do you still see the following problem?
> 
> >>>>> 3. massive-intr: when running 200 processes for 5mins, the number
> >>>>>    of loops each process runs varies more than before cfs-bandwidth-v6.

Even before applying the patches, the numbers differ a lot between
several runs of massive_intr; this is why I say the data is not
stable. But treating the results of five runs as a whole, they show some
stability. The results after the patches are similar, and the average
loop counts differ little compared to the results before the patches
(compare 0-1.png and 16-1.png in my last mail). So I would say the
patches don't have much impact on interactive processes.
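
In case it helps, the aggregation I'm doing looks roughly like the
Python sketch below. The "pid  loops" output format it parses is just
an assumption about massive_intr's result files from my own setup (the
file naming with the patch-count suffix is my convention from the
attached results), so adjust as needed:

#!/usr/bin/env python3
# Aggregate massive_intr results across runs: per-run spread shows the
# instability; the mean over five runs is what I compare between kernels.
import glob
import statistics

def loops_per_run(path):
    """Return the per-process loop counts from one result file."""
    counts = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[1].isdigit():
                counts.append(int(fields[1]))
    return counts

# File names follow my own convention (third number = patches applied).
for pattern, label in [("massive-intr-200-0-*.log", "no patches"),
                       ("massive-intr-200-16-*.log", "16 patches")]:
    runs = [loops_per_run(p) for p in sorted(glob.glob(pattern))]
    if runs:
        means = [statistics.mean(r) for r in runs]
        print(label, "per-run means:", means,
              "overall mean:", statistics.mean(means))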

> 
> I think that 5 samples are not enough to draw a conclusion, and that at the
> moment the difference is negligible.  What do you think?

At least the 5 samples reveal something, but I can take more if you'd
like.
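
If it would help decide, a quick significance check along these lines
(a plain two-sample Welch's t-test via scipy; the sample values below
are placeholders, not my measurements) could tell whether the
before/after means really differ:

#!/usr/bin/env python3
# Welch's two-sample t-test on per-run mean loop counts, before vs.
# after the patches.  The numbers below are placeholders, not real data.
from scipy import stats

before = [1023.4, 1011.2, 1030.8, 1019.5, 1025.1]  # unpatched kernel
after  = [1018.9, 1007.3, 1027.6, 1021.0, 1015.4]  # patched kernel

t, p = stats.ttest_ind(before, after, equal_var=False)
print(f"t = {t:.3f}, p = {p:.3f}")
# A large p (say > 0.05) means the 5 runs show no significant
# difference; a small p would be worth re-checking with more samples.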

> 
> Even though the problems you pointed out seem to be gone, I have to say
> thank you for taking the time to test this CFS bandwidth patch set.
> I'd appreciate it if you could continue your testing, possibly against V7.
> (I'm waiting, Paul?)
> 
> 
> Thanks,
> H.Seto

Thanks,
-- 
Hu Tao