Date:	Fri, 20 Nov 2009 10:04:21 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Corrado Zoccolo <czoccolo@...il.com>
Cc:	"Alan D. Brunelle" <Alan.Brunelle@...com>,
	linux-kernel@...r.kernel.org, jens.axboe@...cle.com
Subject: Re: [RFC] Block IO Controller V2 - some results

On Fri, Nov 20, 2009 at 03:28:27PM +0100, Corrado Zoccolo wrote:
> Hi Vivek,
> On Fri, Nov 20, 2009 at 3:18 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
> > Hi Corrado,
> >
> > I liked the idea of putting all the sync-noidle queues together in the
> > root group to achieve better throughput, and implemented a small patch.
> >
> > It works fine for random readers. But when I run multiple direct random
> > writers in one group vs a random reader in another group, I get strange
> > behavior. The random reader moves to the root group as the sync-noidle
> > workload, while the random writers are largely sync queues and remain in
> > the other group; many times, though, they also jump into the root group
> > and preempt the random reader.
> 
> can you try the attached patches?
> They fix the problems you identified about no-idle preemption, and
> deep seeky queues.
> With those, you should not see this jumping any more.
> I'll send them to Jens as soon as he comes back from vacation.
> 
> Corrado
> 
> > Anyway, with 4 random writers and 1 random reader running for 30 seconds
> > in the root group, I get the following.
> >
> > rw: 59,963KB/s
> > rr: 66KB/s
> >
> > But if these are put in separate groups test1 and test2, then:
> >
> > rw: 30,587KB/s
> > rr: 23KB/s
> >

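(For reference, the kind of load described above can be generated with a
small O_DIRECT tester along the lines of the sketch below. This is purely
illustrative and not the actual test program: the file name, the 1GiB file
size, the 4KB request size and the use of O_DIRECT for the reader as well
are assumptions. The read side also expects the test file to already exist
and be at least FILE_SIZE bytes.)

/*
 * Sketch of a random I/O load generator: one process doing O_DIRECT
 * random 4KB reads or writes against a file for a fixed number of
 * seconds, then printing the achieved bandwidth.
 *
 * Build: gcc -O2 -o randio randio.c
 * Usage: ./randio <file> read|write <seconds>
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE	4096
#define FILE_SIZE	(1024LL * 1024 * 1024)	/* 1 GiB working set (assumed) */

int main(int argc, char **argv)
{
	if (argc != 4 || atoi(argv[3]) <= 0) {
		fprintf(stderr, "usage: %s <file> read|write <seconds>\n", argv[0]);
		return 1;
	}

	int writing = !strcmp(argv[2], "write");
	int secs = atoi(argv[3]);
	int fd = open(argv[1], O_DIRECT | (writing ? (O_WRONLY | O_CREAT) : O_RDONLY), 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* O_DIRECT needs a block-aligned buffer. */
	void *buf;
	if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE)) {
		perror("posix_memalign");
		return 1;
	}
	memset(buf, 0xab, BLOCK_SIZE);

	long long nr_blocks = FILE_SIZE / BLOCK_SIZE, done = 0;
	time_t end = time(NULL) + secs;

	srand(getpid());
	while (time(NULL) < end) {
		/* Random, block-aligned offset inside the file. */
		off_t off = (off_t)(rand() % nr_blocks) * BLOCK_SIZE;
		ssize_t ret = writing ? pwrite(fd, buf, BLOCK_SIZE, off)
				      : pread(fd, buf, BLOCK_SIZE, off);
		if (ret != BLOCK_SIZE) {
			perror(writing ? "pwrite" : "pread");
			break;
		}
		done++;
	}

	printf("%s: %lld KB/s\n", writing ? "rw" : "rr",
	       done * BLOCK_SIZE / 1024 / secs);
	close(fd);
	return 0;
}

(Running four "write" instances and one "read" instance for 30 seconds, and
moving their PIDs into the test1/test2 groups through the controller's tasks
files, would approximate the setup above; the exact cgroup mount point and
file names depend on the controller version, so they are not shown here.)
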
I quickly tried your new patches, which keep idling enabled on deep seeky
sync queues so that such a queue does not jump around too much and consume
share in both the sync and sync-noidle workloads.

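(Aside: my rough understanding of the idea behind those patches, written as
a standalone illustration rather than the actual CFQ changes -- a seeky sync
queue that keeps several requests in flight is marked "deep" and keeps its
idle window, so it stays in the sync workload instead of bouncing into
sync-noidle and preempting the random reader. The threshold, field names and
structure below are made up for illustration.)

#include <stdbool.h>
#include <stdio.h>

#define DEEP_THRESHOLD 4	/* assumed in-flight depth that marks a queue "deep" */

struct queue_state {
	int in_flight;		/* requests currently dispatched to the device */
	bool seeky;		/* classified as seeky from recent seek distances */
	bool deep;		/* sticky: has been seen driving a deep queue depth */
};

/* Should the scheduler idle on this queue when it runs dry? */
static bool should_idle(struct queue_state *q)
{
	if (q->in_flight >= DEEP_THRESHOLD)
		q->deep = true;

	/* Idle on sequential queues as before, and also on deep seeky ones. */
	return !q->seeky || q->deep;
}

int main(void)
{
	struct queue_state rr = { .in_flight = 1, .seeky = true, .deep = false };
	struct queue_state rw = { .in_flight = 8, .seeky = true, .deep = false };

	printf("shallow random reader idles: %d\n", should_idle(&rr));	/* 0 */
	printf("deep random writer idles: %d\n", should_idle(&rw));	/* 1 */
	return 0;
}
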
Here are the new results.

Without cgroup:

rw: 58,571KB/s
rr: 83KB/s

With cgroup:

rw: 32,525KB/s
rr: 25KB/s

So without cgroup, it looks like the random reader gained a bit, and that's
a good thing.

With cgroup, the problem still persists. I am wondering why both are losing;
it looks like I am idling somewhere, otherwise at least one of them should
have gained.

Thanks
Vivek
