Date:	Wed, 4 Apr 2012 13:17:12 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Tao Ma <tm@....ma>
Cc:	Shaohua Li <shli@...nel.org>, Tejun Heo <tj@...nel.org>,
	axboe@...nel.dk, ctalbott@...gle.com, rni@...gle.com,
	linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
	containers@...ts.linux-foundation.org
Subject: Re: IOPS based scheduler (Was: Re: [PATCH 18/21] blkcg: move
 blkio_group_conf->weight to cfq)

On Wed, Apr 04, 2012 at 12:50:48PM -0400, Vivek Goyal wrote:
> On Thu, Apr 05, 2012 at 12:45:05AM +0800, Tao Ma wrote:
> 
> [..]
> > > In iops_mode(), expire each cfqq after dispatch of one or a bunch of
> > > requests and you should get the same behavior (with slice_idle=0 and
> > > group_idle=0). So why write a new scheduler?
> > Really? How could we configure cfq to work like this? Or do you mean we
> > can change the code for it?
> 
> You can just add a few lines of code to expire the queue after 1-2 requests
> have been dispatched from it. Then run your workload with slice_idle=0
> and group_idle=0 and see what happens.

Can you apply the following patch and test your workload with slice_idle=0,
group_idle=0 and quantum=64/128?
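
In case it saves you some typing, here is a rough sketch of one way to set
those tunables from a small helper program (untested; "sdb" and the quantum
value are just placeholders for whatever you are testing on). Echoing the
values into sysfs by hand works just as well.

/* Rough sketch: write cfq tunables via sysfs. Assumes the disk under
 * test is /dev/sdb and that cfq is the active scheduler for it. */
#include <stdio.h>

static int set_tunable(const char *name, const char *val)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/block/sdb/queue/iosched/%s", name);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%s\n", val);
	fclose(f);
	return 0;
}

int main(void)
{
	set_tunable("slice_idle", "0");
	set_tunable("group_idle", "0");
	set_tunable("quantum", "128");	/* or 64 */
	return 0;
}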

I expect that fast queue and group switching will take place. Even if your
workload creates continuously backlogged queues, we will still expire each
queue after dispatching 5 requests from it and requeue it.

I also expect that you should see service differentiation at the *group
level* (and not at the cfqq level), provided your workload creates
continuously backlogged groups. Otherwise it will effectively become
round-robin scheduling.
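
For reference, below is a rough sketch of one way to set up two blkio
cgroups with different weights for that test (untested; it assumes the
blkio controller is mounted at /sys/fs/cgroup/blkio, so adjust the paths
and weights to your setup). Run one copy of the workload in each group;
with both groups continuously backlogged, the service split should roughly
follow the 1000:100 weights.

/* Rough sketch: create two blkio cgroups with different weights and
 * move the current task into the first one. Assumes the blkio
 * controller is mounted at /sys/fs/cgroup/blkio. */
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

static void write_file(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return;
	}
	fprintf(f, "%s\n", val);
	fclose(f);
}

int main(void)
{
	char pid[32];

	mkdir("/sys/fs/cgroup/blkio/fast", 0755);
	mkdir("/sys/fs/cgroup/blkio/slow", 0755);

	write_file("/sys/fs/cgroup/blkio/fast/blkio.weight", "1000");
	write_file("/sys/fs/cgroup/blkio/slow/blkio.weight", "100");

	/* Move this task into the "fast" group; launch the second copy
	 * of the workload and add it to "slow" the same way. */
	snprintf(pid, sizeof(pid), "%d", getpid());
	write_file("/sys/fs/cgroup/blkio/fast/tasks", pid);

	return 0;
}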

If possible, send me a short trace (5 seconds or so) of your workload;
that can help me understand a little better what is going on.

Thanks
Vivek


---
 block/cfq-iosched.c |    5 +++++
 1 file changed, 5 insertions(+)

Index: linux-2.6/block/cfq-iosched.c
===================================================================
--- linux-2.6.orig/block/cfq-iosched.c	2012-04-03 23:18:33.000000000 -0400
+++ linux-2.6/block/cfq-iosched.c	2012-04-05 00:02:07.517806185 -0400
@@ -655,8 +655,13 @@ cfq_set_prio_slice(struct cfq_data *cfqd
  */
 static inline bool cfq_slice_used(struct cfq_queue *cfqq)
 {
+	/* In iops mode, we really are not looking for time measurement */
+	if (iops_mode(cfqq->cfqd) && cfqq->slice_dispatch > 5)
+		return true;
+
 	if (cfq_cfqq_slice_new(cfqq))
 		return false;
+
 	if (time_before(jiffies, cfqq->slice_end))
 		return false;
 
--