Date:	Wed, 15 Dec 2010 17:04:53 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Gui Jianfeng <guijianfeng@...fujitsu.com>
Cc:	Jens Axboe <axboe@...nel.dk>, Corrado Zoccolo <czoccolo@...il.com>,
	Chad Talbott <ctalbott@...gle.com>,
	Nauman Rafique <nauman@...gle.com>,
	Divyesh Shah <dpshah@...gle.com>,
	linux kernel mailing list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 5/8 v2] cfq-iosched: Introduce hierarchical scheduling
 with CFQ queue and group at the same level

On Wed, Dec 15, 2010 at 03:02:36PM +0800, Gui Jianfeng wrote:
[..]
> >>  static inline unsigned
> >>  cfq_group_slice(struct cfq_data *cfqd, struct cfq_group *cfqg)
> >>  {
> >> -	struct cfq_rb_root *st = &cfqd->grp_service_tree;
> >>  	struct cfq_entity *cfqe = &cfqg->cfqe;
> >> +	struct cfq_rb_root *st = cfqe->service_tree;
> >>  
> >> -	return cfq_target_latency * cfqe->weight / st->total_weight;
> >> +	if (st)
> >> +		return cfq_target_latency * cfqe->weight
> >> +			/ st->total_weight;
> > 
> > Is it still true in hierarchical mode? Previously, groups used to be
> > at the top and there used to be only one service tree for groups, so
> > st->total_weight represented the total weight in the system.
> >  
> > Now with hierarchy this will not/should not be true. So the group slice
> > calculation should be different?
> 
> I just kept the original group slice calculation here. I was thinking that
> calculating the group slice in a hierarchical way might yield a really small
> group slice, and I'm not sure how that would work. So I just kept the
> original calculation. Any thoughts?

Corrado already had minimum per-queue limits (16ms or something), so don't
worry about it getting too small. But we have to do the hierarchical group
share calculation, otherwise what's the point of writing this code and
all the logic of trying to meet the soft latency of 300ms?


> > 
> >> +	else
> >> +		/* If this is the root group, give it a full slice. */
> >> +		return cfq_target_latency;
> >>  }
> >>  
> >>  static inline void
> >> @@ -804,17 +809,6 @@ static struct cfq_entity *cfq_rb_first(struct cfq_rb_root *root)
> >>  	return NULL;
> >>  }
> >>  
> >> -static struct cfq_entity *cfq_rb_first_entity(struct cfq_rb_root *root)
> >> -{
> >> -	if (!root->left)
> >> -		root->left = rb_first(&root->rb);
> >> -
> >> -	if (root->left)
> >> -		return rb_entry_entity(root->left);
> >> -
> >> -	return NULL;
> >> -}
> >> -
> >>  static void rb_erase_init(struct rb_node *n, struct rb_root *root)
> >>  {
> >>  	rb_erase(n, root);
> >> @@ -888,12 +882,15 @@ __cfq_entity_service_tree_add(struct cfq_rb_root *st, struct cfq_entity *cfqe)
> >>  
> >>  	rb_link_node(&cfqe->rb_node, parent, node);
> >>  	rb_insert_color(&cfqe->rb_node, &st->rb);
> >> +
> >> +	update_min_vdisktime(st);
> >>  }
> >>  
> >>  static void
> >>  cfq_entity_service_tree_add(struct cfq_rb_root *st, struct cfq_entity *cfqe)
> >>  {
> >>  	__cfq_entity_service_tree_add(st, cfqe);
> >> +	cfqe->reposition_time = jiffies;
> >>  	st->count++;
> >>  	st->total_weight += cfqe->weight;
> >>  }
> >> @@ -901,34 +898,57 @@ cfq_entity_service_tree_add(struct cfq_rb_root *st, struct cfq_entity *cfqe)
> >>  static void
> >>  cfq_group_service_tree_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
> >>  {
> >> -	struct cfq_rb_root *st = &cfqd->grp_service_tree;
> >>  	struct cfq_entity *cfqe = &cfqg->cfqe;
> >> -	struct cfq_entity *__cfqe;
> >>  	struct rb_node *n;
> >> +	struct cfq_entity *entity;
> >> +	struct cfq_rb_root *st;
> >> +	struct cfq_group *__cfqg;
> >>  
> >>  	cfqg->nr_cfqq++;
> >> +
> >> +	/*
> >> +	 * Root group doesn't belong to any service tree.
> >> +	 */
> >> +	if (cfqg == &cfqd->root_group)
> >> +		return;
> > 
> > Can we keep root group on cfqd->grp_service_tree?  In hierarchical mode
> > there will be only 1 group on grp service tree and in flat mode there
> > can be many.
> 
> Keeping the top service tree different for hierarchical mode and flat mode
> is just fine to me. If you don't strongly object, I'd like to keep the
> current way. :)

I am saying that we keep one top tree for both hierarchical and flat mode,
not separate trees.

For flat mode, everything goes on cfqd->grp_service_tree.

			grp_service_tree
			  /  |     \
		       root test1  test2

For hierarchical mode it will look as follows:

			grp_service_tree
			  	|	
		       	       root
				/ \
			     test1 test2

Or it could look as follows if the user has set use_hier=1 in test2 only:

			grp_service_tree
			 |     |      | 
		       	 root  test1  test2
					|
				      test3

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
