Date:	Thu, 14 May 2009 10:25:04 +0800
From:	Gui Jianfeng <guijianfeng@...fujitsu.com>
To:	Vivek Goyal <vgoyal@...hat.com>
CC:	nauman@...gle.com, dpshah@...gle.com, lizf@...fujitsu.com,
	mikew@...gle.com, fchecconi@...il.com, paolo.valente@...more.it,
	jens.axboe@...cle.com, ryov@...inux.co.jp, fernando@....ntt.co.jp,
	s-uchida@...jp.nec.com, taka@...inux.co.jp, jmoyer@...hat.com,
	dhaval@...ux.vnet.ibm.com, balbir@...ux.vnet.ibm.com,
	linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org, righi.andrea@...il.com,
	agk@...hat.com, dm-devel@...hat.com, snitzer@...hat.com,
	m-ikeda@...jp.nec.com, akpm@...ux-foundation.org
Subject: Re: [PATCH] IO Controller: Add per-device weight and ioprio_class
 handling

Vivek Goyal wrote:
...
> Hi Gui,
> 
> It might make sense to also store the device name, or the device major and
> minor number, in the io_group while creating it. This will help us display
> the io.disk_time and io.disk_sector statistics per device instead of as a
> single aggregate.
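
  (For illustration only: a minimal sketch of how the dev_t might be
  captured when the io_group is created. Deriving it from the queue's
  backing_dev_info is an assumption here, and io_group_set_dev() is a
  hypothetical helper, not something the quoted patch shows. For block
  devices the bdi is registered under the "major:minor" name, so that
  string can be parsed back into a dev_t:)

	static inline void io_group_set_dev(struct io_group *iog,
					    struct request_queue *q)
	{
		unsigned int major = 0, minor = 0;
		struct backing_dev_info *bdi = &q->backing_dev_info;

		/* bdi->dev is NULL until the bdi has been registered */
		if (bdi->dev)
			sscanf(dev_name(bdi->dev), "%u:%u", &major, &minor);
		iog->dev = MKDEV(major, minor);
	}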
> 
> I am attaching a patch I was playing around with that displays per-device
> statistics instead of a single aggregate, so the output matches any
> per-device rules the user has specified.
> 
> Thanks
> Vivek
> 
> 
> o Currently the statistics exported through the cgroup interface are an
>   aggregate over all devices for that cgroup. Make these per device
>   instead.
> 
> o Also export another statistic, io.disk_dequeue. This keeps a count of
>   how many times a particular group dropped out of contention for the
>   disk. It is a debugging aid for tracking how often we were able to keep
>   the queues continuously backlogged.
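
  (With this in place, a read of io.disk_time would emit one
  "<major> <minor> <disk time>" line per device, following the
  seq_printf() format in the hunk below; the numbers in this example
  are made up:)

	# cat io.disk_time
	8 0 4520
	8 16 1280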
> 
> Signed-off-by: Vivek Goyal <vgoyal@...hat.com>
> ---
>  block/elevator-fq.c |  127 +++++++++++++++++++++++++++++++++-------------------
>  block/elevator-fq.h |    3 +
>  2 files changed, 85 insertions(+), 45 deletions(-)
> 
> Index: linux14/block/elevator-fq.h
> ===================================================================
> --- linux14.orig/block/elevator-fq.h	2009-05-13 11:40:32.000000000 -0400
> +++ linux14/block/elevator-fq.h	2009-05-13 11:40:57.000000000 -0400
> @@ -250,6 +250,9 @@ struct io_group {
>  
>  #ifdef CONFIG_DEBUG_GROUP_IOSCHED
>  	unsigned short iocg_id;
> +	dev_t	dev;
> +	/* How many times this group has been removed from active tree */
> +	unsigned long dequeue;
>  #endif
>  };
>  
> Index: linux14/block/elevator-fq.c
> ===================================================================
> --- linux14.orig/block/elevator-fq.c	2009-05-13 11:40:53.000000000 -0400
> +++ linux14/block/elevator-fq.c	2009-05-13 11:40:57.000000000 -0400
> @@ -12,6 +12,7 @@
>  #include "elevator-fq.h"
>  #include <linux/blktrace_api.h>
>  #include <linux/biotrack.h>
> +#include <linux/seq_file.h>
>  
>  /* Values taken from cfq */
>  const int elv_slice_sync = HZ / 10;
> @@ -758,6 +759,18 @@ int __bfq_deactivate_entity(struct io_en
>  	BUG_ON(sd->active_entity == entity);
>  	BUG_ON(sd->next_active == entity);
>  
> +#ifdef CONFIG_DEBUG_GROUP_IOSCHED
> +	{
> +		struct io_group *iog = io_entity_to_iog(entity);
> +		/*
> +		 * Keep track of how many times a group has been removed
> +		 * from active tree because it did not have any active
> +		 * backlogged ioq under it
> +		 */
> +		if (iog)
> +			iog->dequeue++;
> +	}
> +#endif
>  	return ret;
>  }
>  
> @@ -1126,90 +1139,103 @@ STORE_FUNCTION(weight, 0, WEIGHT_MAX);
>  STORE_FUNCTION(ioprio_class, IOPRIO_CLASS_RT, IOPRIO_CLASS_IDLE);
>  #undef STORE_FUNCTION
>  
> -/*
> - * traverse through all the io_groups associated with this cgroup and calculate
> - * the aggr disk time received by all the groups on respective disks.
> - */
> -static u64 calculate_aggr_disk_time(struct io_cgroup *iocg)
> +static int io_cgroup_disk_time_read(struct cgroup *cgroup,
> +				struct cftype *cftype, struct seq_file *m)
>  {
> +	struct io_cgroup *iocg;
>  	struct io_group *iog;
>  	struct hlist_node *n;
> -	u64 disk_time = 0;
> +
> +	if (!cgroup_lock_live_group(cgroup))
> +		return -ENODEV;
> +
> +	iocg = cgroup_to_io_cgroup(cgroup);
>  
>  	rcu_read_lock();
> +	spin_lock_irq(&iocg->lock);
>  	hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
>  		/*
>  		 * There might be groups which are not functional and
>  		 * waiting to be reclaimed upon cgroup deletion.
>  		 */
> -		if (rcu_dereference(iog->key))
> -			disk_time += iog->entity.total_service;
> +		if (rcu_dereference(iog->key)) {
> +			seq_printf(m, "%u %u %lu\n", MAJOR(iog->dev),
> +					MINOR(iog->dev),
> +					iog->entity.total_service);

  Hi Vivek,

  I think it would be easier for users if the device name were also shown
  here.

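  Something like the sketch below, perhaps. Resolving the stored dev_t
  back to a name via get_gendisk() is one option; note this is only a
  sketch (error handling is omitted, and get_gendisk() may sleep, so it
  probably cannot be called under iocg->lock as-is):

	struct gendisk *disk;
	int part;

	disk = get_gendisk(iog->dev, &part);
	if (disk)
		seq_printf(m, "%s %u %u %lu\n", disk->disk_name,
				MAJOR(iog->dev), MINOR(iog->dev),
				iog->entity.total_service);
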
> +		}
>  	}
> +	spin_unlock_irq(&iocg->lock);
>  	rcu_read_unlock();
>  
> -	return disk_time;
> +	cgroup_unlock();
> +
> +	return 0;
>  }
>  
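
  (The read side for the new io.disk_dequeue file is not quoted above;
  presumably it mirrors io_cgroup_disk_time_read(), printing iog->dequeue
  instead of the service time. A sketch, not the posted code:)

	static int io_cgroup_disk_dequeue_read(struct cgroup *cgroup,
				struct cftype *cftype, struct seq_file *m)
	{
		struct io_cgroup *iocg;
		struct io_group *iog;
		struct hlist_node *n;

		if (!cgroup_lock_live_group(cgroup))
			return -ENODEV;

		iocg = cgroup_to_io_cgroup(cgroup);

		rcu_read_lock();
		spin_lock_irq(&iocg->lock);
		hlist_for_each_entry_rcu(iog, n, &iocg->group_data, group_node) {
			/* Skip groups waiting to be reclaimed on cgroup deletion */
			if (rcu_dereference(iog->key))
				seq_printf(m, "%u %u %lu\n", MAJOR(iog->dev),
						MINOR(iog->dev), iog->dequeue);
		}
		spin_unlock_irq(&iocg->lock);
		rcu_read_unlock();
		cgroup_unlock();

		return 0;
	}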

-- 
Regards
Gui Jianfeng

