Message-ID: <e98e18940906301014n146e7146vb5a73c2f33c9e819@mail.gmail.com>
Date: Tue, 30 Jun 2009 10:14:48 -0700
From: Nauman Rafique <nauman@...gle.com>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: Gui Jianfeng <guijianfeng@...fujitsu.com>,
linux-kernel@...r.kernel.org,
containers@...ts.linux-foundation.org, dm-devel@...hat.com,
jens.axboe@...cle.com, dpshah@...gle.com, lizf@...fujitsu.com,
mikew@...gle.com, fchecconi@...il.com, paolo.valente@...more.it,
ryov@...inux.co.jp, fernando@....ntt.co.jp, s-uchida@...jp.nec.com,
taka@...inux.co.jp, jmoyer@...hat.com, dhaval@...ux.vnet.ibm.com,
balbir@...ux.vnet.ibm.com, righi.andrea@...il.com,
m-ikeda@...jp.nec.com, jbaron@...hat.com, agk@...hat.com,
snitzer@...hat.com, akpm@...ux-foundation.org, peterz@...radead.org
Subject: Re: [PATCH] io-controller: optimization for iog deletion when
elevator exiting
On Mon, Jun 29, 2009 at 7:06 AM, Vivek Goyal <vgoyal@...hat.com> wrote:
> On Mon, Jun 29, 2009 at 01:27:47PM +0800, Gui Jianfeng wrote:
>> Hi Vivek,
>>
>> There's no need to traverse the iocg->group_data list for each iog
>> when exiting an elevator; that costs too much. An alternative
>> solution is to reset iocg_id as soon as an io group is unlinked
>> from an iocg, and then decide whether the deletion needs to be
>> carried out by checking iocg_id.
>>
>
> Thanks Gui. This makes sense to me. We can check iog->iocg_id to determine
> whether the group is still on the iocg list, instead of traversing the list.
>
> Nauman, do you see any issues with the patch?
Looks like this should work. The only iog with a zero id is the one
associated with the root group, which gets deleted outside of this
function anyway.
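
For reference, here is a minimal user-space sketch of the pattern the
patch relies on: clear the group's id under the same lock that protects
the cgroup's list when the group is unlinked, so a later destroy path
can test the id instead of walking the list. The structures and names
below (group, cgroup, cgroup_unlink, group_check_and_destroy, a pthread
mutex standing in for iocg->lock and RCU) are simplified assumptions,
not the kernel code itself.

/*
 * Simplified stand-ins for io_group / io_cgroup: a doubly linked list
 * protected by a lock, with a nonzero id marking "still linked".
 * Assumes cg->lock was initialized with PTHREAD_MUTEX_INITIALIZER.
 */
#include <pthread.h>
#include <stddef.h>

struct group {
	unsigned short id;           /* nonzero while linked into the cgroup */
	struct group *prev, *next;   /* doubly linked, so unlink needs no traversal */
};

struct cgroup {
	pthread_mutex_t lock;
	struct group *head;
};

/* Unlink a group; clearing the id records "no longer on the list". */
static void cgroup_unlink(struct cgroup *cg, struct group *g)
{
	if (g->prev)
		g->prev->next = g->next;
	else
		cg->head = g->next;
	if (g->next)
		g->next->prev = g->prev;
	g->prev = g->next = NULL;
	g->id = 0;
}

/* Cgroup-deletion path: unlink everything, clearing each id as we go. */
static void cgroup_unlink_all(struct cgroup *cg)
{
	pthread_mutex_lock(&cg->lock);
	while (cg->head)
		cgroup_unlink(cg, cg->head);
	pthread_mutex_unlock(&cg->lock);
}

/*
 * Elevator-exit path: instead of walking cg->head to see whether the
 * group is still linked, test g->id under the lock.
 */
static void group_check_and_destroy(struct cgroup *cg, struct group *g)
{
	pthread_mutex_lock(&cg->lock);
	if (g->id)
		cgroup_unlink(cg, g);   /* still linked: remove it here */
	pthread_mutex_unlock(&cg->lock);
	/* ... free the group once no readers can still see it ... */
}

The point is the same as in the patch: because each node is doubly
linked, removal needs no list traversal, and the nonzero id is what
tells the destroy path that removal is still required.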
>
> Thanks
> Vivek
>
>> Signed-off-by: Gui Jianfeng <guijianfeng@...fujitsu.com>
>> ---
>> block/elevator-fq.c | 29 ++++++++++-------------------
>> 1 files changed, 10 insertions(+), 19 deletions(-)
>>
>> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
>> index d779282..b26fe0f 100644
>> --- a/block/elevator-fq.c
>> +++ b/block/elevator-fq.c
>> @@ -2218,8 +2218,6 @@ void io_group_cleanup(struct io_group *iog)
>> BUG_ON(iog->sched_data.active_entity != NULL);
>> BUG_ON(entity != NULL && entity->tree != NULL);
>>
>> - iog->iocg_id = 0;
>> -
>> /*
>> * Wait for any rcu readers to exit before freeing up the group.
>> * Primarily useful when io_get_io_group() is called without queue
>> @@ -2376,6 +2374,7 @@ remove_entry:
>> group_node);
>> efqd = rcu_dereference(iog->key);
>> hlist_del_rcu(&iog->group_node);
>> + iog->iocg_id = 0;
>> spin_unlock_irqrestore(&iocg->lock, flags);
>>
>> spin_lock_irqsave(efqd->queue->queue_lock, flags);
>> @@ -2403,35 +2402,27 @@ done:
>> void io_group_check_and_destroy(struct elv_fq_data *efqd, struct io_group *iog)
>> {
>> struct io_cgroup *iocg;
>> - unsigned short id = iog->iocg_id;
>> - struct hlist_node *n;
>> - struct io_group *__iog;
>> unsigned long flags;
>> struct cgroup_subsys_state *css;
>>
>> rcu_read_lock();
>>
>> - BUG_ON(!id);
>> - css = css_lookup(&io_subsys, id);
>> + css = css_lookup(&io_subsys, iog->iocg_id);
>>
>> - /* css can't go away as associated io group is still around */
>> - BUG_ON(!css);
>> + if (!css)
>> + goto out;
>>
>> iocg = container_of(css, struct io_cgroup, css);
>>
>> spin_lock_irqsave(&iocg->lock, flags);
>> - hlist_for_each_entry_rcu(__iog, n, &iocg->group_data, group_node) {
>> - /*
>> - * Remove iog only if it is still in iocg list. Cgroup
>> - * deletion could have deleted it already.
>> - */
>> - if (__iog == iog) {
>> - hlist_del_rcu(&iog->group_node);
>> - __io_destroy_group(efqd, iog);
>> - break;
>> - }
>> +
>> + if (iog->iocg_id) {
>> + hlist_del_rcu(&iog->group_node);
>> + __io_destroy_group(efqd, iog);
>> }
>> +
>> spin_unlock_irqrestore(&iocg->lock, flags);
>> +out:
>> rcu_read_unlock();
>> }
>>
>> --
>> 1.5.4.rc3
>