Message-Id: <20120306132034.ecaf8b20.akpm@linux-foundation.org>
Date: Tue, 6 Mar 2012 13:20:34 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: Tejun Heo <tj@...nel.org>, axboe@...nel.dk, hughd@...gle.com,
avi@...hat.com, nate@...nel.net, cl@...ux-foundation.org,
linux-kernel@...r.kernel.org, dpshah@...gle.com,
ctalbott@...gle.com, rni@...gle.com
Subject: Re: [PATCHSET] mempool, percpu, blkcg: fix percpu stat allocation
and remove stats_lock
On Tue, 6 Mar 2012 16:09:55 -0500
Vivek Goyal <vgoyal@...hat.com> wrote:
>
> ...
>
> blk-cgroup: Alloc per cpu stats from worker thread in a delayed manner
>
> Current per cpu stat allocation assumes GFP_KERNEL allocation flag. But in
> IO path there are times when we want GFP_NOIO semantics. As there is no
> way to pass the allocation flags to alloc_percpu(), this patch delays the
> allocation of stats using a worker thread.
>
> v2-> tejun suggested following changes. Changed the patch accordingly.
> - move alloc_node location in structure
> - reduce the size of names of some of the fields
> - Reduce the scope of locking of alloc_list_lock
> - Simplified stat_alloc_fn() by allocating stats for all
> policies in one go and then assigning these to a group.
<takes a look to see if he can understand some block stuff>
<decides he can't>
>
> ...
>
> @@ -30,6 +30,15 @@ static LIST_HEAD(blkio_list);
> static DEFINE_MUTEX(all_q_mutex);
> static LIST_HEAD(all_q_list);
>
> +/* List of groups pending per cpu stats allocation */
> +static DEFINE_SPINLOCK(alloc_list_lock);
> +static LIST_HEAD(alloc_list);
> +
> +/* Array of per cpu stat pointers allocated for blk groups */
> +static void *pcpu_stats[BLKIO_NR_POLICIES];
> +static void blkio_stat_alloc_fn(struct work_struct *);
> +static DECLARE_WORK(blkio_stat_alloc_work, blkio_stat_alloc_fn);
> +
> struct blkio_cgroup blkio_root_cgroup = { .weight = 2*BLKIO_WEIGHT_DEFAULT };
> EXPORT_SYMBOL_GPL(blkio_root_cgroup);
>
> @@ -391,6 +400,9 @@ void blkiocg_update_dispatch_stats(struc
> struct blkio_group_stats_cpu *stats_cpu;
> unsigned long flags;
>
> + if (pd->stats_cpu == NULL)
> + return;
Maybe add a comment explaining how this comes about? It isn't very
obvious..
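[Editorial aside, not part of the thread: the reason `pd->stats_cpu` can be NULL is that allocation is now deferred to a worker thread, so IO can be dispatched before the worker has run and the sample is silently dropped. A hypothetical userspace model of that window, with a plain pointer standing in for the percpu area:]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model: stats storage appears only after the deferred
 * worker has run; updates arriving before then are dropped. */
struct policy_data {
	long *stats_cpu;	/* NULL until the worker allocates it */
};

static long storage;		/* stands in for the percpu stats area */

static void update_dispatch_stats(struct policy_data *pd)
{
	/*
	 * Allocation is deferred to a worker thread, so a request can
	 * be dispatched before the stats exist; just drop the sample.
	 */
	if (pd->stats_cpu == NULL)
		return;
	(*pd->stats_cpu)++;
}

static void worker_alloc(struct policy_data *pd)
{
	/* The deferred worker finally ran and attached the storage. */
	pd->stats_cpu = &storage;
}
```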
> /*
> * Disabling interrupts to provide mutual exclusion between two
> * writes on same cpu. It probably is not needed for 64bit. Not
> @@ -443,6 +455,9 @@ void blkiocg_update_io_merged_stats(stru
> struct blkio_group_stats_cpu *stats_cpu;
> unsigned long flags;
>
> + if (pd->stats_cpu == NULL)
> + return;
> +
> /*
> * Disabling interrupts to provide mutual exclusion between two
> * writes on same cpu. It probably is not needed for 64bit. Not
> @@ -460,6 +475,59 @@ void blkiocg_update_io_merged_stats(stru
> }
> EXPORT_SYMBOL_GPL(blkiocg_update_io_merged_stats);
>
> +static void blkio_stat_alloc_fn(struct work_struct *work)
> +{
> +
> + struct blkio_group *blkg, *n;
> + int i;
> +
> +alloc_stats:
> + spin_lock_irq(&alloc_list_lock);
> + if (list_empty(&alloc_list)) {
> + /* Nothing to do */
That's not a very helpful comment, given that we weren't told what the
function is supposed to do in the first place.
> + spin_unlock_irq(&alloc_list_lock);
> + return;
> + }
> + spin_unlock_irq(&alloc_list_lock);
Interesting code layout - I rather like it!
> + for (i = 0; i < BLKIO_NR_POLICIES; i++) {
> + if (pcpu_stats[i] != NULL)
> + continue;
> +
> + pcpu_stats[i] = alloc_percpu(struct blkio_group_stats_cpu);
> + if (pcpu_stats[i] == NULL)
> + goto alloc_stats;
hoo boy that looks like an infinite loop. What's going on here?
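[Editorial aside: the worry can be made concrete. The `goto alloc_stats` after a failed `alloc_percpu()` restarts the whole function, so the worker keeps retrying until every allocation succeeds; under permanent allocation failure it never terminates. Already-filled slots are skipped on each pass, so only the failed allocation is redone. A hypothetical userspace model of that retry shape, with a mock allocator standing in for `alloc_percpu()`:]

```c
#include <assert.h>
#include <stddef.h>

#define NR_POLICIES 2

/* Mock allocator: fails the first `fail_budget` calls, then succeeds.
 * Stands in for alloc_percpu(), which can fail transiently. */
static int fail_budget;
static long dummy_obj[NR_POLICIES];

static void *mock_alloc(int i)
{
	if (fail_budget > 0) {
		fail_budget--;
		return NULL;
	}
	return &dummy_obj[i];
}

/* Models the alloc_stats: retry loop. Returns the number of passes
 * taken; if the allocator failed forever, this would never return,
 * which is exactly the infinite-loop concern. */
static int fill_stats(void *stats[NR_POLICIES])
{
	int passes = 0;

restart:
	passes++;
	for (int i = 0; i < NR_POLICIES; i++) {
		if (stats[i] != NULL)	/* keep earlier successes */
			continue;
		stats[i] = mock_alloc(i);
		if (stats[i] == NULL)
			goto restart;	/* retry from the top */
	}
	return passes;
}
```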
> + }
> +
> + spin_lock_irq(&blkio_list_lock);
> + spin_lock(&alloc_list_lock);
> +
> + list_for_each_entry_safe(blkg, n, &alloc_list, alloc_node) {
> + for (i = 0; i < BLKIO_NR_POLICIES; i++) {
> + struct blkio_policy_type *pol = blkio_policy[i];
> + struct blkg_policy_data *pd;
> +
> + if (!pol)
> + continue;
> +
> + if (!blkg->pd[i])
> + continue;
> +
> + pd = blkg->pd[i];
> + if (pd->stats_cpu)
> + continue;
> +
> + pd->stats_cpu = pcpu_stats[i];
> + pcpu_stats[i] = NULL;
> + }
> + list_del_init(&blkg->alloc_node);
> + break;
> + }
> + spin_unlock(&alloc_list_lock);
> + spin_unlock_irq(&blkio_list_lock);
> + goto alloc_stats;
> +}
So the function runs until alloc_list is empty. Very mysterious.
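[Editorial aside: to unpack the control flow, each trip through the loop allocates any missing per-policy stats, hands them to exactly one pending group (note the `break` right after `list_del_init()`), then jumps back to re-check the list, so the worker drains `alloc_list` one group per pass. A hypothetical single-threaded model, using a pending flag in an array instead of list heads and omitting the locking:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NR_POLICIES 2
#define NR_GROUPS   3

struct group {
	bool pending;			/* still on alloc_list? */
	void *stats[NR_POLICIES];	/* pd->stats_cpu, per policy */
};

static int allocs;			/* "alloc_percpu" call counter */

static void *fake_alloc(void)
{
	static long pool[NR_POLICIES * NR_GROUPS];
	return &pool[allocs++];
}

/* Models blkio_stat_alloc_fn(): allocate a full set of stats, give it
 * to one pending group, repeat until no group is pending. Returns the
 * number of passes (one per group serviced). */
static int drain(struct group grp[NR_GROUPS])
{
	void *spare[NR_POLICIES] = { NULL };
	int passes = 0;

	for (;;) {
		struct group *g = NULL;

		for (int i = 0; i < NR_GROUPS; i++)
			if (grp[i].pending) {
				g = &grp[i];
				break;
			}
		if (g == NULL)
			return passes;	/* alloc_list is empty */

		passes++;
		for (int i = 0; i < NR_POLICIES; i++)
			if (spare[i] == NULL)
				spare[i] = fake_alloc();
		for (int i = 0; i < NR_POLICIES; i++) {
			g->stats[i] = spare[i];
			spare[i] = NULL;
		}
		g->pending = false;	/* list_del_init() + break */
	}
}
```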
>
> ...
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/