Message-ID: <20150813103518.GG26599@quack.suse.cz>
Date: Thu, 13 Aug 2015 12:35:18 +0200
From: Jan Kara <jack@...e.cz>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Al Viro <viro@...iv.linux.org.uk>,
Dave Chinner <david@...morbit.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Jan Kara <jack@...e.cz>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 7/8] shift percpu_counter_destroy() into
destroy_super_work()
On Tue 11-08-15 19:04:16, Oleg Nesterov wrote:
> Of course, this patch is ugly as hell. It will be (partially)
> reverted later. We add it to ensure that other WIP changes in
> percpu_rw_semaphore won't break fs/super.c.
>
> We do not even need this change right now, percpu_free_rwsem()
> is fine in atomic context. But we are going to change this, it
> will be might_sleep() after we merge the rcu_sync() patches.
>
> And even after that we do not really need destroy_super_work(),
> we will kill it in any case. Instead, destroy_super_rcu() should
> just check that rss->cb_state == CB_IDLE and do call_rcu() again
> in the (very unlikely) case this is not true.
>
> So this is just the temporary kludge which helps us to avoid the
> conflicts with the changes which will be (hopefully) routed via
> rcu tree.
>
> Signed-off-by: Oleg Nesterov <oleg@...hat.com>
Looking into this again, it would seem somewhat cleaner to me to move the
destruction to deactivate_locked_super() instead. We already do this for
list_lru_destroy() because that can sleep as well and so I'd prefer to keep
these two things together. You have to be somewhat careful with the failure
path in alloc_super() but that's easily doable...
Honza
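[Editor's note: for concreteness, the alternative suggested above — destroying the counters in deactivate_locked_super() next to the list_lru teardown — might look roughly like the sketch below. The hunk context is approximate, an `int i` declaration at the top of the function is assumed, and the alloc_super() failure-path handling Jan mentions is deliberately omitted.]

```diff
--- a/fs/super.c
+++ b/fs/super.c
@@ static void deactivate_locked_super(struct super_block *s)
 		unregister_shrinker(&s->s_shrink);
 		fs->kill_sb(s);
 
+		/* Sleepable context, so the counters can be torn down here,
+		 * together with the other teardown that may sleep. */
+		for (i = 0; i < SB_FREEZE_LEVELS; i++)
+			percpu_counter_destroy(&s->s_writers.counter[i]);
+
 		list_lru_destroy(&s->s_dentry_lru);
 		list_lru_destroy(&s->s_inode_lru);
```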
> ---
> fs/super.c | 23 +++++++++++++++++++----
> include/linux/fs.h | 3 ++-
> 2 files changed, 21 insertions(+), 5 deletions(-)
>
> diff --git a/fs/super.c b/fs/super.c
> index 89b58fb..75436e2 100644
> --- a/fs/super.c
> +++ b/fs/super.c
> @@ -135,6 +135,24 @@ static unsigned long super_cache_count(struct shrinker *shrink,
> return total_objects;
> }
>
> +static void destroy_super_work(struct work_struct *work)
> +{
> + struct super_block *s = container_of(work, struct super_block,
> + destroy_work);
> + int i;
> +
> + for (i = 0; i < SB_FREEZE_LEVELS; i++)
> + percpu_counter_destroy(&s->s_writers.counter[i]);
> + kfree(s);
> +}
> +
> +static void destroy_super_rcu(struct rcu_head *head)
> +{
> + struct super_block *s = container_of(head, struct super_block, rcu);
> + INIT_WORK(&s->destroy_work, destroy_super_work);
> + schedule_work(&s->destroy_work);
> +}
> +
> /**
> * destroy_super - frees a superblock
> * @s: superblock to free
> @@ -143,16 +161,13 @@ static unsigned long super_cache_count(struct shrinker *shrink,
> */
> static void destroy_super(struct super_block *s)
> {
> - int i;
> list_lru_destroy(&s->s_dentry_lru);
> list_lru_destroy(&s->s_inode_lru);
> - for (i = 0; i < SB_FREEZE_LEVELS; i++)
> - percpu_counter_destroy(&s->s_writers.counter[i]);
> security_sb_free(s);
> WARN_ON(!list_empty(&s->s_mounts));
> kfree(s->s_subtype);
> kfree(s->s_options);
> - kfree_rcu(s, rcu);
> + call_rcu(&s->rcu, destroy_super_rcu);
> }
>
> /**
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 78ac768..6addccc 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -30,6 +30,7 @@
> #include <linux/lockdep.h>
> #include <linux/percpu-rwsem.h>
> #include <linux/blk_types.h>
> +#include <linux/workqueue.h>
>
> #include <asm/byteorder.h>
> #include <uapi/linux/fs.h>
> @@ -1346,7 +1347,7 @@ struct super_block {
> struct list_lru s_dentry_lru ____cacheline_aligned_in_smp;
> struct list_lru s_inode_lru ____cacheline_aligned_in_smp;
> struct rcu_head rcu;
> -
> + struct work_struct destroy_work;
> /*
> * Indicates how deep in a filesystem stack this SB is
> */
> --
> 1.5.5.1
>
--
Jan Kara <jack@...e.com>
SUSE Labs, CR