Message-ID: <20120113101220.GA13641@quack.suse.cz>
Date: Fri, 13 Jan 2012 11:12:20 +0100
From: Jan Kara <jack@...e.cz>
To: Dave Chinner <david@...morbit.com>
Cc: Jan Kara <jack@...e.cz>, linux-fsdevel@...r.kernel.org,
Surbhi Palande <csurbhi@...il.com>,
Kamal Mostafa <kamal@...onical.com>,
Eric Sandeen <sandeen@...deen.net>,
LKML <linux-kernel@...r.kernel.org>, xfs@....sgi.com,
Christoph Hellwig <hch@...radead.org>,
Dave Chinner <dchinner@...hat.com>, linux-ext4@...r.kernel.org
Subject: Re: [PATCH 1/4] fs: Improve filesystem freezing handling
On Fri 13-01-12 12:26:43, Dave Chinner wrote:
> On Thu, Jan 12, 2012 at 02:20:50AM +0100, Jan Kara wrote:
> > + *
> > + * Decrement number of writers to the filesystem and wake up possible
> > + * waiters wanting to freeze the filesystem.
> > + */
> > +void sb_end_write(struct super_block *sb)
> > +{
> > +#ifdef CONFIG_SMP
> > + this_cpu_dec(sb->s_writers);
> > +#else
> > + preempt_disable();
> > + sb->s_writers--;
> > + preempt_enable();
> > +#endif
>
> I really dislike this type of open coded per-cpu counter
> implementation. I can see no good reason to use it over
> percpu_counters here, which abstract all this mess away.
>
> i.e. it is relatively rare that the per-cpu count will nest
> greater than the percpu_counter batch size (needs more than 32
> concurrent blocked active writes per CPU), so there is no
> significant overhead to using the percpu_counters here.
>
> Indeed, if there are that many blocked writes per CPU, then the
> overhead of an occasional global counter update is going to be lost
> in the noise of everything else that is going on.
Well, I just did it the way mnt_want_write() / mnt_drop_write() does it.
But you're right that it's unnecessary, so it's a good idea to switch the
code to percpu_counters. Thanks for the idea.
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR