Message-ID: <20120828203826.GA5868@p183.telecom.by>
Date: Tue, 28 Aug 2012 23:38:27 +0300
From: Alexey Dobriyan <adobriyan@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Nathan Zimmer <nzimmer@....com>, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org,
Alexander Viro <viro@...iv.linux.org.uk>,
David Woodhouse <dwmw2@...radead.org>
Subject: Re: [PATCH] fs/proc: Move kfree outside pde_unload_lock

On Wed, Aug 22, 2012 at 11:42:58PM +0200, Eric Dumazet wrote:
> On Wed, 2012-08-22 at 20:28 +0200, Eric Dumazet wrote:
>
> >
> > Thats interesting, but if you really want this to fly, one RCU
> > conversion would be much better ;)
> >
> > pde_users would be an atomic_t and you would avoid the spinlock
> > contention.
>
> Here is what I had in mind, I would be interested to know how it helps a 512 core machine ;)

Nothing can stop RCU!
After running "modprobe; rmmod" in a loop and "cat" in another loop for a while,
rmmod got stuck in D-state inside remove_proc_entry(), with trace amounts of CPU
time being consumed.
It didn't oops, though.
> --- a/include/linux/proc_fs.h
> +++ b/include/linux/proc_fs.h
> @@ -64,16 +64,13 @@ struct proc_dir_entry {
> * If you're allocating ->proc_fops dynamically, save a pointer
> * somewhere.
> */
> - const struct file_operations *proc_fops;
> + const struct file_operations __rcu *proc_fops;
> struct proc_dir_entry *next, *parent, *subdir;
> void *data;
> read_proc_t *read_proc;
> write_proc_t *write_proc;
> atomic_t count; /* use count */
> - int pde_users; /* number of callers into module in progress */
> - struct completion *pde_unload_completion;
> - struct list_head pde_openers; /* who did ->open, but not ->release */
> - spinlock_t pde_unload_lock; /* proc_fops checks and pde_users bumps */
> + atomic_t pde_users; /* number of callers into module in progress */
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/