Message-ID: <20090425033235.GU8633@ZenIV.linux.org.uk>
Date:	Sat, 25 Apr 2009 04:32:35 +0100
From:	Al Viro <viro@...IV.linux.org.uk>
To:	npiggin@...e.de
Cc:	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 02/27] fs: scale files_lock

On Sat, Apr 25, 2009 at 11:20:22AM +1000, npiggin@...e.de wrote:
> Improve scalability of files_lock by adding per-cpu, per-sb files lists,
> protected with per-cpu locking. Effectively turning it into a big-writer
> lock.

Og dumb.  Many locks.  Many ifdefs.  Og don't like.
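For reference, the "big-writer lock" the changelog is talking about is the
usual brlock-style trick: one lock per cpu, the fast path takes only the
local one, the rare "writer" takes all of them.  Rough sketch of the pattern
below; names (example_*) are invented and this is not code from the patch:

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(spinlock_t, example_cpu_lock);
static DEFINE_PER_CPU(struct list_head, example_list);

static void example_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		spin_lock_init(&per_cpu(example_cpu_lock, cpu));
		INIT_LIST_HEAD(&per_cpu(example_list, cpu));
	}
}

/* fast path: only the local cpu's lock and list are touched */
static void example_add(struct list_head *entry)
{
	spinlock_t *lock;
	int cpu;

	lock = &get_cpu_var(example_cpu_lock);	/* disables preemption */
	cpu = smp_processor_id();
	spin_lock(lock);
	list_add(entry, &per_cpu(example_list, cpu));
	spin_unlock(lock);
	put_cpu_var(example_cpu_lock);
}

/* slow path ("writer"): take every cpu's lock to get a stable view */
static void example_walk(void (*fn)(struct list_head *))
{
	int cpu;
	struct list_head *pos;

	/* real code would need lockdep nesting annotations for this */
	for_each_possible_cpu(cpu)
		spin_lock(&per_cpu(example_cpu_lock, cpu));

	for_each_possible_cpu(cpu)
		list_for_each(pos, &per_cpu(example_list, cpu))
			fn(pos);

	for_each_possible_cpu(cpu)
		spin_unlock(&per_cpu(example_cpu_lock, cpu));
}

The fast path stays cheap, but the price is per-cpu lists and locks
everywhere, which is exactly what the hunks below add to every file
add/remove.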

>  void file_sb_list_add(struct file *file, struct super_block *sb)
>  {
> -	spin_lock(&files_lock);
> +	spinlock_t *lock;
> +	struct list_head *list;
> +	int cpu;
> +
> +	lock = &get_cpu_var(files_cpulock);
> +#ifdef CONFIG_SMP
> +	BUG_ON(file->f_sb_list_cpu != -1);
> +	cpu = smp_processor_id();
> +	list = per_cpu_ptr(sb->s_files, cpu);
> +	file->f_sb_list_cpu = cpu;
> +#else
> +	list = &sb->s_files;
> +#endif
> +	spin_lock(lock);
>  	BUG_ON(!list_empty(&file->f_u.fu_list));
> -	list_add(&file->f_u.fu_list, &sb->s_files);
> -	spin_unlock(&files_lock);
> +	list_add(&file->f_u.fu_list, list);
> +	spin_unlock(lock);
> +	put_cpu_var(files_cpulock);
>  }

Don't like overhead on hot paths either.

And the grown memory footprint of struct super_block (with alloc_percpu()) ...

>  	atomic_long_t		f_count;
>  	unsigned int 		f_flags;
>  	fmode_t			f_mode;
> @@ -1330,7 +1333,11 @@ struct super_block {
>  	struct list_head	s_io;		/* parked for writeback */
>  	struct list_head	s_more_io;	/* parked for more writeback */
>  	struct hlist_head	s_anon;		/* anonymous dentries for (nfs) exporting */
> +#ifdef CONFIG_SMP
> +	struct list_head	*s_files;
> +#else
>  	struct list_head	s_files;
> +#endif
>  	/* s_dentry_lru and s_nr_dentry_unused are protected by dcache_lock */
>  	struct list_head	s_dentry_lru;	/* unused dentry lru */
>  	int			s_nr_dentry_unused;	/* # of dentry on lru */

... and ifdefs like that in structs.

What I really want to see is a rationale for all that.  Preferably with
more than microbenchmarks showing a visible impact.

Especially if you compare it with an alternative variant that simply splits
files_lock on a per-sb basis.
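
By the per-sb split I mean something along the lines of the sketch below
(untested; the s_files_lock field name is invented for illustration and
would sit next to s_files in struct super_block, with no ifdefs):

void file_sb_list_add(struct file *file, struct super_block *sb)
{
	spin_lock(&sb->s_files_lock);
	BUG_ON(!list_empty(&file->f_u.fu_list));
	list_add(&file->f_u.fu_list, &sb->s_files);
	spin_unlock(&sb->s_files_lock);
}

void file_sb_list_del(struct file *file)
{
	struct super_block *sb = file->f_path.dentry->d_sb;

	spin_lock(&sb->s_files_lock);
	if (!list_empty(&file->f_u.fu_list))
		list_del_init(&file->f_u.fu_list);
	spin_unlock(&sb->s_files_lock);
}

That keeps unrelated superblocks off each other's lock without adding
per-cpu machinery or growing struct file.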
