Date:	Tue, 9 Oct 2012 14:02:08 +0200
From:	David Sterba <dave@...os.cz>
To:	Chul Lee <chur.lee@...sung.com>
Cc:	dave@...os.cz, "'Jaegeuk Kim'" <jaegeuk.kim@...sung.com>,
	viro@...iv.linux.org.uk, "'Theodore Ts'o'" <tytso@....edu>,
	gregkh@...uxfoundation.org, linux-kernel@...r.kernel.org,
	cm224.lee@...sung.com, jooyoung.hwang@...sung.com,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 03/16] f2fs: add superblock and major in-memory structures

On Tue, Oct 09, 2012 at 02:13:29PM +0900, Chul Lee wrote:
> > > +	block_t	data_blkaddr;
> > > +};
> > > +
> > > +struct f2fs_sb_info {
> > > +	struct super_block *sb;			/* Pointer to VFS super
> > block */
> > > +	int s_dirty;
> > 
> > Is s_dirty actually used? I can see it only set and reset at checkpoint,
> > not e.g. synced with an on-disk block to signal a dirty status.
> 
> s_dirty is checked when sync_fs is called.

I've checked again; you're right, I missed it before.
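For reference, the pattern being described (a minimal userspace sketch of the idea, not the actual f2fs code; the helper names here are made up) is a dirty flag set at checkpoint time and consulted by sync_fs, so a clean filesystem can skip the checkpoint:

```c
#include <assert.h>

/* Hypothetical sketch of the s_dirty pattern discussed above. */
struct sbi_sketch {
	int s_dirty;          /* set when a checkpoint is pending */
	int checkpoints_done; /* counts how often we actually synced */
};

static void mark_dirty(struct sbi_sketch *sbi)
{
	sbi->s_dirty = 1;
}

static void do_checkpoint(struct sbi_sketch *sbi)
{
	sbi->checkpoints_done++;
	sbi->s_dirty = 0;     /* reset once the checkpoint completes */
}

/* sync_fs only pays the checkpoint cost when something changed */
static void sync_fs_sketch(struct sbi_sketch *sbi)
{
	if (sbi->s_dirty)
		do_checkpoint(sbi);
}
```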

> > > +	struct mutex gc_mutex;			/* mutex for GC */
> > > +	struct mutex fs_lock[NR_LOCK_TYPE];	/* mutex for GP */
> > > +	struct mutex write_inode;		/* mutex for write inode */
> > > +	struct mutex writepages;		/* mutex for writepages() */
> > 
> > wow, that's 1+8+2 = 11 mutexes in a row! The ones hidden under
> > NR_LOCK_TYPE may hurt, as they seem to protect various common file
> > operations (guessed from the lock_type names).
> 
> Yes, they protect global variables shared by various operations and by
> checkpoint.
> Could you tell me what you recommend? Should each fs_lock under
> NR_LOCK_TYPE have a different lock name?

I think I was too eager to point out the perf problems with the mutexes.
This would certainly be a problem with spinlocks, but mutexes can sleep,
and I can see that IO operations are enclosed in the mutex sections
almost always. Still, there may be a subset of operations that are both
frequent and lightweight.

It seems to me that DATA_NEW, NODE_NEW and DATA_WRITE are candidates for
profiling and subject to further optimization (e.g. splitting the locks
onto separate cachelines).
(This is not something that would prevent inclusion of f2fs into the
kernel.)
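To illustrate the cacheline-splitting suggestion, here is a minimal userspace sketch (the lock names come from the patch discussion, but pthread mutexes, the 64-byte cacheline size, and the array size are my assumptions; in-kernel code would use ____cacheline_aligned_in_smp instead):

```c
#include <pthread.h>
#include <stdalign.h>

#define CACHELINE 64 /* assumed cacheline size; common on x86 */

/* Pad each mutex out to its own cacheline so that the frequently
 * taken locks do not false-share a line with their neighbours. */
struct padded_lock {
	alignas(CACHELINE) pthread_mutex_t lock;
};

/* lock names taken from the discussion above; the count is a guess */
enum { DATA_NEW, NODE_NEW, DATA_WRITE, NR_HOT_LOCKS };

static struct padded_lock hot_locks[NR_HOT_LOCKS] = {
	{ PTHREAD_MUTEX_INITIALIZER },
	{ PTHREAD_MUTEX_INITIALIZER },
	{ PTHREAD_MUTEX_INITIALIZER },
};

static void with_lock(int type)
{
	pthread_mutex_lock(&hot_locks[type].lock);
	/* ... protected section ... */
	pthread_mutex_unlock(&hot_locks[type].lock);
}
```

With the plain `struct mutex fs_lock[NR_LOCK_TYPE]` array, adjacent locks can land on the same cacheline, so cores contending on different lock types still bounce the same line between them; the padding trades a little memory for independent lines.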

Also, if you target only embedded devices, the scaling problems are not
critical; however, NAND devices are not limited to the embedded world.

david
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/