Date:	Thu, 12 Sep 2013 10:02:49 +0800
From:	Chao Yu <chao2.yu@...sung.com>
To:	'Kim Jaegeuk' <jaegeuk.kim@...il.com>
Cc:	'Jaegeuk Kim' <jaegeuk.kim@...sung.com>,
	'Shu Tan' <shu.tan@...sung.com>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-f2fs-devel@...ts.sourceforge.net
Subject: Re: Re: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance

Hi Kim

> -----Original Message-----
> From: Kim Jaegeuk [mailto:jaegeuk.kim@...il.com]
> Sent: Wednesday, September 11, 2013 9:15 PM
> To: chao2.yu@...sung.com
> Cc: Jaegeuk Kim; Shu Tan; linux-fsdevel@...r.kernel.org;
> linux-kernel@...r.kernel.org; linux-f2fs-devel@...ts.sourceforge.net
> Subject: Re: Re: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance
> 
> Hi,
> 
> 2013/9/11 Chao Yu <chao2.yu@...sung.com>
> >
> > Hi Kim,
> >
> > I did some tests, as you suggested, on using a random number instead of
> > the spin_lock. The test model is as follows:
> > eight threads race to grab one of eight locks, one thousand times each,
> > and I used four methods to generate the lock number:
> >
> > 1.atomic_add_return(1, &sbi->next_lock_num) % NR_GLOBAL_LOCKS;
> > 2.spin_lock(); next_lock_num++ % NR_GLOBAL_LOCKS; spin_unlock();
> > 3.ktime_get().tv64 % NR_GLOBAL_LOCKS;
> > 4.get_random_bytes(&next_lock, sizeof(unsigned int));
> >
> > The results indicate that:
> > max count of consecutive collisions:       4 > 3 > 2 = 1
> > max-min spread of per-lock grab counts:    4 > 3 > 2 = 1
> > elapsed time to generate the number:       3 > 2 > 4 > 1
> >
> > So I think it's better to use atomic_add_return for the round-robin
> > method, since it costs less time and reduces collisions.
> > What's your opinion?
> 
> Could you test with sbi->next_lock_num++ only, instead of using
> atomic_add_return?
> IMO, this is just an integer value, and I still don't think it needs to
> be covered by any kind of lock.
> Thanks,

Thanks for the advice. I have tested sbi->next_lock_num++ as well: its time
cost is slightly lower than the atomic version's when running 8 threads for
1,000,000 iterations, and its balance and collision behaviour are better
than I expected.
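
For reference, a minimal standalone sketch of the four index generators
compared above (kernel-style C; the counter and lock names and the
NR_GLOBAL_LOCKS value here are assumptions of a test harness, not f2fs
fields):

#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/ktime.h>
#include <linux/random.h>

#define NR_GLOBAL_LOCKS	8

static atomic_t rr_counter = ATOMIC_INIT(0);	/* for method 1 */
static unsigned int rr_plain;			/* for method 2 */
static DEFINE_SPINLOCK(rr_lock);

/* 1. lock-free round robin */
static unsigned int pick_atomic(void)
{
	return atomic_add_return(1, &rr_counter) % NR_GLOBAL_LOCKS;
}

/* 2. spin_lock-protected round robin */
static unsigned int pick_spinlock(void)
{
	unsigned int n;

	spin_lock(&rr_lock);
	n = rr_plain++ % NR_GLOBAL_LOCKS;
	spin_unlock(&rr_lock);
	return n;
}

/* 3. monotonic clock as a cheap pseudo-random source */
static unsigned int pick_ktime(void)
{
	return (unsigned int)ktime_get().tv64 % NR_GLOBAL_LOCKS;
}

/* 4. real random bytes */
static unsigned int pick_random(void)
{
	unsigned int n;

	get_random_bytes(&n, sizeof(n));
	return n % NR_GLOBAL_LOCKS;
}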

Can we modify the patch as follows?

root@...taulmachine:/home/yuchao/git/linux-next/fs/f2fs# git diff --stat
 fs/f2fs/f2fs.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
root@...taulmachine:/home/yuchao/git/linux-next/fs/f2fs# git diff
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 608f0df..7fd99d8 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -544,15 +544,15 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
 
 static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
 {
-       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
+       unsigned char next_lock;
        int i = 0;
 
        for (; i < NR_GLOBAL_LOCKS; i++)
                if (mutex_trylock(&sbi->fs_lock[i]))
                        return i;
 
+       next_lock = sbi->next_lock_num++ % NR_GLOBAL_LOCKS;
        mutex_lock(&sbi->fs_lock[next_lock]);
-       sbi->next_lock_num++;
        return next_lock;
 }
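
A note on why the unsynchronized increment is acceptable here: callers only
use the returned value as the slot index they must release later, and
next_lock_num is just a round-robin hint, so a stale read or lost increment
at worst makes two waiters sleep on the same slot; the mutex itself still
guarantees correctness. A rough sketch of the call pattern (example_op is a
hypothetical caller, and it assumes the mutex_unlock_op(sbi, ilock)
counterpart in this version of f2fs.h releases the given slot):

static int example_op(struct f2fs_sb_info *sbi)
{
	int ilock;

	/* grab any currently free fs_lock slot, or sleep on the hinted one */
	ilock = mutex_lock_op(sbi);

	/* ... the blocking FS operation runs under fs_lock[ilock] ... */

	/* release exactly the slot we were handed */
	mutex_unlock_op(sbi, ilock);
	return 0;
}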

> 
> >
> > thanks
> >
> > ------- Original Message -------
> > Sender : Jaegeuk Kim <jaegeuk.kim@...sung.com>
> > Date : September 10, 2013 09:52 (GMT+09:00)
> > Title : Re: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance
> >
> > Hi,
> >
> > First of all, thank you for the report, and please follow the email
> > writing rules. :)
> >
> > Anyway, I agree with the issue below.
> > One thing I can think of is that we don't need to use the spin_lock,
> > since we don't care about the exact lock number; we just need to get
> > a number that avoids collisions.
> >
> > So, how about removing the spin_lock?
> > And how about using a random number?
> > Thanks,
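
For comparison, a minimal sketch of the random-pick variant suggested above,
using get_random_bytes() (method 4 of the test); the function shape mirrors
mutex_lock_op from the patches in this thread, and the _random suffix is only
illustrative, not a proposed patch:

static inline int mutex_lock_op_random(struct f2fs_sb_info *sbi)
{
	unsigned int next_lock;
	int i;

	/* fast path: take any lock that is currently free */
	for (i = 0; i < NR_GLOBAL_LOCKS; i++)
		if (mutex_trylock(&sbi->fs_lock[i]))
			return i;

	/* all locks busy: pick a random slot to sleep on, no shared counter */
	get_random_bytes(&next_lock, sizeof(next_lock));
	next_lock %= NR_GLOBAL_LOCKS;
	mutex_lock(&sbi->fs_lock[next_lock]);
	return next_lock;
}
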
> >
> > 2013-09-06 (Fri), 09:48 +0000, Chao Yu:
> > > Hi Kim,
> > >
> > > I think there is a performance problem: when all of sbi->fs_lock are
> > > held, all other threads may get the same next_lock value from
> > > sbi->next_lock_num in mutex_lock_op(), and then wait for the same lock
> > > at fs_lock[next_lock], which unbalances the fs_lock usage.
> > >
> > > This may cost performance in multithreaded tests.
> > >
> > > Here is the patch to fix this problem:
> > >
> > > Signed-off-by: Yu Chao
> > >
> > > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> > > old mode 100644
> > > new mode 100755
> > > index 467d42d..983bb45
> > > --- a/fs/f2fs/f2fs.h
> > > +++ b/fs/f2fs/f2fs.h
> > > @@ -371,6 +371,7 @@ struct f2fs_sb_info {
> > >         struct mutex fs_lock[NR_GLOBAL_LOCKS];  /* blocking FS operations */
> > >         struct mutex node_write;                /* locking node writes */
> > >         struct mutex writepages;                /* mutex for writepages() */
> > > +       spinlock_t spin_lock;                   /* lock for next_lock_num */
> > >         unsigned char next_lock_num;            /* round-robin global locks */
> > >         int por_doing;                          /* recovery is doing or not */
> > >         int on_build_free_nids;                 /* build_free_nids is doing */
> > > @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
> > >
> > >  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
> > >  {
> > > -       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> > > +       unsigned char next_lock;
> > >         int i = 0;
> > >
> > >         for (; i < NR_GLOBAL_LOCKS; i++)
> > >                 if (mutex_trylock(&sbi->fs_lock[i]))
> > >                         return i;
> > >
> > > -       mutex_lock(&sbi->fs_lock[next_lock]);
> > > +       spin_lock(&sbi->spin_lock);
> > > +       next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> > >         sbi->next_lock_num++;
> > > +       spin_unlock(&sbi->spin_lock);
> > > +
> > > +       mutex_lock(&sbi->fs_lock[next_lock]);
> > >         return next_lock;
> > >  }
> > >
> > > diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> > > old mode 100644
> > > new mode 100755
> > > index 75c7dc3..4f27596
> > > --- a/fs/f2fs/super.c
> > > +++ b/fs/f2fs/super.c
> > > @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
> > >         mutex_init(&sbi->cp_mutex);
> > >         for (i = 0; i < NR_GLOBAL_LOCKS; i++)
> > >                 mutex_init(&sbi->fs_lock[i]);
> > > +       spin_lock_init(&sbi->spin_lock);
> > >         mutex_init(&sbi->node_write);
> > >         sbi->por_doing = 0;
> > >         spin_lock_init(&sbi->stat_lock);
> > >
> >
> > --
> > Jaegeuk Kim
> > Samsung

