Date:	Thu, 12 Sep 2013 10:40:27 +0800
From:	俞超 <chao2.yu@...sung.com>
To:	'Gu Zheng' <guz.fnst@...fujitsu.com>, jaegeuk.kim@...sung.com
Cc:	shu.tan@...sung.com, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	linux-f2fs-devel@...ts.sourceforge.net
Subject: RE: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance

Hi Gu

> -----Original Message-----
> From: Gu Zheng [mailto:guz.fnst@...fujitsu.com]
> Sent: Wednesday, September 11, 2013 1:38 PM
> To: jaegeuk.kim@...sung.com
> Cc: chao2.yu@...sung.com; shu.tan@...sung.com;
> linux-fsdevel@...r.kernel.org; linux-kernel@...r.kernel.org;
> linux-f2fs-devel@...ts.sourceforge.net
> Subject: Re: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance
> 
> Hi Jaegeuk, Chao,
> 
> On 09/10/2013 08:52 AM, Jaegeuk Kim wrote:
> 
> > Hi,
> >
> > At first, thank you for the report and please follow the email writing
> > rules. :)
> >
> > Anyway, I agree with the issue described below.
> > One thing that I can think of is that we don't need to use the
> > spin_lock, since we don't care about the exact lock number; we just
> > need any number that doesn't collide.
> 
> IMHO, just moving sbi->next_lock_num++ before
> mutex_lock(&sbi->fs_lock[next_lock])
> avoids the imbalance in most cases.
> IMO, the window in which two or more threads increment sbi->next_lock_num
> at the same time is really very small. If you think that is not rigorous
> enough, changing next_lock_num to an atomic type would close it.
> What's your opinion?
> 
> Regards,
> Gu

I tested plain sbi->next_lock_num++ against the atomic version, and their
performance is almost identical when only a small number of threads race.
So, as you and Kim suggested, plain "sbi->next_lock_num++" is enough to fix
this issue.

Thanks for the advice.
> 
> >
> > So, how about removing the spin_lock?
> > And how about using a random number?
> 
> > Thanks,
> >
> > 2013-09-06 (금), 09:48 +0000, Chao Yu:
> >> Hi Kim:
> >>
> >>      I think there is a performance problem: when every sbi->fs_lock
> >> is held, all other threads may read the same next_lock value from
> >> sbi->next_lock_num in mutex_lock_op, and then all wait on the same
> >> fs_lock[next_lock]. That unbalances the fs_lock usage and can cost
> >> performance in multithreaded tests.
> >>
> >>
> >>
> >> Here is the patch to fix this problem:
> >>
> >>
> >>
> >> Signed-off-by: Yu Chao <chao2.yu@...sung.com>
> >>
> >> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> >> old mode 100644
> >> new mode 100755
> >> index 467d42d..983bb45
> >> --- a/fs/f2fs/f2fs.h
> >> +++ b/fs/f2fs/f2fs.h
> >> @@ -371,6 +371,7 @@ struct f2fs_sb_info {
> >>  	struct mutex fs_lock[NR_GLOBAL_LOCKS];	/* blocking FS operations */
> >>  	struct mutex node_write;		/* locking node writes */
> >>  	struct mutex writepages;		/* mutex for writepages() */
> >> +	spinlock_t spin_lock;			/* lock for next_lock_num */
> >>  	unsigned char next_lock_num;		/* round-robin global locks */
> >>  	int por_doing;				/* recovery is doing or not */
> >>  	int on_build_free_nids;			/* build_free_nids is doing */
> >> @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
> >>
> >>  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
> >>  {
> >> -	unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> >> +	unsigned char next_lock;
> >>  	int i = 0;
> >>
> >>  	for (; i < NR_GLOBAL_LOCKS; i++)
> >>  		if (mutex_trylock(&sbi->fs_lock[i]))
> >>  			return i;
> >>
> >> -	mutex_lock(&sbi->fs_lock[next_lock]);
> >> +	spin_lock(&sbi->spin_lock);
> >> +	next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> >>  	sbi->next_lock_num++;
> >> +	spin_unlock(&sbi->spin_lock);
> >> +
> >> +	mutex_lock(&sbi->fs_lock[next_lock]);
> >>  	return next_lock;
> >>  }
> >>
> >> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> >> old mode 100644
> >> new mode 100755
> >> index 75c7dc3..4f27596
> >> --- a/fs/f2fs/super.c
> >> +++ b/fs/f2fs/super.c
> >> @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
> >>  	mutex_init(&sbi->cp_mutex);
> >>  	for (i = 0; i < NR_GLOBAL_LOCKS; i++)
> >>  		mutex_init(&sbi->fs_lock[i]);
> >> +	spin_lock_init(&sbi->spin_lock);
> >>  	mutex_init(&sbi->node_write);
> >>  	sbi->por_doing = 0;
> >>  	spin_lock_init(&sbi->stat_lock);
> >
> 

