Date:   Thu, 25 Aug 2016 17:22:29 +0800
From:   Chao Yu <yuchao0@...wei.com>
To:     Jaegeuk Kim <jaegeuk@...nel.org>, Chao Yu <chao@...nel.org>
CC:     <linux-f2fs-devel@...ts.sourceforge.net>,
        <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/3] f2fs: schedule in between two continuous batch
 discards

Hi Jaegeuk,

On 2016/8/24 0:53, Jaegeuk Kim wrote:
> Hi Chao,
> 
> On Sun, Aug 21, 2016 at 11:21:30PM +0800, Chao Yu wrote:
>> From: Chao Yu <yuchao0@...wei.com>
>>
>> In the batch discard approach of fstrim, we grab/release the gc_mutex
>> lock repeatedly, which makes contention on the lock more intense.
>>
>> So after one batch of discards has been issued in checkpoint and the
>> lock has been released, it's better to call schedule() to give other
>> competitors a better opportunity to grab the gc_mutex lock.
>>
>> Signed-off-by: Chao Yu <yuchao0@...wei.com>
>> ---
>>  fs/f2fs/segment.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>> index 020767c..d0f74eb 100644
>> --- a/fs/f2fs/segment.c
>> +++ b/fs/f2fs/segment.c
>> @@ -1305,6 +1305,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
>>  		mutex_unlock(&sbi->gc_mutex);
>>  		if (err)
>>  			break;
>> +
>> +		schedule();
> 
> Hmm, if other thread is already waiting for gc_mutex, we don't need this here.
> In order to avoid long latency, wouldn't it be enough to reduce the batch size?

Hmm, when fstrim calls mutex_unlock, we pop one blocked waiter from the mutex's
FIFO wait list and wake it up; meanwhile the fstrim thread tries to lock
gc_mutex again for the next batch trim, so the woken waiter and the fstrim
thread race for gc_mutex. If the fstrim thread is running on a big core while
the woken waiter is running on a little core, we can't guarantee the waiter
wins the race, and most of the time the fstrim thread will win. So in order to
reduce starvation of other gc_mutex users, it's better to call schedule() here.
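
To put the pattern in code, a minimal sketch of the loop we are discussing
(not the actual f2fs code; has_next_batch() and trim_one_batch() are made-up
helpers):

	while (has_next_batch(sbi)) {
		mutex_lock(&sbi->gc_mutex);
		err = trim_one_batch(sbi);
		/* unlocking wakes one waiter from the FIFO wait list... */
		mutex_unlock(&sbi->gc_mutex);
		if (err)
			break;
		/*
		 * ...and yielding here gives that waiter a chance to grab
		 * gc_mutex before we loop around and re-acquire it.
		 */
		schedule();
	}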

Thanks,

> 
> Thanks,
> 
>>  	}
>>  out:
>>  	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
>> -- 
>> 2.7.2
> 
> .
> 
