Date:   Fri, 26 Aug 2016 08:50:50 +0800
From:   Chao Yu <yuchao0@...wei.com>
To:     Jaegeuk Kim <jaegeuk@...nel.org>
CC:     Chao Yu <chao@...nel.org>,
        <linux-f2fs-devel@...ts.sourceforge.net>,
        <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/3] f2fs: schedule in between two continuous batch
 discards

Hi Jaegeuk,

On 2016/8/26 0:57, Jaegeuk Kim wrote:
> Hi Chao,
> 
> On Thu, Aug 25, 2016 at 05:22:29PM +0800, Chao Yu wrote:
>> Hi Jaegeuk,
>>
>> On 2016/8/24 0:53, Jaegeuk Kim wrote:
>>> Hi Chao,
>>>
>>> On Sun, Aug 21, 2016 at 11:21:30PM +0800, Chao Yu wrote:
>>>> From: Chao Yu <yuchao0@...wei.com>
>>>>
>>>> The batch discard approach of fstrim grabs/releases the gc_mutex lock
>>>> repeatedly, which makes contention on the lock more intense.
>>>>
>>>> So after one batch of discards is issued in checkpoint and the lock
>>>> is released, it's better to call schedule() to give other competitors
>>>> a better chance of grabbing the gc_mutex lock.
>>>>
>>>> Signed-off-by: Chao Yu <yuchao0@...wei.com>
>>>> ---
>>>>  fs/f2fs/segment.c | 2 ++
>>>>  1 file changed, 2 insertions(+)
>>>>
>>>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>>>> index 020767c..d0f74eb 100644
>>>> --- a/fs/f2fs/segment.c
>>>> +++ b/fs/f2fs/segment.c
>>>> @@ -1305,6 +1305,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
>>>>  		mutex_unlock(&sbi->gc_mutex);
>>>>  		if (err)
>>>>  			break;
>>>> +
>>>> +		schedule();
>>>
>>> Hmm, if another thread is already waiting for gc_mutex, we don't need this here.
>>> In order to avoid long latency, wouldn't it be enough to reduce the batch size?
>>
>> Hmm, when fstrim calls mutex_unlock, we pop one blocked locker from the FIFO
>> list of the mutex and wake it up; then the fstrim thread will try to lock
>> gc_mutex for the next batch trim, so the popped locker and the fstrim thread
>> enter a new competition for gc_mutex.
> 
> Before fstrim tries to grab gc_mutex again, there are already blocked tasks
> waiting for gc_mutex. Hence the next one should be selected by FIFO, no?

The next one to be woken up is selected by FIFO, but the woken task still
needs to race with other mutex lock grabbers.

So there is no guarantee that the woken one will actually get the lock.
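
To make the race concrete, here is a loose userspace analogue (my own sketch,
not f2fs code; pthread mutexes do not share the kernel mutex's FIFO wakeup
semantics, and all names below are made up for the illustration). A "trimmer"
thread that re-locks in a tight loop tends to beat the woken waiter back to
the lock; the sched_yield() after the unlock plays the role that schedule()
plays in the patch:

/* Loose userspace sketch (not f2fs code): a "trimmer" thread
 * re-locks the mutex in a tight loop; without the yield after
 * unlock it tends to win the race against the other thread,
 * just as fstrim can starve other gc_mutex users.
 * Build with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static pthread_mutex_t gc_mutex = PTHREAD_MUTEX_INITIALIZER;
static long trim_batches, other_work;

static void *trimmer(void *arg)
{
	for (long i = 0; i < 1000000; i++) {
		pthread_mutex_lock(&gc_mutex);
		trim_batches++;			/* one "batch discard" */
		pthread_mutex_unlock(&gc_mutex);
		sched_yield();			/* analogue of schedule() */
	}
	return NULL;
}

static void *competitor(void *arg)
{
	for (long i = 0; i < 1000000; i++) {
		pthread_mutex_lock(&gc_mutex);
		other_work++;			/* e.g. background GC */
		pthread_mutex_unlock(&gc_mutex);
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, trimmer, NULL);
	pthread_create(&t2, NULL, competitor, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("trim_batches=%ld other_work=%ld\n", trim_batches, other_work);
	return 0;
}

How much the yield helps depends on the scheduler and on core placement, but
the idea is the same: yielding right after the unlock gives the blocked
thread a real chance to run and take the lock.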

Thanks,

> 
> Thanks,
> 
>> If the fstrim thread is running on a big core and the popped locker is running
>> on a small core, we can't guarantee the popped locker will win the race; most
>> of the time, the fstrim thread will win. So in order to reduce starvation of
>> other gc_mutex lockers, it's better to call schedule() here.
>>
>> Thanks,
>>
>>>
>>> Thanks,
>>>
>>>>  	}
>>>>  out:
>>>>  	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
>>>> -- 
>>>> 2.7.2
