Message-ID: <20201125130851.GA22157@codeaurora.org>
Date:   Wed, 25 Nov 2020 18:38:51 +0530
From:   Sahitya Tummala <stummala@...eaurora.org>
To:     David Laight <David.Laight@...LAB.COM>
Cc:     'Chao Yu' <yuchao0@...wei.com>, Jaegeuk Kim <jaegeuk@...nel.org>,
        "linux-f2fs-devel@...ts.sourceforge.net" 
        <linux-f2fs-devel@...ts.sourceforge.net>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] f2fs: change to use rwsem for cp_mutex

Hi David,

On Tue, Nov 24, 2020 at 09:12:12AM +0000, David Laight wrote:
> From: Chao Yu
> > Sent: 24 November 2020 03:12
> > 
> > On 2020/11/24 1:05, David Laight wrote:
> > > From: Sahitya Tummala
> > >> Sent: 23 November 2020 05:29
> > >>
> > >> Use rwsem to ensure serialization of the callers and to avoid
> > >> starvation of high priority tasks, when the system is under
> > >> heavy IO workload.
> > >
> > > I can't see any read lock requests.
> > >
> > > So why the change?
> > 
> > Hi David,
> > 
> > You can check the context of this patch in below link:
> > 
> > https://lore.kernel.org/linux-f2fs-devel/8e094021b958f9fe01df1183a2677882@codeaurora.org/T/#t
> > 
> > BTW, the root cause here is that mutex lock won't serialize callers, so there
> > could be potential starvation problem when this lock is always grabbed by high
> > priority tasks.
> 
> That doesn't seem right.
> 
> If I read the above correctly it was high priority tasks that were
> being 'starved' precisely because mutex lock serializes wakers.

Actually it can happen to any task, irrespective of its priority.
In my case, I observed that the thread which went to sleep first was
not able to acquire the lock first; other new threads that arrived
just around the mutex unlock time were acquiring it instead.
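
As an illustrative userspace analogue only (pthread mutexes standing in
for the kernel mutex here, none of this is f2fs code), the sketch below
shows the same ordering behaviour: a thread that releases the lock and
immediately retries tends to win it again before the longest-sleeping
waiter is even scheduled.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 4
#define ROUNDS   5

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
	long id = (long)arg;

	for (int i = 0; i < ROUNDS; i++) {
		pthread_mutex_lock(&lock);
		/* Acquisition order is whatever the implementation picked,
		 * not the order in which threads started waiting. */
		printf("thread %ld acquired the lock (round %d)\n", id, i);
		usleep(1000);			/* hold the lock briefly */
		pthread_mutex_unlock(&lock);
		/* No pause here: the releasing thread (or any newcomer) can
		 * re-acquire before a sleeping waiter has run again. */
	}
	return NULL;
}

int main(void)
{
	pthread_t tids[NTHREADS];

	for (long i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, worker, (void *)i);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}

Built with "cc -pthread", the output typically shows one thread taking
several rounds in a row rather than a round-robin order.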

> 
> If you have a lock that is contended so much that it is held 100%
> of the time you need a different locking strategy.
> 
> IIRC mutex locks are 'ticket' locks so that only one thread is woken
> each time the mutex is released, and they are woken in the order
> they went to sleep.

AFAIK mutex locks don't *strictly* enforce FIFO order. The lock is released
before the first waiting task is woken, and that task has to actually run
before it can claim the lock, so the lock is available to other tasks in
this *short* window.
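
For reference, a minimal sketch of the kind of conversion the patch
subject describes (simplified, not the actual diff; the function names
below are illustrative placeholders and the real patch changes every
cp_mutex locking site):

#include <linux/rwsem.h>

/*
 * Simplified sketch only -- not the actual f2fs patch.  cp_mutex becomes
 * an rw_semaphore that is only ever taken for write, so checkpoint
 * callers stay fully serialized; the conversion relies on rwsem's
 * waiter queueing for the fairness discussed above.
 */
struct f2fs_sb_info {
	/* ... other fields elided ... */
	struct rw_semaphore cp_mutex;	/* was: struct mutex cp_mutex; */
};

static void f2fs_init_cp_lock(struct f2fs_sb_info *sbi)
{
	init_rwsem(&sbi->cp_mutex);	/* was: mutex_init(&sbi->cp_mutex); */
}

/* Every cp_mutex locking site changes in the same way: */
int f2fs_write_checkpoint_sketch(struct f2fs_sb_info *sbi)
{
	down_write(&sbi->cp_mutex);	/* was: mutex_lock(&sbi->cp_mutex); */
	/* ... block operations, flush NAT/SIT, commit the checkpoint ... */
	up_write(&sbi->cp_mutex);	/* was: mutex_unlock(&sbi->cp_mutex); */
	return 0;
}

The read side of the rwsem is never taken, which is why no read lock
requests show up; the conversion is about how waiters are queued, not
about reader/writer concurrency.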

Thanks,

> 
> While this behaviour might not be the one you want, relying on
> rwsem (which might happen currently to work differently) doesn't
> seem the correct long term fix.
> 
> 	David
> 
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)

--
Sent by a consultant of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.
