Message-ID: <98d200aa103fd6086c02dd620b65e961@suse.de>
Date: Thu, 06 Dec 2018 11:25:57 +0100
From: Roman Penyaev <rpenyaev@...e.de>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Jason Baron <jbaron@...mai.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org
Subject: Re: [RFC PATCH 1/1] epoll: use rwlock in order to reduce
ep_poll_callback() contention
On 2018-12-06 05:04, Davidlohr Bueso wrote:
> On 12/3/18 6:02 AM, Roman Penyaev wrote:
>
>> The main change is the replacement of the spinlock with a rwlock, which
>> is taken on read in ep_poll_callback(), and then by adding poll items
>> to the tail of the list using the xchg atomic instruction. The write
>> lock is taken everywhere else in order to stop list modifications and
>> guarantee that list updates are fully completed (I assume that the
>> write side of a rwlock does not starve; it seems the qrwlock
>> implementation has these guarantees).
>
> It's good then that Will recently ported qrwlocks to arm64, which iirc
> had a bad case of writer starvation. In general, qrwlock will maintain
> reader-to-writer acquisition ratios fairly well, but will favor readers
> over writers in scenarios where there are too many tasks (more than
> ncpus).
Thanks for noting that. Then that should not be a problem, since the
number of parallel ep_poll_callback() calls can't be greater than the
number of CPUs because of the wq.lock, which is taken by the caller of
ep_poll_callback().
BTW, has anyone estimated how much the latency on the write side
increases when the number of readers is greater than the number of CPUs?
--
Roman