Message-ID: <CAHk-=wiH-swkbq3aw78HwAg-OKkj96EZJzp3_rmNoHYnpA=njg@mail.gmail.com>
Date: Thu, 28 Mar 2019 13:43:58 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Waiman Long <longman@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
"the arch/x86 maintainers" <x86@...nel.org>,
Davidlohr Bueso <dave@...olabs.net>,
Tim Chen <tim.c.chen@...ux.intel.com>
Subject: Re: [PATCH 10/12] locking/rwsem: Merge owner into count on x86-64
On Thu, Mar 28, 2019 at 11:12 AM Waiman Long <longman@...hat.com> wrote:
>
> Reserving 2 bits for status flags,
> we will have 16 bits for the reader count. That can support up to
> (64k-1) readers.
Explain why that's enough, please.
I could *easily* see more than 64k threads all on the same rwsem, all
at the same time.
Just do a really slow filesystem (think fuse), map a file with lots of
pages, and then fault in one page per thread. Boom. rwsem with more
than 64k concurrent readers.
So I think this approach is completely wrong, and/or needs a *lot* of
explanation why it works.
A small reader count works for the spinning rwlocks because we're
limited to the number of CPU's in the system. For a rwsem? No.
Linus