Message-ID: <CAJuCfpFgJXxBa1+HNWn60cdhs3Qwoe1TAjDH50Pe3FvT_CVm1g@mail.gmail.com>
Date: Sat, 11 Jan 2025 16:31:47 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: Hillf Danton <hdanton@...a.com>
Cc: akpm@...ux-foundation.org, peterz@...radead.org, willy@...radead.org,
hannes@...xchg.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH v9 10/17] refcount: introduce __refcount_{add|inc}_not_zero_limited
On Sat, Jan 11, 2025 at 3:45 PM Hillf Danton <hdanton@...a.com> wrote:
>
> On Sat, 11 Jan 2025 09:11:52 -0800 Suren Baghdasaryan <surenb@...gle.com>
> > I see your point. I think it's a strong argument to use atomic
> > directly instead of refcount for this locking. I'll try that and see
> > how it looks. Thanks for the feedback!
> >
> Better not before having a clear answer to why it is sane to invent
> anything like rwsem in 2025. What, the 40 bytes? Nope, that is the
> fair price paid for finer locking granularity.
It's not just about the 40 bytes. It also lets us fold the separate
vma->detached flag into the same refcounter, which consolidates the
vma state in one place. That in turn makes it much easier to add
SLAB_TYPESAFE_BY_RCU later, because only this refcounter has to be
preserved across vma reuse.
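
To illustrate the idea (a rough, userspace-only sketch, not the actual
patch): once the detached state is folded into the counter, a count of
0 means "detached" and readers only take a reference when the count is
already non-zero, i.e. refcount_inc_not_zero()-style semantics. All
names below (vma_stub, vm_refcnt, vma_attach, ...) are made up for
illustration and are not the names used in the series:

/*
 * Userspace model of folding the "detached" state into the refcount.
 * vm_refcnt == 0  -> vma is detached
 * vm_refcnt  > 0  -> vma is attached; extra counts are read-lock holders
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct vma_stub {
	atomic_int vm_refcnt;
};

static void vma_attach(struct vma_stub *vma)
{
	/* Attaching gives the vma its initial reference. */
	atomic_store(&vma->vm_refcnt, 1);
}

static bool vma_read_trylock(struct vma_stub *vma)
{
	int old = atomic_load(&vma->vm_refcnt);

	/* Take a reference only if the vma is not detached (count > 0). */
	do {
		if (old == 0)
			return false;
	} while (!atomic_compare_exchange_weak(&vma->vm_refcnt, &old, old + 1));
	return true;
}

static void vma_read_unlock(struct vma_stub *vma)
{
	atomic_fetch_sub(&vma->vm_refcnt, 1);
}

static bool vma_detach(struct vma_stub *vma)
{
	/* Drop the attach reference; hitting 0 marks the vma detached. */
	return atomic_fetch_sub(&vma->vm_refcnt, 1) == 1;
}

int main(void)
{
	struct vma_stub vma;

	atomic_init(&vma.vm_refcnt, 0);
	printf("read-lock while detached: %d\n", vma_read_trylock(&vma)); /* 0 */
	vma_attach(&vma);
	printf("read-lock while attached: %d\n", vma_read_trylock(&vma)); /* 1 */
	vma_read_unlock(&vma);
	printf("now detached: %d\n", vma_detach(&vma));                   /* 1 */
	return 0;
}

The real code of course uses refcount_t (with the limited
inc-not-zero variant this patch introduces) rather than raw C11
atomics; the sketch only shows why a single counter can carry both
the detached state and the read-lock references.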
>
> BTW Vlastimil, the cc list is cut down because I have to work around
> the spam check on the mail agent side.
>