Message-ID: <meorepggpsz4t3akwbmpyprhffno5xtex63ykuxy3t75n5vm77@shnc6m7pqtbc>
Date: Tue, 4 Feb 2025 13:22:29 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Minchan Kim <minchan@...nel.org>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCHv4 01/17] zram: switch to non-atomic entry locking

On (25/02/03 16:19), Andrew Morton wrote:
> > On (25/01/31 14:55), Andrew Morton wrote:
> > > > +static void zram_slot_write_lock(struct zram *zram, u32 index)
> > > > +{
> > > > +	atomic_t *lock = &zram->table[index].lock;
> > > > +	int old = atomic_read(lock);
> > > > +
> > > > +	do {
> > > > +		if (old != ZRAM_ENTRY_UNLOCKED) {
> > > > +			cond_resched();
> > > > +			old = atomic_read(lock);
> > > > +			continue;
> > > > +		}
> > > > +	} while (!atomic_try_cmpxchg(lock, &old, ZRAM_ENTRY_WRLOCKED));
> > > > +}
> > >
> > > I expect that if the calling userspace process has realtime policy (eg
> > > SCHED_FIFO) then the cond_resched() won't schedule SCHED_NORMAL tasks
> > > and this becomes a busy loop. And if the machine is single-CPU, the
> > > loop is infinite.
> >
> > So for that scenario to happen zram needs to see two writes() to the same
> > index (page) simultaneously? Or read() and write() on the same index (page)
> > concurrently?
>
> Well, my point is that in the contended case, this "lock" operation can
> get stuck forever. If there are no contended cases, we don't need a
> lock!

Let me see if I can come up with something, I don't have an awful
lot of ideas right now.
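
One rough direction that comes to mind (only an untested sketch, not
part of this series; the unlock side and its name below are made up
for illustration) would be to sleep on the entry instead of spinning,
by pairing the same per-entry atomic with wait_var_event()/wake_up_var():

static void zram_slot_write_lock(struct zram *zram, u32 index)
{
	atomic_t *lock = &zram->table[index].lock;
	int old;

	for (;;) {
		old = ZRAM_ENTRY_UNLOCKED;
		if (atomic_try_cmpxchg(lock, &old, ZRAM_ENTRY_WRLOCKED))
			return;
		/* Sleep until the current holder releases the entry. */
		wait_var_event(lock, atomic_read(lock) == ZRAM_ENTRY_UNLOCKED);
	}
}

/* Hypothetical unlock counterpart, just to show the wakeup pairing. */
static void zram_slot_write_unlock(struct zram *zram, u32 index)
{
	atomic_t *lock = &zram->table[index].lock;

	atomic_set_release(lock, ZRAM_ENTRY_UNLOCKED);
	/* Memory ordering vs. the waiter would need a closer look. */
	wake_up_var(lock);
}

That would keep the fast path a single cmpxchg while letting a
contended writer block instead of burning the CPU, which should also
cover the SCHED_FIFO case.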
> And I don't see how disabling the feature if PREEMPT=y will avoid this

Oh, that was a silly joke: the series that enables preemption in zram
and zsmalloc ends up disabling PREEMPT.