Message-ID: <aGar8p-GlbqXtl7U@tiehlicka>
Date: Thu, 3 Jul 2025 18:12:34 +0200
From: Michal Hocko <mhocko@...e.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Frederic Weisbecker <frederic@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Valentin Schneider <vschneid@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org
Subject: Re: [PATCH 6/6] mm: Drain LRUs upon resume to userspace on nohz_full CPUs
On Thu 03-07-25 15:28:23, Matthew Wilcox wrote:
> On Thu, Jul 03, 2025 at 04:07:17PM +0200, Frederic Weisbecker wrote:
> > +unsigned int folio_batch_add(struct folio_batch *fbatch,
> > +		struct folio *folio)
> > +{
> > +	unsigned int ret;
> > +
> > +	fbatch->folios[fbatch->nr++] = folio;
> > +	ret = folio_batch_space(fbatch);
> > +	isolated_task_work_queue();
>
> Umm. LRUs use folio_batches, but they are definitely not the only user
> of folio_batches. Maybe you want to add a new lru_batch_add()
> abstraction, because this call is definitely being done at the wrong
> level.
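
An LRU-level wrapper could look roughly like this (a sketch only:
lru_batch_add() is the hypothetical name suggested above, and
isolated_task_work_queue() comes from this patch series):

	/* LRU-specific wrapper; folio_batch_add() itself stays generic */
	static unsigned int lru_batch_add(struct folio_batch *fbatch,
					  struct folio *folio)
	{
		/* folio_batch_add() returns the slots left in the batch */
		unsigned int ret = folio_batch_add(fbatch, folio);

		/* defer the drain to the next return to userspace */
		isolated_task_work_queue();
		return ret;
	}
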
You have answered one of my questions in another response. My initial
thought was that __lru_add_drain_all seems to be a better fit. But then
we have the problem that draining becomes an unbounded operation, which
in turn is a problem for lru_cache_disable: it will never converge
until the isolated workload does the draining. So it indeed seems like
we need to queue the draining when a page is added. Are there other
places where we put folios into the folio_batch other than
folio_batch_add? I cannot seem to see any...
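
For concreteness, queueing at add time could sit where folios enter the
per-CPU LRU batches, e.g. in mm/swap.c's folio_batch_add_and_move()
(a sketch against an older form of that helper; illustrative only):

	static void folio_batch_add_and_move(struct folio_batch *fbatch,
			struct folio *folio, move_fn_t move_fn)
	{
		/*
		 * Ask this (possibly isolated nohz_full) CPU to flush its
		 * batches on the next return to userspace, so that
		 * lru_cache_disable() converges without interrupting the
		 * isolated workload.
		 */
		isolated_task_work_queue();
		if (folio_batch_add(fbatch, folio) && !folio_test_large(folio) &&
		    !lru_cache_disabled())
			return;
		folio_batch_move_lru(fbatch, move_fn);
	}
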
--
Michal Hocko
SUSE Labs