Message-ID: <YC6RGiemaQHQScsZ@dhcp22.suse.cz>
Date:   Thu, 18 Feb 2021 17:08:58 +0100
From:   Michal Hocko <mhocko@...e.com>
To:     Minchan Kim <minchan@...nel.org>
Cc:     Matthew Wilcox <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, cgoldswo@...eaurora.org,
        linux-fsdevel@...r.kernel.org, david@...hat.com, vbabka@...e.cz,
        viro@...iv.linux.org.uk, joaodias@...gle.com
Subject: Re: [RFC 1/2] mm: disable LRU pagevec during the migration
 temporarily

On Thu 18-02-21 07:52:25, Minchan Kim wrote:
> On Thu, Feb 18, 2021 at 09:17:02AM +0100, Michal Hocko wrote:
> > On Wed 17-02-21 13:32:05, Minchan Kim wrote:
> > > On Wed, Feb 17, 2021 at 09:16:12PM +0000, Matthew Wilcox wrote:
> > > > On Wed, Feb 17, 2021 at 12:46:19PM -0800, Minchan Kim wrote:
> > > > > > I suspect you do not want to add atomic_read inside hot paths, right? Is
> > > > > > this really something that we have to microoptimize for? atomic_read is
> > > > > > a simple READ_ONCE on many archs.
> > > > > 
> > > > > It's also spin_lock_irqsave() on some arches. If the new synchronization
> > > > > were heavily complicated, atomics would be better as a simple start, but I
> > > > > thought this locking scheme was simple enough that there was no need for an
> > > > > atomic operation on the read side.
> > > > 
> > > > What arch uses a spinlock for atomic_read()?  I just had a quick grep and
> > > > didn't see any.
> > > 
> > > Ah, my bad. I was confused with the update side.
> > > Okay, let's use an atomic op to keep it simple.
> > 
> > Thanks. This should make the code much simpler. Before you send
> > another version for review, I have another thing to consider. You are
> > kind of wiring this into the migration code, but control over the lru
> > pcp caches can be useful in other paths as well. Memory offlining would
> > be another user. We already disable the page allocator pcp caches there
> > to prevent regular draining. We could do the same with the lru pcp caches.
> 
> I didn't catch your point here. If memory offlining is interested in
> disabling the lru pcp caches, it could call migrate_prep and migrate_finish
> like the other call sites do. Is that what you are suggesting?

What I meant to say is that you can look at this not as an integral
part of the migration code but rather as common functionality that
migration and others can use. So instead of being an implicit part of
migrate_prep, this would become lru_cache_disable, and migrate_finish
would become lru_cache_enable. See my point?
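
As a minimal sketch of the shape I have in mind (the names
lru_cache_disable/lru_cache_enable and lru_disable_count are
placeholders for illustration, not a final API; the counter follows the
atomic scheme agreed on above):

static atomic_t lru_disable_count = ATOMIC_INIT(0);

static inline bool lru_cache_disabled(void)
{
	/*
	 * atomic_read() is a plain READ_ONCE() on many architectures,
	 * so the check in the pagevec hot path stays cheap.
	 */
	return atomic_read(&lru_disable_count);
}

void lru_cache_disable(void)
{
	atomic_inc(&lru_disable_count);
	/* flush whatever is already sitting in the per-CPU pagevecs */
	lru_add_drain_all();
}

void lru_cache_enable(void)
{
	atomic_dec(&lru_disable_count);
}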

An advantage of that would be that it matches the pcp page allocator
disabling, and we could keep it in place for the whole operation to make
the page state more stable wrt the LRU state (PageLRU).
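
To make the symmetry concrete, an offlining path could then bracket the
whole operation with both, roughly like this (a sketch only:
zone_pcp_disable/zone_pcp_enable already exist for the page allocator
pcp lists, lru_cache_disable/lru_cache_enable are the proposal above,
and offline_range_stable is a hypothetical helper):

static void offline_range_stable(struct zone *zone)
{
	zone_pcp_disable(zone);		/* no more pcp refills or drains */
	lru_cache_disable();		/* keeps PageLRU stable for isolation */

	/* ... isolate and migrate the pfn range here ... */

	lru_cache_enable();
	zone_pcp_enable(zone);
}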
 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index a969463bdda4..0ec1c13bfe32 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1425,8 +1425,12 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>                 node_clear(mtc.nid, nmask);
>                 if (nodes_empty(nmask))
>                         node_set(mtc.nid, nmask);
> +
> +               migrate_prep();
>                 ret = migrate_pages(&source, alloc_migration_target, NULL,
>                         (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
> +
> +               migrate_finish();
>                 if (ret) {
>                         list_for_each_entry(page, &source, lru) {
>                                 pr_warn("migrating pfn %lx failed ret:%d ",
> 
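
For completeness, with the split above the two calls in the hunk would
reduce to thin wrappers, something like this (purely illustrative, not
the actual patch):

void migrate_prep(void)
{
	lru_cache_disable();	/* also drains, replacing today's lru_add_drain_all() */
}

void migrate_finish(void)
{
	lru_cache_enable();
}

That way do_migrate_range() gets the lru disabling for free, and any
other user can call lru_cache_disable() directly.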

-- 
Michal Hocko
SUSE Labs
