Message-ID: <20200122164618.GY29276@dhcp22.suse.cz>
Date: Wed, 22 Jan 2020 17:46:18 +0100
From: Michal Hocko <mhocko@...nel.org>
To: David Hildenbrand <david@...hat.com>
Cc: Dan Williams <dan.j.williams@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Leonardo Bras <leonardo@...ux.ibm.com>,
Nathan Lynch <nathanl@...ux.ibm.com>,
Allison Randal <allison@...utok.net>,
Nathan Fontenot <nfont@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Anshuman Khandual <anshuman.khandual@....com>,
lantianyu1986@...il.com,
linuxppc-dev <linuxppc-dev@...ts.ozlabs.org>
Subject: Re: [PATCH RFC v1] mm: is_mem_section_removable() overhaul
On Wed 22-01-20 12:58:16, David Hildenbrand wrote:
> On 22.01.20 11:54, David Hildenbrand wrote:
> > On 22.01.20 11:42, Michal Hocko wrote:
> >> On Wed 22-01-20 11:39:08, David Hildenbrand wrote:
> >>>>>> Really, the interface is flawed and should never have been merged in
> >>>>>> the first place. We cannot simply remove it altogether, I am afraid,
> >>>>>> so let's at least remove the bogus code and pretend that the world is
> >>>>>> a better place where everything is removable, even though the reality
> >>>>>> sucks...
> >>>>>
> >>>>> As I expressed already, the interface works as designed/documented and
> >>>>> has been used like that for years.
> >>>>
> >>>> It seems we do differ on its usefulness though. Using a crappy
> >>>> interface for years doesn't make it less crappy. I do realize we cannot
> >>>> remove the interface, but we can remove issues with the implementation,
> >>>> and I dare say that most existing users wouldn't really notice.
> >>>
> >>> Well, at least powerpc-utils (the reason this interface was introduced)
> >>> will notice, a) performance-wise and b) because more logging output will
> >>> be generated (offlining will be attempted even on obviously
> >>> non-offlineable blocks).
> >>
> >> I would really appreciate a specific example of a real use case. I am
> >> not familiar with powerpc-utils workflows myself.
> >>
> >
> > Not an expert myself:
> >
> > https://github.com/ibm-power-utilities/powerpc-utils
> >
> > -> src/drmgr/drslot_chrp_mem.c
> >
> > On a request to remove some memory it will
> >
> > a) Read "->removable" of all memory blocks ("lmb")
> > b) Check if the request can be fulfilled using the removable blocks
> > c) Try to offline those memory blocks one by one (see the sketch below).
> >    If offlining succeeded, trigger removal of the block using some
> >    hypervisor hooks.
> >
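> > A rough sketch of a) and c) in terms of the sysfs interface (this is
> > not the actual drmgr code; block ids, buffer sizes and error handling
> > are simplified for illustration):
> >
> >   #include <stdio.h>
> >
> >   /* Sketch only: the per-block "removable" hint and the offline
> >    * request, as seen from user space via sysfs. */
> >   static int block_removable(int id)
> >   {
> >       char path[128], buf[4] = "";
> >       FILE *f;
> >
> >       snprintf(path, sizeof(path),
> >                "/sys/devices/system/memory/memory%d/removable", id);
> >       if (!(f = fopen(path, "r")))
> >           return -1;                  /* no such block */
> >       fgets(buf, sizeof(buf), f);
> >       fclose(f);
> >       return buf[0] == '1';           /* "1" == claimed removable */
> >   }
> >
> >   static int offline_block(int id)
> >   {
> >       char path[128];
> >       FILE *f;
> >       int ok;
> >
> >       snprintf(path, sizeof(path),
> >                "/sys/devices/system/memory/memory%d/state", id);
> >       if (!(f = fopen(path, "w")))
> >           return -1;
> >       ok = fputs("offline", f) >= 0;
> >       /* the actual offline attempt happens on the write/flush */
> >       return (fclose(f) == 0 && ok) ? 0 : -1;
> >   }
> >
> > (The hypervisor-side removal in c) then goes through drmgr's own
> > hooks, which are beyond this sketch.)
> >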
> > Interestingly, with "AMS ballooning", it already considers the
> > "removable" information useless (most probably because of
> > non-migratable balloon pages that can still be offlined - I assume from
> > the powerpc code that I converted to proper balloon compaction just
> > recently). In that case, a) and b) are skipped.
> >
> > Returning "yes" for all blocks will make them handle it just as if "AMS
> > ballooning" were active. So every memory block will be tried. That
> > should work, but will be slower if no ballooning is active.
> >
>
> On lsmem:
>
> https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lgdd/lgdd_r_lsmem_cmd.html
>
> "
> Removable
> yes if the memory range can be set offline, no if it cannot be set
> offline. A dash (-) means that the range is already offline. The kernel
> method that identifies removable memory ranges is heuristic and not
> exact. Occasionally, memory ranges are falsely reported as removable or
> falsely reported as not removable.
> "
>
> Usage of lsmem paired with chmem:
>
> https://access.redhat.com/solutions/3937181
>
>
> This is especially interesting for IBM z Systems, where memory
> onlining/offlining triggers the actual population of memory in the
> hypervisor. So if an admin wants to offline some memory (to give it back
> to the hypervisor), they would use lsmem to identify such blocks first,
> instead of trying random blocks until an offlining request succeeds.
I am sorry for being dense here, but I still do not understand why s390
and the way it does hotremove matter here. After all, there are no
arch-specific operations done until the memory is offlined. Also,
randomly checking memory blocks and then hoping that the offline will
succeed is not much different from just trying to offline the block.
Both have to crawl through the pfn range and bail out on unmovable
memory.
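
To illustrate: skipping the check and just attempting the offline is
nothing more than the write below, and it fails (typically with EBUSY)
on unmovable memory just as the removable check would have predicted.
A sketch, with a made-up block range:

  #include <stdio.h>

  int main(void)
  {
      char path[128];
      FILE *f;
      int id, ok;

      for (id = 0; id < 64; id++) {     /* hypothetical block range */
          snprintf(path, sizeof(path),
                   "/sys/devices/system/memory/memory%d/state", id);
          if (!(f = fopen(path, "w")))
              continue;                 /* block does not exist */
          ok = fputs("offline", f) >= 0;
          ok = (fclose(f) == 0) && ok;  /* offline happens on write/flush */
          printf("memory%d: %s\n", id, ok ? "offlined" : "busy");
      }
      return 0;
  }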
--
Michal Hocko
SUSE Labs