Message-ID: <CAPTQFZS7KTTor+CHyzwE8hVVZo04haWsyTHhN9+Hy35PVZ6O1w@mail.gmail.com>
Date: Tue, 23 Jul 2024 21:15:59 -0700
From: Jerome Glisse <jglisse@...gle.com>
To: David Hildenbrand <david@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH] mm: fix maxnode for mbind(), set_mempolicy() and migrate_pages()
On Tue, 23 Jul 2024 at 10:37, David Hildenbrand <david@...hat.com> wrote:
>
> On 23.07.24 18:33, Jerome Glisse wrote:
> > On Mon, 22 Jul 2024 at 06:09, David Hildenbrand <david@...hat.com> wrote:
> >>
> >> On 20.07.24 19:35, Jerome Glisse wrote:
> >>> Because of the maxnode bug there is no way to bind or migrate pages
> >>> to the last node in a multi-node NUMA system unless you lie about
> >>> maxnode when making the mbind, set_mempolicy or migrate_pages syscall.
> >>>
> >>> The manpage for those syscalls describes maxnode as the number of bits
> >>> in the node bitmap ("bit mask of nodes containing up to maxnode bits").
> >>> Thus if maxnode is n we expect an n-bit bitmap, which means that the
> >>> mask of valid bits is ((1 << n) - 1). The get_nodes() decrement leads
> >>> to the mask being ((1 << (n - 1)) - 1).
> >>>
> >>> The three syscalls use a common helper, get_nodes(), and the first
> >>> thing this helper does is decrement maxnode by 1, which leads to using
> >>> only n-1 bits of the provided mask of nodes (see get_bitmap(), a helper
> >>> function used by get_nodes()).
> >>>
> >>> This leads to two bugs: either the last node in the provided bitmap is
> >>> not used by any of the three syscalls, or the syscalls error out and
> >>> return EINVAL if the only bit set in the bitmap is the last bit in the
> >>> mask of nodes (that bit is ignored because of the bug, and an empty
> >>> mask of nodes is an invalid argument).
> >>>
> >>> I am surprised this bug was never caught ... it has been in the kernel
> >>> since forever.
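To make the off-by-one concrete, here is a rough illustration of the
arithmetic only (not the actual kernel code; the 8-node system is just
an example):

#include <stdio.h>

int main(void)
{
	/* 8-node system, caller passes maxnode = 8 per the manpage. */
	unsigned long maxnode = 8;
	/* Documented behaviour: bits 0..7 are valid. */
	unsigned long expected = (1UL << maxnode) - 1;      /* 0xff */
	/* Actual behaviour after get_nodes() decrements: bits 0..6. */
	unsigned long actual = (1UL << (maxnode - 1)) - 1;  /* 0x7f */

	printf("expected mask %#lx, mask actually used %#lx\n",
	       expected, actual);
	return 0;
}

Node 7 is silently dropped, and a mask with only bit 7 set becomes
empty, hence the EINVAL case above.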
> >>
> >> Let's look at QEMU: backends/hostmem.c
> >>
> >> /*
> >> * We can have up to MAX_NODES nodes, but we need to pass maxnode+1
> >> * as argument to mbind() due to an old Linux bug (feature?) which
> >> * cuts off the last specified node. This means backend->host_nodes
> >> * must have MAX_NODES+1 bits available.
> >> */
> >>
> >> Which means that it's been known for a long time, and the workaround
> >> seems to be pretty easy.
> >>
> >> So I wonder if we rather want to update the documentation to match reality.
> >
> > [Sorry resending as text ... gmail insanity]
> >
> > I think it is kind of weird that we ask users to supply maxnode+1 to
> > work around the bug. If we apply this patch QEMU would continue to work
> > as is while fixing users that were not aware of the bug. So I would say
> > applying this patch does more good. Long term QEMU can drop its
> > workaround or keep it for backward compatibility with old kernels.
>
> Not really, unfortunately. The thing is that it requires a lot more
> effort to detect support than to simply pass maxnode+1. So unless you
> know exactly which minimum kernel version your software runs on (which
> rarely happens), you will simply apply the workaround.
The point I was trying to make is that working applications do not need
to change their code; a patched or unpatched kernel will not change
their behavior in any way, so they will keep working regardless of
whether the kernel has the patch. Applications that are not as smart,
on the other hand, will keep misbehaving until someone fixes the
application. So to me the patch brings good to people without harming
any existing folks.
Fix in one place versus wait for people to fix their code ...
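For reference, this is roughly what the "smart" applications do today
(a minimal sketch of the QEMU-style workaround using a hypothetical
bind_to_node() helper, assuming <numaif.h> from libnuma, a node number
below the word size, and with error handling omitted):

#include <numaif.h>

static long bind_to_node(void *addr, unsigned long len, int node)
{
	unsigned long nodemask = 1UL << node;  /* only 'node' set */
	unsigned long maxnode = node + 1;      /* bits 0..node, as documented */

	/*
	 * Pass maxnode + 1 so the kernel's internal decrement does not
	 * drop the highest node.
	 */
	return mbind(addr, len, MPOL_BIND, &nodemask, maxnode + 1,
		     MPOL_MF_MOVE);
}

Such callers keep working either way; only callers that follow the
manpage literally see a change, and it is the change they expected.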
> I would assume that each and every sane user out there does that
> already, considering that even that QEMU code is 10 years old (!).
I took a look at some code I have access to and it is not the case
everywhere ...
>
> In any case, we have to document the behavior that has existed since the
> very beginning, because it would be even *worse* if someone developed
> against a new kernel and then got a bunch of bug reports when running on
> literally every old kernel out there :)
>
> So my best guess is that long-term it will create more issues when we
> change the behavior ... but in any case we have to update the man pages.
No, it would not: with the fix applied and no changes to the
applications that are smart about it, nothing changes for them. Smart
applications will work the same on both patched and unpatched kernels,
while applications that have the bug will suddenly get the behaviour
they would have expected from the documentation.
Thank you,
Jérôme