Date:   Tue, 20 Jun 2023 09:12:51 +0200
From:   David Hildenbrand <david@...hat.com>
To:     John Hubbard <jhubbard@...dia.com>,
        Oscar Salvador <osalvador@...e.de>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Subject: Re: [PATCH] mm/memory_hotplug.c: don't fail hot unplug quite so
 eagerly

On 20.06.23 03:17, John Hubbard wrote:
> mm/memory_hotplug.c: don't fail hot unplug quite so eagerly
> 
> Some device drivers add memory to the system via memory hotplug. When
> the driver is unloaded, that memory is hot-unplugged.

Which interfaces are they using to add/remove memory?
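
For the driver-managed case I'd expect something roughly along these lines
(just an illustrative sketch, not taken from your patch; the function names
and the "example_driver" resource name are made up, and your driver may use
a different interface entirely):

#include <linux/memory_hotplug.h>

/* Sketch: add a device-provided range as driver-managed System RAM. */
static int example_add_device_memory(int nid, u64 start, u64 size)
{
        /*
         * Adds the range as "System RAM (example_driver)"; onlining then
         * happens via the auto-online policy or udev rules.
         */
        return add_memory_driver_managed(nid, start, size,
                                         "System RAM (example_driver)",
                                         MHP_NONE);
}

/* Sketch: tear the range down again on driver unload. */
static int example_remove_device_memory(u64 start, u64 size)
{
        /*
         * Offlining is the step where a pending signal currently makes
         * this fail with -EINTR.
         */
        return offline_and_remove_memory(start, size);
}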

> 
> However, memory hot unplug can fail. And these days, it fails a little
> too easily in the above case. Specifically, if a signal is pending on
> the process, hot unplug fails. This leads directly to the user having
> to reboot the machine in order to unload the driver, which leaves the
> device unusable until the machine is rebooted.

Why can't they retry in user space when offlining fails with -EINTR, or 
re-trigger driver unloading?
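
If the offlining is triggered via sysfs, a plain retry loop on EINTR would
do. Rough, untested sketch below; the memory block path is only an example,
and if the offlining instead happens during module unload, re-running the
unload would be the equivalent:

#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Retry offlining a memory block when the write fails with EINTR/EBUSY. */
static int offline_block(const char *state_path)
{
        for (int attempt = 0; attempt < 10; attempt++) {
                int fd = open(state_path, O_WRONLY);
                int err;

                if (fd < 0)
                        return -1;
                if (write(fd, "offline", strlen("offline")) > 0) {
                        close(fd);
                        return 0;       /* offlining succeeded */
                }
                err = errno;
                close(fd);
                if (err != EINTR && err != EBUSY)
                        return -1;      /* give up on other errors */
                /* pending signal (EINTR) or temporary failure: try again */
        }
        return -1;
}

/* e.g.: offline_block("/sys/devices/system/memory/memory42/state"); */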

> 
> During teardown paths in the kernel, a higher tolerance for failures or
> imperfections is often best. That is, it is often better to continue
> with the teardown, than to error out too early.
> 
> So in this case, other things (unmovable pages, un-splittable huge
> pages) can also cause the above problem. However, those are demonstrably
> less common than simply having a pending signal. I've got bug reports
> from users who can trivially reproduce this by killing their process
> with a "kill -9", for example.
> 
> Fix this by soldiering on with memory hot unplug, even in the presence of
> pending signals.
> 
> Signed-off-by: John Hubbard <jhubbard@...dia.com>
> ---
>   mm/memory_hotplug.c | 6 ------
>   1 file changed, 6 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 8e0fa209d533..57a46620a667 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1879,12 +1879,6 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
>   	do {
>   		pfn = start_pfn;
>   		do {
> -			if (signal_pending(current)) {
> -				ret = -EINTR;
> -				reason = "signal backoff";
> -				goto failed_removal_isolated;
> -			}
> -
>   			cond_resched();
>   
>   			ret = scan_movable_pages(pfn, end_pfn, &pfn);

No, we can't remove that. It's documented behavior that exists precisely 
for that reason:

https://docs.kernel.org/admin-guide/mm/memory-hotplug.html#id21

"
When offlining is triggered from user space, the offlining context can 
be terminated by sending a fatal signal. A timeout based offlining can 
easily be implemented via:

% timeout $TIMEOUT offline_block | failure_handling
"

Otherwise, there is no way to stop a userspace-triggered offline 
operation that loops forever in the kernel.

I guess switching to fatal_signal_pending() might help to some degree; it 
should keep the timeout trick working.
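
That is, keep the backoff but only back off for fatal signals, something
like this (completely untested sketch):

-                       if (signal_pending(current)) {
+                       if (fatal_signal_pending(current)) {
                                ret = -EINTR;
                                reason = "signal backoff";
                                goto failed_removal_isolated;
                        }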

But it wouldn't help in your case, where root kills arbitrary processes. 
I'm not sure that is something we should be paying attention to.


-- 
Cheers,

David / dhildenb
