Date: Thu, 2 May 2024 08:56:28 +0200
From: David Hildenbrand <david@...hat.com>
To: Alistair Popple <apopple@...dia.com>, Jason Gunthorpe <jgg@...dia.com>
Cc: Christoph Hellwig <hch@...radead.org>, John Hubbard
 <jhubbard@...dia.com>, Andrew Morton <akpm@...ux-foundation.org>,
 LKML <linux-kernel@...r.kernel.org>, linux-rdma@...r.kernel.org,
 linux-mm@...ck.org, Mike Marciniszyn <mike.marciniszyn@...el.com>,
 Leon Romanovsky <leon@...nel.org>, Artemy Kovalyov <artemyko@...dia.com>,
 Michael Guralnik <michaelgur@...dia.com>, Pak Markthub <pmarkthub@...dia.com>
Subject: Re: [RFC] RDMA/umem: pin_user_pages*() can temporarily fail due to
 migration glitches

On 02.05.24 03:05, Alistair Popple wrote:
> 
> Jason Gunthorpe <jgg@...dia.com> writes:
> 
>> On Tue, Apr 30, 2024 at 10:10:43PM -0700, Christoph Hellwig wrote:
>>>> +		pinned = -EAGAIN;
>>>> +		int attempts = 0;
>>>> +		/*
>>>> +		 * pin_user_pages_fast() can return -EAGAIN, due to falling back
>>>> +		 * to gup-slow and then failing to migrate pages out of
>>>> +		 * ZONE_MOVABLE due to a transient elevated page refcount.
>>>> +		 *
>>>> +		 * One retry is enough to avoid this problem, so far, but let's
>>>> +		 * use a slightly higher retry count just in case even larger
>>>> +		 * systems have a longer-lasting transient refcount problem.
>>>> +		 */
>>>> +		static const int MAX_ATTEMPTS = 3;
>>>> +
>>>> +		while (pinned == -EAGAIN && attempts < MAX_ATTEMPTS) {
>>>> +			pinned = pin_user_pages_fast(cur_base,
>>>> +						     min_t(unsigned long,
>>>> +							npages, PAGE_SIZE /
>>>> +							sizeof(struct page *)),
>>>> +						     gup_flags, page_list);
>>>>   			ret = pinned;
>>>> -			goto umem_release;
>>>> +			attempts++;
>>>> +
>>>> +			if (pinned == -EAGAIN)
>>>> +				continue;
>>>>   		}
>>>> +		if (pinned < 0)
>>>> +			goto umem_release;
>>>
>>> This doesn't make sense.  IFF a blind retry is all that is needed it
>>> should be done in the core functionality.  I fear it's not that easy,
>>> though.
>>
>> +1
>>
>> This migration retry weirdness is a GUP issue, it needs to be solved
>> in the mm not exposed to every pin_user_pages caller.
>>
>> If it turns out ZONE_MOVABLE pages can't actually be reliably moved
>> then it is pretty broken.
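
For reference, the core-side retry already exists in some form: the
FOLL_LONGTERM slow path in mm/gup.c pins the pages, tries to migrate
the unpinnable ones (e.g. ZONE_MOVABLE), and loops while that migration
step reports -EAGAIN, though evidently -EAGAIN can still escape to
callers, as this RFC shows. A simplified sketch from memory, not
verbatim upstream code (memalloc_pin_save()/memalloc_pin_restore() and
the *locked handling are omitted):

	static long gup_longterm_locked_sketch(struct mm_struct *mm,
					       unsigned long start,
					       unsigned long nr_pages,
					       struct page **pages,
					       unsigned int gup_flags)
	{
		long nr_pinned, rc;

		do {
			/* Slow-path pin of nr_pages starting at start. */
			nr_pinned = __get_user_pages_locked(mm, start,
							    nr_pages, pages,
							    NULL, gup_flags);
			if (nr_pinned <= 0)
				return nr_pinned;

			/*
			 * Unpins and migrates pages that must not stay
			 * pinned; -EAGAIN means "pin and check again".
			 */
			rc = check_and_migrate_movable_pages(nr_pinned,
							     pages);
		} while (rc == -EAGAIN);

		return rc ? rc : nr_pinned;
	}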
> 
> I wonder if we should remove the arbitrary retry limit in
> migrate_pages() entirely for ZONE_MOVABLE pages and just loop until
> they migrate? By definition there should only be transient references on
> these pages so why do we need to limit the number of retries in the
> first place?
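
Spelled out, that would be something like the sketch below. This is
illustrative only, not a tested patch; the migration_target_control
setup is abbreviated and the function name is mine. Whether such a loop
is guaranteed to terminate is exactly the question:

	static int migrate_movable_until_done_sketch(struct list_head *pages)
	{
		struct migration_target_control mtc = {
			.nid = NUMA_NO_NODE,
			.gfp_mask = GFP_USER | __GFP_NOWARN,
		};
		int not_migrated;

		do {
			/*
			 * migrate_pages() returns how many pages it could
			 * not migrate, 0 when everything moved, or a
			 * negative errno on hard failure.
			 */
			not_migrated = migrate_pages(pages,
						     alloc_migration_target,
						     NULL,
						     (unsigned long)&mtc,
						     MIGRATE_SYNC,
						     MR_LONGTERM_PIN, NULL);
			if (not_migrated > 0)
				/* Let transient ref holders drop their refs. */
				cond_resched();
		} while (not_migrated > 0);

		return not_migrated;
	}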

There are some weird things that still need fixing: vmsplice() is the
example that doesn't use FOLL_LONGTERM, yet it can keep extra references
on ZONE_MOVABLE pages alive more or less forever.
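
To make that case concrete: userspace can have the kernel take plain
GUP references (no FOLL_LONGTERM) on its pages and then simply sit on
them. A minimal userspace sketch of the pattern (illustrative; error
handling kept to a minimum):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <string.h>
	#include <sys/uio.h>
	#include <unistd.h>

	int main(void)
	{
		static char buf[4096] __attribute__((aligned(4096)));
		struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
		int pipefd[2];

		memset(buf, 0x5a, sizeof(buf));	/* fault the page in */
		if (pipe(pipefd))
			return 1;

		/*
		 * vmsplice() pins buf's page into the pipe with ordinary
		 * GUP references, not FOLL_LONGTERM ones. As long as
		 * nobody drains the pipe, those references stay, even on
		 * ZONE_MOVABLE pages.
		 */
		if (vmsplice(pipefd[1], &iov, 1, 0) < 0)
			return 1;

		pause();	/* hold the references indefinitely */
		return 0;
	}

Until the pipe is drained, migration of that page keeps failing, so an
unbounded retry loop would simply spin.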

-- 
Cheers,

David / dhildenb

