Message-ID: <fe76204d-4cef-4f06-a5bc-e016a513f783@arm.com>
Date: Tue, 13 Aug 2024 10:30:50 +0530
From: Dev Jain <dev.jain@....com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: akpm@...ux-foundation.org, shuah@...nel.org, david@...hat.com,
 willy@...radead.org, ryan.roberts@....com, anshuman.khandual@....com,
 catalin.marinas@....com, cl@...two.org, vbabka@...e.cz, mhocko@...e.com,
 apopple@...dia.com, osalvador@...e.de, baolin.wang@...ux.alibaba.com,
 dave.hansen@...ux.intel.com, will@...nel.org, baohua@...nel.org,
 ioworker0@...il.com, gshan@...hat.com, mark.rutland@....com,
 kirill.shutemov@...ux.intel.com, hughd@...gle.com, aneesh.kumar@...nel.org,
 yang@...amperecomputing.com, peterx@...hat.com, broonie@...nel.org,
 mgorman@...hsingularity.net, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org, linux-mm@...ck.org,
 linux-kselftest@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: Retry migration earlier upon refcount mismatch


On 8/12/24 17:38, Dev Jain wrote:
>
> On 8/12/24 13:01, Huang, Ying wrote:
>> Dev Jain <dev.jain@....com> writes:
>>
>>> On 8/12/24 11:45, Huang, Ying wrote:
>>>> Dev Jain <dev.jain@....com> writes:
>>>>
>>>>> On 8/12/24 11:04, Huang, Ying wrote:
>>>>>> Hi, Dev,
>>>>>>
>>>>>> Dev Jain <dev.jain@....com> writes:
>>>>>>
>>>>>>> As is already done in __migrate_folio(), where we back off if the
>>>>>>> folio refcount is wrong, perform this check during the unmapping
>>>>>>> phase as well. Upon its failure, the original state of the PTEs is
>>>>>>> restored and the folio lock is dropped via migrate_folio_undo_src(),
>>>>>>> so any racing thread can make progress and migration is retried.
>>>>>>>
>>>>>>> Signed-off-by: Dev Jain <dev.jain@....com>
>>>>>>> ---
>>>>>>>     mm/migrate.c | 9 +++++++++
>>>>>>>     1 file changed, 9 insertions(+)
>>>>>>>
>>>>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>>>>> index e7296c0fb5d5..477acf996951 100644
>>>>>>> --- a/mm/migrate.c
>>>>>>> +++ b/mm/migrate.c
>>>>>>> @@ -1250,6 +1250,15 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>>>>>>>  	}
>>>>>>>  
>>>>>>>  	if (!folio_mapped(src)) {
>>>>>>> +		/*
>>>>>>> +		 * Someone may have changed the refcount and may be sleeping
>>>>>>> +		 * on the folio lock. In case of refcount mismatch, bail out,
>>>>>>> +		 * let the system make progress and retry.
>>>>>>> +		 */
>>>>>>> +		struct address_space *mapping = folio_mapping(src);
>>>>>>> +
>>>>>>> +		if (folio_ref_count(src) != folio_expected_refs(mapping, src))
>>>>>>> +			goto out;
>>>>>>>  		__migrate_folio_record(dst, old_page_state, anon_vma);
>>>>>>>  		return MIGRATEPAGE_UNMAP;
>>>>>>>  	}
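For context, folio_expected_refs() in mm/migrate.c computes how many
references a folio should hold when nobody else is using it. A paraphrased
sketch of that helper (not a verbatim copy of the kernel source):

    static int folio_expected_refs(struct address_space *mapping,
                                   struct folio *folio)
    {
            int refs = 1;           /* the reference held by migration */

            if (!mapping)
                    return refs;    /* anonymous folio: only our ref */

            refs += folio_nr_pages(folio);  /* page cache references */
            if (folio_test_private(folio))
                    refs++;         /* fs private data / buffer heads */

            return refs;
    }

Any reference beyond this count means some other thread is using the folio
concurrently, which is the condition the new check bails out on.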
>>>>>> Do you have some test results for this?  For example, after applying
>>>>>> the patch, the migration success rate increased XX%, etc.
>>>>> I'll get back to you on this.
>>>>>
>>>>>> My understanding for this issue is that the migration success rate
>>>>>> can increase if we undo all changes before retrying.  This is the
>>>>>> current behavior for sync migration, but not for async migration.
>>>>>> If so, we can use migrate_pages_sync() for async migration too to
>>>>>> increase the success rate?  Of course, we need to change the
>>>>>> function name and comments.
>>>>> As per my understanding, this is not the current behaviour for sync
>>>>> migration. After successful unmapping, we fail in migrate_folio_move()
>>>>> with -EAGAIN; we do not undo src+dst (rendering the loop around
>>>>> migrate_folio_move() futile), and we do not push the failed folio onto
>>>>> the ret_folios list. Therefore, in _sync(), _batch() is never tried
>>>>> again.
>>>> In migrate_pages_sync(), migrate_pages_batch(,MIGRATE_ASYNC) will be
>>>> called first; if that fails, the folio will be restored to its original
>>>> state (unlocked).  Then migrate_pages_batch(,_SYNC*) is called again.
>>>> So, we unlock once.  If necessary, we can unlock more times via
>>>> another level of loop.
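For reference, the two-pass flow being described, heavily simplified from
the mm/migrate.c of this era; the real signatures and bookkeeping are
elided, so treat this as a sketch rather than the exact kernel code:

    static int migrate_pages_sync(struct list_head *from /* , ... */)
    {
            /* Pass 1: try the whole batch in MIGRATE_ASYNC mode. Each
             * folio that fails is undone (PTEs restored, folio unlocked)
             * and left on the list for the next pass. */
            migrate_pages_batch(from, /* ... */ MIGRATE_ASYNC /* , ... */);

            /* Pass 2: retry the leftovers in the caller's sync mode. */
            return migrate_pages_batch(from, /* ... */ mode /* , ... */);
    }

So each folio that loses the race gets exactly one undo-and-retry cycle,
which is the "we unlock once" above.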
>>> Yes, that's my point. We need to undo src+dst and retry.
>> For sync migration, we undo src+dst and retry now, but only once.  You
>> have shown that more retrying increases success rate.
>>
>>> We will have to decide where we want this retrying to be: do we want
>>> to change the return value, end up in the while loop wrapped around
>>> _sync(), and retry there by adding another level of loop, or do we
>>> want to make use of the existing retry loops, one of which is wrapped
>>> around _unmap()? The latter is my approach. The utility I see in the
>>> former approach is that, in the case of a large number of page
>>> migrations (which should usually be the case), we give the folio more
>>> time to be retried. The latter does not give much time and discards
>>> the folio if it does not succeed within 7 tries.
>> Because it's a race, I guess that most folios will be migrated
>> successfully in the first pass.
>>
>> My concern with your method is that it deals with just one case
>> specially, while retrying after undoing all appears more general.
>
>
> Makes sense. Also, please ignore my "change the return value"
> thing; I got confused between unmap_folios, ret_folios, etc.
> Now I think I understand what the lists are doing :)
>
>>
>> If it's really important to retry after undoing all, we can either
>> convert the two retry loops of migrate_pages_batch() into one loop, or
>> remove the retry loop in migrate_pages_batch() and retry in its caller
>> instead.
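One hypothetical shape of the second option, retrying in the caller so
that every attempt starts from a fully undone state. The loop bound and
the exact argument list are illustrative only, not an actual patch:

    /* Caller-side retry: each migrate_pages_batch() pass undoes any
     * folio it could not migrate, so the next pass starts clean. */
    for (pass = 0; pass < NR_MAX_MIGRATE_PAGES_RETRY; pass++) {
            rc = migrate_pages_batch(from, get_new_folio, put_new_folio,
                                     private, mode, reason, &ret_folios,
                                     &split_folios, &stats, 1);
            if (rc != -EAGAIN || list_empty(from))
                    break;
            /* Give racing threads (e.g. one sleeping on the folio
             * lock) a chance to run and drop their references. */
            cond_resched();
    }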
>
> And if I implemented this correctly, the following makes the test
> always pass:
> https://www.codedump.xyz/diff/Zrn7EdxzNXmXyNXe


Okay, I did mess up the implementation, leading to a false
positive. Let me try again :)

>
>
>
>>
>> -- 
>> Best Regards,
>> Huang, Ying
>
