Date: Mon, 8 Apr 2024 11:43:07 +0200
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>,
 Andrew Morton <akpm@...ux-foundation.org>,
 Matthew Wilcox <willy@...radead.org>, Huang Ying <ying.huang@...el.com>,
 Gao Xiang <xiang@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
 Yang Shi <shy828301@...il.com>, Michal Hocko <mhocko@...e.com>,
 Kefeng Wang <wangkefeng.wang@...wei.com>, Barry Song <21cnbao@...il.com>,
 Chris Li <chrisl@...nel.org>, Lance Yang <ioworker0@...il.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 2/6] mm: swap: free_swap_and_cache_nr() as batched
 free_swap_and_cache()


>>> +
>>> +/**
>>> + * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
>>> + * @start_ptep: Page table pointer for the first entry.
>>> + * @max_nr: The maximum number of table entries to consider.
>>> + * @entry: Swap entry recovered from the first table entry.
>>> + *
>>> + * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
>>> + * containing swap entries all with consecutive offsets and targeting the same
>>> + * swap type.
>>> + *
>>
>> Likely you should document that any swp pte bits are ignored?
> 
> Sorry I don't understand this comment. I thought any non-none, non-present PTE
> was always considered to contain only a "swap entry" and a swap entry consists
> of a "type" and an "offset" only. (and it's a special "non-swap" swap entry if
> type > SOME_CONSTANT) Are you saying there are additional fields in the PTE that
> are not part of the swap entry?


pte_swp_soft_dirty()
pte_swp_exclusive()
pte_swp_uffd_wp()

are PTE bits used in swp PTEs.

There are also dirty/young bits for migration entries, but those are not
a concern here, because we stop at non_swap_entry().
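
If we ever want to batch only entries where these match, the check could
look something like the following (untested sketch, helper name made up):

static inline bool swp_pte_bits_match(pte_t a, pte_t b)
{
	/* These bits live in the swp PTE, not in the swap entry itself. */
	return pte_swp_soft_dirty(a) == pte_swp_soft_dirty(b) &&
	       pte_swp_exclusive(a) == pte_swp_exclusive(b) &&
	       pte_swp_uffd_wp(a) == pte_swp_uffd_wp(b);
}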

> 
> 
>>
>>> + * max_nr must be at least one and must be limited by the caller so scanning
>>> + * cannot exceed a single page table.
>>> + *
>>> + * Return: the number of table entries in the batch.
>>> + */
>>> +static inline int swap_pte_batch(pte_t *start_ptep, int max_nr,
>>> +                 swp_entry_t entry)
>>> +{
>>> +    const pte_t *end_ptep = start_ptep + max_nr;
>>> +    unsigned long expected_offset = swp_offset(entry) + 1;
>>> +    unsigned int expected_type = swp_type(entry);
>>> +    pte_t *ptep = start_ptep + 1;
>>> +
>>> +    VM_WARN_ON(max_nr < 1);
>>> +    VM_WARN_ON(non_swap_entry(entry));
>>> +
>>> +    while (ptep < end_ptep) {
>>> +        pte_t pte = ptep_get(ptep);
>>> +
>>> +        if (pte_none(pte) || pte_present(pte))
>>> +            break;
>>> +
>>> +        entry = pte_to_swp_entry(pte);
>>> +
>>> +        if (non_swap_entry(entry) ||
>>> +            swp_type(entry) != expected_type ||
>>> +            swp_offset(entry) != expected_offset)
>>> +            break;
>>> +
>>> +        expected_offset++;
>>> +        ptep++;
>>> +    }
>>> +
>>> +    return ptep - start_ptep;
>>> +}
>>
>> Looks very clean :)
>>
>> I was wondering whether we could similarly construct the expected swp PTE and
>> only check pte_same.
>>
>> expected_pte = __swp_entry_to_pte(__swp_entry(expected_type, expected_offset));
>>
>> ... or have a variant that only increments the swp offset of an existing pte.
>> But that's non-trivial due to the arch-dependent format.
>>
>> But then, we'd fail on mismatch of other swp pte bits.
> 
> Hmm, perhaps I have a misunderstanding regarding "swp pte bits"...
> 
>>
>>
>> On swapin, when reusing this function (likely!), we'll have to make sure that
>> the PTE bits match as well.
>>
>> See below regarding uffd-wp.
>>
>>
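
To spell out the "construct the expected swp PTE" idea (completely
untested, helper name made up):

static inline pte_t swp_pte_next(pte_t pte)
{
	swp_entry_t entry = pte_to_swp_entry(pte);
	pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
						   swp_offset(entry) + 1));

	/* Carry the swp pte bits over to the expected PTE. */
	if (pte_swp_soft_dirty(pte))
		new = pte_swp_mksoft_dirty(new);
	if (pte_swp_exclusive(pte))
		new = pte_swp_mkexclusive(new);
	if (pte_swp_uffd_wp(pte))
		new = pte_swp_mkuffd_wp(new);
	return new;
}

The loop body would then boil down to a single
pte_same(ptep_get(ptep), expected_pte) check -- at the cost of failing
the batch on any mismatch of those bits, as said.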
>>>    #endif /* CONFIG_MMU */
>>>      void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
>>> diff --git a/mm/madvise.c b/mm/madvise.c
>>> index 1f77a51baaac..070bedb4996e 100644
>>> --- a/mm/madvise.c
>>> +++ b/mm/madvise.c
>>> @@ -628,6 +628,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>        struct folio *folio;
>>>        int nr_swap = 0;
>>>        unsigned long next;
>>> +    int nr, max_nr;
>>>          next = pmd_addr_end(addr, end);
>>>        if (pmd_trans_huge(*pmd))
>>> @@ -640,7 +641,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>            return 0;
>>>        flush_tlb_batched_pending(mm);
>>>        arch_enter_lazy_mmu_mode();
>>> -    for (; addr != end; pte++, addr += PAGE_SIZE) {
>>> +    for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) {
>>> +        nr = 1;
>>>            ptent = ptep_get(pte);
>>>              if (pte_none(ptent))
>>> @@ -655,9 +657,11 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>                  entry = pte_to_swp_entry(ptent);
>>>                if (!non_swap_entry(entry)) {
>>> -                nr_swap--;
>>> -                free_swap_and_cache(entry);
>>> -                pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>> +                max_nr = (end - addr) / PAGE_SIZE;
>>> +                nr = swap_pte_batch(pte, max_nr, entry);
>>> +                nr_swap -= nr;
>>> +                free_swap_and_cache_nr(entry, nr);
>>> +                clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>>>                } else if (is_hwpoison_entry(entry) ||
>>>                       is_poisoned_swp_entry(entry)) {
>>>                    pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index 7dc6c3d9fa83..ef2968894718 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -1637,12 +1637,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>>                    folio_remove_rmap_pte(folio, page, vma);
>>>                folio_put(folio);
>>>            } else if (!non_swap_entry(entry)) {
>>> -            /* Genuine swap entry, hence a private anon page */
>>> +            max_nr = (end - addr) / PAGE_SIZE;
>>> +            nr = swap_pte_batch(pte, max_nr, entry);
>>> +            /* Genuine swap entries, hence private anon pages */
>>>                if (!should_zap_cows(details))
>>>                    continue;
>>> -            rss[MM_SWAPENTS]--;
>>> -            if (unlikely(!free_swap_and_cache(entry)))
>>> -                print_bad_pte(vma, addr, ptent, NULL);
>>> +            rss[MM_SWAPENTS] -= nr;
>>> +            free_swap_and_cache_nr(entry, nr);
>>>            } else if (is_migration_entry(entry)) {
>>>                folio = pfn_swap_entry_folio(entry);
>>>                if (!should_zap_folio(details, folio))
>>> @@ -1665,8 +1666,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>>                pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
>>>                WARN_ON_ONCE(1);
>>>            }
>>> -        pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>> -        zap_install_uffd_wp_if_needed(vma, addr, pte, 1, details, ptent);
>>> +        clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>>
>> For zap_install_uffd_wp_if_needed(), the uffd-wp bit has to match.
>>
>> zap_install_uffd_wp_if_needed() will use the uffd-wp information in
>> ptent->pteval to make a decision whether to place PTE_MARKER_UFFD_WP markers.
>>
>> On mixture, you either lose some or place too many markers.
> 
> What path are you concerned about here? I don't see how what you describe can
> happen. swap_pte_batch() will only give me a batch of actual swap entries, and
> actual swap entries don't contain uffd-wp info, IIUC. If the function gets to a
> "non-swap" swap entry, it bails. I thought the uffd-wp info was populated based
> on the VMA state at swap-in? Are you telling me that it's persisted across the
> swap, per PTE?

Please see zap_install_uffd_wp_if_needed():

if (unlikely(pte_swp_uffd_wp_any(pteval)))
	arm_uffd_pte = true;

The PTEs (swp PTEs to be precise) contain uffd-wp information.
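
So on a mixture within one batch -- a made-up example:

	pte[0]: swap entry, pte_swp_uffd_wp() set
	pte[1]: swap entry, pte_swp_uffd_wp() clear

zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent) only
looks at ptent, the first PTE, so it would place a PTE_MARKER_UFFD_WP
marker for both entries; with the bits the other way around, we'd lose
a marker instead.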

[...]

>>> +    /*
>>> +     * Short-circuit the below loop if none of the entries had their
>>> +     * reference drop to zero.
>>> +     */
>>> +    if (!any_only_cache)
>>> +        goto out;
>>>  
>>> -        count = __swap_entry_free(p, entry);
>>> -        if (count == SWAP_HAS_CACHE)
>>> -            __try_to_reclaim_swap(p, swp_offset(entry),
>>> +    /*
>>> +     * Now go back over the range trying to reclaim the swap cache. This is
>>> +     * more efficient for large folios because we will only try to reclaim
>>> +     * the swap once per folio in the common case. If we do
>>> +     * __swap_entry_free() and __try_to_reclaim_swap() in the same loop, the
>>> +     * latter will get a reference and lock the folio for every individual
>>> +     * page but will only succeed once the swap slot for every subpage is
>>> +     * zero.
>>> +     */
>>> +    for (offset = swp_offset(entry); offset < end; offset += nr) {
>>> +        nr = 1;
>>> +        if (READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
>>
>> Here we use READ_ONCE() only; above, data_race(). Hmmm.
> 
> Yes. I think this is correct.
> 
> READ_ONCE() is a "marked access" which KCSAN understands, so it won't complain
> about it. So data_race() isn't required when READ_ONCE() (or WRITE_ONCE()) is
> used. I believe READ_ONCE() is required here because we don't have a lock and we
> want to make sure we read it in a non-tearing manner.
> 
> We don't need the READ_ONCE() above since we don't care about the exact value -
> only that it's not 0 (because we should be holding a ref). So we do a plain
> access to give the compiler a bit more freedom. But we need to mark it with
> data_race() to stop KCSAN from complaining.

Okay.
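
Just to spell out the two patterns (illustrative fragments, not from the
patch):

	/* The exact value matters and we hold no lock: must not tear. */
	if (READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE)
		/* ... */;

	/* Only zero vs. non-zero matters: a plain access gives the
	 * compiler more freedom, but annotate it so KCSAN knows the
	 * race is intentional. */
	if (data_race(si->swap_map[offset]))
		/* ... */;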

-- 
Cheers,

David / dhildenb

