Message-ID: <4edfc0a2.e66.193757a9f31.Coremail.00107082@163.com>
Date: Fri, 29 Nov 2024 09:13:10 +0800 (CST)
From: "David Wang" <00107082@....com>
To: "Suren Baghdasaryan" <surenb@...gle.com>
Cc: kent.overstreet@...ux.dev, yuzhao@...gle.com, akpm@...ux-foundation.org, 
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/codetag: swap tags when migrate pages

Hi,

At 2024-11-29 03:53:28, "Suren Baghdasaryan" <surenb@...gle.com> wrote:
>On Thu, Nov 28, 2024 at 2:26 AM David Wang <00107082@....com> wrote:
>>
>> The initial solution for codetag adjustment during page migration
>> uses three kinds of low-level plumbing; those steps can be replaced
>> by swapping tags, which needs only one kind of low-level plumbing
>> and makes the code clearer.
>
>This description does not explain the real issue. I would suggest
>something like:
>
>The current solution to adjust codetag references during page migration
>is done in 3 steps:
>1. sets the codetag reference of the old page as empty (not pointing
>to any codetag);
>2. subtracts counters of the new page to compensate for its own allocation;
>3. sets codetag reference of the new page to point to the codetag of
>the old page.
>This does not work when CONFIG_MEM_ALLOC_PROFILING_DEBUG=n because
>set_codetag_empty() becomes a no-op. Instead, let's simply swap the
>codetag references so that the new page references the old codetag and
>the old page references the new codetag. This way the accounting stays
>valid and the logic makes more sense.
>
>Fixes: e0a955bf7f61 ("mm/codetag: add pgalloc_tag_copy()")
>
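(For context on why step 1 falls apart: with CONFIG_MEM_ALLOC_PROFILING_DEBUG=n
the debug helper compiles to an empty stub, roughly like the excerpt below. This
is paraphrased from memory of include/linux/alloc_tag.h, not copied from the tree
this patch targets, so "setting the reference to empty" silently does nothing and
the old page keeps referencing the original codetag.)

#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
#define CODETAG_EMPTY	((void *)1)

static inline void set_codetag_empty(union codetag_ref *ref)
{
	if (ref)
		ref->ct = CODETAG_EMPTY;	/* mark the ref as intentionally empty */
}
#else
/* debug off: the call compiles away entirely */
static inline void set_codetag_empty(union codetag_ref *ref) {}
#endif
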
>>
>> Signed-off-by: David Wang <00107082@....com>
>> Link: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/
> The Link: above seems unusual. Maybe use Closes: instead, like this:
>
>Closes: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/

>

Copy that~!
 

>> ---
>>  include/linux/pgalloc_tag.h |  4 ++--
>>  lib/alloc_tag.c             | 35 +++++++++++++++++++----------------
>>  mm/migrate.c                |  2 +-
>>  3 files changed, 22 insertions(+), 19 deletions(-)
>>
>> diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
>> index 0e43ab653ab6..3469c4b20105 100644
>> --- a/include/linux/pgalloc_tag.h
>> +++ b/include/linux/pgalloc_tag.h
>> @@ -231,7 +231,7 @@ static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr)
>>  }
>>
>>  void pgalloc_tag_split(struct folio *folio, int old_order, int new_order);
>> -void pgalloc_tag_copy(struct folio *new, struct folio *old);
>> +void pgalloc_tag_swap(struct folio *new, struct folio *old);
>>
>>  void __init alloc_tag_sec_init(void);
>>
>> @@ -245,7 +245,7 @@ static inline struct alloc_tag *pgalloc_tag_get(struct page *page) { return NULL
>>  static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr) {}
>>  static inline void alloc_tag_sec_init(void) {}
>>  static inline void pgalloc_tag_split(struct folio *folio, int old_order, int new_order) {}
>> -static inline void pgalloc_tag_copy(struct folio *new, struct folio *old) {}
>> +static inline void pgalloc_tag_swap(struct folio *new, struct folio *old) {}
>>
>>  #endif /* CONFIG_MEM_ALLOC_PROFILING */
>>
>> diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
>> index 2414a7ee7ec7..b45efde50c40 100644
>> --- a/lib/alloc_tag.c
>> +++ b/lib/alloc_tag.c
>> @@ -189,26 +189,29 @@ void pgalloc_tag_split(struct folio *folio, int old_order, int new_order)
>>         }
>>  }
>>
>> -void pgalloc_tag_copy(struct folio *new, struct folio *old)
>> +void pgalloc_tag_swap(struct folio *new, struct folio *old)
>>  {
>> -       union pgtag_ref_handle handle;
>> -       union codetag_ref ref;
>> -       struct alloc_tag *tag;
>> +       union pgtag_ref_handle handles[2];
>> +       union codetag_ref refs[2];
>> +       struct alloc_tag *tags[2];
>> +       struct folio *folios[2] = {new, old};
>> +       int i;
>>
>> -       tag = pgalloc_tag_get(&old->page);
>> -       if (!tag)
>> -               return;
>> +       for (i = 0; i < 2; i++) {
>> +               tags[i] = pgalloc_tag_get(&folios[i]->page);
>> +               if (!tags[i])
>> +                       return;
>> +               if (!get_page_tag_ref(&folios[i]->page, &refs[i], &handles[i]))
>> +                       return;
>
>If either of the above getters fails on the second iteration, you will
>miss calling put_page_tag_ref(handles[0]) and leak the page_tag_ref
>you obtained on the first iteration. It might be cleaner to drop the
>use of arrays and use new/old.


Thanks for catching this. I will make the change.
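Roughly along these lines, with explicit old/new locals so the handle taken for
the first folio is always released on the early-return path (untested sketch,
reusing only the helpers already visible in the patch):

void pgalloc_tag_swap(struct folio *new, struct folio *old)
{
	union pgtag_ref_handle handle_old, handle_new;
	union codetag_ref ref_old, ref_new;
	struct alloc_tag *tag_old, *tag_new;

	tag_old = pgalloc_tag_get(&old->page);
	tag_new = pgalloc_tag_get(&new->page);
	if (!tag_old || !tag_new)
		return;

	if (!get_page_tag_ref(&old->page, &ref_old, &handle_old))
		return;
	if (!get_page_tag_ref(&new->page, &ref_new, &handle_new)) {
		/* don't leak the handle already taken for the old page */
		put_page_tag_ref(handle_old);
		return;
	}

	/* swap: each page now references the other page's codetag */
	__alloc_tag_ref_set(&ref_old, tag_new);
	update_page_tag_ref(handle_old, &ref_old);

	__alloc_tag_ref_set(&ref_new, tag_old);
	update_page_tag_ref(handle_new, &ref_new);

	put_page_tag_ref(handle_old);
	put_page_tag_ref(handle_new);
}
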



>
>> +       }
>>
>> -       if (!get_page_tag_ref(&new->page, &ref, &handle))
>> -               return;
>> +       swap(tags[0], tags[1]);
>>
>> -       /* Clear the old ref to the original allocation tag. */
>> -       clear_page_tag_ref(&old->page);
>> -       /* Decrement the counters of the tag on get_new_folio. */
>> -       alloc_tag_sub(&ref, folio_size(new));
>> -       __alloc_tag_ref_set(&ref, tag);
>> -       update_page_tag_ref(handle, &ref);
>> -       put_page_tag_ref(handle);
>> +       for (i = 0; i < 2; i++) {
>> +               __alloc_tag_ref_set(&refs[i], tags[i]);
>> +               update_page_tag_ref(handles[i], &refs[i]);
>> +               put_page_tag_ref(handles[i]);
>> +       }
>>  }
>>
>>  static void shutdown_mem_profiling(bool remove_file)
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 2ce6b4b814df..cc68583c86f9 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -745,7 +745,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
>>                 folio_set_readahead(newfolio);
>>
>>         folio_copy_owner(newfolio, folio);
>> -       pgalloc_tag_copy(newfolio, folio);
>> +       pgalloc_tag_swap(newfolio, folio);
>>
>>         mem_cgroup_migrate(folio, newfolio);
>>  }
>> --
>> 2.39.2
>>
