Date:   Wed, 31 Aug 2022 10:51:45 +0800
From:   Qi Zheng <zhengqi.arch@...edance.com>
To:     Yang Shi <shy828301@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     willy@...radead.org, vbabka@...e.cz, hannes@...xchg.org,
        minchan@...nel.org, rppt@...nel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/7] mm: introduce common struct mm_slot



On 2022/8/31 01:03, Yang Shi wrote:
> On Mon, Aug 29, 2022 at 12:51 PM Andrew Morton
> <akpm@...ux-foundation.org> wrote:
>>
>> On Mon, 29 Aug 2022 22:30:49 +0800 Qi Zheng <zhengqi.arch@...edance.com> wrote:
>>
>>> At present, both the THP and KSM modules have a similar structure,
>>> mm_slot, for organizing and recording the information required for
>>> scanning an mm, and each defines the following identical operation
>>> functions:
>>>
>>>   - alloc_mm_slot
>>>   - free_mm_slot
>>>   - get_mm_slot
>>>   - insert_to_mm_slots_hash
>>>
>>> To de-duplicate this code, this patch introduces a common
>>> struct mm_slot, and subsequent patches will convert THP and
>>> KSM to use it.
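
To make the intended usage concrete, here is a rough sketch of how a user
such as KSM could embed the common slot and get back to its own wrapper
with mm_slot_entry(). The wrapper and field names below are illustrative
only; the actual conversions are done by the later patches in this series:

	struct ksm_mm_slot {
		struct mm_slot slot;		/* common part: hash node, list node, mm */
		struct ksm_rmap_item *rmap_list; /* subsystem-specific state (illustrative) */
	};

	static inline struct ksm_mm_slot *to_ksm_mm_slot(struct mm_slot *slot)
	{
		/* container_of() from the embedded common slot back to the wrapper */
		return mm_slot_entry(slot, struct ksm_mm_slot, slot);
	}

The shared alloc/free/lookup/insert helpers only ever see the embedded
struct mm_slot, while the subsystem keeps operating on its own wrapper.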
>>
>> Seems like a good idea.
>>
>>> --- /dev/null
>>> +++ b/mm/mm_slot.h
>>> @@ -0,0 +1,55 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +
>>> +#ifndef _LINUX_MM_SLOT_H
>>> +#define _LINUX_MM_SLOT_H
>>> +
>>> +#include <linux/hashtable.h>
>>> +#include <linux/slab.h>
>>> +
>>> +/*
>>> + * struct mm_slot - hash lookup from mm to mm_slot
>>> + * @hash: link to the mm_slots hash list
>>> + * @mm_node: link into the mm_slots list
>>> + * @mm: the mm that this information is valid for
>>> + */
>>> +struct mm_slot {
>>> +     struct hlist_node hash;
>>> +     struct list_head mm_node;
>>> +     struct mm_struct *mm;
>>> +};
>>
>> It appears that the presence of an mm_struct in the hash list does not
>> contribute to the mm_struct's refcount?  That's somewhat unexpected.
> 
> I haven't found time to look into the series yet, but when the
> mm/mm_slot was added to the list, mmgrab() was definitely called,
> assuming that was not changed by the series.

Yeah, and this series does not change that.
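
For context, the lifetime rule (which this series keeps as-is) looks roughly
like the following. This is only a simplified sketch in the spirit of
khugepaged/ksm: the cache, hashtable and function names are invented for
illustration, and locking and error handling are elided:

	#include <linux/hashtable.h>
	#include <linux/sched/mm.h>	/* mmgrab(), mmdrop() */
	#include "mm_slot.h"		/* helpers from this patch */

	static struct kmem_cache *example_slot_cache;
	static DEFINE_HASHTABLE(example_slots_hash, 10);

	static int example_enter(struct mm_struct *mm)
	{
		struct mm_slot *slot = alloc_mm_slot(example_slot_cache);

		if (!slot)
			return -ENOMEM;

		insert_to_mm_slots_hash(example_slots_hash, mm, slot);
		mmgrab(mm);	/* pin the mm_struct while its slot is hashed */
		return 0;
	}

	static void example_exit(struct mm_struct *mm)
	{
		struct mm_slot *slot = get_mm_slot(example_slots_hash, mm);

		if (!slot)
			return;

		hash_del(&slot->hash);
		free_mm_slot(example_slot_cache, slot);
		mmdrop(mm);	/* drop the reference taken at insertion */
	}

So the reference is taken by the user of mm_slot rather than by the helpers
themselves, which is what the comment suggested below would document.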

> 
>>
>> It would be helpful to add some words here describing the means by
>> which a user of mm_slot would prevent the mm_struct from getting freed
>> while on the list.  I assume "caller must maintain a reference on the
>> mm_struct while it remains on an mm_slot hash list"?
>>
>>> +#define mm_slot_entry(ptr, type, member) \
>>> +     container_of(ptr, type, member)
>>> +
>>> +static inline void *alloc_mm_slot(struct kmem_cache *cache)
>>> +{
>>> +     if (!cache)     /* initialization failed */
>>> +             return NULL;
>>> +     return kmem_cache_zalloc(cache, GFP_KERNEL);
>>> +}
>>> +
>>> +static inline void free_mm_slot(struct kmem_cache *cache, void *objp)
>>> +{
>>> +     kmem_cache_free(cache, objp);
>>> +}
>>> +
>>> +#define get_mm_slot(_hashtable, _mm)                                        \
>>> +({                                                                          \
>>> +     struct mm_slot *tmp_slot, *mm_slot = NULL;                             \
>>> +                                                                            \
>>> +     hash_for_each_possible(_hashtable, tmp_slot, hash, (unsigned long)_mm) \
>>> +             if (_mm == tmp_slot->mm) {                                     \
>>> +                     mm_slot = tmp_slot;                                    \
>>> +                     break;                                                 \
>>> +             }                                                              \
>>> +                                                                            \
>>> +     mm_slot;                                                               \
>>> +})
>>
>> Is there a reason why this must be implemented as a macro?  A C
>> function would be preferable, although this one may be overly large
>> for inlining.  mm/util.c might suit.
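
One reason these are macros is that hash_for_each_possible() needs the
hashtable array itself: HASH_BITS() is derived from the array's compile-time
size, so the table cannot be passed into a plain function as a pointer. A
non-macro variant would have to take the already-resolved bucket instead;
a hypothetical sketch (name invented) of what that could look like:

	/* hypothetical: caller resolves the bucket, so this can be a real function */
	static inline struct mm_slot *mm_slot_lookup_bucket(struct hlist_head *bucket,
							    struct mm_struct *mm)
	{
		struct mm_slot *slot;

		hlist_for_each_entry(slot, bucket, hash)
			if (slot->mm == mm)
				return slot;

		return NULL;
	}

	/*
	 * ... at the cost of every caller resolving the bucket by hand:
	 *
	 *	slot = mm_slot_lookup_bucket(&mm_slots_hash[hash_min((unsigned long)mm,
	 *				     HASH_BITS(mm_slots_hash))], mm);
	 */

The same constraint applies to insert_to_mm_slots_hash() below, since
hash_add() also expands HASH_BITS() on the array, so moving these out of
line into mm/util.c would need a similar bucket (or table-size) parameter.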
>>
>>> +#define insert_to_mm_slots_hash(_hashtable, _mm, _mm_slot)                  \
>>> +({                                                                          \
>>> +     _mm_slot->mm = _mm;                                                    \
>>> +     hash_add(_hashtable, &_mm_slot->hash, (unsigned long)_mm);             \
>>> +})
>>
>> Does this need to be a macro?
>>
>>
>> And the naming.  Can we please have
>>
>> mm_slot_entry
>> mm_slot_alloc
>> mm_slot_free
>> mm_slot_get
>> mm_slot_insert
>>
>> Also, "get" usually implies that a refcout is taken on the obtained
>> object, so mm_slot_lookup() would be more appropriate.
>>

-- 
Thanks,
Qi
