Date:   Mon, 16 Aug 2021 21:40:06 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Yang Shi <shy828301@...il.com>
Cc:     HORIGUCHI NAOYA(堀口 直也) 
        <naoya.horiguchi@....com>, Oscar Salvador <osalvador@...e.de>,
        tdmackey@...tter.com, Andrew Morton <akpm@...ux-foundation.org>,
        Jonathan Corbet <corbet@....net>,
        Linux MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] mm: hwpoison: don't drop slab caches for offlining
 non-LRU page

On 16.08.21 21:37, Yang Shi wrote:
> On Mon, Aug 16, 2021 at 12:15 PM David Hildenbrand <david@...hat.com> wrote:
>>
>> On 16.08.21 20:09, Yang Shi wrote:
>>> In the current implementation of soft offline, if a non-LRU page is met,
>>> all the slab caches will be dropped in an attempt to free the page and
>>> then offline it.  But if the page is not a slab page, all that effort is
>>> wasted.  Even if it is a slab page, there is no guarantee the page can
>>> be freed at all.
>>
>> ... but there is a chance it could be and the current behavior is
>> actually helpful in some setups.
> 
> I don't disagree it is helpful in some cases, but the question is how
> likely it is to help and whether the cost is worth it. For a non-slab
> page (of course, non-LRU too), dropping slab caches doesn't make any
> sense. Even if it is a slab page, it helps only if the slab is
> reclaimable. And even if it is a reclaimable slab, dropping slab
> caches can't guarantee all the objects on the same page are dropped.
> 
> IMHO the likelihood is not worth the cost and side effects, for
> example the unusable system.
> 
>>
>> [...]
>>
>>> The lockup made the machine quite unusable.  And it also wiped out most
>>> of the workingset: the reclaimable slab caches were reduced from 12G to
>>> 300MB, and the page cache was decreased from 17G to 4G.
>>>
>>> But the most disappointing thing is that all the effort doesn't get the
>>> page offlined; it just returns:
>>>
>>> soft_offline: 0x1469f2: unknown non LRU page type 5ffff0000000000 ()
>>>
>>
>> In your example, yes. I had a look at the introducing commit:
>> facb6011f399 ("HWPOISON: Add soft page offline support")
>>
>> "
>>       When the page is not free or LRU we try to free pages
>>       from slab and other caches. The slab freeing is currently
>>       quite dumb and does not try to focus on the specific slab
>>       cache which might own the page. This could be potentially
>>       improved later.
>> "
>>
>> I wonder, if instead of removing it altogether, we could actually
>> improve it as envisioned.
>>
>> To be precise, for alloc_contig_range() it would also make sense to be
>> able to shrink only in a specific physical memory range; this here seems
>> to be a similar thing. (actually, alloc_contig_range(), actual memory
>> offlining and hw poisoning/soft-offlining have a lot in common)
>>
>> Unfortunately, the last time I took a brief look at teaching shrinkers
>> to be range-aware, it turned out to be a lot of work ... so maybe this
>> is really a long-term goal, to be mitigated in the meantime by disabling
>> it, if it turns out to be more of a problem than an actual help.
> 
> Do you mean a physical page range? Yes, it would need a lot of work.
> TBH, I don't think it is feasible for the time being.
> 
> The problem is that the slabs targeted by shrinkers are managed by
> objects rather than pages. For example, dentry and inode objects (the
> biggest consumers of reclaimable slab) are linked on an LRU list, and
> the shrinkers traverse that LRU to reclaim the objects. Objects that
> are adjacent on the LRU are not guaranteed to fall within the same
> range of physical pages.

Right, essentially you would have to look at each individual object and 
test whether it falls into the physical range of interest. Not that it 
can't be done, I guess, but it looks like a lot of work.

-- 
Thanks,

David / dhildenb
