Date:   Tue, 30 May 2023 09:42:33 -0600
From:   Khalid Aziz <khalid.aziz@...cle.com>
To:     David Hildenbrand <david@...hat.com>,
        Matthew Wilcox <willy@...radead.org>,
        John Hubbard <jhubbard@...dia.com>
Cc:     Steven Sistare <steven.sistare@...cle.com>,
        akpm@...ux-foundation.org, ying.huang@...el.com,
        mgorman@...hsingularity.net, baolin.wang@...ux.alibaba.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Khalid Aziz <khalid@...nel.org>
Subject: Re: [PATCH v4] mm, compaction: Skip all non-migratable pages during
 scan

On 5/29/23 03:25, David Hildenbrand wrote:
> On 29.05.23 02:31, Matthew Wilcox wrote:
>> On Sun, May 28, 2023 at 04:49:52PM -0700, John Hubbard wrote:
>>> On 5/26/23 20:18, Matthew Wilcox wrote:
>>>> On Fri, May 26, 2023 at 07:11:05PM -0700, John Hubbard wrote:
>>>>>> So any user with 1024 processes can fragment physical memory? :/
>>>>>>
>>>>>> Sorry, I'd like to minimize the usage of folio_maybe_dma_pinned().
>>>>>
>>>>> I was actually thinking that we should minimize any more cases of
>>>>> fragile mapcount and refcount comparison, which then leads to
>>>>> Matthew's approach here!
>>>>
>>>> I was wondering if we shouldn't make folio_maybe_dma_pinned() a little
>>>> more accurate.  eg:
>>>>
>>>>      if (folio_test_large(folio))
>>>>              return atomic_read(&folio->_pincount) > 0;
>>>>      return (unsigned)(folio_ref_count(folio) - folio_mapcount(folio)) >=
>>>>                      GUP_PIN_COUNTING_BIAS;
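
If I read the current code right, the order-0 branch in mainline does not
subtract the mapcount, which is where the "1024 processes" number comes
from. A minimal sketch of the difference (function names made up for
contrast; assuming the mainline definitions as of this thread, with
GUP_PIN_COUNTING_BIAS == 1024):

	/* Mainline today (order-0 case of folio_maybe_dma_pinned()): */
	static bool order0_maybe_pinned_today(struct folio *folio)
	{
		/* Any 1024 references count, so a folio mapped by 1024
		 * processes looks pinned with zero actual pins. */
		return folio_ref_count(folio) >= GUP_PIN_COUNTING_BIAS;
	}

	/* With the proposal above, mappings contribute equally to both
	 * counters and cancel out; only non-mapping references (pins,
	 * speculative refs) can push the difference past the bias: */
	static bool order0_maybe_pinned_proposed(struct folio *folio)
	{
		return (unsigned)(folio_ref_count(folio) -
				  folio_mapcount(folio)) >=
		       GUP_PIN_COUNTING_BIAS;
	}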
>>>
>>> I'm trying to figure out what might be wrong with that, but it seems
>>> OK. We must have talked about this earlier, but I recall vaguely that
>>> there was not a lot of concern about the case of a page being mapped
>>> more than 1024 times. Because pinned or not, it's likely to be
>>> effectively locked into memory due to LRU effects. As mentioned here, too.
>>
>> That was my point of view, but David convinced me that a hostile process
>> can effectively lock its own memory into place.
>>
> 
> 1) My opinion on this optimization
> 
> Before I start going into detail, let me first phrase my opinion so we are on the same page:
> 
> "a tiny fraction of Linux installations makes heavy use of long-term pinning -- the *single* mechanism that completely 
> *destroys* the whole purpose of memory compaction -- and the users complain about memory compaction overhead. So we are 
> talking about optimizing for that by eventually harming *everybody else*."
> 
> Ehm ... I'm all for reasonable optimization, but not at any price.
> 
> We don't care about a handful of long-term pinned pages in the system; this is really about vfio long-term pinning a 
> significant amount of system RAM, and we only care about shmem here.
> 
> 
> *maybe* there is an issue with page migration when we have many page mappings, but (a) it's a separate issue, to be 
> dealt with separately and not buried into such checks; (b) it's unclear how many page mappings are too many, and the 
> magic number 1024 is just a random number; (c) it needs much finer control (hostile processes).
> 
> 
> 2) mapcount vs. pagecount
> 
> Now, doing these mapcount vs. pagecount checks is perfectly reasonable (see mm/ksm.c) as long as we know what we're doing. 
> For example, we have to make sure that a possible compound page cannot get split concurrently (e.g., hold a reference). 
> It's been used forever, I consider it stable.
> 
> I completely agree that we should be careful with such mapcount vs. pagecount checks, and if we can use something 
> better, let's use something *better*.
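
For anyone skimming: a minimal sketch of such a comparison done carefully
(the helper name is made up, and the accounting deliberately ignores
extra references such as the pagecache's):

	static bool folio_only_mapped(struct folio *folio)
	{
		bool only_mapped;

		/* Take our own reference first: a concurrent THP split
		 * cannot succeed against an extra reference, so the
		 * counters keep meaning the same thing while we look. */
		if (!folio_try_get(folio))
			return false;	/* folio is being freed */
		only_mapped = folio_ref_count(folio) ==
			      folio_mapcount(folio) + 1;	/* +1: our ref */
		folio_put(folio);
		return only_mapped;
	}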

When we have a reliable folio_maybe_dma_longterm_pinned() function, it will be better to call that instead of doing a 
refcount vs mapcount check. Until that better function for checking pinned pages is in place, may I propose that the 
current patch, though not optimal, fixes a customer problem and is a good enough working solution. When a better 
function is in place, the page_has_extra_refs() function can be updated to rely on it instead of the refcount vs 
mapcount check.
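
As a sketch of that transition (simplified, not the exact helper from the
patch; the accounting below ignores the swapcache and other corner cases):

	static bool page_has_extra_refs(struct page *page,
					struct address_space *mapping)
	{
		/*
		 * Expected references: one per mapping, one for the
		 * pagecache (if any), one for private data. Anything
		 * beyond that is treated as a potential pin. Once a
		 * reliable folio_maybe_dma_longterm_pinned() exists,
		 * this arithmetic collapses into a single call to it.
		 */
		return page_count(page) - page_mapcount(page) >
		       !!mapping + page_has_private(page);
	}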

Thanks,
Khalid


> 
> 
> 3) page_maybe_dma_pinned()
> 
> Now, why do I dislike bringing up page_maybe_dma_pinned() [IOW, why is it not better]? Besides it ignoring FOLL_GET for 
> now (which might be fixed at some point):
> 
> I think we agree that order-0 pages are the problem, because we get guaranteed false positives with many mappings (not 
> just on speculative page pinnings). For these order-0 pages, it's perfectly reasonable to check page_maybe_dma_pinned() 
> *as long as* we know the number of mappings is very small.
> 
> I don't consider anon pages the issue here, we barely get 1024 mappings (not even with KSM), and it's much harder to 
> exploit because you cannot simply get new mappings via mmap(), only via fork().
> 
> In fact, we could optimize easily for order-0 anon pages if we'd need to: I have a patch lying around, it just wasn't 
> really worth it for now, because there is only a single relevant page_maybe_dma_pinned() call in vmscan that could benefit:
> 
> https://github.com/davidhildenbrand/linux/commit/0575860d064694d4e2f307b2c20a880a6a7b59ab
> 
> We cannot do the same for pagecache pages, so we would possibly introduce harm by carelessly checking 
> page_maybe_dma_pinned() on pages with many mappings.
> 
> 
> 4) folio_maybe_dma_longterm_pinned() ?
> 
> I wondered yesterday whether we'd want something like folio_maybe_dma_longterm_pinned() here. Essentially using what we 
> learned about long-term pinning of fs pages:
> 
> (1) ZONE_MOVABLE, MIGRATE_CMA -> "return false;"
> (2) If !anon, !hugetlb, !shmem -> "return false;"
> (3) "return folio_maybe_dma_pinned()"
> 
> Yes, above would easily create false-positives for short-term pinned pages (anon/hugetlb/shmem), but would never create 
> false-positives for any other page (shared library ...).
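
For what it's worth, a sketch of what (1)-(3) could look like
(illustrative only: is_migrate_cma_page() depends on CONFIG_CMA, and the
anon test must short-circuit before shmem_mapping() sees a NULL mapping):

	static bool folio_maybe_dma_longterm_pinned(struct folio *folio)
	{
		/* (1) Long-term pins are migrated off these ranges up
		 *     front, so nothing here is long-term pinned. */
		if (folio_zonenum(folio) == ZONE_MOVABLE ||
		    is_migrate_cma_page(&folio->page))
			return false;

		/* (2) Only anon, hugetlb and shmem folios can carry a
		 *     long-term pin; other file-backed folios (shared
		 *     libraries, ...) cannot. */
		if (!folio_test_anon(folio) && !folio_test_hugetlb(folio) &&
		    !shmem_mapping(folio_mapping(folio)))
			return false;

		/* (3) Still a possible false positive for short-term
		 *     pins, as noted below. */
		return folio_maybe_dma_pinned(folio);
	}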
> 
> 
> We would use it in the following way:
> 
> bool skip_folio_in_isolation(struct folio *folio)
> {
>      /*
>       * Avoid skipping pages that are short-term pinned, the pin
>       * might go away any moment and we'll succeed to migrate.
>       *
>       * We get false positives for short-term pinned anon, shmem and
>       * hugetlb pages for now, but such short-term pins are transient.
>       */
>      if (!folio_maybe_dma_longterm_pinned(folio))
>          return false;
>      /*
>       * order-0 pages with many mappings can easily be confused
>       * for pinned pages and this could be exploited by
>       * malicious user-space to cause fragmentation. This is only
>       * an optimization, so if a page (especially shmem) is mapped
>       * many times, we'll rather try migrating it instead of
>       * accidentally skipping it all the time.
>       */
>      return folio_order(folio) != 0 || total_mappings <= 32;
> }
> 
> Someone long-term pins a shmem page with many mappings? Too bad, we don't optimize for that and still try migrating it.
> 
> 
> BUT, I am still confused about whether we want to check here for "any additional references" (which is what the 
> mapcount vs. refcount check gives us) or for folio_maybe_dma_longterm_pinned().
> 
> Of course, we could similarly write a variant of skip_folio_in_isolation:
> 
> bool skip_folio_in_isolation(struct folio *folio)
> {
>      /*
>       * If a page is not pinned, try migrating it. Note that this
>       * does not consider any FOLL_GET used for DMA yet.
>       */
>      if (!folio_maybe_dma_pinned(folio))
>          return false;
>      /*
>       * order-0 pages with many mappings can easily be confused
>       * for pinned pages and this could be exploited by
>       * malicious user-space to cause fragmentation. This is only
>       * an optimization, so if a page is mapped
>       * many times, we'll rather try migrating it instead of
>       * accidentally skipping it all the time.
>       */
>      return folio_order(folio) != 0 || total_mappings <= 32;
> }
> 
> 
> As long as FOLL_GET is still used for DMA, the mapcount vs. pagecount checks might be better ... but it depends on if we 
> care about short-term or long-term pinned pages here.
> 
>>> Anyway, sure.
>>>
>>> A detail:
>>>
>>> The unsigned cast, I'm not sure that helps or solves anything, right?
>>> That is, other than bugs, is it possible to get refcount < mapcount?
> 
> BUG IMHO.
> 
>>>
>>> And if it's only due to bugs, then the casting, again, isn't likely
>>> to mitigate the fallout from whatever mess the bug caused.
>>
>> I wasn't thinking too hard about the cast.  If the caller has the folio
>> lock, I don't think it's possible for refcount < mapcount.  This caller
>> has a refcount, but doesn't hold the lock, so it is possible for them
>> to read mapcount first, then have both mapcount and refcount decremented
>> and see refcount < mapcount.
>>
>> I don't think it matters too much.  We don't hold the folio lock, so
>> it might transition from pinned to unpinned as much as a refcount might
>> be decremented or a mapcount incremented.  What's important is that a
>> hostile process can't prevent memory from being moved indefinitely.
>>
>> David, have I missed something else?
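
For what it's worth, one worked example of that transient case: read
mapcount == 2, race with two full unmaps, then read refcount == 1. The
subtraction yields 1 - 2 == -1, and (unsigned)-1 == UINT_MAX >=
GUP_PIN_COUNTING_BIAS, so the folio briefly reads as "maybe pinned" and
is skipped for this pass; without the cast, the signed -1 would compare
as "not pinned" instead. Either way it self-corrects on the next scan,
which matches the point above that only indefinite blocking matters.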
> 
> 
> What I learned a while ago from staring at the code in mm/ksm.c:write_protect_page() for too long is that:
> 
> (1) Mapping a page first increments the refcount, then the mapcount
> (2) Unmapping a page first decrements the mapcount, then the refcount
> 
> So the refcount is supposed to be always larger than the mapcount. Especially if you take a snapshot of both (read 
> first the mapcount, then the refcount).
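
Spelled out with illustrative counts (a page holding one extra,
non-mapping reference from the pagecache):

	map:    refcount 1 -> 2, then mapcount 0 -> 1   (refcount first)
	unmap:  mapcount 1 -> 0, then refcount 2 -> 1   (mapcount first)

So refcount >= mapcount holds at every intermediate step, and a snapshot
taken in that read order only sees refcount < mapcount through the
multi-decrement race Matthew describes above; the worst outcome is one
wrongly skipped isolation attempt.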
> 
> A hostile process wouldn't be able to block compaction here forever, even if we accidentally made the wrong call 
> once when deciding whether to isolate a page. It would work on the next attempt.
> 
> That's the difference to page_maybe_dma_pinned(), which can be made to consistently block compaction.
> 
> 
> [sorry for the lengthy mail]
> 
