Message-ID: <YEuiI44IRjBOQ8Wy@google.com>
Date:   Fri, 12 Mar 2021 09:17:23 -0800
From:   Minchan Kim <minchan@...nel.org>
To:     David Hildenbrand <david@...hat.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, joaodias@...gle.com,
        surenb@...gle.com, cgoldswo@...eaurora.org, willy@...radead.org,
        mhocko@...e.com, vbabka@...e.cz, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v3 3/3] mm: fs: Invalidate BH LRU during page migration

On Fri, Mar 12, 2021 at 10:33:48AM +0100, David Hildenbrand wrote:
> On 12.03.21 10:03, David Hildenbrand wrote:
> > On 10.03.21 17:14, Minchan Kim wrote:
> > > Pages containing buffer_heads that are in one of the per-CPU
> > > buffer_head LRU caches will be pinned and thus cannot be migrated.
> > > This can prevent CMA allocations from succeeding, which are often used
> > > on platforms with co-processors (such as a DSP) that can only use
> > > physically contiguous memory. It can also prevent memory
> > > hot-unplugging from succeeding, which involves migrating at least
> > > MIN_MEMORY_BLOCK_SIZE bytes of memory, which ranges from 8 MiB to 1
> > > GiB based on the architecture in use.
> > 
> > Actually, it's memory_block_size_bytes() worth of memory that fails to get
> > offlined, which can be even bigger (IIRC, 128 MiB..2 GiB on x86-64). That
> > in turn prevents bigger granularity (e.g., a whole DIMM) from getting
> > unplugged.
> > 
> > > 
> > > Correspondingly, invalidate the BH LRU caches before a migration
> > > starts and stop any buffer_head from being cached in the LRU caches,
> > > until migration has finished.
> > 
> > Sounds sane to me.
> > 
> 
> Diving a bit into the code, I am wondering:
> 
> 
> a) Are these buffer head pages marked as movable?
> 
> IOW, are they either PageLRU() or __PageMovable()?
> 
> 
> b) How do these pages end up on ZONE_MOVABLE or MIGRATE_CMA?
> 
> I assume these pages come via
> alloc_page_buffers()->alloc_buffer_head()->kmem_cache_zalloc(GFP_NOFS |
> __GFP_ACCOUNT)
> 
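(For reference, a rough sketch of the allocation path above, simplified from
fs/buffer.c: the buffer_head itself is a plain slab object, so it is neither
PageLRU() nor __PageMovable(); the page that ends up pinned is the page cache
page the bh is attached to via bh->b_page.)

        struct buffer_head *alloc_buffer_head(gfp_t gfp_flags)
        {
                /* bh_cachep is the buffer_head slab cache; the per-CPU bh
                 * accounting that follows in the real code is elided */
                struct buffer_head *ret = kmem_cache_zalloc(bh_cachep, gfp_flags);

                if (ret)
                        INIT_LIST_HEAD(&ret->b_assoc_buffers);
                return ret;
        }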

It's indirect, so it was not clear:

try_to_release_page
    try_to_free_buffers
        buffer_busy
            failed
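
(That's the check that trips, roughly as it reads in fs/buffer.c: the
reference a bh holds while parked in a per-CPU bh_lru slot keeps b_count
elevated, so buffer_busy() reports it busy and try_to_free_buffers() bails
out, leaving the attached page unfreeable and thus unmigratable.)

        static inline int buffer_busy(struct buffer_head *bh)
        {
                /* busy if someone still holds a reference (b_count != 0),
                 * or the buffer is dirty or locked */
                return atomic_read(&bh->b_count) |
                        (bh->b_state & ((1 << BH_Dirty) | (1 << BH_Lock)));
        }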

Yeah, the comment is misleading. This one would be better:

        /*
         * The refcount of a buffer_head in bh_lru prevents dropping the
         * attached page (i.e., try_to_free_buffers() fails), so it could
         * cause page migration to fail.
         * Skip putting upcoming bh into bh_lru until migration is done.
         */
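
(Roughly where it would sit; a sketch only, assuming the lru_cache_disabled()
helper added earlier in this series is what signals that a migration is in
flight:)

        static void bh_lru_install(struct buffer_head *bh)
        {
                check_irqs_on();
                bh_lru_lock();

                /*
                 * The refcount of a buffer_head in bh_lru prevents dropping the
                 * attached page (i.e., try_to_free_buffers() fails), so it could
                 * cause page migration to fail.
                 * Skip putting upcoming bh into bh_lru until migration is done.
                 */
                /* assumption: lru_cache_disabled() is the migration-in-flight
                 * signal introduced earlier in this series */
                if (lru_cache_disabled()) {
                        bh_lru_unlock();
                        return;
                }

                /* ... existing per-CPU LRU insertion logic unchanged ... */

                bh_lru_unlock();
        }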
