Message-ID: <20210812212314.4fkrmzebluyl3umo@box.shutemov.name>
Date: Fri, 13 Aug 2021 00:23:14 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Joerg Roedel <jroedel@...e.de>, Andi Kleen <ak@...ux.intel.com>,
Borislav Petkov <bp@...en8.de>,
Andy Lutomirski <luto@...nel.org>,
Sean Christopherson <seanjc@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Kuppuswamy Sathyanarayanan
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
David Rientjes <rientjes@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Tom Lendacky <thomas.lendacky@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Varad Gautam <varad.gautam@...e.com>,
Dario Faggioli <dfaggioli@...e.com>, x86@...nel.org,
linux-mm@...ck.org, linux-coco@...ts.linux.dev,
linux-kernel@...r.kernel.org,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH 1/5] mm: Add support for unaccepted memory
On Thu, Aug 12, 2021 at 01:59:01PM -0700, Dave Hansen wrote:
> On 8/12/21 1:49 PM, Kirill A. Shutemov wrote:
> > On Thu, Aug 12, 2021 at 07:14:20AM -0700, Dave Hansen wrote:
> >> On 8/12/21 1:19 AM, Joerg Roedel wrote:
> >> maybe_accept_page(struct page *page)
> >> {
> >> 	unsigned long huge_pfn = page_to_phys(page) / PMD_SIZE;
> >>
> >> 	/* Test the bit before taking any locks: */
> >> 	if (test_bit(huge_pfn, accepted_bitmap))
> >> 		return;
> >>
> >> 	spin_lock_irq(&accept_lock);
> >> 	/* Retest inside the lock: */
> >> 	if (test_bit(huge_pfn, accepted_bitmap))
> >> 		goto out;
> >> 	tdx_accept_page(page, PMD_SIZE);
> >> 	set_bit(huge_pfn, accepted_bitmap);
> >> out:
> >> 	spin_unlock_irq(&accept_lock);
> >> }
> >>
> >> That's still not great. It's still a global lock and the lock is still
> >> held for quite a while because that accept isn't going to be lightning
> >> fast. But, at least it's not holding any *other* locks. It also
> >> doesn't take any locks in the fast path where the 2MB page was already
> >> accepted.
> >
> > I expect the cache line with the bitmap to bounce around during warm-up.
> > One cache line covers a gig of RAM.
>
> The bitmap bouncing around isn't going to really matter when you have a
> global lock protecting it from writes.
The idea with the static key would not work if we mark shared memory as
unaccepted there.
> > It's also not clear at all at what point the static key has to be
> > switched. We don't have any obvious point where we can say we are done
> > with accepting (unless you advocate for proactive accepting).
>
> Two easy options:
> 1. Run over the bitmap and count the bits left. That can be done
> outside the lock even.
> 2. Keep a counter of the number of bits set in the bitmap.
>
> > I like PageOffline() because we only need to consult the cache the page
> > allocator already has in hand before looking into the bitmap.
>
> I like it too. But, it's really nasty if the value is only valid deep
> in the allocator.
>
> We could keep the PageOffline() thing and then move it to some other
> field in 'struct page' that's only valid between ClearPageOffline() and
> prep_new_page(). Some magic value that says: "This page has not yet
> been accepted, you better check the bitmap." Something like:
>
> if (TestClearPageOffline(page))
> 	page->private = 0xdeadbeef;
>
> and then check page->private in prep_new_page(). There should be plenty
> of 'struct page' space to hijack in that small window.
PageOffline() is encoded in mapcount, and check_new_page_bad() would complain
if mapcount is not -1.
> BTW, I was going to actually try and hack something up, but I got
> annoyed that your patches don't apply upstream and gave up. A git tree
> with all of the dependencies would be nice. <hint, hint>
Okay.
--
Kirill A. Shutemov