Message-ID: <0eccea46-648d-ff70-dcc6-fdca88ff1234@amd.com>
Date: Wed, 3 Aug 2022 16:03:35 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: Dave Hansen <dave.hansen@...el.com>, linux-kernel@...r.kernel.org,
x86@...nel.org
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
"H. Peter Anvin" <hpa@...or.com>,
Michael Roth <michael.roth@....com>,
Joerg Roedel <jroedel@...e.de>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH v1.1 1/2] x86/sev: Use per-CPU PSC structure in prep for
unaccepted memory support
On 8/3/22 13:24, Dave Hansen wrote:
> On 8/3/22 11:21, Tom Lendacky wrote:
>>> Would it be simpler to just do a spin_trylock_irqsave()? You fall back
>>> to early_set_pages_state() whenever you can't acquire the lock.
>>
>> I was looking at that and can definitely go that route if this approach
>> is preferred.
>
> I prefer it for sure.
>
> This whole iteration does look good to me versus the per-cpu version, so
> I say go ahead with doing this for v2 once you wait a bit for any more
> feedback.
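Just to make sure we're talking about the same thing, the trylock approach
would look something like the sketch below (names are approximate here; the
lock, the shared descriptor and the helper signatures are only meant to
illustrate the idea, not the actual patch):

static DEFINE_SPINLOCK(psc_desc_lock);
static struct snp_psc_desc psc_desc;

static void set_pages_state(unsigned long vaddr, unsigned long npages, int op)
{
	unsigned long flags;

	/* If the lock is contended, fall back to the GHCB MSR protocol path */
	if (!spin_trylock_irqsave(&psc_desc_lock, flags)) {
		early_set_pages_state(__pa(vaddr), npages, op);
		return;
	}

	__set_pages_state(&psc_desc, vaddr, vaddr + npages * PAGE_SIZE, op);

	spin_unlock_irqrestore(&psc_desc_lock, flags);
}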
I'm still concerned about the spinlock and its performance impact. What if I
reduce the number of entries in the PSC structure to, say, 64, which shrinks
the struct to 520 bytes? Any issue if that is put on the stack instead? It
definitely makes things less complicated and feels like a good compromise
between the size of the structure and the number of PSC VMGEXIT requests.
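For the arithmetic: the PSC header is 8 bytes and each entry is 8 bytes, so
64 entries comes to 8 + 64 * 8 = 520 bytes. Something along these lines
(sketch only, assuming the existing psc_hdr/psc_entry layout stays the same):

#define VMGEXIT_PSC_MAX_ENTRY		64	/* reduced from 253 */

struct snp_psc_desc {
	struct psc_hdr hdr;				/* 8 bytes */
	struct psc_entry entries[VMGEXIT_PSC_MAX_ENTRY];/* 64 * 8 = 512 bytes */
} __packed;						/* 520 bytes total, stack-friendly */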
Thanks,
Tom