Message-ID: <eb69188fead8b96a4a7cfbfe4c586c19ca233819.camel@amazon.com>
Date: Mon, 20 Apr 2020 00:24:57 +0000
From: "Singh, Balbir" <sblbir@...zon.com>
To: "tglx@...utronix.de" <tglx@...utronix.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: "keescook@...omium.org" <keescook@...omium.org>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"benh@...nel.crashing.org" <benh@...nel.crashing.org>,
"jpoimboe@...hat.com" <jpoimboe@...hat.com>,
"x86@...nel.org" <x86@...nel.org>,
"dave.hansen@...el.com" <dave.hansen@...el.com>
Subject: Re: [PATCH v3 4/5] arch/x86: Optionally flush L1D on context switch
On Sat, 2020-04-18 at 12:17 +0200, Thomas Gleixner wrote:
>
> "Singh, Balbir" <sblbir@...zon.com> writes:
> > On Fri, 2020-04-17 at 16:41 +0200, Thomas Gleixner wrote:
> > > Balbir Singh <sblbir@...zon.com> writes:
> > > static void *l1d_flush_pages;
> > > static DEFINE_MUTEX(l1d_flush_mutex);
> > >
> > > int l1d_flush_init(void)
> > > {
> > > 	int ret;
> > >
> > > 	if (static_cpu_has(X86_FEATURE_FLUSH_L1D) || l1d_flush_pages)
> > > 		return 0;
> > >
> > > 	mutex_lock(&l1d_flush_mutex);
> > > 	if (!l1d_flush_pages)
> > > 		l1d_flush_pages = l1d_flush_alloc_pages();
> > > 	ret = l1d_flush_pages ? 0 : -ENOMEM;
> > > 	mutex_unlock(&l1d_flush_mutex);
> > > 	return ret;
> > > }
> > > EXPORT_SYMBOL_GPL(l1d_flush_init);
> > >
> > > which removes the export of l1d_flush_alloc_pages() and gets rid of the
> > > cleanup counterpart. In a real-world deployment, unloading VMX after it
> > > has been used once is unlikely, and with the task-based flush you end up
> > > with these pages 'leaked' anyway once a single task has used it.
> > >
> >
> > I don't want the patches to enforce that one cannot unload the kvm
> > module, but I can refactor those bits a bit more.
>
> Not freeing the l1d flush pages does not prevent unloading the kvm
> module. It just keeps them around. It's the same problem as with your L1D
> flush for tasks: if one task uses it, the pages stay around until the
> system reboots.
Yes, fair enough. You also seem to suggest that the same set of pages can be
shared across VMX and the task-based L1D flushes (which is fine by me); I
suspect at some point we'd need to do per-NUMA-node allocations, but let's
not prematurely optimize.
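For reference, the shared allocation helper I have in mind looks roughly
like the below (an untested sketch: L1D_CACHE_ORDER and the per-page
pattern fill follow the existing VMX flush-pages logic, everything else
is illustrative):

	#define L1D_CACHE_ORDER 4

	/*
	 * Allocate the buffer used by the software L1D flush sequence.
	 * Each page is filled with a distinct pattern so that KSM cannot
	 * merge the pages in the nested virtualization case.
	 */
	void *l1d_flush_alloc_pages(void)
	{
		struct page *page;
		void *flush_pages;
		int i;

		page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER);
		if (!page)
			return NULL;
		flush_pages = page_address(page);

		for (i = 0; i < 1U << L1D_CACHE_ORDER; i++)
			memset(flush_pages + i * PAGE_SIZE, i + 1, PAGE_SIZE);

		return flush_pages;
	}

If we ever do go per-node, I'd expect the only real change to be allocating
with alloc_pages_node(node, GFP_KERNEL, L1D_CACHE_ORDER) and keeping one
buffer per node, but as said above that can wait.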
>
> > > If any other architecture enables this, then it will have _ALL_ of this
> > > code duplicated. So we should rather have:
> >
> > But that is being a bit prescriptive towards arches about implementing
> > their L1D flushing using TIF flags; arches should be free to use bits in
> > struct mm_struct for this if they so choose.
> > > - All architectures have to use TIF_SPEC_FLUSH_L1D if they want to
> > > support the prctl.
> > >
> >
> > That is a concern (see above); should we enforce this?
>
> Fair enough, but it's trivial enough to have:
>
> static inline void arch_task_l1d_flush_update(bool enable)
> static inline bool arch_task_l1d_flush_state(void)
>
> and the rest of the logic is just identical.
>
OK, so you'd still like to see the logic moved to lib/l1d_flush.c? Let me
start working on that and see what the changes look like.
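Roughly, I am thinking of something along these lines for the generic part
in lib/l1d_flush.c (an untested sketch: the two arch_* hook names are the
ones you suggested, the prctl plumbing and function names around them are
illustrative):

	int l1d_flush_prctl_set(bool enable)
	{
		int ret;

		if (enable) {
			/* Set up the flush pages once, if the CPU needs them. */
			ret = l1d_flush_init();
			if (ret)
				return ret;
		}

		/*
		 * The arch only flips its private state, e.g. the
		 * TIF_SPEC_FLUSH_L1D bit on x86.
		 */
		arch_task_l1d_flush_update(enable);
		return 0;
	}

	int l1d_flush_prctl_get(void)
	{
		return arch_task_l1d_flush_state() ? PR_SPEC_ENABLE
						   : PR_SPEC_DISABLE;
	}

with x86 implementing the two hooks via the TIF bit and other architectures
free to keep their state wherever they want (including struct mm_struct).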
Thanks for the review,
Balbir Singh