Message-ID: <fc5cd013-c230-2eb2-02c5-cf9bbf350ec2@intel.com>
Date: Tue, 18 Jun 2019 07:41:10 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Jan Kiszka <jan.kiszka@...mens.com>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86: Optimize load_mm_cr4 to load_mm_cr4_irqsoff
On 6/18/19 12:32 AM, Jan Kiszka wrote:
> Thus, we can avoid disabling interrupts again in cr4_set/clear_bits.
Seems reasonable.
Your *_irqsoff() variants need lockdep_assert_irqs_disabled(), at least,
though.
Can you talk a bit about the motivation here? Did you encounter
some performance issue that led you to make this patch, or was it simply
an improvement you realized you could make from code inspection?