Message-Id: <00B78ADC-546A-4738-AB29-B35FD0ECBB88@amacapital.net>
Date:   Sun, 20 Jan 2019 13:40:50 -0800
From:   Andy Lutomirski <luto@...capital.net>
To:     Andrew Cooper <andrew.cooper3@...rix.com>
Cc:     Andy Lutomirski <luto@...nel.org>,
        Fenghua Yu <fenghua.yu@...el.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Borislav Petkov <bp@...en8.de>, Ingo Molnar <mingo@...hat.com>,
        H Peter Anvin <hpa@...or.com>,
        Ashok Raj <ashok.raj@...el.com>,
        Ravi V Shankar <ravi.v.shankar@...el.com>,
        linux-kernel <linux-kernel@...r.kernel.org>, x86 <x86@...nel.org>
Subject: Re: [PATCH v2 3/3] x86/umwait: Control umwait maximum time



> On Jan 20, 2019, at 11:12 AM, Andrew Cooper <andrew.cooper3@...rix.com> wrote:
> 
>> On 17/01/2019 00:00, Andy Lutomirski wrote:
>>> On Wed, Jan 16, 2019 at 1:24 PM Fenghua Yu <fenghua.yu@...el.com> wrote:
>>> IA32_UMWAIT_CONTROL[31:2] determines the maximum time in TSC-quanta
>>> that the processor can stay in C0.1 or C0.2.
>>> 
>>> The maximum time value in IA32_UMWAIT_CONTROL[31:2] is set to zero, which
>>> means there is no global time limit for the UMWAIT and TPAUSE instructions.
>>> Each process sets its own umwait maximum time as the instruction's operand.
>>> 
>>> The user can specify a global umwait maximum time through the interface:
>>> /sys/devices/system/cpu/umwait_control/umwait_max_time
>>> The value written to the interface is a decimal number of TSC-quanta.
>>> Bits [1:0] are cleared when the value is stored.
>>> 
>>> Signed-off-by: Fenghua Yu <fenghua.yu@...el.com>
>>> ---
>>> arch/x86/include/asm/msr-index.h |  2 ++
>>> arch/x86/power/umwait.c          | 42 +++++++++++++++++++++++++++++++-
>>> 2 files changed, 43 insertions(+), 1 deletion(-)
>>> 
>>> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
>>> index b56bfecae0de..42b9104fc15b 100644
>>> --- a/arch/x86/include/asm/msr-index.h
>>> +++ b/arch/x86/include/asm/msr-index.h
>>> @@ -62,6 +62,8 @@
>>> #define MSR_IA32_UMWAIT_CONTROL                0xe1
>>> #define UMWAIT_CONTROL_C02_BIT         0x0
>>> #define UMWAIT_CONTROL_C02_MASK                0x00000001
>>> +#define UMWAIT_CONTROL_MAX_TIME_BIT    0x2
>>> +#define UMWAIT_CONTROL_MAX_TIME_MASK   0xfffffffc
>>> 
>>> #define MSR_PKG_CST_CONFIG_CONTROL     0x000000e2
>>> #define NHM_C3_AUTO_DEMOTE             (1UL << 25)
>>> diff --git a/arch/x86/power/umwait.c b/arch/x86/power/umwait.c
>>> index 95b3867aac1e..4a1a507d3bb7 100644
>>> --- a/arch/x86/power/umwait.c
>>> +++ b/arch/x86/power/umwait.c
>>> @@ -10,6 +10,7 @@
>>> #include <asm/msr.h>
>>> 
>>> static int umwait_enable_c0_2 = 1; /* 0: disable C0.2. 1: enable C0.2. */
>>> +static u32 umwait_max_time; /* In TSC-quanta. Only bits [31:2] are used. */
>>> static DEFINE_MUTEX(umwait_lock);
>>> 
>>> /* Return value that will be used to set umwait control MSR */
>>> @@ -20,7 +21,8 @@ static inline u32 umwait_control_val(void)
>>>         * When bit 0 is 1, C0.2 is disabled. Otherwise, C0.2 is enabled.
>>>         * So value in bit 0 is opposite of umwait_enable_c0_2.
>>>         */
>>> -       return ~umwait_enable_c0_2 & UMWAIT_CONTROL_C02_MASK;
>>> +       return (~umwait_enable_c0_2 & UMWAIT_CONTROL_C02_MASK) |
>>> +              umwait_max_time;
>>> }
>>> 
>>> static ssize_t umwait_enable_c0_2_show(struct device *dev,
>>> @@ -61,8 +63,46 @@ static ssize_t umwait_enable_c0_2_store(struct device *dev,
>>> 
>>> static DEVICE_ATTR_RW(umwait_enable_c0_2);
>>> 
>>> +static ssize_t umwait_max_time_show(struct device *kobj,
>>> +                                   struct device_attribute *attr, char *buf)
>>> +{
>>> +       return sprintf(buf, "%u\n", umwait_max_time);
>>> +}
>>> +
>>> +static ssize_t umwait_max_time_store(struct device *kobj,
>>> +                                    struct device_attribute *attr,
>>> +                                    const char *buf, size_t count)
>>> +{
>>> +       u32 msr_val, max_time;
>>> +       int cpu, ret;
>>> +
>>> +       ret = kstrtou32(buf, 10, &max_time);
>>> +       if (ret)
>>> +               return ret;
>>> +
>>> +       mutex_lock(&umwait_lock);
>>> +
>>> +       /* Only get max time value from bits [31:2] */
>>> +       max_time &= UMWAIT_CONTROL_MAX_TIME_MASK;
>>> +       /* Update the max time value in memory */
>>> +       umwait_max_time = max_time;
>>> +       msr_val = umwait_control_val();
>>> +       get_online_cpus();
>>> +       /* All CPUs have same umwait max time */
>>> +       for_each_online_cpu(cpu)
>>> +               wrmsr_on_cpu(cpu, MSR_IA32_UMWAIT_CONTROL, msr_val, 0);
>>> +       put_online_cpus();
>>> +
>>> +       mutex_unlock(&umwait_lock);
>>> +
>>> +       return count;
>>> +}
>>> +
>>> +static DEVICE_ATTR_RW(umwait_max_time);
>>> +
>>> static struct attribute *umwait_attrs[] = {
>>>        &dev_attr_umwait_enable_c0_2.attr,
>>> +       &dev_attr_umwait_max_time.attr,
>>>        NULL
>>> };
>> You need something to make sure that newly onlined CPUs get the right
>> value in the MSR.  You also need to make sure you restore it on resume
>> from suspend.  Something like cpu_init() might be the right place.
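
For illustration, a minimal sketch of one way to cover both cases, assuming
the patch's umwait_control_val() helper stays as-is; the callback names and
the dynamic hotplug state below are placeholders, not a concrete proposal:

#include <linux/cpuhotplug.h>
#include <linux/syscore_ops.h>
#include <asm/msr.h>

static int umwait_cpu_online(unsigned int cpu)
{
	/* Runs on the incoming CPU: re-apply the current global setting */
	wrmsr(MSR_IA32_UMWAIT_CONTROL, umwait_control_val(), 0);
	return 0;
}

static void umwait_syscore_resume(void)
{
	/*
	 * The boot CPU loses the MSR across suspend; restore it here.
	 * Secondary CPUs are re-onlined and go through the callback above.
	 */
	wrmsr(MSR_IA32_UMWAIT_CONTROL, umwait_control_val(), 0);
}

static struct syscore_ops umwait_syscore_ops = {
	.resume	= umwait_syscore_resume,
};

/* Hook this up from the driver's existing init path */
static int __init umwait_pm_init(void)
{
	int ret;

	register_syscore_ops(&umwait_syscore_ops);
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/umwait:online",
				umwait_cpu_online, NULL);
	return ret < 0 ? ret : 0;
}
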
>> 
>> Also, as previously discussed, I think we should set the default to
>> something quite small, maybe 100 microseconds.  IMO the goal is to
>> pick a value that is a high enough multiple of the C0.2 entry+exit
>> latency that we get most of the power and SMT resource savings while
>> being small enough that no one thinks that UMWAIT is more than a
>> glorified, slightly improved, and far more misleading version of REP
>> NOP.
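
For concreteness, a back-of-the-envelope sketch (not from the patch) of how a
~100 microsecond default could be expressed in TSC quanta, using the kernel's
tsc_khz and the mask added above; the helper name is made up:

#include <asm/tsc.h>

static u32 umwait_default_max_time(void)
{
	/* tsc_khz is TSC ticks per millisecond, so ~100 us is tsc_khz / 10 */
	u32 quanta = tsc_khz / 10;

	/* Bits [1:0] of IA32_UMWAIT_CONTROL are control bits, not time */
	return quanta & UMWAIT_CONTROL_MAX_TIME_MASK;
}
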
>> 
>> Andrew, would having Linux default to a small value do much to
>> mitigate your concerns that UMWAIT is problematic for hypervisors?
> 
> Sadly no - not really.
> 
> Being an MSR, there is no way the guest kernel gets unfiltered access to
> it, so the hypervisor can set whatever bound it wishes.
> 
> For any non-trivial wait period, it would be better for the system as a
> whole to switch to a different vcpu, but the semantics don't allow for
> that.  Shortening the timeout just results in userspace taking over
> again, and most likely concluding that there was an early wakeup and
> going back to sleep.

What I mean is: if Linux makes the timeout short for everyone, then applications that use UMWAIT will have to be written with the expectation that they are spinning, so the incidence of problematic cases may drop.

> 
> More useful semantics would be something similar to pause-loop-exiting
> so we can swap contexts while the processor is logically idle in userspace.
> 
> ~Andrew
