Message-ID: <eb50d813-ea48-80f8-53aa-846aae33cf13@intel.com>
Date:   Fri, 12 May 2023 08:23:26 -0700
From:   Reinette Chatre <reinette.chatre@...el.com>
To:     Peter Newman <peternewman@...gle.com>
CC:     Fenghua Yu <fenghua.yu@...el.com>, Babu Moger <babu.moger@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>, <x86@...nel.org>,
        "H. Peter Anvin" <hpa@...or.com>,
        Stephane Eranian <eranian@...gle.com>,
        James Morse <james.morse@....com>,
        <linux-kernel@...r.kernel.org>, <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH v1 2/9] x86/resctrl: Hold a spinlock in __rmid_read() on
 AMD

Hi Peter,

On 5/12/2023 6:23 AM, Peter Newman wrote:
> Hi Reinette,
> 
> On Thu, May 11, 2023 at 11:36 PM Reinette Chatre
> <reinette.chatre@...el.com> wrote:
>> On 4/21/2023 7:17 AM, Peter Newman wrote:

>>> +             raw_spin_lock_irqsave(&hw_dom->evtsel_lock, flags);
>>> +
>>>       /*
>>>        * As per the SDM, when IA32_QM_EVTSEL.EvtID (bits 7:0) is configured
>>>        * with a valid event code for supported resource type and the bits
>>> @@ -161,6 +166,9 @@ static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
>>>       wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
>>>       rdmsrl(MSR_IA32_QM_CTR, msr_val);
>>>
>>> +     if (static_branch_likely(&rmid_read_locked))
>>> +             raw_spin_unlock_irqrestore(&hw_dom->evtsel_lock, flags);
>>> +
>>
>> If the first "if (static_branch_likely(&rmid_read_locked))" was taken then the second
>> if branch _has_ to be taken. It should not be optional to release a lock if it was taken. I
>> think it would be more robust if a single test of the static key decides whether the
>> spinlock should be used.
> 
> Is the concern that the branch value could change concurrently and
> result in a deadlock?

Possibly ... it may be that the static key cannot change value during
this call, but confirming that requires a deeper understanding of the
various flows before this logic can be trusted. I think this should be
explicit: if a lock is taken, then releasing it should not be optional at all.
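
For illustration only (a rough sketch, not the actual patch), testing the
key once and remembering the result inside __rmid_read() could look like:

	bool locked = static_branch_likely(&rmid_read_locked);
	unsigned long flags;

	if (locked)
		raw_spin_lock_irqsave(&hw_dom->evtsel_lock, flags);

	wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
	rdmsrl(MSR_IA32_QM_CTR, msr_val);

	if (locked)
		raw_spin_unlock_irqrestore(&hw_dom->evtsel_lock, flags);

That way the unlock can never be skipped when the lock was taken, even if
the key value were to change in between.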

> I'm curious as to whether this case is performance critical enough to
> justify using a static branch. It's clear that we should be using them
> in the context switch path, but I'm confused about other places
> they're used when there are also memory flags.

Alternatively, there could be, for example, an __rmid_read_lock() that
is called from the context switch path and always takes the spin lock.
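
To illustrate (names and parameters are placeholders, not actual patch
code), such a helper could simply wrap a lock-free __rmid_read():

	/* Sketch only: hw_dom is passed in here just for illustration. */
	static int __rmid_read_lock(struct rdt_hw_domain *hw_dom, u32 rmid,
				    enum resctrl_event_id eventid, u64 *val)
	{
		unsigned long flags;
		int ret;

		raw_spin_lock_irqsave(&hw_dom->evtsel_lock, flags);
		ret = __rmid_read(rmid, eventid, val);
		raw_spin_unlock_irqrestore(&hw_dom->evtsel_lock, flags);

		return ret;
	}

The context switch path would then call __rmid_read_lock() unconditionally,
while other callers keep using __rmid_read() without the lock.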

Reinette
