Message-ID: <ab6504c7-419e-b274-1cc3-c40428907edd@intel.com>
Date:   Tue, 8 Nov 2022 10:49:42 -0800
From:   Reinette Chatre <reinette.chatre@...el.com>
To:     Peter Newman <peternewman@...gle.com>,
        Fenghua Yu <fenghua.yu@...el.com>
CC:     Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
        <linux-kernel@...r.kernel.org>, <jannh@...gle.com>,
        <eranian@...gle.com>, <kpsingh@...gle.com>, <derkling@...gle.com>,
        <james.morse@....com>
Subject: Re: [PATCH 0/1] x86/resctrl: fix task CLOSID update race

Hi Peter,

On 11/3/2022 7:16 AM, Peter Newman wrote:
> Below is my patch to address the IPI race we discussed in the container
> move RFD thread[1].

Thank you very much for taking this on.

> 
> The patch below uses the new task_call_func() interface to serialize
> updating closid and rmid with any context switch of the task. AFAICT,
> the implementation of this function acts like a mutex with context
> switch, but I'm not certain whether it is intended to be one. If this is
> not how task_call_func() is meant to be used, I will instead move the
> code performing the update under sched/ where it can be done holding the
> task_rq_lock() explicitly, as Reinette has suggested before[2].
> 
> From my own measurements, this change will double the time to complete a
> mass-move operation, such as rmdir on an rdtgroup with a large task
> list. But to the best of my knowledge, these large-scale reconfigurations
> of the control groups are infrequent, and the baseline I'm measuring
> against is racy anyways.
> 
> What's still unclear to me is whether, when processing a large task list,
> obtaining the pi/rq locks for thousands of tasks (all while read-locking
> the tasklist_lock) is better than just blindly notifying all CPUs. My guess
> is that the situation where notifying all CPUs would be better is
> uncommon for most users and probably more likely in Google's use case
> than most others, as we have a use case for moving large container jobs
> to a different MBA group.
> 
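
For reference, my understanding of the task_call_func() approach described
above is roughly the following. This is only a sketch to anchor the
discussion; apart from task_call_func(), task_curr(), task_cpu(),
resctrl_sched_in() and smp_call_function_single(), the names are
illustrative and need not match the actual patch:

	/*
	 * Runs via task_call_func() with the task's pi/rq lock held, so
	 * it cannot race with the context switch code reading
	 * t->closid/t->rmid on another CPU.
	 */
	static int move_task_cb(struct task_struct *t, void *arg)
	{
		struct rdtgroup *rdtgrp = arg;

		WRITE_ONCE(t->closid, rdtgrp->closid);
		WRITE_ONCE(t->rmid, rdtgrp->mon.rmid);

		/* Tell the caller whether the task is currently running. */
		return task_curr(t);
	}

	static void move_task_ipi(void *arg)
	{
		/* Only reload PQR_ASSOC if the moved task is still current here. */
		if (current == arg)
			resctrl_sched_in();
	}

	...
	/* Only the CPU (if any) that is running the task is interrupted. */
	if (task_call_func(tsk, move_task_cb, rdtgrp))
		smp_call_function_single(task_cpu(tsk), move_task_ipi, tsk, 1);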

It was unclear to me also, so I asked for advice and learned that, in
general, sending extra IPIs is not evil. There is precedent for sending
unnecessary IPIs, for example in the TLB flushing code, where it is common
to land in the TLB flush IPI handler only to learn that the TLB does not
need to be flushed. It was also highlighted that the user-initiated resctrl
flow in question is rare compared to TLB flushes.

From what I understand, even after going through the extra locking and the
resulting delays that task_call_func() requires to avoid unnecessary IPIs, it
is still possible to send unnecessary IPIs, because the information about
where the modified tasks are running may be stale by the time the IPIs are
sent. To me it seems that the risk of stale information increases as the size
of the moved task group increases. The benefit of using task_call_func() when
moving a group of tasks is thus not clear to me.

I do not see it as an either/or though. I think that using task_call_func()
to serialize with context switching is a good idea when moving a single
task; sending IPIs to all CPUs in that case seems like overkill. On the other
hand, when moving a group of tasks I think that notifying all CPUs would be
simpler. The current code already ensures that it does not modify the
PQR register unnecessarily. I would really like to learn more about this
from the experts, but at this point I am most comfortable with such a
solution and look forward to learning from the experts when it is
presented to the x86 maintainers for inclusion.
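
To illustrate what I mean by the current code already protecting the PQR
register: for a group move, the "notify all CPUs" variant could look
something like the sketch below. Again, this is just an illustration and
the helper names do not necessarily match the existing code:

	static void reload_pqr_assoc_ipi(void *unused)
	{
		/*
		 * resctrl_sched_in() compares the current task's closid/rmid
		 * against the per-CPU cached values and only writes
		 * MSR_IA32_PQR_ASSOC when they differ, so CPUs that are not
		 * running a moved task return without touching the MSR.
		 */
		resctrl_sched_in();
	}

	...
	read_lock(&tasklist_lock);
	for_each_process_thread(p, t) {
		if (task_in_moved_group(t)) {	/* illustrative check */
			WRITE_ONCE(t->closid, new_closid);
			WRITE_ONCE(t->rmid, new_rmid);
		}
	}
	read_unlock(&tasklist_lock);

	/* Every online CPU re-evaluates PQR_ASSOC for its current task. */
	on_each_cpu(reload_pqr_assoc_ipi, NULL, 1);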

Reinette
