Message-ID: <e734691b-e9e1-10a0-88ee-73d8fceb50f9@redhat.com>
Date:   Tue, 5 Oct 2021 11:38:29 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Nitesh Narayan Lal <nitesh@...hat.com>,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        seanjc@...gle.com, vkuznets@...hat.com, mtosatti@...hat.com,
        tglx@...utronix.de, frederic@...nel.org, mingo@...nel.org,
        nilal@...hat.com, Wanpeng Li <kernellwp@...il.com>
Subject: Re: [PATCH v1] KVM: isolation: retain initial mask for kthread VM
 worker

[+Wanpeng]

On 05/10/21 00:26, Nitesh Narayan Lal wrote:
> From: Marcelo Tosatti <mtosatti@...hat.com>
> 
> kvm_vm_worker_thread() creates a kthread VM worker and migrates it
> to the parent cgroup using cgroup_attach_task_all() based on its
> effective cpumask.
> 
> In an environment that is booted with the nohz_full kernel option, cgroup's
> effective cpumask can also include CPUs running in nohz_full mode. These
> CPUs often run SCHED_FIFO tasks which may result in the starvation of the
> VM worker if it has been migrated to one of these CPUs.

cgroups have other effects than just the cpumask (e.g. memory 
accounting); with cgroup v1 you could skip just the cpuset controller, 
but if cgroup_attach_task_all is ever ported to v2's 
cgroup_attach_task, it will no longer be possible to separate the 
cpuset controller from the others.

Why doesn't the scheduler move the task to a CPU that is not being 
hogged by vCPU SCHED_FIFO tasks?  The parent cgroup should always 
include at least one such CPU for userspace's own housekeeping.

As an aside, if we decide that KVM's worker threads count as 
housekeeping, you'd still want to bind the kthread to the housekeeping 
CPUs(*).

Paolo

(*) switching from kthread_run to kthread_create+kthread_bind_mask
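
A minimal sketch of that switch (the wrapper name kvm_vm_create_worker 
is illustrative, not from the patch; kthread_create, kthread_bind_mask, 
housekeeping_cpumask and wake_up_process are the real kernel APIs):

```c
#include <linux/kthread.h>
#include <linux/sched/isolation.h>

/*
 * Hypothetical helper: unlike kthread_run(), which creates and
 * immediately wakes the thread, create it first, bind it to the
 * housekeeping CPU set, and only then wake it up, so it can never
 * land on a nohz_full CPU hogged by SCHED_FIFO vCPU tasks.
 */
static struct task_struct *kvm_vm_create_worker(int (*threadfn)(void *),
						void *data, const char *name)
{
	struct task_struct *task;

	task = kthread_create(threadfn, data, "%s", name);
	if (IS_ERR(task))
		return task;

	/* Must be done before the thread first runs. */
	kthread_bind_mask(task, housekeeping_cpumask(HK_FLAG_KTHREAD));

	wake_up_process(task);
	return task;
}
```

kthread_bind_mask() has to be called on a thread that has not started 
running yet, which is exactly why the create/bind/wake split is needed 
instead of kthread_run().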

> Since the allowed CPU mask of unbound kernel threads already respects
> nohz_full CPUs at setup time (because of 9cc5b8656892: "isolcpus:
> Affine unbound kernel threads to housekeeping cpus"), retain the
> initial CPU mask for the kthread by not migrating it to the parent
> cgroup's effective CPUs.
> 
> Signed-off-by: Marcelo Tosatti <mtosatti@...hat.com>
> Signed-off-by: Nitesh Narayan Lal <nitesh@...hat.com>
> ---
>   virt/kvm/kvm_main.c | 20 +++++++++++++++-----
>   1 file changed, 15 insertions(+), 5 deletions(-)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 7851f3a1b5f7..87bc193fd020 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -56,6 +56,7 @@
>   #include <asm/processor.h>
>   #include <asm/ioctl.h>
>   #include <linux/uaccess.h>
> +#include <linux/sched/isolation.h>
>   
>   #include "coalesced_mmio.h"
>   #include "async_pf.h"
> @@ -5634,11 +5635,20 @@ static int kvm_vm_worker_thread(void *context)
>   	if (err)
>   		goto init_complete;
>   
> -	err = cgroup_attach_task_all(init_context->parent, current);
> -	if (err) {
> -		kvm_err("%s: cgroup_attach_task_all failed with err %d\n",
> -			__func__, err);
> -		goto init_complete;
> +	/*
> +	 * For nohz_full enabled environments, don't migrate the worker thread
> +	 * to the parent cgroup as its effective mask may include a CPU running
> +	 * in nohz_full mode. nohz_full CPUs often run SCHED_FIFO tasks, which
> +	 * could starve the worker thread if it is pinned to such a CPU.
> +	 */
> +	if (!housekeeping_enabled(HK_FLAG_KTHREAD)) {
> +		err = cgroup_attach_task_all(init_context->parent, current);
> +		if (err) {
> +			kvm_err("%s: cgroup_attach_task_all failed with err %d\n",
> +				__func__, err);
> +			goto init_complete;
> +		}
>   	}
>   
>   	set_user_nice(current, task_nice(init_context->parent));
> 
