Message-ID: <1075473571.25688.1643746930751.JavaMail.zimbra@efficios.com>
Date: Tue, 1 Feb 2022 15:22:10 -0500 (EST)
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Florian Weimer <fw@...eb.enyo.de>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
paulmck <paulmck@...nel.org>, Boqun Feng <boqun.feng@...il.com>,
"H. Peter Anvin" <hpa@...or.com>, Paul Turner <pjt@...gle.com>,
linux-api <linux-api@...r.kernel.org>,
Christian Brauner <christian.brauner@...ntu.com>,
David Laight <David.Laight@...LAB.COM>,
carlos <carlos@...hat.com>, Peter Oskolkov <posk@...k.io>
Subject: Re: [RFC PATCH 2/3] rseq: extend struct rseq with per thread group
vcpu id
----- On Feb 1, 2022, at 3:03 PM, Florian Weimer fw@...eb.enyo.de wrote:
> * Mathieu Desnoyers:
>
>> If a thread group has fewer threads than cores, or is limited to run on
>> few cores concurrently through sched affinity or cgroup cpusets, the
>> virtual cpu ids will be values close to 0, thus allowing efficient use
>> of user-space memory for per-cpu data structures.
>
> From a userspace programmer perspective, what's a good way to obtain a
> reasonable upper bound for the possible tg_vcpu_id values?
Some effective upper bounds:
- sysconf(3) _SC_NPROCESSORS_CONF,
- the number of threads which exist concurrently in the process,
- the number of cpus in the cpu affinity mask applied by sched_setaffinity,
  except in corner-case situations such as cpu hotplug removing all cpus from
  the affinity set,
- cgroup cpuset "partition" limits.
Note that AFAIR non-partition cgroup cpusets allow a cgroup to "borrow"
additional cores from the rest of the system if they are idle, therefore
allowing the number of concurrent threads to go beyond the specified limit.
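For instance, a rough, untested sketch combining the first and third bounds
(the function name expected_nr_vcpu_ids() is made up for illustration):

#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

static long expected_nr_vcpu_ids(void)
{
	long nr_conf = sysconf(_SC_NPROCESSORS_CONF);
	cpu_set_t set;

	/*
	 * The affinity mask bounds concurrency, except in corner cases
	 * such as cpu hotplug removing all cpus from the set.
	 */
	if (sched_getaffinity(0, sizeof(set), &set) == 0) {
		long nr_affine = CPU_COUNT(&set);

		if (nr_affine > 0 && nr_affine < nr_conf)
			return nr_affine;
	}
	return nr_conf;	/* vcpu ids then range over [0, nr_conf - 1] */
}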
>
> I believe not all users of cgroup cpusets change the affinity mask.
AFAIR the sched affinity mask is tweaked independently of the cgroup cpuset.
Those are two distinct mechanisms which both affect scheduler task placement.
I would expect user-space code to use some sensible upper bound as a hint
about how many per-vcpu data structure elements to expect (and how many to
pre-allocate), but to have a "lazy initialization" fallback in case the
vcpu id goes up to the number of configured processors - 1. And I suspect
that even the number of configured processors may change across a CRIU
checkpoint/restore.
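A minimal sketch of that lazy-initialization fallback (untested; the
structure and function names are made up, and synchronization between
concurrent threads is deliberately left out):

#include <stdint.h>
#include <stdlib.h>

struct vcpu_table {
	void **slots;		/* slots[vcpu_id], sized from the hint */
	uint32_t nr_slots;	/* e.g. number of configured processors */
};

static void *get_vcpu_data(struct vcpu_table *t, uint32_t vcpu_id,
			   size_t elem_size)
{
	if (vcpu_id >= t->nr_slots)
		return NULL;	/* hint exceeded, e.g. after CRIU restore */
	if (!t->slots[vcpu_id])
		t->slots[vcpu_id] = calloc(1, elem_size);	/* lazy init */
	return t->slots[vcpu_id];
}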
>
>> diff --git a/kernel/rseq.c b/kernel/rseq.c
>> index 13f6d0419f31..37b43735a400 100644
>> --- a/kernel/rseq.c
>> +++ b/kernel/rseq.c
>> @@ -86,10 +86,14 @@ static int rseq_update_cpu_node_id(struct task_struct *t)
>> struct rseq __user *rseq = t->rseq;
>> u32 cpu_id = raw_smp_processor_id();
>> u32 node_id = cpu_to_node(cpu_id);
>> + u32 tg_vcpu_id = task_tg_vcpu_id(t);
>>
>> if (!user_write_access_begin(rseq, t->rseq_len))
>> goto efault;
>> switch (t->rseq_len) {
>> + case offsetofend(struct rseq, tg_vcpu_id):
>> + unsafe_put_user(tg_vcpu_id, &rseq->tg_vcpu_id, efault_end);
>> + fallthrough;
>> case offsetofend(struct rseq, node_id):
>> unsafe_put_user(node_id, &rseq->node_id, efault_end);
>> fallthrough;
>
> Is the switch really useful? I suspect it's faster to just write as
> much as possible all the time. The switch should be well-predictable
> if running uniform userspace, but still …
The switch ensures the kernel doesn't try to write to a memory area beyond
the rseq size which has been registered by user-space. So it seems to be
useful to ensure we don't corrupt user-space memory. Or am I missing your
point?
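For reference, the bound comes from the length passed at rseq registration
time. A rough, untested sketch of the user-space side (RSEQ_SIG is an
application-chosen value, 0x53053053 being just an example; __NR_rseq
requires recent kernel headers, and registration may return -EBUSY if the
libc has already registered rseq for the thread):

#include <linux/rseq.h>
#include <sys/syscall.h>
#include <unistd.h>

#define RSEQ_SIG	0x53053053	/* application-chosen signature */

static __thread struct rseq rseq_abi __attribute__((aligned(32)));

static long register_rseq(void)
{
	/*
	 * The kernel records sizeof(rseq_abi) as t->rseq_len, and the
	 * switch above guarantees it never writes past that length.
	 */
	return syscall(__NR_rseq, &rseq_abi, sizeof(rseq_abi), 0, RSEQ_SIG);
}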
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com