Message-ID: <Y1roztLsZtYQ6hzI@google.com>
Date: Thu, 27 Oct 2022 20:23:42 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Vipin Sharma <vipinsh@...gle.com>
Cc: "Wang, Wei W" <wei.w.wang@...el.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"dmatlack@...gle.com" <dmatlack@...gle.com>,
"andrew.jones@...ux.dev" <andrew.jones@...ux.dev>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6 5/5] KVM: selftests: Allowing running dirty_log_perf_test on specific CPUs

On Thu, Oct 27, 2022, Vipin Sharma wrote:
> On Thu, Oct 27, 2022 at 8:56 AM Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > On Thu, Oct 27, 2022, Wang, Wei W wrote:
> > > On Wednesday, October 26, 2022 11:44 PM, Sean Christopherson wrote:
> > > > If we go this route in the future, we'd need to add a worker trampoline as the
> > > > pinning needs to happen in the worker task itself to guarantee that the pinning
> > > > takes effect before the worker does anything useful. That should be very
> > > > doable.
> > >
> > > The alternative way is the one I shared before, using this:
> > >
> > > /* Thread created with attribute ATTR will be limited to run only on
> > >    the processors represented in CPUSET. */
> > > extern int pthread_attr_setaffinity_np (pthread_attr_t *__attr,
> > >                                         size_t __cpusetsize,
> > >                                         const cpu_set_t *__cpuset);
> > >
> > > Basically, the thread is created on the pCPU as user specified.
> > > I think this is better than "creating the thread on an arbitrary pCPU
> > > and then pinning it to the user specified pCPU in the thread's start routine".
> >
> > Ah, yeah, that's better.
> >
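
For reference, a minimal sketch of the attribute-based creation described above,
assuming glibc's pthread_attr_setaffinity_np(); the helper and worker names are
illustrative, not taken from the selftest:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void *worker(void *arg)
{
	/* The affinity set via the attribute is in effect before this runs. */
	printf("worker running on pCPU %d\n", sched_getcpu());
	return NULL;
}

static pthread_t create_pinned_worker(int pcpu)
{
	pthread_attr_t attr;
	cpu_set_t cpuset;
	pthread_t tid;

	CPU_ZERO(&cpuset);
	CPU_SET(pcpu, &cpuset);

	pthread_attr_init(&attr);
	/* Restrict the to-be-created thread to 'pcpu'. */
	if (pthread_attr_setaffinity_np(&attr, sizeof(cpuset), &cpuset) ||
	    pthread_create(&tid, &attr, worker, NULL)) {
		fprintf(stderr, "failed to create worker pinned to pCPU %d\n", pcpu);
		exit(1);
	}
	pthread_attr_destroy(&attr);
	return tid;
}

The start routine then never observes a pCPU outside the requested set, though,
as noted below, glibc still applies the mask with sched_setaffinity() under the
hood.
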
>
> pthread_create() will internally call sched_setaffinity() syscall
> after creation of a thread on a random CPU. So, from the performance
> side there is not much difference between the two approaches.
>
> However, we will still need pin_this_task_to_pcpu()/sched_setaffinity()
> to move the main thread to a specific pCPU, therefore,

Heh, that's a good point too.

> I am thinking of keeping the current approach unless there is a strong objection
> to it.

No objection here, I don't see an obvious way to make that helper go away.
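
For context, a sketch of what a pin-the-calling-task helper like the
pin_this_task_to_pcpu() mentioned above typically boils down to; the body is an
assumption based on the standard sched_setaffinity() pattern, not the
selftest's actual code:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative helper: pin the calling task (e.g. the main thread) to 'pcpu'. */
static void pin_this_task_to_pcpu(int pcpu)
{
	cpu_set_t cpuset;

	CPU_ZERO(&cpuset);
	CPU_SET(pcpu, &cpuset);

	/* pid 0 means the calling thread. */
	if (sched_setaffinity(0, sizeof(cpuset), &cpuset)) {
		perror("sched_setaffinity");
		exit(1);
	}
}

Since the main thread already exists, a creation-time attribute cannot cover
it, so a helper along these lines is needed regardless of how the worker
threads are pinned.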