Message-ID: <CAPNVh5fOPn+yuH5kEcGBFLphxDHSvsVS1QeFtHh63d4Go9UAJA@mail.gmail.com>
Date: Fri, 21 May 2021 15:03:18 -0700
From: Peter Oskolkov <posk@...gle.com>
To: Jann Horn <jannh@...gle.com>
Cc: Andrei Vagin <avagin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
kernel list <linux-kernel@...r.kernel.org>,
Linux API <linux-api@...r.kernel.org>,
Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
Peter Oskolkov <posk@...k.io>,
Joel Fernandes <joel@...lfernandes.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrei Vagin <avagin@...gle.com>,
Jim Newsome <jnewsome@...project.org>
Subject: Re: [RFC PATCH v0.1 4/9] sched/umcg: implement core UMCG API
On Fri, May 21, 2021 at 2:32 PM Jann Horn <jannh@...gle.com> wrote:
>
> On Fri, May 21, 2021 at 9:09 PM Andrei Vagin <avagin@...il.com> wrote:
> > On Thu, May 20, 2021 at 11:36:09AM -0700, Peter Oskolkov wrote:
> > > @@ -67,7 +137,75 @@ SYSCALL_DEFINE4(umcg_register_task, u32, api_version, u32, flags, u32, group_id,
> > > */
> > > SYSCALL_DEFINE1(umcg_unregister_task, u32, flags)
> > > {
> > > - return -ENOSYS;
> > > + struct umcg_task_data *utd;
> > > + int ret = -EINVAL;
> > > +
> > > + rcu_read_lock();
> > > + utd = rcu_dereference(current->umcg_task_data);
> > > +
> > > + if (!utd || flags)
> > > + goto out;
> > > +
> > > + task_lock(current);
> > > + rcu_assign_pointer(current->umcg_task_data, NULL);
> > > + task_unlock(current);
> > > +
> > > + ret = 0;
> > > +
> > > +out:
> > > + rcu_read_unlock();
> > > + if (!ret && utd) {
> > > + synchronize_rcu();
> >
> > synchronize_rcu is expensive. Do we really need to call it here? Can we
> > use kfree_rcu?
> >
> > Where is task->umcg_task_data freed when a task is destroyed?
>
> or exec'd - the umcg stuff includes a userspace pointer, so it
> probably shouldn't normally be kept around across execve?
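
For the exit/exec point right above, the usual shape would be a small
cleanup helper called from both the exit and the exec paths. A rough
sketch only -- the helper name is made up, and it assumes struct
umcg_task_data grows a struct rcu_head member (called 'rcu' below) so
the memory can be reclaimed without blocking:

/*
 * Hypothetical helper (not part of this patch set): drop whatever UMCG
 * state the task still holds.  It would be called from the exit path
 * (e.g. do_exit()) and from the exec path (e.g. begin_new_exec()),
 * since the registered state carries userspace pointers that are
 * meaningless in the new mm.
 */
void umcg_task_cleanup(struct task_struct *tsk)
{
	struct umcg_task_data *utd;

	task_lock(tsk);
	utd = rcu_dereference_protected(tsk->umcg_task_data,
					lockdep_is_held(&tsk->alloc_lock));
	rcu_assign_pointer(tsk->umcg_task_data, NULL);
	task_unlock(tsk);

	if (utd)
		kfree_rcu(utd, rcu);	/* assumed rcu_head member */
}
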
Ack - thanks for these and other comments. Please keep them coming.
I'll address them in v0.2.
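
On the synchronize_rcu() cost raised above: for reference, one way the
unregister path could avoid blocking is to reclaim the structure with
kfree_rcu() once the pointer is cleared. A rough sketch against the
hunk quoted above, under the same assumption that struct umcg_task_data
has a struct rcu_head member named 'rcu':

SYSCALL_DEFINE1(umcg_unregister_task, u32, flags)
{
	struct umcg_task_data *utd;
	int ret = -EINVAL;

	rcu_read_lock();
	utd = rcu_dereference(current->umcg_task_data);

	if (!utd || flags)
		goto out;

	task_lock(current);
	rcu_assign_pointer(current->umcg_task_data, NULL);
	task_unlock(current);

	ret = 0;

out:
	rcu_read_unlock();
	if (!ret && utd)
		/* No synchronize_rcu(): defer the free past the grace period. */
		kfree_rcu(utd, rcu);

	return ret;
}

The trade-off is the extra rcu_head in every registered task's state in
exchange for not stalling the unregistering task for a full grace period.
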
Thanks,
Peter