Message-ID: <YnKqpkdATqqlDHvK@fuller.cnet>
Date: Wed, 4 May 2022 13:32:38 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-kernel@...r.kernel.org, Nitesh Lal <nilal@...hat.com>,
Nicolas Saenz Julienne <nsaenzju@...hat.com>,
Frederic Weisbecker <frederic@...nel.org>,
Christoph Lameter <cl@...ux.com>,
Juri Lelli <juri.lelli@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Alex Belits <abelits@...its.com>, Peter Xu <peterx@...hat.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Oscar Shiang <oscar0225@...email.tw>
Subject: Re: [patch v12 09/13] task isolation: add preempt notifier to sync
per-CPU vmstat dirty info to thread info
On Wed, Apr 27, 2022 at 02:09:16PM +0200, Thomas Gleixner wrote:
> On Wed, Apr 27 2022 at 09:11, Thomas Gleixner wrote:
> > On Tue, Mar 15 2022 at 12:31, Marcelo Tosatti wrote:
> >> If a thread has task isolation activated, is preempted by thread B,
> >> which marks vmstat information dirty, and is preempted back in,
> >> one might return to userspace with vmstat dirty information on the
> >> CPU in question.
> >>
> >> To address this problem, add a preempt notifier that transfers vmstat dirty
> >> information to TIF_TASK_ISOL thread flag.
> >
> > How does this compile with CONFIG_KVM=n?
>
> Aside from that, the existence of this preempt notifier alone tells me
> that this is either a design fail or has no design in the first place.
>
> The state of vmstat does not matter at all at the point where a task is
> scheduled in. It matters when an isolated task goes out to user space or
> enters a VM.
If the following happens, with two threads whose names indicate whether
task isolation is enabled for them or not (Thread-task-isol,
Thread-task-no-isol):

Events:

Thread-task-isol becomes not-runnable
Thread-task-no-isol becomes runnable
Thread-task-no-isol marks vmstat dirty (writes to some per-CPU vmstat
counter)
Thread-task-no-isol becomes not-runnable
Thread-task-isol becomes runnable
Then we have to transfer the "vmstat dirty" information from the
per-CPU bool to the per-thread TIF_TASK_ISOL bit (so that
task_isolation_process_work() executes on return to userspace).
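Roughly, the notifier's sched_in callback does the following (a sketch
only, not the patch verbatim; vmstat_dirty() stands in for whatever
per-CPU "is vmstat dirty" test the series uses):

	static void task_isol_sched_in(struct preempt_notifier *notifier,
				       int cpu)
	{
		/*
		 * The isolated task is being scheduled back in: if some
		 * other task dirtied this CPU's vmstat counters while we
		 * were scheduled out, propagate that into the thread
		 * flag so the exit-to-userspace path notices it.
		 */
		if (vmstat_dirty())
			set_thread_flag(TIF_TASK_ISOL);
	}

	static struct preempt_ops task_isol_preempt_ops = {
		.sched_in = task_isol_sched_in,
	};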
> We already have something similar in the exit to user path:
>
> tick_nohz_user_enter_prepare()
>
> So you can do something like the below and have:
>
> static inline void task_isol_exit_to_user_prepare(void)
> {
> 	if (unlikely(current_needs_isol_exit_to_user()))
> 		__task_isol_exit_to_user_prepare();
> }
>
> where current_needs_isol_exit_to_user() is a simple check of either an
> existing mechanism like
>
> task->syscall_work & SYSCALL_WORK_TASK_ISOL_EXIT
>
> or of some new task isolation specific member of task_struct which is
> placed so it is cache hot at that point:
>
> task->isol_work & SYSCALL_TASK_ISOL_EXIT
>
> which is going to be almost zero overhead for any non isolated task.
Sure, but who sets SYSCALL_WORK_TASK_ISOL_EXIT or SYSCALL_TASK_ISOL_EXIT?
> It's trivial enough to encode the real stuff into task->isol_work and
> I'm pretty sure, that a 32bit member is sufficient for that. There is
> absolutely no need for a potential 64x64 bit feature matrix.
Well, OK, the meaning of the TIF_TASK_ISOL thread flag is ambiguous:

1) We set it when quiescing the vmstat feature of task isolation.
2) We set it when switching from task A to task B, where B has
task isolation configured and activated and the per-CPU vmstat
information is dirty.
3) We clear it on return to userspace:

	if (test_bit(TIF_TASK_ISOL, &thread->flags)) {
		clear_bit(TIF_TASK_ISOL, &thread->flags);
		process_task_isol_work();
	}
So you prefer to separate the two:

Use TIF_TASK_ISOL only for "task isolation configured and activated,
quiesce vmstat work on return to userspace", and then have the
"is vmstat per-CPU data dirty?" information held in task->syscall_work
or task->isol_work? (That will probably be two cachelines.)
You'd still need the preempt notifier, though, to transfer the dirty
information into that field (unless I am missing something); see the
sketch below.
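For example (again only a sketch; isol_work and TASK_ISOL_VMSTAT_DIRTY
are made-up names for illustration):

	/* preempt notifier sched_in callback: */
	if (vmstat_dirty())
		current->isol_work |= TASK_ISOL_VMSTAT_DIRTY;

	/* exit to userspace: */
	static inline void task_isol_exit_to_user_prepare(void)
	{
		if (unlikely(current->isol_work & TASK_ISOL_VMSTAT_DIRTY))
			__task_isol_exit_to_user_prepare();
	}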
Happy with either case.
Thanks for the review!
> Thanks,
>
> tglx
> ---
> --- a/kernel/entry/common.c
> +++ b/kernel/entry/common.c
> @@ -142,6 +142,12 @@ void noinstr exit_to_user_mode(void)
>  /* Workaround to allow gradual conversion of architecture code */
>  void __weak arch_do_signal_or_restart(struct pt_regs *regs) { }
>  
> +static void exit_to_user_update_work(void)
> +{
> +	tick_nohz_user_enter_prepare();
> +	task_isol_exit_to_user_prepare();
> +}
> +
>  static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
>  					    unsigned long ti_work)
>  {
> @@ -178,8 +184,7 @@ static unsigned long exit_to_user_mode_l
>  		 */
>  		local_irq_disable_exit_to_user();
>  
> -		/* Check if any of the above work has queued a deferred wakeup */
> -		tick_nohz_user_enter_prepare();
> +		exit_to_user_update_work();
>  
>  		ti_work = read_thread_flags();
>  	}
> @@ -194,8 +199,7 @@ static void exit_to_user_mode_prepare(st
>  
>  	lockdep_assert_irqs_disabled();
>  
> -	/* Flush pending rcuog wakeup before the last need_resched() check */
> -	tick_nohz_user_enter_prepare();
> +	exit_to_user_update_work();
>  
>  	if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK))
>  		ti_work = exit_to_user_mode_loop(regs, ti_work);
>
>