lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20190522143545.GG16275@worktop.programming.kicks-ass.net>
Date:   Wed, 22 May 2019 16:35:45 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Viktor Rosendahl <viktor.rosendahl@...il.com>
Cc:     Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
        Joel Fernandes <joel@...lfernandes.org>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>
Subject: Re: [PATCH v4 1/4] ftrace: Implement fs notification for
 tracing_max_latency

On Wed, May 22, 2019 at 02:30:14AM +0200, Viktor Rosendahl wrote:

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 874c427742a9..440cd1a62722 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3374,6 +3374,7 @@ static void __sched notrace __schedule(bool preempt)
>  	struct rq *rq;
>  	int cpu;
>  
> +	trace_disable_fsnotify();
>  	cpu = smp_processor_id();
>  	rq = cpu_rq(cpu);
>  	prev = rq->curr;
> @@ -3449,6 +3450,7 @@ static void __sched notrace __schedule(bool preempt)
>  	}
>  
>  	balance_callback(rq);
> +	trace_enable_fsnotify();
>  }
>  
>  void __noreturn do_task_dead(void)
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index 80940939b733..1a38bcdb3652 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -225,6 +225,7 @@ static void cpuidle_idle_call(void)
>  static void do_idle(void)
>  {
>  	int cpu = smp_processor_id();
> +	trace_disable_fsnotify();
>  	/*
>  	 * If the arch has a polling bit, we maintain an invariant:
>  	 *
> @@ -284,6 +285,7 @@ static void do_idle(void)
>  	smp_mb__after_atomic();
>  
>  	sched_ttwu_pending();
> +	/* schedule_idle() will call trace_enable_fsnotify() */
>  	schedule_idle();
>  
>  	if (unlikely(klp_patch_pending(current)))

I still hate this.. why are we doing this? We already have this
stop_critical_timings() nonsense and are now adding more gunk.



> +static DEFINE_PER_CPU(atomic_t, notify_disabled) = ATOMIC_INIT(0);

> +	atomic_set(&per_cpu(notify_disabled, cpu), 1);

> +	atomic_set(&per_cpu(notify_disabled, cpu), 0);

> +	if (!atomic_read(&per_cpu(notify_disabled, cpu)))

That's just wrong on so many levels..

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ