Message-ID: <20191016184346.GT2328@hirez.programming.kicks-ass.net>
Date: Wed, 16 Oct 2019 20:43:46 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Marco Elver <elver@...gle.com>
Cc: akiyks@...il.com, stern@...land.harvard.edu, glider@...gle.com,
parri.andrea@...il.com, andreyknvl@...gle.com, luto@...nel.org,
ard.biesheuvel@...aro.org, arnd@...db.de, boqun.feng@...il.com,
bp@...en8.de, dja@...ens.net, dlustig@...dia.com,
dave.hansen@...ux.intel.com, dhowells@...hat.com,
dvyukov@...gle.com, hpa@...or.com, mingo@...hat.com,
j.alglave@....ac.uk, joel@...lfernandes.org, corbet@....net,
jpoimboe@...hat.com, luc.maranget@...ia.fr, mark.rutland@....com,
npiggin@...il.com, paulmck@...ux.ibm.com, tglx@...utronix.de,
will@...nel.org, kasan-dev@...glegroups.com,
linux-arch@...r.kernel.org, linux-doc@...r.kernel.org,
linux-efi@...r.kernel.org, linux-kbuild@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org
Subject: Re: [PATCH 1/8] kcsan: Add Kernel Concurrency Sanitizer infrastructure
On Wed, Oct 16, 2019 at 10:39:52AM +0200, Marco Elver wrote:
> +bool __kcsan_check_watchpoint(const volatile void *ptr, size_t size,
> +			      bool is_write)
> +{
> +	atomic_long_t *watchpoint;
> +	long encoded_watchpoint;
> +	unsigned long flags;
> +	enum kcsan_report_type report_type;
> +
> +	if (unlikely(!is_enabled()))
> +		return false;
> +
> +	watchpoint = find_watchpoint((unsigned long)ptr, size, !is_write,
> +				     &encoded_watchpoint);
> +	if (watchpoint == NULL)
> +		return true;
> +
> +	flags = user_access_save();
Could use a comment on why find_watchpoint() is safe to call without
user_access_save() on.
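
Something along these lines, perhaps (sketch only; the justification given
here is a placeholder, the real reason needs filling in):

	/*
	 * NOTE: find_watchpoint() above runs before user_access_save(),
	 * i.e. possibly while user accesses are still enabled; spell out
	 * why that is fine, e.g. it only touches kernel data, cannot
	 * fault, and is not itself instrumented.
	 */
	flags = user_access_save();
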
> +	if (!try_consume_watchpoint(watchpoint, encoded_watchpoint)) {
> +		/*
> +		 * The other thread may not print any diagnostics, as it has
> +		 * already removed the watchpoint, or another thread consumed
> +		 * the watchpoint before this thread.
> +		 */
> +		kcsan_counter_inc(kcsan_counter_report_races);
> +		report_type = kcsan_report_race_check_race;
> +	} else {
> +		report_type = kcsan_report_race_check;
> +	}
> +
> +	/* Encountered a data-race. */
> +	kcsan_counter_inc(kcsan_counter_data_races);
> +	kcsan_report(ptr, size, is_write, raw_smp_processor_id(), report_type);
> +
> +	user_access_restore(flags);
> +	return false;
> +}
> +EXPORT_SYMBOL(__kcsan_check_watchpoint);
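
For reference, roughly how I'd expect the instrumentation hooks to use this
pair -- a sketch only; the actual wiring lives elsewhere in the series and
the wrapper/name choices below are illustrative:

/*
 * The compiler emits __tsan_read<N>()/__tsan_write<N>() calls around plain
 * memory accesses; a thin wrapper first checks whether the access conflicts
 * with an existing watchpoint and, if not, may set up a watchpoint of its
 * own (subject to the should_watch() sampling in __kcsan_setup_watchpoint()).
 */
static __always_inline void check_access(const volatile void *ptr,
					 size_t size, bool is_write)
{
	if (__kcsan_check_watchpoint(ptr, size, is_write))
		__kcsan_setup_watchpoint(ptr, size, is_write);
}

void __tsan_read4(void *ptr)
{
	check_access(ptr, 4, false);
}
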
> +
> +void __kcsan_setup_watchpoint(const volatile void *ptr, size_t size,
> +			      bool is_write)
> +{
> +	atomic_long_t *watchpoint;
> +	union {
> +		u8 _1;
> +		u16 _2;
> +		u32 _4;
> +		u64 _8;
> +	} expect_value;
> +	bool is_expected = true;
> +	unsigned long ua_flags = user_access_save();
> +	unsigned long irq_flags;
> +
> +	if (!should_watch(ptr))
> +		goto out;
> +
> +	if (!check_encodable((unsigned long)ptr, size)) {
> +		kcsan_counter_inc(kcsan_counter_unencodable_accesses);
> +		goto out;
> +	}
> +
> +	/*
> +	 * Disable interrupts & preemptions, to ignore races due to accesses in
> +	 * threads running on the same CPU.
> +	 */
> +	local_irq_save(irq_flags);
> +	preempt_disable();
Is there a point to that preempt_disable() here?
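
AFAICT local_irq_save() already means this task cannot be preempted on this
CPU for the duration of the watchpoint window, so the sequence could
presumably be reduced to (a sketch of my reading, not a tested change):

	/*
	 * With interrupts off, the task cannot be preempted on this CPU,
	 * so the explicit preempt_disable()/preempt_enable() pair around
	 * the window looks redundant.
	 */
	local_irq_save(irq_flags);
	/* ... arm the watchpoint, delay, re-check expect_value ... */
	local_irq_restore(irq_flags);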