Message-ID: <1331034734.11248.287.camel@twins>
Date:	Tue, 06 Mar 2012 12:52:14 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Lai Jiangshan <laijs@...fujitsu.com>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org, mingo@...e.hu, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...ymtl.ca,
	josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
	rostedt@...dmis.org, Valdis.Kletnieks@...edu, dhowells@...hat.com,
	eric.dumazet@...il.com, darren@...art.com, fweisbec@...il.com,
	patches@...aro.org
Subject: Re: [RFC PATCH 5/6] implement per-cpu&per-domain state machine
 call_srcu()

On Tue, 2012-03-06 at 17:57 +0800, Lai Jiangshan wrote:
> +void srcu_barrier(struct srcu_struct *sp)
> +{
> +       struct srcu_sync sync;
> +       struct srcu_head *head = &sync.head;
> +       unsigned long chck_seq; /* snap */
> +
> +       int idle_loop = 0;
> +       int cpu;
> +       struct srcu_cpu_struct *scp;
> +
> +       spin_lock_irq(&sp->gp_lock);
> +       chck_seq = sp->chck_seq;
> +       for_each_possible_cpu(cpu) {

ARGH!! this is really not OK... so we spend all this time killing
srcu_sync_expedited and co. because they prod at all CPUs for no good
reason, and then what do you do?

Also, what happens if your CPU isn't actually online?
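For reference, the usual hotplug-safe shape for a scan like this is to
bracket it with get_online_cpus()/put_online_cpus() and walk only the
online mask. Rough sketch only, not a drop-in (gp_lock is held with
irqs off across the loop in the patch, and callbacks left behind by
CPUs that have since gone offline would still need handling somewhere):

	get_online_cpus();
	spin_lock_irq(&sp->gp_lock);
	for_each_online_cpu(cpu) {
		scp = per_cpu_ptr(sp->srcu_per_cpu, cpu);
		/* queue/wait for the per-cpu callback as before */
	}
	spin_unlock_irq(&sp->gp_lock);
	put_online_cpus();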


> +               scp = per_cpu_ptr(sp->srcu_per_cpu, cpu);
> +               if (scp->head && !safe_less_than(chck_seq, scp->head->chck_seq,
> +                               sp->chck_seq)) {
> +                       /* this path is likely entered only once */
> +                       init_completion(&sync.completion);
> +                       srcu_queue_callback(sp, scp, head,
> +                                       __synchronize_srcu_callback);
> +                       /* don't need to wakeup the woken state machine */
> +                       spin_unlock_irq(&sp->gp_lock);
> +                       wait_for_completion(&sync.completion);
> +                       spin_lock_irq(&sp->gp_lock);
> +               } else {
> +                       if ((++idle_loop & 0xF) == 0) {
> +                               spin_unlock_irq(&sp->gp_lock);
> +                               udelay(1);
> +                               spin_lock_irq(&sp->gp_lock);
> +                       }

The purpose of this bit isn't quite clear to me; is this simply a lock
break?
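
For comparison, a lock break here would normally be driven by actual
contention or resched pressure rather than a fixed iteration count plus
udelay(); something like the below, using the stock spin_needbreak()
and cond_resched() helpers against the patch's gp_lock:

	if (need_resched() || spin_needbreak(&sp->gp_lock)) {
		spin_unlock_irq(&sp->gp_lock);
		cond_resched();
		spin_lock_irq(&sp->gp_lock);
	}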

> +               }
> +       }
> +       spin_unlock_irq(&sp->gp_lock);
> +
> +       flush_workqueue(srcu_callback_wq);

Since you already waited for the completions one by one, what's the
purpose of this?
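
If the only requirement is "all callbacks queued before srcu_barrier()
have run", the rcu_barrier()-style construction does it with a single
shared completion and a count, with no per-cpu wait and no workqueue
flush. Rough sketch only; the names below are invented and the locking
around the scan is elided:

	static DEFINE_PER_CPU(struct srcu_head, srcu_barrier_head);
	static atomic_t srcu_barrier_cpu_count;
	static struct completion srcu_barrier_completion;

	static void srcu_barrier_func(struct srcu_head *unused)
	{
		if (atomic_dec_and_test(&srcu_barrier_cpu_count))
			complete(&srcu_barrier_completion);
	}

	/* in srcu_barrier(): */
	init_completion(&srcu_barrier_completion);
	atomic_set(&srcu_barrier_cpu_count, 1); /* bias against early completion */
	for_each_possible_cpu(cpu) {
		scp = per_cpu_ptr(sp->srcu_per_cpu, cpu);
		if (!scp->head)
			continue;	/* nothing queued on this cpu */
		atomic_inc(&srcu_barrier_cpu_count);
		srcu_queue_callback(sp, scp, &per_cpu(srcu_barrier_head, cpu),
				    srcu_barrier_func);
	}
	if (atomic_dec_and_test(&srcu_barrier_cpu_count))
		complete(&srcu_barrier_completion);
	wait_for_completion(&srcu_barrier_completion);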

> +}
> +EXPORT_SYMBOL_GPL(srcu_barrier); 
