Date:	Tue, 6 Mar 2012 22:44:22 +0800
From:	Lai Jiangshan <eag0628@...il.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Lai Jiangshan <laijs@...fujitsu.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org, mingo@...e.hu, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...ymtl.ca,
	josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
	rostedt@...dmis.org, Valdis.Kletnieks@...edu, dhowells@...hat.com,
	eric.dumazet@...il.com, darren@...art.com, fweisbec@...il.com,
	patches@...aro.org
Subject: Re: [RFC PATCH 5/6] implement per-cpu&per-domain state machine call_srcu()

On Tue, Mar 6, 2012 at 7:52 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Tue, 2012-03-06 at 17:57 +0800, Lai Jiangshan wrote:
>> +void srcu_barrier(struct srcu_struct *sp)
>> +{
>> +       struct srcu_sync sync;
>> +       struct srcu_head *head = &sync.head;
>> +       unsigned long chck_seq; /* snap */
>> +
>> +       int idle_loop = 0;
>> +       int cpu;
>> +       struct srcu_cpu_struct *scp;
>> +
>> +       spin_lock_irq(&sp->gp_lock);
>> +       chck_seq = sp->chck_seq;
>> +       for_each_possible_cpu(cpu) {
>
> ARGH!! this is really not ok.. so we spend all this time killing
> srcu_sync_expedited and co because they prod at all cpus for no good
> reason, and what do you do?

It is srcu_barrier(): it has to wait for the callbacks of all CPUs to
complete, since this is a per-cpu implementation.
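
(For illustration only -- a minimal, single-threaded userspace sketch of
the idea, not the patch code; every name below, such as NR_FAKE_CPUS,
struct marker and drain_cpu(), is made up for the sketch.  With per-cpu
callback lists there is no single global list whose tail means
"everything queued so far has run", so a barrier has to walk every
per-cpu list, put a marker behind whatever is already pending there,
and wait until each marker has been invoked.)

#include <stdbool.h>
#include <stdio.h>

#define NR_FAKE_CPUS 4

struct cb {
        struct cb *next;
        void (*func)(struct cb *cb);
};

/* per-"cpu" callback lists, drained by a per-cpu worker in the real code */
static struct cb *queue_head[NR_FAKE_CPUS];
static struct cb **queue_tail[NR_FAKE_CPUS];

struct marker {
        struct cb cb;   /* must stay the first member (poor man's container_of) */
        bool done;
};

static void marker_fn(struct cb *cb)
{
        ((struct marker *)cb)->done = true;
}

static void enqueue(int cpu, struct cb *cb)
{
        cb->next = NULL;
        *queue_tail[cpu] = cb;
        queue_tail[cpu] = &cb->next;
}

/* stand-in for the per-cpu worker: run everything queued so far */
static void drain_cpu(int cpu)
{
        struct cb *cb = queue_head[cpu];

        queue_head[cpu] = NULL;
        queue_tail[cpu] = &queue_head[cpu];
        while (cb) {
                struct cb *next = cb->next;

                cb->func(cb);
                cb = next;
        }
}

int main(void)
{
        struct marker m[NR_FAKE_CPUS];
        int cpu;

        for (cpu = 0; cpu < NR_FAKE_CPUS; cpu++)
                queue_tail[cpu] = &queue_head[cpu];

        /*
         * "barrier": append one marker per cpu; in the real srcu_barrier()
         * only cpus that already have callbacks pending get one.
         */
        for (cpu = 0; cpu < NR_FAKE_CPUS; cpu++) {
                m[cpu].cb.func = marker_fn;
                m[cpu].done = false;
                enqueue(cpu, &m[cpu].cb);
        }

        /*
         * the workers run eventually; the barrier is finished only when
         * the marker of every cpu has been invoked
         */
        for (cpu = 0; cpu < NR_FAKE_CPUS; cpu++)
                drain_cpu(cpu);

        for (cpu = 0; cpu < NR_FAKE_CPUS; cpu++)
                printf("cpu %d marker done: %d\n", cpu, m[cpu].done);
        return 0;
}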

>
> Also, what happens if your cpu isn't actually online?

The workqueue handles it, not this code. If a CPU's state machine has
callbacks, the state machine is started; if it has no callbacks,
srcu_barrier() does nothing for that CPU.

>
>
>> +               scp = per_cpu_ptr(sp->srcu_per_cpu, cpu);
>> +               if (scp->head && !safe_less_than(chck_seq, scp->head->chck_seq,
>> +                               sp->chck_seq)) {
>> +                       /* this path is likely entered only once */
>> +                       init_completion(&sync.completion);
>> +                       srcu_queue_callback(sp, scp, head,
>> +                                       __synchronize_srcu_callback);
>> +                       /* don't need to wakeup the woken state machine */
>> +                       spin_unlock_irq(&sp->gp_lock);
>> +                       wait_for_completion(&sync.completion);
>> +                       spin_lock_irq(&sp->gp_lock);
>> +               } else {
>> +                       if ((++idle_loop & 0xF) == 0) {
>> +                               spin_unlock_irq(&sp->gp_lock);
>> +                               udelay(1);
>> +                               spin_lock_irq(&sp->gp_lock);
>> +                       }
>
> The purpose of this bit isn't quite clear to me, is this simply a lock
> break?

Yes, the main purpose is to keep the hold time of sp->gp_lock short
and bounded.
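
(Again only a hedged userspace sketch of the lock-break idiom, with
invented names -- not the patch code; in the patch the counter only
advances for CPUs that needed no waiting.  The point is simply to drop
and re-take the lock every 16 iterations so the worst-case hold time
stays small and bounded, at the price of letting other lockers
interleave with the walk.)

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void walk_items(int nr_items)
{
        int i, idle_loop = 0;

        pthread_mutex_lock(&lock);
        for (i = 0; i < nr_items; i++) {
                /* ... look at item i under the lock ... */

                if ((++idle_loop & 0xF) == 0) {
                        /* lock break: bound the hold time, let others in */
                        pthread_mutex_unlock(&lock);
                        usleep(1);
                        pthread_mutex_lock(&lock);
                }
        }
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        walk_items(4096);
        puts("done");
        return 0;
}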

>
>> +               }
>> +       }
>> +       spin_unlock_irq(&sp->gp_lock);
>> +
>> +       flush_workqueue(srcu_callback_wq);
>
> Since you already waited for the completions one by one, what's the
> purpose of this?
>
>> +}
>> +EXPORT_SYMBOL_GPL(srcu_barrier);