Message-ID: <20080318173438.GA24237@in.ibm.com>
Date:	Tue, 18 Mar 2008 23:04:38 +0530
From:	Dipankar Sarma <dipankar@...ibm.com>
To:	Peter Teoh <htmldeveloper@...il.com>
Cc:	Eric Dumazet <dada1@...mosbay.com>,
	Johannes Weiner <hannes@...urebad.de>,
	Jeremy Fitzhardinge <jeremy@...p.org>,
	LKML <linux-kernel@...r.kernel.org>, Tejun Heo <htejun@...il.com>
Subject: Re: per cpu + spin locks coexistence?

On Wed, Mar 19, 2008 at 01:00:27AM +0800, Peter Teoh wrote:
> First, the following is possible:
> 
>                 fddef = &get_cpu_var(fdtable_defer_list);
>                 spin_lock(&fddef->lock);
>                 fdt->next = fddef->next;
>                 fddef->next = fdt;==============>executing at CPU A
>                 /* vmallocs are handled from the workqueue context */
>                 schedule_work(&fddef->wq);
>                 spin_unlock(&fddef->lock);==============>executing at CPU B
>                 put_cpu_var(fdtable_defer_list);
> 
> where the execution can switch CPUs after the schedule_work() API; in
> that case you logically do need the spin_lock(), and the per_cpu data is
> really not necessary.
> 
> But without the per_cpu structure, the following "dedicated
> chunk" could only execute on one processor, with the possibility of
> switching to another processor after schedule_work():

Wrong. You cannot switch to another processor in schedule_work();
it doesn't block. Just read the code.
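
To make that concrete, here is a minimal sketch of the pattern (simplified,
with an illustrative wrapper name, not the exact fs/file.c code; the ->next
chaining pointer lived in struct fdtable in kernels of that era).
get_cpu_var() disables preemption and put_cpu_var() re-enables it, so the
whole section stays on one CPU, and schedule_work() merely queues the work
item and returns:

struct fdtable_defer {
        spinlock_t lock;
        struct work_struct wq;  /* set up with INIT_WORK(&wq, free_fdtable_work) */
        struct fdtable *next;
};

static DEFINE_PER_CPU(struct fdtable_defer, fdtable_defer_list);

static void free_fdtable_work(struct work_struct *work)
{
        /* runs later in workqueue (process) context; may sleep,
         * so vfree() of large fdtables is safe here */
}

static void defer_free_fdtable(struct fdtable *fdt)    /* illustrative name */
{
        struct fdtable_defer *fddef = &get_cpu_var(fdtable_defer_list);
                                        /* preemption now disabled */
        spin_lock(&fddef->lock);
        fdt->next = fddef->next;
        fddef->next = fdt;
        schedule_work(&fddef->wq);      /* queue and return; never sleeps */
        spin_unlock(&fddef->lock);
        put_cpu_var(fdtable_defer_list);        /* preemption re-enabled */
}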

The reason I used a per-cpu fdtable_defer_list is that I did not
want a global list of deferred fdtable structures to be freed,
protected by a global lock. Every fdtable free would then have had
to take the same global lock, which would have resulted in
scalability issues (cacheline bouncing, lock contention). Instead,
I used a per-cpu list, in essence creating finer-grained locking.
Two CPUs doing fdtable free operations simultaneously do not
contend for the same lock in this case. The fddef lock is still
required, for the reasons Eric pointed out.
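
For contrast, a hypothetical global-list variant (all names below are made
up for illustration) would funnel every deferred fdtable free, from every
CPU, through one lock and one list head:

static struct fdtable *global_defer_list;               /* hypothetical */
static DEFINE_SPINLOCK(global_defer_lock);              /* hypothetical */
static DECLARE_WORK(global_defer_work, free_fdtable_work);

static void defer_free_fdtable_global(struct fdtable *fdt)
{
        spin_lock(&global_defer_lock);  /* every CPU contends on this lock */
        fdt->next = global_defer_list;
        global_defer_list = fdt;
        schedule_work(&global_defer_work);
        spin_unlock(&global_defer_lock);
}

With the per-cpu fdtable_defer_list, two CPUs freeing fdtables at the same
time take different locks and dirty different cachelines, which is the
scalability win described above.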


> and per_cpu is there so that the same chunk of code can be executing
> on multiple CPUs at the same time.
> 
> Now the key issue arises - as I asked just before you answered my question:
> 
> http://mail.nl.linux.org/kernelnewbies/2008-03/msg00236.html
> 
> can schedule_work() sleep?  (just like schedule(), which can sleep, right?)
> schedule_work() is guaranteed to execute the work queue at least once,
> and so this thread may or may not sleep. Correct? Or wrong?

Wrong.
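
To spell that out (a short note on schedule_work() semantics, reusing the
fddef from the quoted code above): the call itself never sleeps, and its
return value only says whether the item was newly queued:

        int queued = schedule_work(&fddef->wq);
        /*
         * queued != 0: the work was not pending and is now queued; it is
         * guaranteed to run later, once, in workqueue context.
         * queued == 0: it was already pending, so this call did nothing.
         * In neither case does the caller sleep; only the work function,
         * running later in process context, is allowed to sleep.
         */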

Thanks
Dipankar
