Message-ID: <20140210154917.GC320@dhcp-26-207.brq.redhat.com>
Date:	Mon, 10 Feb 2014 16:49:17 +0100
From:	Alexander Gordeev <agordeev@...hat.com>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	Kent Overstreet <kmo@...erainc.com>, Jens Axboe <axboe@...nel.dk>,
	Shaohua Li <shli@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [patch 1/2] percpu_ida: fix a live lock

On Mon, Feb 10, 2014 at 01:29:42PM +0100, Alexander Gordeev wrote:
> > We'll definitely need a fix to be able to allow the whole tag space.
> > For large numbers of tags per device the flush might work, but for
> > devices with a low number of tags we need something more efficient.  The
> > case of fewer tags than CPUs isn't that unusual either, and we probably
> > want to switch to an allocator without per-cpu allocations for them to
> > avoid all this.  E.g. for many ATA devices we just have a single tag,
> > and many SCSI drivers also only want single-digit outstanding commands
> > per LUN.
> 
> Do we really always need the pool for these classes of devices?
> 
> Pulling tags from local caches into the pool just to (nearly) drain it
> again on the very next iteration does not seem beneficial. Not to
> mention the caches-vs-pool locking complexities.

And I meant here that we do not scrap per-cpu allocations.
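
For reference, the kind of allocator without per-cpu allocations mentioned
above could be as simple as the sketch below: one lock, one flat bitmap,
no per-cpu caches at all.  This is an illustrative userspace sketch only,
not code from the patch; the names (tag_alloc, tag_free, NR_TAGS) are
made up.

#include <pthread.h>
#include <stdio.h>

#define NR_TAGS 4               /* e.g. a device with very few tags */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long bitmap;    /* bit i set => tag i is in use */

/* Returns a free tag, or -1 if all NR_TAGS tags are outstanding. */
static int tag_alloc(void)
{
	int tag = -1;

	pthread_mutex_lock(&lock);
	for (int i = 0; i < NR_TAGS; i++) {
		if (!(bitmap & (1UL << i))) {
			bitmap |= 1UL << i;
			tag = i;
			break;
		}
	}
	pthread_mutex_unlock(&lock);
	return tag;
}

/* Caller must pass a tag previously returned by tag_alloc(). */
static void tag_free(int tag)
{
	pthread_mutex_lock(&lock);
	bitmap &= ~(1UL << tag);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	int a = tag_alloc(), b = tag_alloc();

	printf("got tags %d and %d\n", a, b);
	tag_free(a);
	tag_free(b);
	return 0;
}

With a single-digit tag count, the linear scan under one lock is cheap,
and there are no caches to flush or steal from, so the live-lock scenario
above cannot arise in the first place.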

-- 
Regards,
Alexander Gordeev
agordeev@...hat.com
