Message-ID: <Pine.LNX.4.64.1003240747040.2126@cobra.newdream.net>
Date:	Wed, 24 Mar 2010 07:54:49 -0700 (PDT)
From:	Sage Weil <sage@...dream.net>
To:	Neil Brown <neilb@...e.de>
cc:	Stephen Rothwell <sfr@...b.auug.org.au>, Greg KH <greg@...ah.com>,
	linux-next@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: linux-next: build failure after merge of the driver-core tree

On Wed, 24 Mar 2010, Neil Brown wrote:
> Sage Weil <sage@...dream.net> wrote:
> > > It is a pity that this code cannot use mempool_t....
> > > What if mempool_t were changed to only re-alloc the vector of pointers when
> > > it grew, or when it shrank to less than 1/2 its current size.  Would that
> > > reduce the frequency of allocations enough for you to be comfortable with it? 
> > > i.e. always make the vector a power-of-2 size (which is what is probably
> > > allocated anyway) while the pool size might be less.
> > > ??
> > 
> > That would improve the situation, but still mean potentially large 
> > allocations (the pools can grow pretty big) that aren't strictly 
> > necessary.  I can imagine a more modular mempool_t with an ops vector for 
> > adding/removing from the pool to cope with situations like this, but I'm 
> > not sure it's worth the effort?
> 
> How big?
> mempools (and equivalents) should just be large enough to get you through a
> tight spot.  The working assumption is that they will not normally be used.
> So 2 or 3 should normally be plenty.
> 
> (looks at code)
> 
> The only time you resize a ceph_mempool is in ceph_monc_do_statfs
> where you increment it, perform a synchronous statfs call on the 
> network, then decrement the size of the mempool.
> How many concurrent statfs calls does it even make sense to make?
> I'm probably missing something obvious, but wouldn't it make sense to
> put that all under a mutex so there was only ever one outstanding statfs (per
> filesystem) - or maybe under a counting semaphore to allow some small number,
> and make sure to prime the mempool to cover that number?
> Then you would never resize a mempool at all.
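
(For illustration, a minimal sketch of the approach Neil describes above:
a counting semaphore capping the in-flight statfs requests, with the pool
primed to exactly that depth so it never has to be resized.  The names
MAX_STATFS_INFLIGHT, STATFS_MSG_SIZE and send_statfs_and_wait() are made
up for the example; they are not the real ceph code.)

#include <linux/mempool.h>
#include <linux/semaphore.h>
#include <linux/gfp.h>
#include <linux/errno.h>

#define MAX_STATFS_INFLIGHT	2	/* small, per the "2 or 3" above */
#define STATFS_MSG_SIZE		128	/* illustrative message size */

static struct semaphore statfs_sem;
static mempool_t *statfs_pool;

static int statfs_pool_init(void)
{
	sema_init(&statfs_sem, MAX_STATFS_INFLIGHT);

	/* Reserve exactly as many messages as we allow in flight. */
	statfs_pool = mempool_create_kmalloc_pool(MAX_STATFS_INFLIGHT,
						  STATFS_MSG_SIZE);
	return statfs_pool ? 0 : -ENOMEM;
}

/* Stand-in for the real send-request-and-wait-for-reply path. */
static int send_statfs_and_wait(void *msg)
{
	return 0;
}

static int do_statfs(void)
{
	void *msg;
	int err;

	down(&statfs_sem);	/* at most MAX_STATFS_INFLIGHT callers here */
	msg = mempool_alloc(statfs_pool, GFP_NOFS);

	err = send_statfs_and_wait(msg);

	mempool_free(msg, statfs_pool);
	up(&statfs_sem);
	return err;
}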

You're right.  In fact, after reviewing the code again, it looks like 
_none_ of the msgpools (current or planned) needs to get large anymore.  
A protocol change a month or two back made it possible to allocate space 
for the reply along with the request, which means the only remaining use 
for the pools is for low-memory writeout and the handful of messages we 
might receive from servers without asking for them.  (There used to be 
more msgpool resizing going on for other request types, but that has 
mostly been fixed up.)
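
(Roughly, the allocate-the-reply-with-the-request pattern looks something
like the sketch below.  The example_* names are invented for illustration
and are not the actual ceph structures.)

#include <linux/slab.h>
#include <linux/types.h>

struct example_msg {
	size_t len;
	char data[];
};

struct example_request {
	struct example_msg *request;
	struct example_msg *reply;	/* preallocated with the request */
};

static struct example_request *example_request_alloc(size_t req_len,
						     size_t reply_len,
						     gfp_t gfp)
{
	struct example_request *req;

	req = kzalloc(sizeof(*req), gfp);
	if (!req)
		return NULL;

	req->request = kzalloc(sizeof(*req->request) + req_len, gfp);
	req->reply = kzalloc(sizeof(*req->reply) + reply_len, gfp);
	if (!req->request || !req->reply) {
		kfree(req->request);
		kfree(req->reply);
		kfree(req);
		return NULL;
	}

	/*
	 * The reply buffer exists before the request goes out, so the
	 * receive path never needs to allocate (or dip into a msgpool)
	 * to accept the matching reply.
	 */
	return req;
}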

I'll work on cleaning up the request/reply instances to avoid pools 
altogether.  The remaining pools should be small enough to use the 
standard mempool_t.
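
(For those small pools, something along these lines should do, assuming a
slab-backed pool with a handful of reserved elements.  The cache name,
object size and reserve count are placeholders.)

#include <linux/mempool.h>
#include <linux/slab.h>
#include <linux/errno.h>

static struct kmem_cache *msg_cache;
static mempool_t *msg_pool;

static int msg_pool_init(void)
{
	msg_cache = kmem_cache_create("example_msg", 256, 0, 0, NULL);
	if (!msg_cache)
		return -ENOMEM;

	/*
	 * The pool is only a low-memory safety net, so a reserve of 2-3
	 * elements is plenty, as Neil notes above.
	 */
	msg_pool = mempool_create_slab_pool(3, msg_cache);
	if (!msg_pool) {
		kmem_cache_destroy(msg_cache);
		return -ENOMEM;
	}
	return 0;
}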

Thanks for looking into this!
sage