Message-Id: <1C8E6A7A-45FE-4862-A6AD-397548588F96@gmail.com>
Date:	Thu, 24 Jun 2010 12:35:19 +0300
From:	"Henri Häkkinen" <henrih81@...il.com>
To:	Alan Cox <alan@...rguk.ukuu.org.uk>
Cc:	gregkh@...e.de, ossama.othman@...el.com,
	Matti Lammi <mattij.lammi@...il.com>, randy.dunlap@...cle.com,
	devel@...verdev.osuosl.org, linux-kernel@...r.kernel.org
Subject: Fwd: [PATCH] Staging: memrar: Moved memrar_allocator struct into memrar_allocator.c

On 24.6.2010, at 12.09, Alan Cox wrote:

>> size_t memrar_allocator_largest_free_area(struct memrar_allocator *allocator)
>> {
>> -	if (allocator == NULL)
>> -		return 0;
>> -	return allocator->largest_free_area;
>> +	size_t tmp = 0;
>> +
>> +	if (allocator != NULL) {
>> +		mutex_lock(&allocator->lock);
>> +		tmp = allocator->largest_free_area;
>> +		mutex_unlock(&allocator->lock);
> 
> This doesn't seem to make any sense (in either version). The moment you
> drop the lock the value in "tmp" becomes stale as the allocator could
> change it. ?
> 

The idea was proposed by Ossama Othman in his earlier reply.
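
For reference, here is a standalone userspace sketch of the point Alan raises, with pthreads standing in for the kernel mutex API and a guessed struct layout rather than the actual memrar driver code: the accessor returns a consistent snapshot, but nothing stops the allocator from changing the field once the lock is dropped, so the caller only ever gets a point-in-time value.

/*
 * Sketch only: pthreads replace mutex_lock()/mutex_unlock(), and the
 * struct layout is an assumption, not the real memrar_allocator.
 */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

struct memrar_allocator {
	pthread_mutex_t lock;		/* guards largest_free_area */
	size_t largest_free_area;
};

static size_t largest_free_area_snapshot(struct memrar_allocator *allocator)
{
	size_t tmp = 0;

	if (allocator != NULL) {
		pthread_mutex_lock(&allocator->lock);
		tmp = allocator->largest_free_area;
		pthread_mutex_unlock(&allocator->lock);
	}

	/*
	 * Another thread may change largest_free_area from here on; "tmp"
	 * is a point-in-time value, fine for statistics but not for
	 * decisions that must stay in sync with the allocator.
	 */
	return tmp;
}

int main(void)
{
	struct memrar_allocator a = { .largest_free_area = 4096 };

	pthread_mutex_init(&a.lock, NULL);
	printf("largest free area (snapshot): %zu\n",
	       largest_free_area_snapshot(&a));
	pthread_mutex_destroy(&a.lock);
	return 0;
}

Built with cc -pthread, this only demonstrates the access pattern; the driver itself does the same thing with mutex_lock()/mutex_unlock() on allocator->lock.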


Begin forwarded message:

> From: "Othman, Ossama" <ossama.othman@...el.com>
> To: Henri Häkkinen <henuxd@...il.com>, "gregkh@...e.de" <gregkh@...e.de>, "randy.dunlap@...cle.com" <randy.dunlap@...cle.com>, "alan@...ux.intel.com" <alan@...ux.intel.com>
> Cc: "devel@...verdev.osuosl.org" <devel@...verdev.osuosl.org>, "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
> Subject: RE: [PATCH] Staging: memrar: Moved memrar_allocator struct into memrar_allocator.c
> 
> Hi,
> 
>> Forward declared memrar_allocator in memrar_allocator.h and moved it
>> to memrar_allocator.c file.  Implemented memrar_allocator_capacity(),
>> memrar_allocator_largest_free_area(), memrar_allocator_lock() and
>> memrar_allocator_unlock().
> ...
>> -	mutex_lock(&allocator->lock);
>> -	r->largest_block_size = allocator->largest_free_area;
>> -	mutex_unlock(&allocator->lock);
>> +	memrar_allocator_lock(allocator);
>> +	r->largest_block_size =
>> +		memrar_allocator_largest_free_area(allocator);
>> +	memrar_allocator_unlock(allocator);
> 
> I don't think it's necessary to expose the allocator lock.  Why not just grab the lock in memrar_allocator_largest_free_area() while the underlying struct field is being accessed and then unlock it before that function returns?  That would allow the allocator lock to remain an internal implementation detail.  We only need to ensure access to the struct field itself is synchronized, e.g.:
> 
> size_t memrar_allocator_largest_free_area(struct memrar_allocator *allocator)
> {
> 	size_t tmp = 0;
> 
> 	if (allocator != NULL) {
> 		mutex_lock(&allocator->lock);
> 		tmp = allocator->largest_free_area;
> 		mutex_unlock(&allocator->lock);
> 	}
> 
> 	return tmp;
> }
> 
> Certainly the allocator->largest_free_area value could be updated after the lock is released and before it is returned to the user (for statistical purposes), but at least the internal allocator state would remain consistent in the presence of multiple threads.
> 
> HTH,
> -Ossama
> 
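
As background for the refactoring the quoted patch description mentions, here is a minimal sketch of the opaque-type split it aims for: memrar_allocator.h only forward-declares struct memrar_allocator and exposes accessors, while the full definition and the locking stay in memrar_allocator.c. This is again a userspace approximation with pthreads; the capacity field and its accessor follow the quoted description, but the exact layout is an assumption.

#include <pthread.h>
#include <stddef.h>

/*
 * memrar_allocator.h (sketch): callers see only an incomplete type and the
 * accessor prototypes, so they cannot reach into the struct directly.
 */
struct memrar_allocator;
size_t memrar_allocator_capacity(struct memrar_allocator *allocator);
size_t memrar_allocator_largest_free_area(struct memrar_allocator *allocator);

/*
 * memrar_allocator.c (sketch): the definition and the lock stay private.
 * Field names other than largest_free_area are assumptions.
 */
struct memrar_allocator {
	pthread_mutex_t lock;		/* guards the statistics below */
	size_t capacity;
	size_t largest_free_area;
};

size_t memrar_allocator_capacity(struct memrar_allocator *allocator)
{
	size_t tmp = 0;

	if (allocator != NULL) {
		pthread_mutex_lock(&allocator->lock);
		tmp = allocator->capacity;
		pthread_mutex_unlock(&allocator->lock);
	}

	return tmp;
}

/*
 * memrar_allocator_largest_free_area() would look the same as the example
 * quoted above, using mutex_lock()/mutex_unlock() in the actual driver.
 */

With that split, mutex_lock(&allocator->lock) never appears outside memrar_allocator.c, which is the point Ossama makes about keeping the lock an internal implementation detail.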

