Message-ID: <7035e8a9-4bcd-bc87-4272-7efa6ed5ac53@redhat.com>
Date:   Sat, 17 Apr 2021 17:11:46 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Theodore Ts'o <tytso@....edu>,
        Wedson Almeida Filho <wedsonaf@...gle.com>
Cc:     Peter Zijlstra <peterz@...radead.org>, ojeda@...nel.org,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        rust-for-linux@...r.kernel.org, linux-kbuild@...r.kernel.org,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 00/13] [RFC] Rust support

On 16/04/21 17:58, Theodore Ts'o wrote:
> Another fairly common use case is a lockless, racy test of a
> particular field, as an optimization before we take the lock and
> test it for realsies.  In this particular case, we can't allocate
> memory while holding a spinlock, so we check without taking the
> spinlock whether we should allocate memory (which is expensive,
> and unnecessary most of the time):
> 
> alloc_transaction:
> 	/*
> 	 * This check is racy but it is just an optimization of allocating new
> 	 * transaction early if there are high chances we'll need it. If we
> 	 * guess wrong, we'll retry or free the unused transaction.
> 	 */
> 	if (!data_race(journal->j_running_transaction)) {
> 		/*
> 		 * If __GFP_FS is not present, then we may be being called from
> 		 * inside the fs writeback layer, so we MUST NOT fail.
> 		 */
> 		if ((gfp_mask & __GFP_FS) == 0)
> 			gfp_mask |= __GFP_NOFAIL;
> 		new_transaction = kmem_cache_zalloc(transaction_cache,
> 						    gfp_mask);
> 		if (!new_transaction)
> 			return -ENOMEM;
> 	}

From my limited experience with Rust, things like these are indeed a 
bit annoying; sooner or later Mutex<> just doesn't cut it and you have 
to deal with its limitations.

In this particular case you would use an AtomicBool field, place it 
outside the Mutex-protected struct, and make sure that it is only 
written under the lock, just like in C.
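
A rough sketch of that shape, in plain userspace Rust with std types 
(the names running_hint, JournalInner and start_transaction are 
invented for the example, not taken from the kernel crate):

    use std::sync::Mutex;
    use std::sync::atomic::{AtomicBool, Ordering};

    struct Transaction { /* fields elided */ }

    struct JournalInner {
        running: Option<Transaction>,
    }

    struct Journal {
        // Racy hint kept outside the Mutex so it can be read
        // without taking the lock.
        running_hint: AtomicBool,
        inner: Mutex<JournalInner>,
    }

    impl Journal {
        fn start_transaction(&self) {
            // Lockless, racy read: purely an optimization, like the
            // data_race(journal->j_running_transaction) check above.
            let mut new_transaction = None;
            if !self.running_hint.load(Ordering::Relaxed) {
                // Allocate before taking the lock; if we guessed
                // wrong, the unused allocation is simply dropped.
                new_transaction = Some(Transaction { /* ... */ });
            }

            let mut inner = self.inner.lock().unwrap();
            if inner.running.is_none() {
                if let Some(t) = new_transaction.take() {
                    inner.running = Some(t);
                    // The hint is only written while holding the lock.
                    self.running_hint.store(true, Ordering::Relaxed);
                }
                // (A real caller would retry here if the racy check
                // guessed wrong, as the jbd2 comment says.)
            }
        }
    }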
One easy way out is to make the Mutex protect (officially) nothing, i.e. 
Mutex<()>, and handle the mutable fields yourself using RefCell (which 
gives you run-time checking but has some space cost) or UnsafeCell 
(which is unsafe as the name says).  Rust makes it pretty easy to write 
smart pointers (Mutex<>'s lock guard itself is a smart pointer), so you 
also have the possibility of writing a safe wrapper for the combination 
of Mutex<()> and UnsafeCell.
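
Such a wrapper could look roughly like this in userspace std Rust 
(LockedCell and LockedCellGuard are made-up names; the kernel 
abstractions would of course differ):

    use std::cell::UnsafeCell;
    use std::ops::{Deref, DerefMut};
    use std::sync::{Mutex, MutexGuard};

    // The Mutex officially protects nothing; the data sits next to
    // it in an UnsafeCell, and the guard is the only way to reach it.
    struct LockedCell<T> {
        lock: Mutex<()>,
        data: UnsafeCell<T>,
    }

    // Safety: the data is only handed out through LockedCellGuard,
    // which keeps the MutexGuard alive as long as the reference.
    unsafe impl<T: Send> Sync for LockedCell<T> {}

    struct LockedCellGuard<'a, T> {
        _guard: MutexGuard<'a, ()>,
        data: &'a UnsafeCell<T>,
    }

    impl<T> LockedCell<T> {
        fn new(value: T) -> Self {
            LockedCell {
                lock: Mutex::new(()),
                data: UnsafeCell::new(value),
            }
        }

        fn lock(&self) -> LockedCellGuard<'_, T> {
            LockedCellGuard {
                _guard: self.lock.lock().unwrap(),
                data: &self.data,
            }
        }
    }

    impl<'a, T> Deref for LockedCellGuard<'a, T> {
        type Target = T;
        fn deref(&self) -> &T {
            // Sound because the lock is held for the guard's lifetime.
            unsafe { &*self.data.get() }
        }
    }

    impl<'a, T> DerefMut for LockedCellGuard<'a, T> {
        fn deref_mut(&mut self) -> &mut T {
            unsafe { &mut *self.data.get() }
        }
    }

As written this mostly reinvents Mutex<T>, but the same pattern works 
when the UnsafeCell cannot live next to the lock, which is what the 
cases above need.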

Another example is when you have a list of XYZ objects and use the same 
mutex for both the list itself and a field in struct XYZ.  You could 
place that field in an UnsafeCell and write a function that receives a 
guard for the list lock and returns the field, or something like that. 
It *is* quite ugly though.
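
Something along these lines, again with hypothetical names (Xyz, state, 
state_mut) and userspace std types:

    use std::cell::UnsafeCell;
    use std::sync::{Arc, Mutex, MutexGuard};

    struct Xyz {
        // Protected by the *list* lock, not by a lock of its own.
        state: UnsafeCell<u32>,
    }

    // Safety: `state` is only reached through state_mut(), which
    // demands proof (the guard) that the list lock is held.
    unsafe impl Sync for Xyz {}

    struct XyzList {
        list: Mutex<Vec<Arc<Xyz>>>,
    }

    impl Xyz {
        // The guard argument only proves that the list lock is held;
        // borrowing it mutably also keeps two such references from
        // coexisting.
        fn state_mut<'a>(
            &'a self,
            _list_guard: &'a mut MutexGuard<'_, Vec<Arc<Xyz>>>,
        ) -> &'a mut u32 {
            unsafe { &mut *self.state.get() }
        }
    }

    fn bump_first(xl: &XyzList) {
        let mut guard = xl.list.lock().unwrap();
        let first = guard.first().cloned();
        if let Some(xyz) = first {
            *xyz.state_mut(&mut guard) += 1;
        }
    }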

As an aside, from a teaching standpoint, associating a Mutex with a 
specific data structure is bad IMNSHO, because it encourages overly 
fine-grained locking.  Sometimes the easiest path to scalability is to 
use a coarser lock and ensure that contention is extremely rare. 
But it does work for most simple use cases (and device drivers would 
qualify as simple more often than not).

Paolo
