Message-ID: <f0c46aba-9756-5f05-a843-51bc4898a313@redhat.com>
Date: Mon, 30 Jun 2025 17:45:18 +0200 (CEST)
From: Mikulas Patocka <mpatocka@...hat.com>
To: Dongsheng Yang <dongsheng.yang@...ux.dev>
cc: agk@...hat.com, snitzer@...nel.org, axboe@...nel.dk, hch@....de, 
    dan.j.williams@...el.com, Jonathan.Cameron@...wei.com, 
    linux-block@...r.kernel.org, linux-kernel@...r.kernel.org, 
    linux-cxl@...r.kernel.org, nvdimm@...ts.linux.dev, 
    dm-devel@...ts.linux.dev
Subject: Re: [PATCH v1 00/11] dm-pcache – persistent-memory cache for block devices



On Mon, 30 Jun 2025, Dongsheng Yang wrote:

> Hi Mikulas,
> 
>     The reason why we don’t release the spinlock here is that if we do, the
> subtree could change.
> 
> For example, in the `fixup_overlap_contained()` function, we may need to split
> a certain `cache_key`, and that requires allocating a new `cache_key`.
> 
> If we drop the spinlock at this point and then re-acquire it after the
> allocation, the subtree might already have been modified, and we cannot safely
> continue with the split operation.

Yes, I understand this.

>     In this case, we would have to restart the entire subtree search and
> walk. But the new walk might require more memory, or less, so it's very
> difficult to know in advance how much memory will be needed before
> acquiring the spinlock.
> 
>     So allocating memory while holding the spinlock is actually a more
> direct and feasible approach. `GFP_NOWAIT` fails too easily; maybe
> `GFP_ATOMIC` is more appropriate.

Even GFP_ATOMIC can fail. And it is not acceptable to return an error and 
corrupt data when GFP_ATOMIC fails.
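
To illustrate (hypothetical names, not the code from your series): even with 
GFP_ATOMIC, an allocation done under the lock can return NULL, and at that 
point there is no good way out:

	spin_lock(&cache->key_lock);
	...
	new_key = kmem_cache_alloc(key_cache, GFP_ATOMIC);
	if (!new_key) {
		/*
		 * GFP_ATOMIC only dips into the atomic reserves, it does
		 * not wait for reclaim, so this path will be hit under
		 * memory pressure - and the only options left are failing
		 * the bio or leaving the key tree half-updated.
		 */
		spin_unlock(&cache->key_lock);
		return -ENOMEM;
	}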

> What do you think?

If you need memory, you should drop the spinlock, allocate the memory 
(with mempool_alloc(GFP_NOIO)), attach the allocated memory to "struct 
pcache_request" and retry the request (call pcache_cache_handle_req 
again).

When you retry the request, there are these possibilities:
* you don't need the memory anymore - then, you just free it
* you need the amount of memory that was allocated - you just proceed 
  while holding the spinlock
* you need more memory than what you allocated - you drop the spinlock, 
  free the memory, allocate a larger memory block and retry again
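
In code, the drop-and-retry loop would look roughly like this (struct 
pcache_request and pcache_cache_handle_req are from your series; the 
"prealloc_key" field, the "key_pool" mempool and the -EAGAIN convention are 
only illustrations):

	/* in the caller, with the spinlock not held */
	while (1) {
		ret = pcache_cache_handle_req(cache, pcache_req);
		if (ret != -EAGAIN)
			break;
		/*
		 * handle_req dropped the spinlock because it ran out of
		 * preallocated memory; refill the reserve (a sleeping
		 * allocation is fine here) and walk the tree again.
		 */
		if (pcache_req->prealloc_key)
			mempool_free(pcache_req->prealloc_key, cache->key_pool);
		pcache_req->prealloc_key =
			mempool_alloc(cache->key_pool, GFP_NOIO);
	}

	/* the reserve was not consumed on the final pass */
	if (pcache_req->prealloc_key)
		mempool_free(pcache_req->prealloc_key, cache->key_pool);

This assumes one spare cache_key per retry is enough; if a single walk can 
need several, the same scheme works with a short list of preallocated keys 
that the walk consumes and the retry refills.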

Mikulas
