Date:   Sat, 9 Apr 2022 15:01:25 +0800
From:   "yukuai (C)" <yukuai3@...wei.com>
To:     Bart Van Assche <bvanassche@....org>, <axboe@...nel.dk>,
        <andriy.shevchenko@...ux.intel.com>, <john.garry@...wei.com>,
        <ming.lei@...hat.com>
CC:     <linux-block@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <yi.zhang@...wei.com>
Subject: Re: [PATCH -next RFC v2 8/8] sbitmap: wake up the number of threads
 based on required tags

On 2022/04/09 12:16, Bart Van Assche wrote:
> On 4/8/22 19:17, yukuai (C) wrote:
>> I think the reason to wake up 'wake_batch' waiters is to make sure
>> the woken waiters will use up the 'wake_batch' tags that were just
>> freed, because each woken waiter should acquire at least one tag.
>> Thus I think that if we can make sure the woken waiters will use up
>> 'wake_batch' tags, it's OK to wake up fewer waiters.
> 
> Hmm ... I think it's up to you to (a) explain this behavior change in 
> detail in the commit message and (b) to prove that this behavior change 
> won't cause trouble (I guess this change will cause trouble).

Hi, Bart

Sorry that the commit message doesn't explain this clearly.

After this patch, there are only two situations in which fewer than
'wake_batch' waiters are woken (a concrete example follows below):

(a) some waiters will acquire multiple tags; as I mentioned above, this
is OK because the woken waiters will still use up 'wake_batch' tags in
total.

(b) the total number of waiters is less than 'wake_batch'; this is
problematic if tag preemption is disabled, because IO concurrency will
decline. (Patch 5 should fix that problem.)
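
To illustrate with my own numbers (not from the patch), suppose
wake_batch is 4 and 4 tags have just been freed:

In situation (a), the first waiter on the list wants 4 tags, so waking
that single waiter is enough: it will use up all 4 freed tags by itself.

In situation (b), only two waiters are queued, each wanting 1 tag, so
only those two can be woken, and 2 of the freed tags stay unused until
more threads start to wait.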

As for the race in situation (b) where new threads start to wait after
get_wake_nr() and before wake_up_nr(), I can't figure out how it could
be problematic; however, it can be mitigated by triggering additional
wake-ups:

@@ -623,15 +623,17 @@ static unsigned int get_wake_nr(struct sbq_wait_state *ws, unsigned int nr_tags)
         spin_lock_irq(&ws->wait.lock);
         list_for_each_entry(entry, &ws->wait.head, entry) {
                 wait = container_of(entry, struct sbq_wait, wait);
-               if (nr_tags <= wait->nr_tags)
+               if (nr_tags <= wait->nr_tags) {
+                       nr_tags = 0;
                         break;
+               }

                 nr++;
                 nr_tags -= wait->nr_tags;
         }
         spin_unlock_irq(&ws->wait.lock);

-       return nr;
+       return nr + nr_tags;
  }
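
To make the resulting behavior easier to follow, here is a sketch of the
whole helper with the change applied. The hunk above doesn't show the
declarations or the initial value of 'nr', so those parts are my
reconstruction rather than a copy of the patch ('nr' is assumed to start
at 1 so that the waiter that hits the break is also counted);
'wait->nr_tags' is the per-waiter tag count introduced earlier in this
series:

static unsigned int get_wake_nr(struct sbq_wait_state *ws,
				unsigned int nr_tags)
{
	struct sbq_wait *wait;
	struct wait_queue_entry *entry;
	unsigned int nr = 1;	/* assumed: count the waiter that breaks the loop */

	spin_lock_irq(&ws->wait.lock);
	list_for_each_entry(entry, &ws->wait.head, entry) {
		wait = container_of(entry, struct sbq_wait, wait);
		if (nr_tags <= wait->nr_tags) {
			/* This waiter alone will use up the remaining tags. */
			nr_tags = 0;
			break;
		}

		nr++;
		nr_tags -= wait->nr_tags;
	}
	spin_unlock_irq(&ws->wait.lock);

	/*
	 * If the loop ran off the end of the list, 'nr_tags' is the surplus
	 * that no queued waiter will consume; ask for that many additional
	 * wake-ups so that threads that start to wait during the race
	 * window are woken as well.
	 */
	return nr + nr_tags;
}

With the example above (two waiters wanting 1 tag each, 4 tags freed),
the walk leaves nr_tags = 2 unconsumed, so the return value now includes
two extra wake-ups to cover late arrivals.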

What do you think?

Thanks,
Kuai

> 
> Thanks,
> 
> Bart.
> 
