Message-ID: <87r080306d.ffs@tglx>
Date: Mon, 28 Oct 2024 13:02:34 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
 linux-kernel@...r.kernel.org, André Almeida
 <andrealmeid@...lia.com>,
 Darren Hart <dvhart@...radead.org>, Davidlohr Bueso <dave@...olabs.net>,
 Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
 Valentin Schneider <vschneid@...hat.com>, Waiman Long <longman@...hat.com>
Subject: Re: [RFC PATCH 2/3] futex: Add basic infrastructure for local task
 local hash.

On Mon, Oct 28 2024 at 12:00, Peter Zijlstra wrote:
> On Mon, Oct 28, 2024 at 11:58:18AM +0100, Thomas Gleixner wrote:
>> > Let me post v2 of the signal_struct and then think about auto ON.
>> 
>> It only affects actual futex users. A lot of executables never use
>> them. For ease of testing, can you please make this automatic so there
>> is no need to modify a test case?
>> 
>> Aside from that, for RT we really want it automatically enabled, and
>> as Linus suggested back then, probably for NUMA too.
>> 
>> Stick a tracepoint or a debugfs counter into the allocation so you can
>> observe how many of those are actually allocated and used concurrently.
>
> Ideally it would re-hash and auto-scale to something like 4*nr_threads,
> but I realize that's probably a pain in the arse to make happen.

That's what we did with the original series, but with this model it's
daft. What we maybe could do there is:

private_hash()
   // Fast path: try to grab a reference on the live hash under RCU
   scoped_guard(rcu) {
      hash = rcu_dereference(current->signal->futex_hash);
      if (hash && rcuref_get(&hash->ref))
         return hash;
   }

   // Slow path: serialize against fork/resize on sighand lock
   guard(spinlock_irq)(&current->sighand->siglock);
   hash = current->signal->futex_hash;
   if (hash && rcuref_get(&hash->ref))
      return hash;
   // Let alloc scale according to signal->nr_threads
   // alloc acquires a reference count
   ....
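
For completeness, alloc() would then size the hash according to Peter's
suggestion. A sketch only; futex_alloc_hash(), the struct layout and the
hash_mask/queues fields are all invented here:

// Purely illustrative. Called under sighand lock in the sketch above,
// hence GFP_ATOMIC; a real implementation would rather allocate
// outside of the lock.
static struct futex_private_hash *futex_alloc_hash(struct signal_struct *sig)
{
   // 4 * nr_threads, rounded up to a power of two so lookups can
   // mask instead of modulo
   unsigned int slots = roundup_pow_of_two(4 * sig->nr_threads);
   struct futex_private_hash *hash;

   hash = kzalloc(struct_size(hash, queues, slots), GFP_ATOMIC);
   if (!hash)
      return NULL;

   hash->hash_mask = slots - 1;
   // The allocation hands out the initial reference
   rcuref_init(&hash->ref, 1);
   return hash;
}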

And on fork do the following:

   scoped_guard(spinlock_irq, &current->sighand->siglock) {
      hash = current->signal->futex_hash;
      if (!hash || hash_size_ok())
         return hash;

      // Drop the initial reference, which forces the last
      // user and subsequent new users into the respective
      // slow paths, where they get stuck on sighand lock.
      // If this was not the last reference, the last user
      // does the resize in futex_hash_put().
      if (!rcuref_put(&hash->ref))
         return;

      // rcuref_put() dropped the last reference
      old_hash = realloc_hash(hash);
      hash = current->signal->futex_hash;
   }
   kfree_rcu(old_hash);
   return hash;
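
hash_size_ok() (with the hash handed in) is then just the inverse of
that sizing heuristic. Again a sketch with made up field names:

// Illustrative only. hash_mask is assumed to be slots - 1 as in the
// alloc sketch above. Caller holds sighand lock, so nr_threads is
// stable.
static inline bool hash_size_ok(struct futex_private_hash *hash)
{
   return hash->hash_mask + 1 >= 4 * current->signal->nr_threads;
}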

Similar logic is required when putting the last reference:

futex_hash_put(hash)
{
   if (!rcuref_put(&hash->ref))
      return;

   // Last reference dropped. Resize unless fork beat us to it.
   scoped_guard(spinlock_irq, &current->sighand->siglock) {
      // Fork might have raced and already replaced the hash
      if (hash != current->signal->futex_hash)
         return;
      old_hash = realloc_hash(hash);
   }
   kfree_rcu(old_hash);
}

realloc_hash(old_hash)
{
   new_hash = alloc();
   if (!new_hash) {
      // Allocation failed. Make the old hash alive again
      rcuref_init(&old_hash->ref, 1);
      return NULL;
   }
   rehash(old_hash, new_hash);
   rcu_assign_pointer(current->signal->futex_hash, new_hash);
   return old_hash;
}

Or something like that. At the usage sites this needs:

   // Takes a reference count on the hash
   hb = futex_hash(key);

   lock(hb);
   queue();
   unlock(hb);
   futex_hash_put(hb);

which means that after the put @hb is no longer valid, as the rehashing
might happen right in the put or afterwards. That needs some auditing of
the usage sites, but it should work. Whether it's worth it is a
different question.
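
As for the counter I asked for above: a debugfs atomic is completely
sufficient for that experiment. Sketch, names invented:

// Counts the private hashes which are alive concurrently. Increment
// in the allocation path, decrement when a hash is freed.
static atomic_t futex_private_hashes = ATOMIC_INIT(0);

static int __init futex_hash_debugfs_init(void)
{
   debugfs_create_atomic_t("futex_private_hashes", 0444, NULL,
                           &futex_private_hashes);
   return 0;
}
late_initcall(futex_hash_debugfs_init);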

Thanks,

        tglx
