Message-ID: <20250526053900.asTaMltl@linutronix.de>
Date: Mon, 26 May 2025 07:39:00 +0200
From: Nam Cao <namcao@...utronix.de>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Alexander Viro <viro@...iv.linux.org.uk>,
	Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>,
	John Ogness <john.ogness@...utronix.de>,
	Clark Williams <clrkwllms@...nel.org>,
	Steven Rostedt <rostedt@...dmis.org>, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-rt-devel@...ts.linux.dev,
	linux-rt-users@...r.kernel.org, Joe Damato <jdamato@...tly.com>,
	Martin Karsten <mkarsten@...terloo.ca>,
	Jens Axboe <axboe@...nel.dk>,
	Frederic Weisbecker <frederic@...nel.org>,
	Valentin Schneider <vschneid@...hat.com>
Subject: Re: [PATCH v2] eventpoll: Fix priority inversion problem

On Fri, May 23, 2025 at 02:26:11PM +0200, Sebastian Andrzej Siewior wrote:
> On 2025-05-23 08:11:04 [+0200], Nam Cao wrote:
> On the AMD machine I tried:
> Unpatched:
> | $ perf bench epoll all 2>&1 | grep -v "^\["
> | # Running epoll/wait benchmark...
> | Run summary [PID 3019]: 255 threads monitoring on 64 file-descriptors for 8 secs.
> |
> |
> | Averaged 785 operations/sec (+- 0.05%), total secs = 8
> |
> | # Running epoll/ctl benchmark...
> | Run summary [PID 3019]: 256 threads doing epoll_ctl ops 64 file-descriptors for 8 secs.
> |
> |
> | Averaged 2652 ADD operations (+- 1.19%)
> | Averaged 2652 MOD operations (+- 1.19%)
> | Averaged 2652 DEL operations (+- 1.19%)
> 
> Patched:
> | $ perf bench epoll all 2>&1 | grep -v "^\["
> | # Running epoll/wait benchmark...
> | Run summary [PID 3001]: 255 threads monitoring on 64 file-descriptors for 8 secs.
> | 
> | 
> | Averaged 1386 operations/sec (+- 3.94%), total secs = 8
> | 
> | # Running epoll/ctl benchmark...
> | Run summary [PID 3001]: 256 threads doing epoll_ctl ops 64 file-descriptors for 8 secs.
> | 
> | 
> | Averaged 1495 ADD operations (+- 1.11%)
> | Averaged 1495 MOD operations (+- 1.11%)
> | Averaged 1495 DEL operations (+- 1.11%)
> 
> The epoll_wait numbers improve again, the epoll_ctl numbers do not. I'm not
> sure how to read the latter. My guess would be that ADD/MOD are fine but DEL
> is a bit worse because it has to del, iterate, …, add back.

Yeah, EPOLL_CTL_DEL is clearly worse. But epoll_ctl() is not
performance-critical, so I wouldn't worry about it.
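
Just to make the cost concrete anyway, here is a stand-alone userspace sketch
(made-up types and helpers, not the eventpoll code): with a singly-linked
lock-free list there is no O(1) unlink of an arbitrary node, so a delete has
to take the whole list, drop the one node and push the rest back.

#include <stddef.h>
#include <stdio.h>

struct node {
	int fd;                 /* stand-in for the epitem's target fd */
	struct node *next;      /* single link, like llist_node */
};

/* Detach the entire list in one step (what llist_del_all() does atomically). */
static struct node *take_all(struct node **head)
{
	struct node *all = *head;

	*head = NULL;
	return all;
}

/* Push one node to the front (what llist_add() does atomically). */
static void add_front(struct node **head, struct node *n)
{
	n->next = *head;
	*head = n;
}

/* Delete @fd: detach everything, filter, and re-add the survivors. */
static void del_fd(struct node **head, int fd)
{
	struct node *n, *next;

	for (n = take_all(head); n; n = next) {
		next = n->next;
		if (n->fd != fd)        /* keep every other item */
			add_front(head, n);
		/* else: the target simply stays detached */
	}
}

int main(void)
{
	struct node a = { .fd = 3 }, b = { .fd = 4 }, c = { .fd = 5 };
	struct node *ready = NULL, *n;

	add_front(&ready, &a);
	add_front(&ready, &b);
	add_front(&ready, &c);

	del_fd(&ready, 4);      /* the slow path: walks every item */

	for (n = ready; n; n = n->next)
		printf("still ready: fd %d\n", n->fd);
	return 0;
}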

> > diff --git a/fs/eventpoll.c b/fs/eventpoll.c
> > index d4dbffdedd08e..483a5b217fad4 100644
> > --- a/fs/eventpoll.c
> > +++ b/fs/eventpoll.c
> > @@ -136,14 +136,29 @@ struct epitem {
> >  		struct rcu_head rcu;
> >  	};
> >  
> > -	/* List header used to link this structure to the eventpoll ready list */
> > -	struct list_head rdllink;
> > +	/*
> > +	 * Whether epitem.rdllink is currently used in a list. When used, it cannot be detached or
> 
> Notation-wise, I would either use plain "rdllink" or the C++ notation
> "epitem::rdllink".
> 
> > +	 * inserted elsewhere.
> 
> When set, it is attached to eventpoll::rdllist and cannot be attached
> again.
> This has nothing to do with detaching.
> 
> > +	 * It may be in use for two reasons:
> > +	 *
> > +	 * 1. This item is on the eventpoll ready list.
> > +	 * 2. This item is being consumed by a waiter and stashed on a temporary list. If inserting
> > +	 *    is blocked due to this reason, the waiter will add this item to the list once
> > +	 *    consuming is done.
> > +	 */
> > +	bool link_used;
> >  
> >  	/*
> > -	 * Works together "struct eventpoll"->ovflist in keeping the
> > -	 * single linked chain of items.
> > +	 * Indicate whether this item is ready for consumption. All items on the ready list has this
>                                                                                            have
> > +	 * flag set. Item that should be on the ready list, but cannot be added because of
> > +	 * link_used (in other words, a waiter is consuming the ready list), also has this flag
> > +	 * set. When a waiter is done consuming, the waiter will add ready items to the ready list.
> 
> This sounds confusing. What about:
> 
> | Ready items should be on eventpoll::rdllist. This might not be the case
> | if a waiter is consuming the list and has temporarily removed all items
> | while doing so. Once done, the item will be added back to eventpoll::rdllist.
> 
> The reason is that an item is removed from the list: you have to remove
> them all, look for the right one, drop it, and splice what is left back
> onto the original list.
> I did not find another reason for that.

Thanks for the comments. However, while looking at them again, I think I am
overcomplicating things with these flags.

Instead of "link_used", I could take advantage of llist_node::next. Instead
of "ready", I could do another ep_item_poll().

Therefore I am removing them in v3, so there won't be any more confusion
with these flags.

Thanks for the review, I will resolve your other comments in v3.
Nam
