Message-ID: <20250613202937.679-1-khaliidcaliy@gmail.com>
Date: Fri, 13 Jun 2025 20:28:49 +0000
From: Khalid Ali <khaliidcaliy@...il.com>
To: tglx@...utronix.de,
	peterz@...radead.org,
	luto@...nel.org
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH] kernel/entry: Remove some redundancy checks on syscall works

> On Wed, Jun 11 2025 at 11:43, Khalid Ali wrote:
> > There are redundant checks of thread syscall work.

> Not really.
>
> >  After we read thread syscall work we are checking the work bits using
>
> We are doing nothing. Please write your changelogs in imperative mood
> and do not try to impersonate code.

Sorry, I guess my English sucks.

> > SYSCALL_WORK_ENTER and SYSCALL_WORK_EXIT on syscall entry and exit
> > respectively, and at the same time syscall_trace_enter() and
> > syscall_exit_work() checking bits one by one, the bits we already checked.
> > This is redundant. So either we need to check the work bits one by one as
> > I did, or check them as a whole. From my perspective, I think the way the
> > code is implemented now, checking work bits one by one, is simpler and
> > gives us more granular control.

> That's just wrong and absolutely not redundant. Care to look at the
> definition of SYSCALL_WORK_ENTER:
>
> #define SYSCALL_WORK_ENTER	(SYSCALL_WORK_SECCOMP |			\
>				 SYSCALL_WORK_SYSCALL_TRACEPOINT |	\
>				 SYSCALL_WORK_SYSCALL_TRACE |		\
>				 SYSCALL_WORK_SYSCALL_EMU |		\
>				 SYSCALL_WORK_SYSCALL_AUDIT |		\
>				 SYSCALL_WORK_SYSCALL_USER_DISPATCH |	\
>				 ARCH_SYSCALL_WORK_ENTER)
>
> So this initial check avoids:
>
>    1) Doing an unconditional out of line call
>
>    2) Checking bit for bit to figure out that there is none set.
>
> Same applies for SYSCALL_WORK_EXIT.
>
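[ For reference, a rough sketch of the fast path being described here,
  modeled on syscall_enter_from_user_mode_work() in kernel/entry/common.c
  (exact names may differ between trees):

        unsigned long work = READ_ONCE(current_thread_info()->syscall_work);

        /*
         * One mask test on the fast path: the out-of-line call and the
         * bit-by-bit checks happen only when at least one work bit is set.
         */
        if (work & SYSCALL_WORK_ENTER)
                syscall = syscall_trace_enter(regs, syscall, work);

  The exit path does the same with SYSCALL_WORK_EXIT before calling
  syscall_exit_work(). ]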
> Your change neither makes anything simpler nor provides more granular
> control.
>
> All it does is add overhead, and it is therefore guaranteed to introduce
> a performance regression.
>
> Not going to happen.
>
> Thanks,
>
>        tglx
Thanks for the response; I have noted all your points. However, I spotted
some minor details as well:

First, if we are talking about performance, then we may want a likely()
hint on the SYSCALL_WORK_ENTER check, since the probability of the
condition evaluating as true is very high.
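Concretely, the proposal is just adding the hint at the existing check
(a sketch only; whether the hint is actually justified would need to be
measured):

        if (likely(work & SYSCALL_WORK_ENTER))
                syscall = syscall_trace_enter(regs, syscall, work);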

Second, syscall_enter_audit() is missing a SYSCALL_WORK_SYSCALL_AUDIT
check: aren't we supposed to call it only if SYSCALL_WORK_SYSCALL_AUDIT
is set?
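Concretely, I mean something like this at the call site in
syscall_trace_enter(), where the work mask is already in scope
(a sketch of the proposed guard, not tested):

        if (work & SYSCALL_WORK_SYSCALL_AUDIT)
                syscall_enter_audit(regs, syscall);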

Should I create another patch fixing these two points, assuming I am
right about them?

Thanks,
Khalid Ali
