Message-ID: <Zr3IZTGnY-e-SHPy@pathway.suse.cz>
Date: Thu, 15 Aug 2024 11:20:37 +0200
From: Petr Mladek <pmladek@...e.com>
To: zhang warden <zhangwarden@...il.com>
Cc: Josh Poimboeuf <jpoimboe@...nel.org>, Miroslav Benes <mbenes@...e.cz>,
	Jiri Kosina <jikos@...nel.org>,
	Joe Lawrence <joe.lawrence@...hat.com>,
	live-patching@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/1] livepatch: Add using attribute to klp_func for
 using function show

On Wed 2024-08-14 22:23:21, zhang warden wrote:
> 
> 
> > On Aug 14, 2024, at 00:05, Petr Mladek <pmladek@...e.com> wrote:
> > 
> > An alternative solution would be to store the pointer of struct klp_ops
> > *ops into struct klp_func. Then using_show() could just check whether
> > the related struct klp_func is on top of the stack.
> > 
> > It would allow removing the global list klp_ops and all the related
> > code. klp_find_ops() would instead do:
> > 
> >   for_each_patch
> >     for_each_object
> >       for_each_func
> > 
> > The search would need more code. But it would be simple and
> > straightforward. We do this many times all over the code.
> > 
> > IMHO, it would actually remove some complexity and be a win-win solution.
> 
> Hi Petr!
> 
> With your suggestion, it seems that you propose moving the klp_ops pointer into struct klp_func.
> 
> I could make the following change:
> 
> struct klp_func {
> 
> 	/* internal */
> 	void *old_func;
> 	struct kobject kobj;
> 	struct list_head node;
> 	struct list_head stack_node;
> +	struct klp_ops *ops;
> 	unsigned long old_size, new_size;
> 	bool nop;
> 	bool patched;
> 	bool transition;
> };

Yes.

> With this change, the global klp_ops list will no longer be needed. And if we want the ftrace_ops of a function, we just need to get the ops member of klp_func, e.g. func->ops.
> 
> And klp_find_ops() will be replaced by `ops = func->ops`, which is simpler.

func->ops will work only when it is already assigned, for example, in

   + klp_check_stack_func()
   + klp_unpatch_func()
   + using_show()	/* the new sysfs callback */
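
For illustration, the new using_show() callback could then do
something like this. It is only a rough sketch from the top of my
head; the locking and the exact "is used" condition are just my
assumptions:

static ssize_t using_show(struct kobject *kobj, struct kobj_attribute *attr,
			  char *buf)
{
	struct klp_func *func = container_of(kobj, struct klp_func, kobj);
	struct klp_func *top;
	bool using = false;

	mutex_lock(&klp_mutex);

	/* The used implementation is the one on top of ops->func_stack. */
	if (func->patched && func->ops) {
		top = list_first_entry(&func->ops->func_stack,
				       struct klp_func, stack_node);
		using = (top == func);
	}

	mutex_unlock(&klp_mutex);

	return sysfs_emit(buf, "%d\n", using);
}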

But we will still need klp_find_ops() in klp_patch_func() to find out
whether another already registered livepatch has attached
the ftrace handler for the same function (func->old_func).

The new version would need to go through all registered patches,
something like:

struct klp_ops *klp_find_ops(void *old_func)
{
	struct klp_patch *patch;
	struct klp_object *obj;
	struct klp_func *func;

	klp_for_each_patch(patch) {
		klp_for_each_object(patch, obj) {
			klp_for_each_func(obj, func) {
				/*
				 * Ignore entry where func->ops has not been
				 * assigned yet. It is most likely the one
				 * which is about to be created/added.
				 */
				if (func->old_func == old_func && func->ops)
					return func->ops;
			}
		}
	}

	return NULL;
}
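
klp_patch_func() would then just cache the pointer once the ops is
found or allocated. Again, only a sketch of the relevant lines, the
exact placement is up to you:

	ops = klp_find_ops(func->old_func);
	if (!ops) {
		/*
		 * Allocate ops and register the ftrace handler exactly
		 * as today. Only the list_add() to the global klp_ops
		 * list would go away.
		 */
	}

	/* Cache the ops so that later code can use func->ops directly. */
	func->ops = ops;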

BTW: It really looks useful. klp_check_stack_func() is called for each func
     also during the task transition, even from the scheduler:

       + klp_cond_resched()
	 + __klp_sched_try_switch()
	   + klp_try_switch_task()
	     + klp_check_and_switch_task()
	       + klp_check_stack()
		 + klp_for_each_object()
		   + klp_for_each_func()
		     + klp_find_ops()

      It would newly just use func->ops. It might even be a noticeable
      speedup.
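
      The relevant part of klp_check_stack_func() would then be roughly
      the following (sketch only, the surrounding context may differ):

	/* Check for the to-be-patched function (the previous func). */
	ops = func->ops;	/* instead of klp_find_ops(func->old_func) */

	if (list_is_singular(&ops->func_stack)) {
		/* original function */
		func_addr = (unsigned long)func->old_func;
		func_size = func->old_size;
	} else {
		/* previously patched function */
		struct klp_func *prev;

		prev = list_next_entry(func, stack_node);
		func_addr = (unsigned long)prev->new_func;
		func_size = prev->new_size;
	}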

Please, split this into two patches:

  + 1st patch adds func->ops
  + 2nd patch adds "using" sysfs interface.

Best Regards,
Petr
