Message-ID: <YYvaU1sRRbNAqigG@alley>
Date:   Wed, 10 Nov 2021 15:42:27 +0100
From:   Petr Mladek <pmladek@...e.com>
To:     Ming Lei <ming.lei@...hat.com>
Cc:     Josh Poimboeuf <jpoimboe@...hat.com>,
        Jiri Kosina <jikos@...nel.org>,
        Miroslav Benes <mbenes@...e.cz>, live-patching@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Luis Chamberlain <mcgrof@...nel.org>,
        Joe Lawrence <joe.lawrence@...hat.com>
Subject: Re: [PATCH V4 3/3] livepatch: free klp_patch object synchronously

On Fri 2021-11-05 15:59:56, Ming Lei wrote:
> On Wed, Nov 03, 2021 at 02:55:19PM +0100, Petr Mladek wrote:
> > On Tue 2021-11-02 22:59:32, Ming Lei wrote:
> > > klp_mutex isn't acquired before calling kobject_put(klp_patch), so it
> > > is fine to free the klp_patch object synchronously.
> > > 
> > > One issue is the enabled_store() method, in which the klp_patch kobject
> > > itself is deleted & released. However, sysfs provides APIs for dealing
> > > with this corner case, so use sysfs_break_active_protection() and
> > > sysfs_unbreak_active_protection() when releasing the klp_patch kobject
> > > from enabled_store(); meanwhile, the enabled attribute has to be
> > > removed before deleting the klp_patch kobject.
> > > 
> > > --- a/kernel/livepatch/core.c
> > > +++ b/kernel/livepatch/core.c
> > > @@ -369,10 +370,18 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
> > >  out:
> > >  	mutex_unlock(&klp_mutex);
> > >  
> > > -	klp_free_patches_async(&to_free);
> > > -
> > >  	if (ret)
> > >  		return ret;
> > > +
> > > +	if (!list_empty(&to_free)) {
> > > +		kn = sysfs_break_active_protection(kobj, &attr->attr);
> > > +		WARN_ON_ONCE(!kn);
> > > +		sysfs_remove_file(kobj, &attr->attr);
> > > +		klp_free_patches(&to_free);
> > > +		if (kn)
> > > +			sysfs_unbreak_active_protection(kn);
> > > +	}
> > 
> > I agree that using workqueues for free_work looks like a hack.
> > But this looks even more tricky and fragile to me. It feels like
> > playing with sysfs/kernfs internals.
> > 
> > It might look less tricky when using sysfs_remove_file_self().
> 
> The protection needs to cover removing both 'enabled' attribute and
> the patch kobject, so sysfs_remove_file_self() isn't good here.

I see.

> > Anyway, there are only a few users of these APIs:
> > 
> >    + sysfs_break_active_protection() is used only by scsi
> >    + kernfs_break_active_protection() is used by cgroups, cpusets, and rdtgroup.
> >    + sysfs_remove_file_self() is used by some RDMA-related stuff.
> > 
> > It means that there are some users but it is not a widely used API.
> 
> It is used by generic PCI and SCSI devices, two of the most widespread
> device types in the world; either subsystem alone has a huge number of
> users, so the interface itself has been proved/verified for a long time
> by more than enough real users.

Good to know. It means that if there is a regression then scsi users
should find it quickly.


> > > +		kn = sysfs_break_active_protection(kobj, &attr->attr);
> > > +		WARN_ON_ONCE(!kn);
> > > +		sysfs_remove_file(kobj, &attr->attr);
> > > +		klp_free_patches(&to_free);
> > > +		if (kn)
> > > +			sysfs_unbreak_active_protection(kn);


> > I would personally prefer to keep it as is. I do not see any
> > fundamental advantage of the new code. But I might be biased
> > because the current code was written by me ;-)
> 
> The fundamental advantage is that the API has been used/verified by
> enough real users. Also, an attribute/kobject removing itself isn't
> unique to livepatch; it is actually a common pattern, so it shouldn't
> need such a hacky implementation.

I am not sure what you mean by many users:

   + sysfs_break_active_protection() is used only once
     by sdev_store_delete()

   + sysfs_remove_file_self() seems to be used 7x in kernel sources.


It all comes down to kernfs_break_active_protection(), which has
a somewhat scary description:

 * This function releases the active reference of @kn the caller is
 * holding.  Once this function is called, @kn may be removed at any point
 * and the caller is solely responsible for ensuring that the objects it
 * dereferences are accessible.


and the related kernfs_unbreak_active_protection() has an even
scarier description:

 * If kernfs_break_active_protection() was called, this function must be
 * invoked before finishing the kernfs operation.  Note that while this
 * function restores the active reference, it doesn't and can't actually
 * restore the active protection - @kn may already or be in the process of
 * being removed.  Once kernfs_break_active_protection() is invoked, that
 * protection is irreversibly gone for the kernfs operation instance.
 *
 * While this function may be called at any point after
 * kernfs_break_active_protection() is invoked, its most useful location
 * would be right before the enclosing kernfs operation returns.


It feels like this API allows you to cut off the branch you are sitting
on. You have to be sure that you do everything in the right order and
remove the spot under your feet as the very last piece, while normally
this is guaranteed by the reference counters.

In comparison, the workqueue approach looks less risky. You just ask
someone (a worker) to remove your branch after you leave. That naturally
happens only when nobody is on the branch, and in the right order,
thanks to the reference counters.

Best Regards,
Petr
