Message-ID: <635f75c9.a2a4.196f30f91ca.Coremail.00107082@163.com>
Date: Wed, 21 May 2025 21:36:46 +0800 (CST)
From: "David Wang" <00107082@....com>
To: "Greg KH" <gregkh@...uxfoundation.org>
Cc: mathias.nyman@...el.com, oneukum@...e.com, stern@...land.harvard.edu,
	linux-usb@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/2] USB: core: add a memory pool to urb caching
 host-controller private data

At 2025-05-21 20:58:18, "Greg KH" <gregkh@...uxfoundation.org> wrote:
>On Wed, May 21, 2025 at 07:25:12PM +0800, David Wang wrote:
>> At 2025-05-21 18:32:09, "Greg KH" <gregkh@...uxfoundation.org> wrote:
>> >On Sat, May 17, 2025 at 04:38:19PM +0800, David Wang wrote:
>> >> ---
>> >> Changes since v2:
>> >> 1. activate the pool only when the urb object is created via
>> >> usb_alloc_urb()
>> >> Thanks to Oliver Neukum <oneukum@...e.com>'s review.
>> >
>> >Changes go below the bottom --- line, not at the top.  Please read the
>> >documentation for how to do this.
>> >
>> >Also, these are not "threaded" together, making them hard to pick out.
>> >Please when you resend, make them be together using git send-email or
>> >some such tool.
>> 
>> >
>> 
>> Roger that~
>> 
>> 
>> >> ---
>> >> URB objects have a long lifecycle; an urb can be reused across
>> >> submit loops. The private data needed by some host controllers
>> >> has a very short lifecycle: the memory is allocated on enqueue and
>> >> released on dequeue. For example, on a system with xhci, in
>> >> xhci_urb_enqueue:
>> >> Using a USB webcam causes ~250 memory allocations per second;
>> >> Using a USB mic causes ~1K memory allocations per second.
>> >> 
>> >> These high-frequency allocations of host-controller private data can be
>> >> avoided if the urb takes over ownership of the memory; the memory then
>> >> shares the longer lifecycle of the urb object.
>> >> 
>> >> Add a mempool to urb for hcpriv usage; the mempool holds only one block
>> >> of memory and grows when a larger size is requested.
>> >> 
>> >> The mempool is activated only when the URB object is created via
>> >> usb_alloc_urb(), in case some drivers create a URB object by other
>> >> means and manage its lifecycle without a corresponding usb_free_urb().
>> >> 
>> >> The performance difference with this change is insignificant when the
>> >> system is under no memory pressure or under heavy memory pressure.
>> >> There could be a point in between where the extra ~1k/s memory allocations
>> >> would dominate the performance, but it is very hard to pinpoint.
>> >> 
>> >> Signed-off-by: David Wang <00107082@....com>
>> >> ---
>> >>  drivers/usb/core/urb.c | 45 ++++++++++++++++++++++++++++++++++++++++++
>> >>  include/linux/usb.h    |  5 +++++
>> >>  2 files changed, 50 insertions(+)
>> >> 
>> >> diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
>> >> index 5e52a35486af..53117743150f 100644
>> >> --- a/drivers/usb/core/urb.c
>> >> +++ b/drivers/usb/core/urb.c
>> >> @@ -23,6 +23,8 @@ static void urb_destroy(struct kref *kref)
>> >>  
>> >>  	if (urb->transfer_flags & URB_FREE_BUFFER)
>> >>  		kfree(urb->transfer_buffer);
>> >> +	if (urb->hcpriv_mempool_activated)
>> >> +		kfree(urb->hcpriv_mempool);
>> >>  
>> >>  	kfree(urb);
>> >>  }
>> >> @@ -77,6 +79,8 @@ struct urb *usb_alloc_urb(int iso_packets, gfp_t mem_flags)
>> >>  	if (!urb)
>> >>  		return NULL;
>> >>  	usb_init_urb(urb);
>> >> +	/* activate hcpriv mempool when urb is created via usb_alloc_urb */
>> >> +	urb->hcpriv_mempool_activated = true;
>> >>  	return urb;
>> >>  }
>> >>  EXPORT_SYMBOL_GPL(usb_alloc_urb);
>> >> @@ -1037,3 +1041,44 @@ int usb_anchor_empty(struct usb_anchor *anchor)
>> >>  
>> >>  EXPORT_SYMBOL_GPL(usb_anchor_empty);
>> >>  
>> >> +/**
>> >> + * urb_hcpriv_mempool_zalloc - alloc memory from mempool for hcpriv
>> >> + * @urb: pointer to URB being used
>> >> + * @size: memory size requested by current host controller
>> >> + * @mem_flags: the type of memory to allocate
>> >> + *
>> >> + * Return: NULL if out of memory, otherwise a pointer to zeroed memory
>> >> + */
>> >> +void *urb_hcpriv_mempool_zalloc(struct urb *urb, size_t size, gfp_t mem_flags)
>> >> +{
>> >> +	if (!urb->hcpriv_mempool_activated)
>> >> +		return kzalloc(size, mem_flags);
>> >> +
>> >> +	if (urb->hcpriv_mempool_size < size) {
>> >> +		kfree(urb->hcpriv_mempool);
>> >> +		urb->hcpriv_mempool_size = size;
>> >> +		urb->hcpriv_mempool = kmalloc(size, mem_flags);
>> >> +	}
>> >> +	if (urb->hcpriv_mempool)
>> >> +		memset(urb->hcpriv_mempool, 0, size);
>> >> +	else
>> >> +		urb->hcpriv_mempool_size = 0;
>> >> +	return urb->hcpriv_mempool;
>> >> +}
>> >> +EXPORT_SYMBOL_GPL(urb_hcpriv_mempool_zalloc);
>> >> +
>> >> +/**
>> >> + * urb_free_hcpriv - free hcpriv data if necessary
>> >> + * @urb: pointer to URB being used
>> >> + *
>> >> + * If mempool is activated, private data's lifecycle
>> >> + * is managed by urb object.
>> >> + */
>> >> +void urb_free_hcpriv(struct urb *urb)
>> >> +{
>> >> +	if (!urb->hcpriv_mempool_activated) {
>> >> +		kfree(urb->hcpriv);
>> >> +		urb->hcpriv = NULL;
>> >
>> >You seem to set this to NULL for no reason, AND check for
>> >hcpriv_mempool_activated.  Only one is going to be needed, you don't
>> >need to have both, right?  Why not just rely on hcpriv being set?
>> 
>> I need to distinguish between two situations:
>> 1.  the memory pool is used, in which case urb_free_hcpriv should do nothing
>> 2.  the memory was allocated by the hcd, in which case it should be kfreed
>> 
>> Using hcpriv_mempool_activated does look confusing...
>> what about the following change:
>> 
>> +	if (urb->hcpriv != urb->hcpriv_mempool) {
>> +		kfree(urb->hcpriv);
>> +		urb->hcpriv = NULL;
>> +	}
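
(For illustration only: a minimal sketch of how urb_free_hcpriv() would read with
that pointer comparison in place, assuming the urb fields introduced by the patch
above; this is a sketch, not a tested replacement.)

void urb_free_hcpriv(struct urb *urb)
{
	/*
	 * Memory taken from the per-urb pool is kept for reuse and only
	 * released in urb_destroy(); anything else was allocated by the
	 * hcd itself and has to be freed here.
	 */
	if (urb->hcpriv != urb->hcpriv_mempool) {
		kfree(urb->hcpriv);
		urb->hcpriv = NULL;
	}
}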
>> 
>> >
>> >And are you sure that the hcd can actually use a kmalloced "mempool"?  I
>> 
>> The patch for xhci is here:  https://lore.kernel.org/lkml/20250517083750.6097-1-00107082@163.com/
>> xhci was kzallocing memory for its private data, and when using a USB webcam/mic I can observe 1k+/s kzallocs.
>> With this patch, during my obs session (with USB webcam/mic), no memory allocations were
>> observed for the USB subsystem.
>> 
>> >don't understand why xhci can't just do this in its driver instead of
>> >this being required in the usb core and adding extra logic and size to
>> >every urb in the system.
>> 
>> Yes, it is possible to make a mempool in the hcds, but the lifecycle management would not be easy:
>> basically a "mempool" would need to be built from the ground up, and lots of details would need to be addressed,
>> e.g. when to resize or shrink the mempool when it has grown too big.
>> Using the URB as a mempool slot holder is a very simple approach. The URB objects are already well managed:
>> based on my memory profiling, the number of live urb objects and the rate of creating new urb objects are both small.
>> Reusing the urb lifecycle management would save a lot of trouble, I imagine....
>> 
>> Also, I would imagine other hcds could use similarly simple changes to cache their private data once they get hold of a URB object.
>
>There is already a hcd-specific pointer in the urb, why can't they just
>use that?

All hcds would need changes, and I only have xhci to verify with.
I meant to take a small step first, without breaking existing hcds.
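
For what it's worth, the per-hcd change is meant to be small. A rough sketch of
what an enqueue/giveback path could look like on top of the helpers added by this
patch (the example_* names and the struct are made up for illustration; only
urb_hcpriv_mempool_zalloc() and urb_free_hcpriv() come from the patch):

#include <linux/usb.h>
#include <linux/usb/hcd.h>

/* hypothetical per-urb private data of some host controller driver */
struct example_hcd_priv {
	int	num_tds;
	void	*td_ring;
};

static int example_urb_enqueue(struct usb_hcd *hcd, struct urb *urb,
			       gfp_t mem_flags)
{
	struct example_hcd_priv *priv;

	/*
	 * Previously: kzalloc() on every submission and kfree() on giveback.
	 * With the per-urb pool, the same block is reused across submissions
	 * of this urb, so the allocation typically happens only once per urb.
	 */
	priv = urb_hcpriv_mempool_zalloc(urb, sizeof(*priv), mem_flags);
	if (!priv)
		return -ENOMEM;

	urb->hcpriv = priv;
	/* queue the transfer as before */
	return 0;
}

static void example_urb_giveback(struct usb_hcd *hcd, struct urb *urb)
{
	/*
	 * Frees urb->hcpriv only when it was not taken from the per-urb pool;
	 * pooled memory is released later in urb_destroy().
	 */
	urb_free_hcpriv(urb);
}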

>
>Also, while I know you saw less allocation/freeing happening, was that
>actually measurable in a real way?  Without that, the added complexity
By "measurable in a real way", hope you are not meaning measurable from end user's point of view
I have not find a solid proof, yet.   (When system is under memory pressure, everything is slow.  I feel strongly
there would be a point in middle where extra allocation would cost, but failed to pinpoint it yet.)

I am using memory profiling[1] to watch this, with an accumulative counter patch[2]:
whenever memory is allocated, a counter is incremented by 1; by calculating
delta(counter)/delta(time), I can measure the allocation rate for most call sites.
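
(A trivial userspace sketch of that delta(counter)/delta(time) calculation,
assuming the accumulated allocation count for the call site of interest has
already been extracted into a file as a single number; the exact /proc/allocinfo
layout with the counter patch applied is not reproduced here.)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Read one counter value from a text file containing a single number. */
static unsigned long long read_counter(const char *path)
{
	unsigned long long v = 0;
	FILE *f = fopen(path, "r");

	if (!f || fscanf(f, "%llu", &v) != 1) {
		fprintf(stderr, "cannot read counter from %s\n", path);
		exit(1);
	}
	fclose(f);
	return v;
}

int main(int argc, char **argv)
{
	unsigned long long before, after;
	unsigned int secs = 10;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <counter-file> [seconds]\n", argv[0]);
		return 1;
	}
	if (argc > 2)
		secs = atoi(argv[2]);

	before = read_counter(argv[1]);
	sleep(secs);
	after = read_counter(argv[1]);

	/* delta(counter) / delta(time) = allocation rate for that call site */
	printf("%.1f allocations/s\n", (double)(after - before) / secs);
	return 0;
}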


[1] https://docs.kernel.org/mm/allocation-profiling.html
[2] https://lore.kernel.org/lkml/20240617153250.9079-1-00107082@163.com/
>feels wrong (i.e. you are optimizing for something that is not really
>needed.)
>
>thanks,
>
>greg k-h
