Date:   Tue, 30 Nov 2021 10:39:39 +0100
From:   Dmitry Vyukov <dvyukov@...gle.com>
To:     eric.dumazet@...il.com
Cc:     davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
        netdev@...r.kernel.org
Subject: Re: [RFC -next 1/2] lib: add reference counting infrastructure

Hi Eric,

Nice! Especially ref_tracker_dir_print() in netdev_wait_allrefs().

> +	*trackerp = tracker = kzalloc(sizeof(*tracker), gfp);

This may benefit from __GFP_NOFAIL. syzkaller will use fault injection to fail
this allocation, and I think that will do more harm than good.
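
I.e. something along these lines:

	*trackerp = tracker = kzalloc(sizeof(*tracker), gfp | __GFP_NOFAIL);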

We could also note this condition in dir, along the lines of:

	if (!tracker) {
		dir->failed = true;

We could then print it with any error reports, and check it in ref_tracker_free():

int ref_tracker_free(struct ref_tracker_dir *dir,
		     struct ref_tracker **trackerp)
{
...
	if (!tracker) {
		WARN_ON(!dir->failed);
		return -EEXIST;
	}

This would be a bug, right?
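
And the printing side could note it too, e.g. in ref_tracker_dir_print()
(the message wording is just an example):

	if (dir->failed)
		pr_err("ref_tracker: tracker allocations failed, report below is incomplete\n");
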
Or:

	*trackerp = tracker = kzalloc(sizeof(*tracker), gfp);
	if (!tracker) {
		*trackerp = TRACKERP_ALLOC_FAILED;
		return -ENOMEM;
	}

and then check TRACKERP_ALLOC_FAILED in ref_tracker_free().
dev_hold_track() ignores the return value, so it would be useful to note
this condition.
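
The check could look roughly like this (the sentinel just needs to be a
non-NULL value that can never be a valid tracker pointer):

	#define TRACKERP_ALLOC_FAILED ((struct ref_tracker *)1UL)

	if (*trackerp == TRACKERP_ALLOC_FAILED)
		return -ENOMEM;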

> +	if (tracker->dead) {
> +		pr_err("reference already released.\n");

This and other custom prints won't be detected as bugs by syzkaller and other
testing systems; they only detect the standard BUG/WARNING reports. Please use
those instead.
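
I.e. something like (the error code is just for illustration):

	if (WARN(tracker->dead, "reference already released\n"))
		return -EBUSY;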

ref_tracker_free() uses unnecessarily long critical sections. I understand
this is debugging code, but frequently debugging code is so pessimistic that
nobody uses it. If we enable this on syzbot, it will also slow down all
fuzzing. I think with just a small amount of code shuffling the critical
sections can be significantly reduced:

	/* Save the stack trace before taking the lock. */
	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
	tracker->free_stack_handle = stack_depot_save(entries, nr_entries, GFP_ATOMIC);

	spin_lock_irqsave(&dir->lock, flags);
	if (tracker->dead)
		...
	tracker->dead = true;

	list_move_tail(&tracker->head, &dir->quarantine);
	if (!dir->quarantine_avail) {
		/* Quarantine is full: evict the oldest tracker, free it below. */
		tracker = list_first_entry(&dir->quarantine, struct ref_tracker, head);
		list_del(&tracker->head);
	} else {
		dir->quarantine_avail--;
		tracker = NULL;
	}
	spin_unlock_irqrestore(&dir->lock, flags);

	/* Free outside of the critical section. */
	kfree(tracker);

> +#define REF_TRACKER_STACK_ENTRIES 16
> +	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
> +	tracker->alloc_stack_handle = stack_depot_save(entries, nr_entries, gfp);

The saved stacks can be longer because they are de-duplicated. But stacks
inserted into stack_depot need to be trimmed with filter_irq_stacks(). It
seems that almost all current users got this wrong. We are considering moving
filter_irq_stacks() into stack_depot_save(), but that's not done yet.
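
I.e. for now:

	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
	nr_entries = filter_irq_stacks(entries, nr_entries);
	tracker->alloc_stack_handle = stack_depot_save(entries, nr_entries, gfp);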
