Open Source and information security mailing list archives
 
Message-ID: <b5f49ae4-a905-4c64-8918-83aa53d3dbcd@huawei.com>
Date: Tue, 25 Jun 2024 22:29:47 +0800
From: chenridong <chenridong@...wei.com>
To: Waiman Long <longman@...hat.com>, Michal Koutný
	<mkoutny@...e.com>
CC: <tj@...nel.org>, <lizefan.x@...edance.com>, <hannes@...xchg.org>,
	<bpf@...r.kernel.org>, <cgroups@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH -next] cgroup: fix uaf when proc_cpuset_show


On 2024/6/25 22:16, Waiman Long wrote:
> On 6/25/24 10:11, chenridong wrote:
>>
>>
>> On 2024/6/25 18:10, Michal Koutný wrote:
>>> Hello.
>>>
>>> On Tue, Jun 25, 2024 at 11:12:20AM GMT,
>>> chenridong <chenridong@...wei.com> wrote:
>>>> I am considering whether the cgroup framework has a method to fix this
>>>> issue, as other subsystems may also have the same underlying problem.
>>>> Since the root css will not be released, but the css->cgrp will be
>>>> released.
>>> <del>First part is already done in
>>>     d23b5c5777158 ("cgroup: Make operations on the cgroup root_list RCU safe")
>>> second part is that</del>
>>> you need to take RCU read lock and check for NULL, similar to
>>>     9067d90006df0 ("cgroup: Eliminate the need for cgroup_mutex in proc_cgroup_show()")
>>>
>>> Does that make sense to you?
>>>
>>> A Fixes: tag would be nice; it seems at least
>>>     a79a908fd2b08 ("cgroup: introduce cgroup namespaces")
>>> played some role. (Here the RCU lock is not for the cgroup_roots list
>>> but to preserve the root cgrp itself against
>>> css_free_rwork_fn()/cgroup_destroy_root().)
>>>
>>> HTH,
>>> Michal
>>
>> Thank you, Michal, that is a good idea. Do you mean as below?
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index c12b9fdb22a4..2ce0542067f1 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -5051,10 +5051,17 @@ int proc_cpuset_show(struct seq_file *m, struct pid_namespace *ns,
>>         if (!buf)
>>                 goto out;
>>
>> +       rcu_read_lock();
>> +       spin_lock_irq(&css_set_lock);
>>         css = task_get_css(tsk, cpuset_cgrp_id);
>> -       retval = cgroup_path_ns(css->cgroup, buf, PATH_MAX,
>> -                               current->nsproxy->cgroup_ns);
>> +
>> +       retval = cgroup_path_ns_locked(css->cgroup, buf, PATH_MAX,
>> +               current->nsproxy->cgroup_ns);
>>         css_put(css);
>> +
>> +       spin_unlock_irq(&css_set_lock);
>> +       rcu_read_unlock();
>> +
>>         if (retval == -E2BIG)
>>                 retval = -ENAMETOOLONG;
>>
>>         if (retval < 0)
> That should work. However, I would suggest that you take 
> task_get_css() and css_put() outside of the critical section. The 
> task_get_css() is a while loop that may take a while to execute and 
> you don't want run it with interrupt disabled.
>
> Cheers,
> Longman
>
>
>
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -5050,11 +5050,18 @@ int proc_cpuset_show(struct seq_file *m, struct pid_namespace *ns,
         buf = kmalloc(PATH_MAX, GFP_KERNEL);
         if (!buf)
                 goto out;
-
         css = task_get_css(tsk, cpuset_cgrp_id);
-       retval = cgroup_path_ns(css->cgroup, buf, PATH_MAX,
-                               current->nsproxy->cgroup_ns);
+
+       rcu_read_lock();
+       spin_lock_irq(&css_set_lock);
+
+       retval = cgroup_path_ns_locked(css->cgroup, buf, PATH_MAX,
+               current->nsproxy->cgroup_ns);
+
+       spin_unlock_irq(&css_set_lock);
+       rcu_read_unlock();
         css_put(css);
+
         if (retval == -E2BIG)
                 retval = -ENAMETOOLONG;

         if (retval < 0)
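(Editorial side note for the archive: the diff above takes the RCU read lock but does not yet add the NULL check Michal suggested. A rough, untested sketch of where such a check could sit is below — it assumes, hypothetically, that the root-teardown path clears css->cgroup under RCU so that a stale pointer can be detected; the posted patches do not establish that, so treat this purely as an illustration of the locking shape, not as the actual fix.)

```c
	css = task_get_css(tsk, cpuset_cgrp_id);

	rcu_read_lock();
	spin_lock_irq(&css_set_lock);
	/*
	 * Hypothetical liveness check: if the root cgroup has already been
	 * torn down by cgroup_destroy_root(), bail out instead of walking
	 * a dangling pointer. This assumes the teardown path clears
	 * css->cgroup under RCU, which is an assumption of this sketch.
	 */
	if (!css->cgroup)
		retval = -ENODEV;
	else
		retval = cgroup_path_ns_locked(css->cgroup, buf, PATH_MAX,
					       current->nsproxy->cgroup_ns);
	spin_unlock_irq(&css_set_lock);
	rcu_read_unlock();
	css_put(css);
```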


Yeah, that looks good. I will test it for a while and send a new patch
if no other problems occur.

Thank you.

Regards,
Ridong


