Message-ID: <CAGUWgD-yN5_bLJHHA-_4sgTsXvB4AmiZ-6mWsHzzhPpkAOWWKA@mail.gmail.com>
Date: Mon, 10 Feb 2025 14:57:35 +0200
From: Georgi Guninski <gguninski@...il.com>
To: fulldisclosure@...lists.org
Subject: [FD] ChatGPT AI finds "security concern" (XSS) in DeepSeek's code
Summary: On 2025-02-09 ChatGPT found a "security concern" (XSS) in
Python code written by DeepSeek's AI.
Background:
Consider the simple coding question (Q):
Write a Python CGI script that takes an argument NAME and outputs: "Hello NAME".
The first page of Google results for "python CGI" returns, for me,
tutorials that are flawed, textbook examples of the cross-site
scripting (XSS) vulnerability. This is a "knowledge chain attack"
which applies to training AI bots.
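For reference, the flawed tutorial pattern looks roughly like this (a
minimal sketch of my own, not DeepSeek's or any tutorial's exact code):

#!/usr/bin/env python3
import cgi

# CGI response header, followed by a blank line
print("Content-Type: text/html")
print()

form = cgi.FieldStorage()
# The NAME parameter is echoed back without escaping -- textbook XSS,
# e.g. ?NAME=<script>alert('Hacked!')</script> is reflected verbatim.
name = form.getvalue("NAME", "World")
print("<html><body>Hello %s</body></html>" % name)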
Timeline:
2023: ChatGPT writes textbook vulnerable code for (Q) [1]
2025-01-28: DeepSeek fails (Q) in the same way
2025-02-09: We gave DeepSeek's buggy solution to ChatGPT for Python
code review, and ChatGPT wrote:
===
Security Concern: HTML Injection
The script directly inserts user input into the response without
sanitization, making it vulnerable to HTML injection (e.g., someone
could pass ?NAME=<script>alert('Hacked!')</script>).
===
Observe that the review includes an exploit too, and that the current
standard term, XSS, is not used ("HTML injection" was the term in the '90s).
ChatGPT gave the improved code:
import cgi
import html

# Get the query parameters
form = cgi.FieldStorage()
# Extract and sanitize the 'NAME' parameter
name = html.escape(form.getvalue('NAME', 'World'))
While correct from a security point of view, this code mangles special
characters in the input for general web apps, AFAICT.
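To illustrate (a quick sketch; the escaped value differs from what the
user actually typed, which matters if the app stores or reuses the raw
input):

import html

# html.escape() rewrites ordinary punctuation into HTML entities:
print(html.escape("O'Brien & Sons"))    # O&#x27;Brien &amp; Sons
print(html.escape('5 < 6, "quoted"'))   # 5 &lt; 6, &quot;quoted&quot;

The usual advice is to keep the raw value and escape only when emitting
HTML, rather than sanitizing at input time.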
AI bots training each other appears scary to me.
Related rant:
This might be a joke:
Humans built a super AI and the first question was: "Is there a god?"
The answer was: "Now there is."
From Wikipedia on the Singularity [2]:
The technological singularity—or simply the singularity—is a
hypothetical future point in time at which technological growth
becomes uncontrollable and irreversible, resulting in unforeseeable
consequences for human civilization.
[1]: https://www.linkedin.com/pulse/ai-chatgpt-writes-insecure-code-georgi-guninski
[2]: https://en.wikipedia.org/wiki/Technological_singularity