Scammers have been spotted abusing AI site builder Lovable to mimic trusted brands, steal credentials, drain crypto wallets, and spread malware.
Hackers Hijacked Google’s Gemini AI With a Poisoned Calendar Invite to Take Over a Smart Home. In what is likely a first, security researchers have shown how a hacked AI can cause real-world havoc, allowing them to turn off lights, open smart shutters, and more.
Generative AI and LLM technologies have shown great potential in recent years, and an increasing number of applications are integrating them for a variety of purposes. These applications are growing more complex, often adopting architectures built from multiple specialized agents, each focused on one or more tasks. The agents interact with one another and call external tools to access information, perform operations, or carry out tasks that LLMs cannot handle reliably on their own (e.g., mathematical computations).
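The tool-use pattern described above can be sketched in a few lines. The following is a minimal, illustrative example (not taken from any specific agent framework); the `calculator` tool, the `TOOLS` registry, and `run_agent` are hypothetical names chosen for this sketch. It shows the core idea: the agent delegates arithmetic, which an LLM cannot do reliably, to an external tool instead of answering from the model itself.

```python
# Illustrative sketch of the agent/tool pattern: an agent dispatches a
# task it cannot handle itself (arithmetic) to a registered external tool.
import ast
import operator

# Operators the "calculator" tool is willing to evaluate.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}

def calculator(expression: str) -> float:
    """Safely evaluate a basic arithmetic expression (no eval())."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval").body)

# Registry mapping tool names to callables. In a real system, the LLM's
# output would name the tool and supply its arguments.
TOOLS = {"calculator": calculator}

def run_agent(tool_name: str, argument: str):
    """Dispatch a single tool call on the agent's behalf."""
    return TOOLS[tool_name](argument)

print(run_agent("calculator", "12 * (3 + 4)"))  # 84
```

Real frameworks add a loop in which the model reads each tool's result before deciding its next step, but the dispatch mechanism is essentially this: the model emits a tool name plus arguments, and ordinary code executes them, which is also why untrusted input that reaches that loop is a security concern.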
On June 25, Google released Gemini CLI, an AI agent that helps developers explore and write code using Google Gemini, directly from their command line.
On June 27, Tracebit reported a vulnerability to Google VDP: in its default configuration, Gemini CLI could silently execute arbitrary malicious code on a user's machine when run in the context of untrusted code. Crucially, the attack could be carried out in a way that hid it from the victim.
Google VDP ultimately classified the discovery as a P1 / S1 issue and fixed it in v0.1.14, released July 25, with an agreed disclosure date of July 28.
On Friday, OpenAI's new ChatGPT Agent, which can perform multistep tasks for users, proved it can pass through one of the Internet's most common security checkpoints by clicking Cloudflare's anti-bot verification—the same checkbox that's supposed to keep automated programs like itself at bay.
ChatGPT Agent is a feature that allows OpenAI's AI assistant to control its own web browser, operating within a sandboxed environment with its own virtual operating system and browser that can access the real Internet. Users can watch the AI's actions through a window in the ChatGPT interface, maintaining oversight while the agent completes tasks. The system requires user permission before taking actions with real-world consequences, such as making purchases. Recently, Reddit users discovered the agent could do something particularly ironic.
The evidence came from Reddit, where a user named "logkn" in the r/OpenAI community posted screenshots of the AI agent effortlessly clicking through the screening step that precedes a full CAPTCHA (short for "Completely Automated Public Turing test to tell Computers and Humans Apart") while completing a video conversion task, narrating its own process as it went.