Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    CEOs Use AI Avatars for Earnings Calls, Leading Innovation

    May 23, 2025

    Walmart Lays Off 1,500 Employees to Reshape Operations

    May 22, 2025

    Hacked AI Chatbots Raise Security Concerns, Researchers Warn

    May 21, 2025
    Facebook X (Twitter) Instagram
    Facebook X (Twitter) Instagram
    Times TribuneTimes Tribune
    Subscribe
    • Home
    • Business
    • World
    • Politics
    • Media & Culture
    • Life Style
    • About Us
    • Get In Touch
    Times TribuneTimes Tribune
    Home»Technology»Hacked AI Chatbots Raise Security Concerns, Researchers Warn
    Technology

    Hacked AI Chatbots Raise Security Concerns, Researchers Warn

    Jamie CarpenterBy Jamie CarpenterMay 21, 2025Updated:May 21, 2025No Comments3 Mins Read
    Facebook Twitter Pinterest LinkedIn Tumblr Email
    hacked-ai-chatbots-raise-security-concerns,-researchers-warn
    Share
    Facebook Twitter LinkedIn Pinterest Email

    AI-powered chatbots, such as ChatGPT, Gemini, and Claude, are increasingly vulnerable to being “jailbroken,” a process that allows them to generate dangerous, illicit information despite built-in safety measures. Researchers warn that this could make harmful knowledge accessible to anyone, including cybercriminals and malicious actors.

    Jailbreaking AI Systems: A Growing Threat

    Jailbreaking AI chatbots involves tricking them into bypassing safety controls designed to prevent harmful or illegal responses. These safety systems are essential for ensuring that AI does not share dangerous content, such as instructions for hacking or making bombs. However, jailbreaking exploits the AI’s primary goal of following user instructions, overriding the system’s safeguards.

    Dark LLMs and Unrestricted AI Models

    Some AI models, known as “dark LLMs,” are deliberately designed without safety controls or are modified to bypass them. These dark models are increasingly available online, often marketed as AI tools that can assist with illegal activities like cybercrime, fraud, and more. The researchers highlight that once these AI systems are compromised, they can generate responses to almost any query, including dangerous instructions.

    The Scale and Impact of the Threat

    Researchers from Ben Gurion University of the Negev, including Prof. Lior Rokach and Dr. Michael Fire, conducted a study demonstrating the ease with which AI models could be tricked into providing illicit information. This includes step-by-step instructions for hacking, drug production, and other criminal activities. The alarming aspect of this threat is its accessibility, scalability, and adaptability, which makes it far more dangerous than previous technological risks.

    Industry Response to Jailbreak Threats

    Despite contacting major LLM providers to warn them about the vulnerability, the researchers received an underwhelming response. Some companies did not respond at all, while others claimed that jailbreak attacks fell outside the scope of their bounty programs, which reward ethical hackers for reporting vulnerabilities. This lack of action has raised concerns about the industry’s commitment to addressing AI safety risks.

    Recommendations for Improved AI Security

    The report urges AI developers to screen training data more rigorously, implement stronger firewalls to block risky queries, and adopt “machine unlearning” techniques to help chatbots forget illicit information they might absorb. Researchers also suggest that dark LLMs should be treated as serious security threats, similar to unlicensed weapons and explosives, and that providers should be held accountable.

    Expert Opinions on AI Security

    Experts in AI security, such as Dr. Ihsen Alouani and Prof. Peter Garraghan, emphasize the need for more robust security practices, including red teaming and context-based threat modeling. Alouani also points out that AI-driven scams and disinformation campaigns could become significantly more sophisticated as jailbreaks become more common. Companies need to invest in AI security more seriously, rather than relying solely on front-end safeguards.

    Industry Efforts and Challenges

    OpenAI, the creator of ChatGPT, has taken steps to make its latest AI model, o1, more resilient to jailbreaks. However, the company acknowledges that it must continue investigating ways to improve the robustness of its programs. Other tech giants, including Meta, Google, Microsoft, and Anthropic, have been approached for comment regarding their efforts to safeguard their AI systems from such threats.

    AI chatbots AI models AI safety ChatGPT cybercrime dark LLMs fraud hacking jailbreaking OpenAI security risks
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Jamie Carpenter

    Related Posts

    CEOs Use AI Avatars for Earnings Calls, Leading Innovation

    May 23, 2025

    FBI Warns of AI-Generated Voice Scams Targeting U.S. Officials

    May 16, 2025

    Instagram Tests AI to Verify Teen Users’ Ages

    April 21, 2025

    Comments are closed.

    Our Picks

    Putin Says Western Sanctions are Akin to Declaration of War

    January 9, 2020

    Investors Jump into Commodities While Keeping Eye on Recession Risk

    January 8, 2020

    Marquez Explains Lack of Confidence During Qatar GP Race

    January 7, 2020

    There’s No Bigger Prospect in World Football Than Pedri

    January 6, 2020
    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo
    Don't Miss

    CEOs Use AI Avatars for Earnings Calls, Leading Innovation

    Technology May 23, 2025

    In a fascinating new development, tech company CEOs are taking the AI-first approach to a…

    Walmart Lays Off 1,500 Employees to Reshape Operations

    May 22, 2025

    Hacked AI Chatbots Raise Security Concerns, Researchers Warn

    May 21, 2025

    Honda Scales Back EV Investment, GM Halts Exports to China

    May 20, 2025

    Subscribe to Updates

    About Us
    About Us
    Our Picks

    Putin Says Western Sanctions are Akin to Declaration of War

    January 9, 2020

    Investors Jump into Commodities While Keeping Eye on Recession Risk

    January 8, 2020

    Marquez Explains Lack of Confidence During Qatar GP Race

    January 7, 2020
    More Links
    • About Us
    • Get In Touch
    • Fitness
    • Life Style
    • Travels
    • Technology
    • Privacy Policy
    Facebook X (Twitter) Instagram
    Copyright © 2025. All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.