Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    OpenAI Hits $500B Valuation After $6.6B Share Sale

    October 3, 2025

    Madagascar Gen-Z Protests Escalate Despite Government Shake-Up

    October 3, 2025

    Trump Threatens Deep Cuts Amid Shutdown Standoff

    October 2, 2025
    Facebook X (Twitter) Instagram
    Times TribuneTimes Tribune
    • Home
    • Business
    • World
    • Politics
    • Media & Culture
    • Life Style
    • About Us
    • Contact Us
    Times TribuneTimes Tribune
    Home » Hacked AI Chatbots Raise Security Concerns, Researchers Warn
    Technology

    Hacked AI Chatbots Raise Security Concerns, Researchers Warn

    Jamie CarpenterBy Jamie CarpenterMay 21, 2025Updated:July 11, 2025No Comments3 Mins Read
    Facebook Twitter Pinterest LinkedIn Tumblr Email
    hacked-ai-chatbots-raise-security-concerns,-researchers-warn
    Share
    Facebook Twitter LinkedIn Pinterest Email

    AI-powered chatbots, such as ChatGPT, Gemini, and Claude, are increasingly vulnerable to being “jailbroken,” a process that allows them to generate dangerous, illicit information despite built-in safety measures. Researchers warn that this could make harmful knowledge accessible to anyone, including cybercriminals and malicious actors.

    Jailbreaking AI Systems: A Growing Threat

    Jailbreaking AI chatbots involves tricking them into bypassing safety controls designed to prevent harmful or illegal responses. These safety systems are essential for ensuring that AI does not share dangerous content, such as instructions for hacking or making bombs. However, jailbreaking exploits the AI’s primary goal of following user instructions, overriding the system’s safeguards.

    Dark LLMs and Unrestricted AI Models

    Some AI models, known as “dark LLMs,” are deliberately designed without safety controls or are modified to bypass them. These dark models are increasingly available online, often marketed as AI tools that can assist with illegal activities like cybercrime, fraud, and more. The researchers highlight that once these AI systems are compromised, they can generate responses to almost any query, including dangerous instructions.

    The Scale and Impact of the Threat

    Researchers from Ben Gurion University of the Negev, including Prof. Lior Rokach and Dr. Michael Fire, conducted a study demonstrating the ease with which AI models could be tricked into providing illicit information. This includes step-by-step instructions for hacking, drug production, and other criminal activities. The alarming aspect of this threat is its accessibility, scalability, and adaptability, which makes it far more dangerous than previous technological risks.

    Industry Response to Jailbreak Threats

    Despite contacting major LLM providers to warn them about the vulnerability, the researchers received an underwhelming response. Some companies did not respond at all, while others claimed that jailbreak attacks fell outside the scope of their bounty programs, which reward ethical hackers for reporting vulnerabilities. This lack of action has raised concerns about the industry’s commitment to addressing AI safety risks.

    Recommendations for Improved AI Security

    The report urges AI developers to screen training data more rigorously, implement stronger firewalls to block risky queries, and adopt “machine unlearning” techniques to help chatbots forget illicit information they might absorb. Researchers also suggest that dark LLMs should be treated as serious security threats, similar to unlicensed weapons and explosives, and that providers should be held accountable.

    Expert Opinions on AI Security

    Experts in AI security, such as Dr. Ihsen Alouani and Prof. Peter Garraghan, emphasize the need for more robust security practices, including red teaming and context-based threat modeling. Alouani also points out that AI-driven scams and disinformation campaigns could become significantly more sophisticated as jailbreaks become more common. Companies need to invest in AI security more seriously, rather than relying solely on front-end safeguards.

    Industry Efforts and Challenges

    OpenAI, the creator of ChatGPT, has taken steps to make its latest AI model, o1, more resilient to jailbreaks. However, the company acknowledges that it must continue investigating ways to improve the robustness of its programs. Other tech giants, including Meta, Google, Microsoft, and Anthropic, have been approached for comment regarding their efforts to safeguard their AI systems from such threats.

    AI chatbots AI models AI safety ChatGPT cybercrime dark LLMs fraud hacking jailbreaking OpenAI security risks
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Jamie Carpenter

    Related Posts

    Google Brings Gemini to Smart Homes

    October 1, 2025

    OpenAI Adds Instant Checkout to ChatGPT

    September 30, 2025

    Microsoft Grants Free Windows 10 Support in EU

    September 26, 2025

    Comments are closed.

    Our Picks
    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo
    Don't Miss

    OpenAI Hits $500B Valuation After $6.6B Share Sale

    Business October 3, 2025

    Employee Share Sale Boosts Market Value OpenAI has reached a record $500 billion valuation following…

    Madagascar Gen-Z Protests Escalate Despite Government Shake-Up

    October 3, 2025

    Trump Threatens Deep Cuts Amid Shutdown Standoff

    October 2, 2025

    Why Italy’s Slow Season Is the Best Time to Visit

    October 2, 2025

    Subscribe to Updates

    About Us
    About Us
    Our Picks
    More Links
    • About Us
    • Contact Us
    • Fitness
    • Life Style
    • Travels
    • Technology
    • Privacy Policy
    Facebook X (Twitter) Instagram
    © 2025 Times Tribune | All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.