From protecting sensitive data and maintaining model integrity to preventing adversarial attacks and safeguarding against unauthorized access and exploitation, robust security measures are crucial to building trustworthy and resilient AI systems.

The Rise of GenAI and LLMs

In 1950, Alan Turing proposed a test to determine whether computers could mimic human intelligence well enough that an impartial observer could no longer tell the difference. We are still talking about the Turing Test almost 75 years after its inception.

The Coming AI Regulations

Advances in artificial intelligence offer both promise and peril. Regulation is essential to the ethical and responsible development of AI. This article provides an overview of current and forthcoming regulations surrounding artificial intelligence.

ChatGPT Cyber Risks

An overview of the cyber risks associated with ChatGPT, including exposing sensitive data, generating dangerous malware, aiding phishing attacks, and more.

Deepfake and AI-Generated Security Threats

Deepfakes: Overview and How to Spot Them

A deepfake is a form of synthetic content, such as images, audio, or video, that purports to be real but is actually false and misleading. The word itself is a portmanteau of deep learning, a form of AI, and fake. In addition to manipulating existing content, the purveyors of deepfakes can create entirely new content that represents an individual falsely, portraying him or her as doing or saying something that never happened. One example is a 2022 deepfake video that appears to depict President Zelenskyy of Ukraine instructing Ukrainian troops to surrender to Russian aggressors. The primary aim of deepfakes is to spread false information that appears to come from trusted sources.

Adversarial AI: Cybersecurity Implications

Cyber adversaries are using artificial intelligence and machine learning to create more aggressive, coordinated attacks. They are also leveraging intelligence such as personal information on targets gathered from social media and other sources to generate more effective phishing campaigns, achieving email open rates as high as 60 percent.

AI-Powered Cybersecurity: Challenges, Benefits, and Use Cases

Artificial intelligence (AI) holds much promise for detecting and countering malicious activities faster and more accurately. Forbes has found that 76 percent of enterprises are prioritizing AI and machine learning in their IT budgets and plans. According to Pillsbury Law, 44 percent of global organizations are already leveraging AI to detect security threats and intrusions. The interest in AI is considerable: the market for AI in cybersecurity is projected to grow from $17 billion in 2022 to $102 billion by 2032, according to Verified Market Research.

AI Risk Management Frameworks Overview

Hoping to avoid a repeat of today's Internet security nightmares, risk professionals have begun developing artificial intelligence risk management frameworks: frameworks inclusive enough to encompass the evolving role of AI, including AI and cloud computing, AI and edge computing, AI and the Internet of Things (IoT), as well as AI and finance, medicine, transportation, and so-called "knowledge work."