A recent study conducted by NIST researchers found that face identification performance is optimal when insights from a human expert are combined with a top machine learning algorithm.
Without cognitive insights, a security intelligence platform does little to ease the pressure on short-staffed security operations center (SOC) teams to analyze massive volumes of threat data.
Artificial intelligence (AI) tools enable security teams to more quickly identify behavioral patterns that could point to insider threats.
As AI progresses, security professionals must prepare for the inevitability of machines writing their own malware to infect other machines in the not-so-distant future.
While fraudsters have yet to master adversarial AI, the only way for the security community to get ahead of the emerging threat is through collaborative defense.
Today, IBM introduced the Resilient Incident Response Platform (IRP) with Intelligent Orchestration and X-Force Threat Management services to help organizations connect human and machine intelligence.
Is cognitive security all hype, or can AI-powered tools help organizations defend their networks against evolving cyberthreats today?
While some observers fear a Skynet-esque future of malicious, self-aware machines, Dudu Mimran envisions a world in which AI and cybersecurity work together to keep emerging threats in check.
Generative adversarial networks (GANs) are pairs of neural networks that compete in a game: a generator attempts to fool a discriminator with examples that mimic a training set.
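That generator-versus-discriminator game can be sketched as a toy one-dimensional GAN with hand-derived gradient steps. Everything here is an illustrative assumption rather than anything from the articles above: the real data is taken to be Gaussian with mean 4.0, the generator is a simple affine map of noise, the discriminator is a single logistic unit, and the learning rate and step count are arbitrary.

```python
import math
import random

random.seed(0)

# Assumed setup (not from the source): real samples ~ N(4.0, 0.5).
# Generator: g(z) = a*z + b, mapping standard Gaussian noise z to a sample.
# Discriminator: d(x) = sigmoid(w*x + c), estimating P(x is real).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.02

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for _ in range(3000):
    x_real = random.gauss(4.0, 0.5)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b

    # Discriminator step: ascend log d(x_real) + log(1 - d(x_fake)),
    # i.e. learn to score real samples high and generated samples low.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend the non-saturating objective log d(g(z)),
    # nudging fakes toward regions the discriminator currently calls real.
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - d_fake) * w          # d/dx_fake of log d(x_fake)
    a += lr * grad_x * z
    b += lr * grad_x

fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(f"generated mean is roughly {fake_mean:.2f} (real mean is 4.0)")
```

The two alternating updates are the "game": as the discriminator gets better at separating real from fake, its gradients pull the generator's output distribution toward the training data.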
To defend their confidential data from increasingly sophisticated cybercriminals, security teams must use machine learning to perform analytical tasks that are too tedious for human analysts to complete at scale.