When we covered SecOps in May 2015 and again in January 2017, we discussed the importance of security within the DevOps-focused enterprise, covering topics such as what data you gather, threat modeling, encryption, education, vulnerability management, embracing automation, incident management and cognitive security.

From a cybersecurity perspective, 2017 brought both wins and challenges to the community. Challenges include:

  • High-profile vulnerabilities that put vulnerability management processes to the test;
  • Lack of education on basic IT security best practices, which enabled malware to spread quickly; and
  • Limited awareness of baseline configuration settings in cloud services, which left adopters exposed from the start.

Looking at the positives, we saw the emergence of cognitive technologies and machine learning playing a key part in cybersecurity. For example, Watson for Cyber Security helped bridge the skills gap and deliver quicker root cause analysis. User behavior analytics backed by machine learning started closing the insider threat gap by clarifying the risks associated with privileged users, and we saw closer integration of security information and event management (SIEM) systems with incident response capabilities.
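
To make the user behavior analytics idea concrete, here is a minimal sketch that flags unusual privileged-user sessions with an off-the-shelf anomaly detector. The features (login hour, failed logins, data transferred), the sample data and the contamination setting are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: flagging anomalous privileged-user sessions with an
# off-the-shelf anomaly detector. Features and data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour, failed_logins, mb_transferred] for one session.
baseline_sessions = np.array([
    [9, 0, 120], [10, 1, 95], [11, 0, 150], [14, 0, 110], [16, 1, 130],
    [9, 0, 100], [10, 0, 140], [15, 1, 125], [13, 0, 115], [11, 0, 105],
])

# Fit on historical "normal" behavior; contamination is a tuning assumption.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_sessions)

# Score new sessions: -1 marks an outlier worth an analyst's attention.
new_sessions = np.array([
    [10, 0, 118],   # looks like ordinary working-hours activity
    [3, 6, 4800],   # 3 a.m. login, repeated failures, large transfer
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"session {session.tolist()}: {status}")
```

In practice, the hard work is in feature engineering and in routing flagged sessions into the SIEM and incident response workflow described above.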

2018 will bring challenges of its own, chief among them the enforcement of the General Data Protection Regulation (GDPR) in Europe, which requires action now. The key steps are:

  • Identifying what data is being collected;
  • Deciding how to protect the data against internal and external attacks;
  • Providing customers with a means to exercise their right to be forgotten (see the sketch after this list); and
  • Establishing incident management.
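
To make the right-to-be-forgotten step concrete, below is a minimal sketch of an erasure-request handler over a hypothetical customer datastore. The table names, fields and retention rules are assumptions for illustration only; actual GDPR erasure also has to reach backups, logs and third-party processors, and needs legal review.

```python
# Minimal sketch of a "right to be forgotten" erasure handler.
# The schema and retention rules are illustrative assumptions; real GDPR
# erasure needs legal review and coverage of backups and processors.
import sqlite3
from datetime import datetime, timezone

def handle_erasure_request(db_path: str, customer_id: int) -> None:
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction: either everything is erased or nothing is
            # Remove directly identifying records.
            conn.execute("DELETE FROM customers WHERE id = ?", (customer_id,))
            conn.execute("DELETE FROM marketing_consents WHERE customer_id = ?",
                         (customer_id,))
            # Pseudonymize records kept for legitimate purposes (e.g., invoices).
            conn.execute(
                "UPDATE orders SET customer_name = 'ERASED', email = NULL "
                "WHERE customer_id = ?",
                (customer_id,),
            )
            # Keep an audit entry that the request was honored, without PII.
            conn.execute(
                "INSERT INTO erasure_log (customer_id, erased_at) VALUES (?, ?)",
                (customer_id, datetime.now(timezone.utc).isoformat()),
            )
    finally:
        conn.close()
```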

The Crucial Roles of SecOps and Cognitive Security

Information security continues to shift left, whether through known-secure starting templates, more frequent code scanning via up-to-date cloud services, or continuous security testing. SecOps will play a crucial role in helping to ensure improved security without compromising agility, and cognitive-enabled tools will again be key to faster identification and resolution.
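
As one small example of what shifting left can look like in practice, the sketch below is a pre-commit-style check that fails when staged files contain obvious hard-coded secrets. The patterns and the git invocation are simplifications and assumptions; a real pipeline would layer in dependency scanning, static analysis and container image checks.

```python
# Minimal shift-left sketch: scan staged files for obvious hard-coded
# secrets before a commit lands. Patterns are illustrative only.
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape (illustrative)
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    re.compile(r"(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]", re.I),
]

def staged_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as handle:
                text = handle.read()
        except OSError:
            continue  # deleted or unreadable files are skipped
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                findings.append((path, pattern.pattern))
    for path, pat in findings:
        print(f"possible secret in {path} (pattern: {pat})", file=sys.stderr)
    return 1 if findings else 0  # non-zero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```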

The availability of new hosting technologies such as Kubernetes from the large cloud infrastructure-as-a-service (IaaS) providers will bring interesting new challenges. Adopters must look beyond the hype when selecting vendors and weigh key security questions, including:

  • Network protection. Does the service provider supply sufficient firewalling capabilities? A minimal audit sketch follows this list.
  • Hosting infrastructure security. Is the responsibility shared, and how does it impact our service availability?
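
To illustrate the network protection question in a Kubernetes setting, here is a hedged audit sketch that lists namespaces with no NetworkPolicy at all; by default, Kubernetes leaves pod traffic unrestricted until a policy selects those pods. It assumes the official kubernetes Python client and read access via a kubeconfig, and it is a starting point rather than a full assessment.

```python
# Minimal sketch: flag namespaces with no NetworkPolicy, since Kubernetes
# allows all pod-to-pod traffic until at least one policy selects the pods.
# Assumes the official `kubernetes` Python client and a readable kubeconfig.
from kubernetes import client, config

def namespaces_without_network_policies() -> list[str]:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    core = client.CoreV1Api()
    networking = client.NetworkingV1Api()

    unprotected = []
    for ns in core.list_namespace().items:
        name = ns.metadata.name
        policies = networking.list_namespaced_network_policy(name).items
        if not policies:
            unprotected.append(name)
    return unprotected

if __name__ == "__main__":
    for name in namespaces_without_network_policies():
        print(f"namespace '{name}' has no NetworkPolicy: traffic is unrestricted")
```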

Staying Ahead of Threats Through Collaboration

We are only as secure as our weakest link, and if we consume or delegate services to external vendors, their security posture feeds into ours. Ultimately, we are responsible to our customers, so we must ask our providers about their security posture and which standards they are certified against. Transparency will be a key differentiator as we move forward.

As cloud vendors in 2018, we must stay ahead of would-be attackers, and with financial and reputational penalties on the rise, doing so is becoming even more critical. Threat sharing and collaboration will allow us to improve our security as a community while minimizing cost. Leaders in the IT and security spaces recognize the value of this collaboration at an enterprise level, and developers continue to drive content through threat portals such as the X-Force Exchange. We should ask ourselves: Are we selecting our security vendors with their community presence in mind?

Yes, GDPR is a big-ticket item for 2018, but hopefully it has enabled budgets to be allocated to key security activities.

Read the Interactive Solution Brief: Ready, Set, GDPR

Notice: Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation. Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients’ business and any actions the clients may need to take to comply with such laws and regulations.

The products, services, and other capabilities described herein are not suitable for all client situations and may have restricted availability. IBM does not provide legal, accounting or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation.
