When we covered SecOps in May 2015 and again in January 2017, we discussed the importance of security within the DevOps-focused enterprise, covering topics such as what data you gather, threat modeling, encryption, education, vulnerability management, embracing automation, incident management and cognitive security.

From a cybersecurity perspective, 2017 brought both wins and challenges to the community. The challenges included:

  • High-profile vulnerabilities that put your vulnerability management processes to the test;
  • A lack of education on basic IT security best practices, which enabled malware to spread quickly; and
  • Poor awareness of baseline configuration settings in cloud services, which left adopters exposed from the start.

Looking at the positives, we saw cognitive technologies and machine learning emerge as key players in cybersecurity. For example, Watson for Cyber Security helped bridge the skills gap and deliver quicker root cause analysis. User behavior analytics backed by machine learning began to close the insider threat gap by clarifying the risks associated with privileged users. We also saw closer integration of security information and event management (SIEM) systems with incident response capabilities.

2018 will bring challenges of its own, chief among them the enforcement of the General Data Protection Regulation (GDPR) in Europe, which requires action now. The key steps are:

  • Identifying what data is being collected;
  • Deciding how to protect the data against internal and external attacks;
  • Providing customers with a means to be forgotten (see the sketch after this list); and
  • Establishing incident management.
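
To make the steps above concrete, here is a minimal sketch of what a "means to be forgotten" flow might look like in code. It is purely illustrative: the SQLite store and the table and field names (customer_profiles, orders, erasure_log) are assumptions for the example, not part of the article or any specific product.

    # Hypothetical "right to be forgotten" flow. The schema below
    # (customer_profiles, orders, erasure_log) is an illustrative assumption.
    import sqlite3
    from datetime import datetime, timezone

    def erase_customer(db_path: str, customer_id: int) -> None:
        """Remove or pseudonymize one customer's personal data and log the erasure."""
        conn = sqlite3.connect(db_path)
        try:
            with conn:  # single transaction: all statements commit or none do
                # Identified personal data: delete the profile record outright.
                conn.execute(
                    "DELETE FROM customer_profiles WHERE customer_id = ?",
                    (customer_id,),
                )
                # Orders may need to be retained for financial reasons,
                # so strip the personal fields instead of deleting the rows.
                conn.execute(
                    "UPDATE orders SET email = NULL, shipping_address = NULL "
                    "WHERE customer_id = ?",
                    (customer_id,),
                )
                # Accountability: record when the erasure was carried out.
                conn.execute(
                    "INSERT INTO erasure_log (customer_id, erased_at) VALUES (?, ?)",
                    (customer_id, datetime.now(timezone.utc).isoformat()),
                )
        finally:
            conn.close()

In practice the same pattern extends to every data store identified in the first step: the data inventory drives which systems an erasure request must reach.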

The Crucial Roles of SecOps and Cognitive Security

Information security continues to shift left, whether through known-secure starting templates, more frequent code scanning via up-to-date cloud services or continuous security testing. SecOps will play a crucial role in helping to ensure improved security without compromising agility, and cognitive-enabled tools will again be key to faster identification and resolution.
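
As one illustration of what shifting left can mean inside a build pipeline, the sketch below runs a static analysis scan as a pipeline step and fails the build on high-severity findings. It assumes a Python codebase and the open-source Bandit scanner; the tool choice and the src directory are assumptions for the example, not recommendations from the article.

    # Minimal shift-left sketch: run a static analysis scan (Bandit, assumed
    # here) during the build and fail the stage on high-severity findings.
    import json
    import subprocess
    import sys

    def high_severity_findings(source_dir: str = "src") -> list:
        """Run Bandit recursively over source_dir and return its high-severity results."""
        result = subprocess.run(
            ["bandit", "-r", source_dir, "-f", "json"],
            capture_output=True,
            text=True,
        )
        report = json.loads(result.stdout or "{}")
        return [
            issue
            for issue in report.get("results", [])
            if issue.get("issue_severity") == "HIGH"
        ]

    if __name__ == "__main__":
        findings = high_severity_findings()
        for issue in findings:
            print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
        # A non-zero exit code fails the surrounding CI stage.
        sys.exit(1 if findings else 0)

The same gate works with any scanner that emits machine-readable results; the point is that findings block the pipeline early rather than surfacing after deployment.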

The availability of new hosting technologies such as Kubernetes from the large cloud infrastructure-as-a-service (IaaS) providers will bring interesting new challenges. Adopters must look beyond the hype when selecting vendors and weigh key security questions, including:

  • Network protection. Are sufficient firewalling capabilities provided by the service provider? (See the sketch after this list.)
  • Hosting infrastructure security. Is the responsibility shared, and how does it impact our service availability?
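
Beyond what the provider offers, adopters can enforce their own network restrictions inside the cluster. The sketch below uses the official Kubernetes Python client to apply a default-deny ingress NetworkPolicy to one namespace; the "production" namespace name is an assumption for the example, and whether such policies are actually enforced still depends on the network plugin the provider runs.

    # Sketch: apply a default-deny ingress policy to one namespace using the
    # Kubernetes Python client. The "production" namespace is an assumption.
    from kubernetes import client, config

    def apply_default_deny_ingress(namespace: str = "production") -> None:
        """Create a NetworkPolicy that blocks all inbound pod traffic in the namespace."""
        config.load_kube_config()  # or config.load_incluster_config() inside a pod
        policy = client.V1NetworkPolicy(
            metadata=client.V1ObjectMeta(name="default-deny-ingress"),
            spec=client.V1NetworkPolicySpec(
                pod_selector=client.V1LabelSelector(),  # empty selector = every pod
                policy_types=["Ingress"],               # no ingress rules = deny all inbound
            ),
        )
        client.NetworkingV1Api().create_namespaced_network_policy(
            namespace=namespace, body=policy
        )

    if __name__ == "__main__":
        apply_default_deny_ingress()

Traffic that legitimate services need can then be reopened with narrowly scoped allow policies, which keeps the firewalling posture explicit rather than implicit.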

Staying Ahead of Threats Through Collaboration

We are only as secure as our weakest link, and if we consume or delegate services to external vendors, their security posture feeds into ours. Ultimately, we are responsible to our customers, so we must ask our providers about their security posture and the standards against which they have certified. Transparency will be a key differentiator as we move forward.

As cloud vendors in 2018, we must stay ahead of would-be attackers, and with financial and reputational penalties on the rise, doing so is becoming even more critical. Threat sharing and collaboration will allow us to improve our security as a community while minimizing cost. Leaders in the IT and security spaces recognize the value of this collaboration at an enterprise level, and developers continue to drive content through threat portals such as the X-Force Exchange. We should ask ourselves: Are we selecting our security vendors with their community presence in mind?

Yes, GDPR is a big-ticket item for 2018, but hopefully it has enabled budgets to be allocated to key security activities.

Read the Interactive Solution Brief: Ready, Set, GDPR

Notice: Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation. Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients’ business and any actions the clients may need to take to comply with such laws and regulations.

The products, services, and other capabilities described herein are not suitable for all client situations and may have restricted availability. IBM does not provide legal, accounting or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation.
