GRC Viewpoint

Unlocking the Windows Server Benchmark Puzzle

Expert tips and recommendations for the necessary work of baseline hardening through Windows Server benchmark implementation.

Organizations rely on configuration standards and industry best practices to harden their systems. These benchmarks, published by the Center for Internet Security (CIS), can be used to measure the effectiveness and efficiency of cybersecurity best practices. Because IT infrastructure is so complex, multiple dependencies need to be configured with these best practices in mind. The technical task of hardening an organization's servers to comply with comprehensive security policies is an involved, time-consuming process.

The gap between Security and IT/DevOps teams is a significant pain point for many organizations. IT and Security teams have a fundamental conflict of interest that stems from differing objectives and KPIs. The Security team must protect and monitor every asset, server, desktop, and device using best practices and strict standards, and it then requests that a server hardening project begin. Server hardening is a set of security configuration changes that create a robust security baseline, and it is the IT team's responsibility to actually perform the work. Collaboration between the teams is therefore crucial to the organization's success.

In a survey conducted by CalCom Software, IT teams from over 100 enterprise organizations were asked for their top three reasons for inadequate baseline hardening. They cited:

  • Testing: Implementing a server security policy requires changing server configuration settings to meet policy requirements. Applying the policy directly to production systems can cause severe damage and most often breaks server operations and applications. To generate an impact analysis report detailing how the policy will affect production, a test environment must be built. Setting up a test environment that perfectly simulates each policy across servers running different operating system versions, roles, and applications at the same time is nearly impossible.
  • Configuration Drift: A drift from the baseline that goes unnoticed can expose the organization to configuration vulnerabilities, and failure to identify and correct the cause means lost productivity and downtime (see the sketch after this list).
  • Policy Monitoring and Reporting: Ongoing monitoring of the baseline is required to keep a record of the multiple environments, their compliance scores, and audit results. With the production environment constantly changing, new vulnerabilities are often found. Logging what changes were made, where, and when is crucial, yet often neglected.
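
To make the drift problem concrete, below is a minimal sketch in Python (a hedged illustration, not any vendor's implementation). It assumes a baseline stored as a mapping of registry locations to expected values; the two Lsa settings shown are illustrative, not a complete CIS policy.

```python
# Minimal drift-detection sketch: compare live registry values against a
# recorded baseline and log what drifted, where, and when.
import datetime
import winreg

# Hypothetical baseline: (hive, key path, value name) -> expected value.
BASELINE = {
    (winreg.HKEY_LOCAL_MACHINE,
     r"SYSTEM\CurrentControlSet\Control\Lsa", "LimitBlankPasswordUse"): 1,
    (winreg.HKEY_LOCAL_MACHINE,
     r"SYSTEM\CurrentControlSet\Control\Lsa", "NoLMHash"): 1,
}

def check_drift(baseline):
    """Yield (key_path, value_name, expected, actual) for drifted settings."""
    for (hive, key_path, value_name), expected in baseline.items():
        try:
            with winreg.OpenKey(hive, key_path) as key:
                actual, _type = winreg.QueryValueEx(key, value_name)
        except FileNotFoundError:
            actual = None  # a missing key or value counts as drift
        if actual != expected:
            yield key_path, value_name, expected, actual

if __name__ == "__main__":
    now = datetime.datetime.now().isoformat(timespec="seconds")
    for key_path, name, expected, actual in check_drift(BASELINE):
        # Record what drifted, where, and when -- the audit trail the
        # article notes is crucial and often missing.
        print(f"{now} DRIFT {key_path}\\{name}: expected {expected}, found {actual}")
```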

Process of implementing Windows Benchmarks

While Windows Server is the backbone of many IT infrastructures, it is the Windows system administrator's responsibility to deploy technology, maintain services, and ensure computing uptime for the organization quickly and flexibly. Implementing CIS benchmarks for Windows servers is resource-intensive and time-consuming when trying to meet best practice standards. Benchmarks vary across Windows versions and also differ between member servers, domain controllers, and other Windows roles and application servers. A single benchmark can run to a 1,000+ page PDF with hundreds of objects to address. To better understand the implementation, let's review the process:

Protecting the production environment when implementing a Windows Server benchmark

The only way to avoid outages once a Windows Server benchmark is implemented is to thoroughly research the potential impact of each configuration setting change on server operations. This impact analysis typically requires testing in a lab environment that has been set up to replicate production.
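
As a rough illustration of the audit-before-enforce idea, here is a hypothetical Python dry-run (a sketch under assumptions, not a description of any product): it reports what an enforcement pass would change before anything is written, so the impact can be reviewed first.

```python
# Audit-before-enforce sketch: print planned changes by default; write only
# when enforce=True (writing to HKLM requires administrator rights).
import winreg

# One illustrative setting; a real baseline would hold hundreds of entries.
BASELINE = {
    (winreg.HKEY_LOCAL_MACHINE,
     r"SYSTEM\CurrentControlSet\Control\Lsa", "NoLMHash"): 1,
}

def apply_baseline(baseline, enforce=False):
    """Dry-run by default; values are assumed to be REG_DWORD."""
    for (hive, key_path, value_name), expected in baseline.items():
        try:
            with winreg.OpenKey(hive, key_path) as key:
                actual, _type = winreg.QueryValueEx(key, value_name)
        except FileNotFoundError:
            actual = None
        if actual == expected:
            continue
        print(f"PLAN {key_path}\\{value_name}: {actual!r} -> {expected!r}")
        if enforce:
            with winreg.CreateKeyEx(hive, key_path, 0,
                                    winreg.KEY_SET_VALUE) as key:
                winreg.SetValueEx(key, value_name, 0,
                                  winreg.REG_DWORD, expected)

apply_baseline(BASELINE)                   # audit only; nothing is modified
# apply_baseline(BASELINE, enforce=True)   # apply only after lab validation
```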

Deployment location of the benchmark 

Once the Windows Server benchmark has been tested, it then needs to be managed. Ideally, the benchmark would be managed from a centralized location, making it easier to configure properly and to manage updates. Because there is no single Windows baseline but multiple baselines differentiated by Windows Server version and role, and because GPOs cannot enforce granular policies for every server, intensive work is required.
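
To illustrate why one GPO cannot cover the whole fleet, here is a minimal sketch of centralized baseline selection; the baseline profile names and the server inventory are hypothetical.

```python
# Hypothetical mapping of (Windows Server version, role) -> baseline profile.
BASELINES = {
    ("2022", "member"):            "CIS_WS2022_Member_Server_L1",
    ("2022", "domain_controller"): "CIS_WS2022_Domain_Controller_L1",
    ("2019", "member"):            "CIS_WS2019_Member_Server_L1",
}

def baseline_for(server):
    """Pick the benchmark profile matching a server's version and role."""
    try:
        return BASELINES[(server["version"], server["role"])]
    except KeyError:
        raise LookupError(f"No baseline for {server['name']}") from None

# Illustrative inventory: each server gets the profile for its version/role.
fleet = [
    {"name": "dc01",  "version": "2022", "role": "domain_controller"},
    {"name": "app07", "version": "2019", "role": "member"},
]
for server in fleet:
    print(server["name"], "->", baseline_for(server))
```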

Monitoring the benchmark

Once a benchmark is in place, any modification to a server's applications or infrastructure can affect it. Constant monitoring of the implemented benchmark requires considerable time and effort to prevent configuration drift and the vulnerabilities it introduces.
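
For reporting, a compliance score can be as simple as the share of settings still at their baseline value. A minimal sketch, assuming a drift check like the earlier one has already produced a pass/fail result per setting:

```python
# Compliance score: percentage of settings that still match the baseline.
def compliance_score(check_results):
    """check_results maps setting name -> True if it matches the baseline."""
    passed = sum(1 for ok in check_results.values() if ok)
    return 100.0 * passed / len(check_results)

# Illustrative results for three settings.
results = {"LimitBlankPasswordUse": True, "NoLMHash": True,
           "RestrictAnonymous": False}
print(f"Compliance: {compliance_score(results):.1f}%")  # Compliance: 66.7%
```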

System Admin Challenges when Implementing Benchmarks

System administrators have the laborious job of continually implementing Windows Server benchmarks according to the published recommendations. For a single benchmark implementation, dozens of pages must be read and a vast amount of research done before applying it to production. If done manually, aligning existing assets to the relevant benchmarks while making sure they remain compliant with the recommended standard can take years.

As of today, the CIS benchmarks recommend roughly 474 security settings as a baseline for Windows servers. Imagine having to manually configure 100 or more Windows servers in an organizational environment. Managing all of them means configuring and checking almost 50,000 settings (474 settings × 100 servers = 47,400), and those are only the operating system's configurations. Other system configurations such as IIS and SQL also need to be managed, adding thousands more actions and decisions and making the job incredibly demanding and time-consuming.
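
As a back-of-the-envelope sketch of that scale: the IIS and SQL per-server setting counts below are assumptions for illustration; only the 474 OS figure comes from the benchmark count above.

```python
# Rough fleet-wide check count: OS settings on every server, plus assumed
# add-on baselines for servers that also run IIS or SQL.
OS_SETTINGS = 474  # CIS-recommended OS settings (figure cited above)
fleet = {
    "os_only":     {"servers": 60, "settings": OS_SETTINGS},
    "os_plus_iis": {"servers": 25, "settings": OS_SETTINGS + 100},  # assumed
    "os_plus_sql": {"servers": 15, "settings": OS_SETTINGS + 150},  # assumed
}
total = sum(g["servers"] * g["settings"] for g in fleet.values())
print(f"{total:,} configuration checks across 100 servers")  # 52,150
```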

How to become audit ready 

Windows is a complex operating system with many subsystems and features. This complexity makes it difficult to identify which metrics and workloads are most important to measure. In addition, the Windows codebase is constantly changing, which can invalidate existing benchmarks and demands considerable effort to keep them up to date. IT leaders recognize the advantages of automation and look to solution providers such as CalCom Software to automate the impact analysis process, reduce change-related outages, and provide a framework for maintaining an effective security posture after implementation.

Our advice to anyone beginning a hardening project is to review the hardening management platform carefully. Be sure it can drill down to a single server so that the security policy can cascade to all servers based on role and installed applications. Avoid wasting time: choose a solution that eliminates the cost of building lab environments to simulate the impact of security policies on servers. Save your department's resources by using a solution that analyzes the direct impact of your policy on the production environment and gives you real-time data.


By Dvir Goren, CTO (Chief Technology Officer) at CalCom

Dvir is the CTO at CalCom and a veteran of the IT industry with over 25 years of experience in systems, security, and Microsoft infrastructures. He has been managing complex hardening projects for over 15 years and is considered a thought leader in his field. Dvir also leads CalCom's product team, including development of the product strategy and roadmap.
