Moneyball for Cybersecurity
Thu, 17 Oct 2024


A guest writer of WEI, see Bill Frank’s biography and contact information at the end of this article.

Michael Lewis coined the term Moneyball in his eponymous book, published in 2003 and made into a movie in 2011 starring Brad Pitt. Moneyball was about applying analytics to baseball. Billy Beane, the Oakland Athletics General Manager, was the first baseball executive to use analytics to increase the probability of winning games.

Baseball is ultimately about the players, and teams operate under constrained budgets. So Beane’s goal was to use analytics to create a better roster of players.

The analytics the Athletics developed were new and contradicted all the “rules of thumb” baseball scouts had used to select players for over 100 years.

Moneyball for cybersecurity is about applying analytics to cybersecurity to reduce the probability of material financial impact due to cyber-related loss events.

Cybersecurity is about controls – people, processes, and technologies – constrained by budgets and resources. So the objective is to create a better portfolio of controls and to improve collaboration with the business leaders who set cybersecurity budgets.

This requires a new analytical approach that calculates and visualizes the aggregate effectiveness of an organization’s control portfolio across the cyber-related loss events of greatest concern to business leaders. In other words, visualize cyber defenses in dollars.

It can be misleading to project the risk reduction value of a control improvement based on evaluating it in isolation. Yet we do this all the time. Risk reduction is about how a proposed control improvement will work in concert with the other deployed controls.

Learn More About WEI's Left of Bang Approach

Why We Need Moneyball for Cybersecurity

There is a cybersecurity paradox. Overall cybersecurity spending increases every year. New frameworks are published, and older ones are updated. In addition, various government agencies are pressuring organizations to improve their cyber postures.

Despite these efforts, the number and financial impact of cyber-related loss events continue to increase.

Some say it’s due to the increasing pace of digital transformation. Others say it’s due to the increase in remote work and cloud computing. Still others say it’s due to a lack of trained cybersecurity professionals.

While those factors may contribute, two issues are more fundamental – prioritizing control investments and justifying cybersecurity budget proposals.

1. Prioritizing Control Investments

A control’s performance when evaluated in isolation does not indicate how effective it will be in reducing risk when deployed in concert with all the other controls. This makes it difficult to select which control improvements should be funded and which should not.

The underlying issue is the complexity of cybersecurity. Organizations deploy dozens of controls. There are hundreds of threat types as defined by MITRE ATT&CK. There are hundreds to thousands of overlapping and intertwined attack paths into and through an organization’s IT/OT estate.

Therefore, each loss event scenario involves thousands of overlapping end-to-end kill chains. Adding to the complexity, many controls appear on many kill chains and many controls appear in multiple loss event scenarios.

In addition, it’s difficult to compare controls across different IT domains. How do you compare the value of a network control to an endpoint control? How do you compare the value of identity and access controls to malware detection controls? How do you compare left-of-bang to right-of-bang controls?

2. Justifying cybersecurity budgets

Security leaders often have difficulty justifying proposed control investments to the business leaders who set cybersecurity budgets due to the gap between security metrics and business risk. Security teams use a wide range of technical metrics to monitor control performance that business leaders do not understand.

Business leaders know that cyber risk is business risk. Business leaders want to manage cyber risk as they do other strategic risks. They are frustrated by the difficulties of collaborating with security leaders who don’t speak their language – money.

Business leaders want to know how control investments will reduce the probability of material financial impact due to cyber loss events. To get their budget requests approved, security leaders need a credible approach to bridge the gap between security metrics and business risk.

Implementing Moneyball For Cybersecurity

Monaco Risk’s advisory services use its patented Cyber Defense Graph to make Moneyball for Cybersecurity useful to security teams and credible to business leaders.

Better control selection

Monaco Risk’s Cyber Defense Graph statistical simulation solves the exponential kill chain problem described above. All of the kill chains related to a loss event scenario are analyzed together taking into consideration the capabilities, coverage, and governance of the controls involved.

Figure 1: This is an example of Monaco Risk’s modular Cyber Defense Graph. Threats enter from the left. Threats move along attack paths shown as arrows. Controls are shown as boxes. Loss events result from threats that are not blocked by controls.

The resulting kill graphs display the critical path weaknesses into and through the organization’s IT/OT estate.

We generate tornado charts to show each control’s current and potential contribution to the aggregate effectiveness of the control portfolio.

Figure 2: Tornado Chart example showing the contribution of individual controls to “aggregate control effectiveness.”

In addition, we aggregate control effectiveness across multiple kill graphs.

We have also developed a set of standardized control parameters that enables the Cyber Defense Graph software to compare the risk reduction value of disparate types of controls. We can compare network controls to host controls, identity/access to malware prevention controls, and left-of-bang to right-of-bang controls.

This improves the decision-making process for prioritizing control selection by showing how alternative control improvements will reduce the probability of material financial impact due to cyber-related loss events.
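The difference between evaluating a control in isolation and evaluating it within a portfolio can be illustrated with a toy Monte Carlo sketch. This is not Monaco Risk’s actual algorithm; the kill chains, control names, and block probabilities below are invented for illustration:

```python
import random

# Hypothetical kill chains: each is an ordered list of (control, probability
# that the control stops a threat traversing it). All values are invented.
kill_chains = [
    [("email_filter", 0.80), ("edr", 0.70), ("dlp", 0.50)],
    [("waf", 0.60), ("edr", 0.70)],
    [("mfa", 0.90), ("network_seg", 0.65), ("dlp", 0.50)],
]

def simulate(trials=100_000):
    """Estimate the fraction of attempted threats that become loss events.
    A threat is blocked if any control on its chosen path stops it."""
    losses = 0
    for _ in range(trials):
        chain = random.choice(kill_chains)  # attacker picks an attack path
        if not any(random.random() < p for _, p in chain):
            losses += 1
    return losses / trials

print(f"Estimated loss-event rate: {simulate():.3f}")
```

Note how the same hypothetical control (`edr`, `dlp`) sits on multiple chains, so improving it shifts the aggregate result in a way a standalone evaluation would not reveal.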

Improved collaboration with business leaders

Better collaboration with business leaders who set cybersecurity budgets hinges on bridging the security metrics – business risk gap. The Cyber Defense Graph enables credible business risk reduction analysis, in dollars, of alternative control investments.

We generate Loss Exceedance Curve charts to show the potentially catastrophic nature of cyber-related loss events. These charts also show, in dollars, how alternative control improvements reduce the probability of material financial impact of loss events.

Figure 3: This example of a Loss Exceedance Curve chart shows how selected alternative control improvements will reduce the probabilities of dollar losses exceeding three thresholds shown as vertical lines.
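As a rough sketch of how such a curve is typically computed (a generic frequency/severity simulation, not Monaco Risk’s method; the Poisson and lognormal parameters are invented):

```python
import math
import random

def poisson(lam):
    """Sample an annual event count via Knuth's algorithm (fine for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def loss_exceedance(threshold, lam=2.0, mu=12.0, sigma=1.5, years=20_000):
    """Estimate P(annual loss > threshold): Poisson event frequency,
    lognormal dollar severity per event."""
    exceed = 0
    for _ in range(years):
        total = sum(random.lognormvariate(mu, sigma) for _ in range(poisson(lam)))
        if total > threshold:
            exceed += 1
    return exceed / years

# Plotting P(loss > t) against a range of thresholds t yields the curve;
# a control improvement is modeled by re-running with lower frequency or severity.
for t in (1e5, 1e6, 1e7):
    print(f"P(annual loss > ${t:,.0f}) = {loss_exceedance(t):.2%}")
```

The vertical threshold lines in Figure 3 correspond to reading this curve at specific dollar amounts.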

Simply claiming a particular control improvement will reduce risk by X% is not sufficient. As my teachers used to say, “Show me the work!” What are your underlying assumptions? Have you evaluated lower-cost controls? How do they compare to the ones you are proposing?

Are there any controls we can eliminate to save money? Can we negotiate lower prices on controls we need for compliance but don’t significantly reduce the risk of a cyber event?

The Moneyball for Cybersecurity Analogy

I am not the first to use the Moneyball analogy for cybersecurity. It has been used to focus on cybersecurity workforce development. Since Moneyball was about player selection, clearly Moneyball can and should be applied to cybersecurity team selection and development.

We take Moneyball a step further by applying it to processes and technologies as well as people, i.e., all controls. The analogy has also been used by a cyber insurance company.

Let me know what you think!

Using Performance Controls to Address Cybersecurity’s Achilles Heel
Thu, 21 Mar 2024


See Bill Frank’s biography and contact information at the end of this article.

[Note: This is an updated version of the original article posted on March 21, 2024. I replaced the term “Governance” Controls with “Performance” Controls to eliminate any confusion with the NIST Cybersecurity Framework 2.0 use of the term “Governance.”]

I focus here on automated controls that monitor and measure the “performance” of “Defensive” controls that directly block threats or at least alert on suspicious activities.

How well are your cybersecurity controls performing? Measuring control efficacy is challenging. In fact, under-configured, misconfigured, and poorly tuned controls, as well as variances in security processes, are the Achilles heel of cybersecurity programs.

A mismatch between risk reduction potential and performance results in undetected threats (false negatives) as well as an excessive number of false positives. This leads to an increase in the likelihood of loss events.

All controls, whether people, processes, or technologies, can be categorized in one of two ways – Defensive or Performance.

  • Defensive Controls: These are controls that block threats or at least detect and alert on suspected activities. Effective Defensive Controls directly reduce the likelihood of loss events.
  • Performance Controls: These are indirect controls that measure the performance of Defensive Controls, highlight Defensive Control deficiencies, and/or evaluate the maturity of Defensive Controls’ configurations. Performance Controls include, but are not limited to, offensive security controls.

Most controls are easily categorized. Firewalls and EDR agents are examples of Defensive Controls. We categorize offensive security controls as Performance Controls because their purpose includes testing the efficacy of Defensive Controls.

Vulnerability management (discovery, analysis, and prioritization) is a Performance Control because vulnerabilities, whether in security controls, application code, or infrastructure, are a type of control deficiency.

Patching is a Defensive Control because patched vulnerabilities prevent threats targeting those vulnerabilities from being exploited.

Manual Performance Controls: Human Penetration Testing

Attempting to conduct Performance functions manually is time-consuming, limited in scope, and error-prone. Human Penetration Testing has been the go-to Performance Control for decades. However, only the very largest organizations can afford to fund a Red Team to provide anything close to continuous testing.

Most organizations hire an outside firm to perform pentesting. Due to high costs, the scope of human pentesting is limited. In addition, it is typically performed only once a year or once a quarter. Therefore, for most organizations, human pentesting is little more than a checkbox exercise.

Note that human pen testers use a variety of tools to address many of the standard and repetitive tasks associated with pentesting. However, in general, these tools are not revealed to the client.

Having said that, I am not here to denigrate human pen testing. Many pen testers surely have deep expertise and creativity that go beyond what any automated tool can provide. This is why bug bounty programs are popular.

The cybersecurity market has responded to the need for automated Performance Controls. Since no two organizations are the same, my goal for this article is to describe different types of Performance Controls to help you decide which approach is right for you.

Automated Performance Controls

There are five types of automated Performance Controls I will discuss:

  1. Attack Simulation
  2. Risk-based Vulnerability Management
  3. Metrics
  4. Security Control Posture Management
  5. Process Mining

Note that since virtually all of these tools are SaaS platforms, factors including costs, support and training, community, data security, and compliance must always be evaluated!

Read: WEI Remains Ahead Of The Cybersecurity Moving Target

1. Attack Simulation

Attack Simulation is my simplified term that covers a variety of vendors who use terms like Automated Penetration Testing, Breach and Attack Simulation, and Security Control Validation.

The one thing they all have in common is executing simulations of known threats against deployed controls. However, the vendors in this space use a variety of architectures to accomplish their goals.

The key factors to consider when evaluating Attack Simulation tools are (1) the number of agents that are required or recommended, (2) integrations with deployed controls, (3) the degree to which the simulation software mimics adversarial tactics, techniques, and procedures (TTPs), (4) the vendor’s advice on running their software in a production environment, (5) firewall / network segmentation validation, (6) threat intelligence responsiveness, and (7) the range and quality of simulated techniques and sub-techniques.

Agents. The number of agents needed for internal testing ranges from a single agent to start the test to a requirement for agents on all on-premises workstations and workloads. No agents may be needed for testing cloud-based controls.

Defensive Control Integrations. Integrating Attack Simulation tools with Defensive Controls enables blue/purple teamers to better understand how a control reacted to a specific technique generated by the attack simulation tool.

Simulation. An indicator of how close a vendor gets to simulating real attackers is its approach to discovering and using passwords to execute credentialed lateral movement. Are clear-text passwords taken from memory? Are password hashes cracked in the vendor’s cloud environment (or in the vendor’s locally deployed software)? Adversaries use these techniques regularly; your attack simulation tool should too.

Production / Lab Testing. Attack Simulation vendors vary in their recommendations regarding running their tools in production vs lab environments. Of course, it’s advisable to perform initial evaluations in a lab environment first. But to get maximum value from an attack simulation tool, you should be able to run it in a production environment.

Firewall / Network Segmentation. There is a special case for testing firewall/intrusion detection efficacy. Agents may be deployed on each side of the firewall. This allows for validating firewall policies in a production environment without running malware on any production workstations or workloads.

Threat Intelligence Responsiveness. New threats, vulnerabilities and control deficiencies are discovered with alarming regularity. How quickly does the attack simulation vendor respond with safe variations for you to test against your controls? Do you need to upgrade the tool, or just deploy the new simulated TTPs?

Range and Quality of techniques and sub-techniques. Attack simulation vendors should be able to show you their supported MITRE ATT&CK techniques and sub-techniques. The quality of those techniques and sub-techniques is very difficult to determine. The data generated via the integrations with deployed controls surely helps. We recommend testing at least two similarly architected tools in your environment to determine the quality of their attack simulations.

2. Risk-based Vulnerability Management

Vulnerability management is a cornerstone of every cybersecurity compliance framework, maturity model, and set of best practice recommendations. However, most organizations are overwhelmed with the number of vulnerabilities that are discovered, and do not have the resources to remediate all of them.

In response to this triage problem, vendors developed a variety of prioritization methods over the years. Despite its limitations, the Common Vulnerability Scoring System (CVSS) is the dominant means of scoring the severity of vulnerabilities. However, even NIST itself states that “CVSS is not a measure of risk.” Furthermore, NIST states that CVSS is only “a factor in prioritization of vulnerability remediation activities.”

Risk-based factors for vulnerability management include the following:

Business Context. What is the criticality of the asset in which the vulnerability exists? For example, production systems vs development systems.

Likelihood of exploitability. A combination of threat intelligence and factors associated with the vulnerability itself determines the likelihood that a vulnerability will be exploited.

Known Exploited Vulnerabilities. The Cybersecurity & Infrastructure Security Agency (CISA) maintains the Known Exploited Vulnerabilities (KEV) catalog. Vulnerabilities on the KEV list should get the highest priority for remediation.

Asset Location. What is the location of the asset with the vulnerability in question? Internet-facing assets get the highest priority.

Compensating Defensive Control. Is there a Defensive Control that can prevent the vulnerability from being exploited?
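A minimal sketch of how the factors above might be combined into a single remediation priority. The weights and field names are hypothetical, not taken from any product or framework:

```python
def priority_score(vuln):
    """Combine risk-based factors into a 0-100 remediation priority (illustrative)."""
    if vuln["on_kev_list"]:            # CISA KEV entries jump the queue
        return 100.0
    score = vuln["cvss"] * 10          # baseline severity, scaled to 0-100
    score *= vuln["exploit_likelihood"]            # threat-intel estimate, 0-1
    if vuln["internet_facing"]:                    # asset location
        score *= 1.5
    score *= {"production": 1.0, "development": 0.5}[vuln["asset_tier"]]  # business context
    if vuln["compensating_control"]:   # a Defensive Control already mitigates it
        score *= 0.4
    return min(score, 100.0)

# Example: a high-CVSS vulnerability, deprioritized by low exploit
# likelihood and a compensating control.
v = {"cvss": 8.1, "on_kev_list": False, "exploit_likelihood": 0.3,
     "internet_facing": True, "asset_tier": "production",
     "compensating_control": True}
print(round(priority_score(v), 1))
```

The point of the sketch is the ordering logic, not the specific multipliers: a real platform would calibrate these factors against its own threat intelligence.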

3. Metrics

Modern Defensive Controls generate large amounts of telemetry that can be used to monitor their performance and effectiveness. Automating metrics reporting enables continuous monitoring and measurement of the performance of a larger number of deployed controls.

While automated cybersecurity performance management platforms are not always considered an alternative to Attack Simulation and Risk-based Vulnerability Management solutions, they do have the advantage of being less intrusive because they are passive. All they need is read-only access to the Defensive Controls. There are no agents to deploy and no risk of unplanned outages.

The key factors when evaluating automated metrics solutions include the following:

Scope of Coverage. The range of metrics based on your priorities such as vulnerability management, incident detection and response, compliance, and control performance.

Integrations. Does the metrics solution vendor support integrations to your controls? If not, are they willing to add support for your controls? Will they charge extra for that?

Reporting flexibility. How flexible is the report building interface? What, if any, constraints are there to generate the reports you want? Can you build customized dashboards for different users? Is trend analysis supported?

Ease-of-Use. How easy is it to generate custom reports?

Scalability and Performance. Given the amount of data you want to retain, how fast are the queries/reports generated?

4. Security Control Posture Management

All security controls need to be configured and maintained to meet each organization’s policy requirements, threat profile, and risk culture. The amount of time and effort needed to initially implement the controls and then keep them up to date varies depending on the control type and the functionality provided by the vendor.

Firewalls are at or close to the top of the list of controls requiring the most care and feeding. Therefore, it’s not surprising that the first security control configuration management tools were created two decades ago to improve firewall policy (rule) management. These tools eliminate unused and overlapping rules, and improve responsiveness to the steady stream of requests for changes, additions, and exceptions.

Security Information and Event Management (SIEM) systems are also at or near the top of the list of controls requiring extensive care and feeding. One critical aspect of a SIEM’s effectiveness is the extent of its coverage of MITRE ATT&CK techniques and sub-techniques. This also maps back to the SIEM’s sources of log ingestion. Furthermore, SIEM vendors provide hundreds of rules which generally need to be tailored to the organization.

To reduce the level of effort needed to tune SIEMs, consider tools that evaluate SIEM rule sets and provide assistance to detection engineers.
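One way to quantify the coverage aspect is to map each SIEM rule to the ATT&CK techniques it detects and check whether the rule’s required log source is actually ingested. The rule names below are made up for illustration (the technique IDs are real ATT&CK identifiers):

```python
# Hypothetical rule inventory: each detection rule is tagged with the MITRE
# ATT&CK techniques it covers and whether its log source is ingested.
rules = [
    {"name": "susp_powershell", "techniques": {"T1059.001"}, "source_ingested": True},
    {"name": "brute_force",     "techniques": {"T1110"},     "source_ingested": True},
    {"name": "dcsync",          "techniques": {"T1003.006"}, "source_ingested": False},
]

# Techniques the organization has decided it must detect.
techniques_of_interest = {"T1059.001", "T1110", "T1003.006", "T1566.001"}

def attack_coverage(rules, universe):
    """Techniques covered by at least one rule whose log source is ingested."""
    covered = set()
    for r in rules:
        if r["source_ingested"]:
            covered |= r["techniques"] & universe
    return covered

covered = attack_coverage(rules, techniques_of_interest)
print(f"Coverage: {len(covered)}/{len(techniques_of_interest)}")
print("Gaps:", sorted(techniques_of_interest - covered))
```

In this made-up inventory, the DCSync rule exists but its log source is not ingested, so it contributes nothing to effective coverage; that is exactly the kind of gap a posture management tool should surface.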

The variety of tools available for managing security control configurations will continue to grow, encompassing additional types such as endpoint agents, email security, identity and access management, data security, and cloud security.

5. Process Mining

Process mining is a method used to analyze and optimize business processes by collecting and analyzing event logs generated by information systems. These logs contain details about process execution, such as the sequence of activities, the time taken to complete each activity, and the resources involved. Process mining algorithms use this data to automatically generate process models that visualize how a process is executed in reality, as opposed to how it is expected to be executed.

While process mining is not a new concept, it is new for cybersecurity processes. For cybersecurity process mining to be useful, logs must be collected from non-security sources as well as cybersecurity controls.

Process mining is actually a separate class of higher-level analysis and measurement. All the other Performance Controls discussed here, with the exception of security operations platforms (SIEMs), test, measure, or obtain data on individual controls. Having said that, at present, process mining does not specifically measure the effectiveness of Defensive Controls.

A common cybersecurity process use case is user onboarding and offboarding. To perform this analysis, the process mining tool must integrate with human resources systems in addition to authentication and authorization systems.
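At its core, the discovery step groups logged events by case and counts distinct activity sequences (“variants”). A minimal sketch over a made-up identity audit trail (the case IDs, activities, and timestamps are invented):

```python
from collections import Counter

# Toy event log: (case_id, activity, timestamp) rows, as a process mining
# tool might ingest them from HR and identity systems.
event_log = [
    ("u1", "request_access", 1), ("u1", "manager_approval", 2), ("u1", "provision", 3),
    ("u2", "request_access", 1), ("u2", "provision", 2),  # approval skipped!
    ("u3", "request_access", 1), ("u3", "manager_approval", 3), ("u3", "provision", 4),
]

def discover_variants(log):
    """Group events by case, order by timestamp, and count each distinct
    activity sequence (process variant)."""
    cases = {}
    for case_id, activity, ts in sorted(log, key=lambda e: (e[0], e[2])):
        cases.setdefault(case_id, []).append(activity)
    return Counter(tuple(seq) for seq in cases.values())

for variant, n in discover_variants(event_log).most_common():
    print(n, " -> ".join(variant))
```

The variant that appears once, provisioning without approval, is exactly the kind of process variance the technique is meant to surface.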

In addition to (1) improving compliance to defined processes, process mining will (2) expose bottlenecks, (3) reveal opportunities for additional process automation, and (4) make it easier for stakeholders to understand how processes are executed using visual representations of the processes.

While scalability, performance, and integrations are important, the way processes and variances are rendered in the user interface and the way you can interact with them is critical to understand the causes of variances and opportunities for improvement.

Individual vs. Aggregate Control Effectiveness

Having reviewed the types of Performance Controls available to monitor and measure Defensive Control efficacy, it’s worth noting that they all monitor and measure control effectiveness individually.

The process mining folks might disagree with the above statement in the sense that they aggregate multiple control functions by the processes in which they play a role. However, process mining does not actually measure the efficacy of the individual controls in processes. It focuses on improving the effectiveness of processes.

While there is no doubt about the value of discovering and remediating deficiencies in individual controls, there is another function needed from a risk management perspective. That is calculating Aggregate Control Effectiveness. How well does your portfolio of Defensive Controls work together to reduce the likelihood of a loss event?

Aggregate Control Effectiveness must consider attack paths into and through an organization. A Defensive Control that has strong capabilities and is well configured will not reduce risk as much as anticipated if it is on a path that does not see many threats or is on a path with other strong controls.

In addition to discovering and prioritizing Defensive Control deficiencies, a Performance Control measurement program will improve the accuracy and precision of Aggregate Control Effectiveness calculations.

My next article will address the issue of Aggregate Control Effectiveness and its relevance to risk management. Stay tuned!

Next Steps: WEI provides enterprises with increased visibility at all touch points of the IT estate, and that includes at the edge and applications within the data center. How can we help your enterprise with its current and future cybersecurity architecture? Contact our experts today to get started.

About The Author

Bill Frank has over 24 years of cybersecurity experience. At present, as Chief Client Officer at Monaco Risk, Mr. Frank is responsible for leading Monaco Risk’s cybersecurity risk management engagements. In addition, he collaborates on the design of Monaco Risk’s cyber risk quantification software used in client engagements.

Mr. Frank is one of two inventors of Monaco Risk’s patented Cyber Defense Graph. It is the core innovation for Monaco Risk’s cyber risk quantification software which enables a more accurate estimate of the likelihood of loss events.

Prior to Monaco Risk, Mr. Frank spent 12 years helping clients select and implement cybersecurity controls to strengthen their cyber posture. Projects focused on controls to protect against, detect, and respond to threats across a wide range of attack surfaces.

Prior to his consulting work, Mr. Frank spent most of the 2000s at a SIEM software company where he designed a novel approach to correlating alerts from multiple log sources using finite state machine-based, risk-scoring algorithms. The first use case was user and entity behavior analysis. The technology was acquired by NitroSecurity, which in turn was acquired by McAfee.

Bill Frank’s contact information:

Why You Should Choose Dell For Your AI Initiatives
Tue, 07 Sep 2021


Regardless of the industry, all enterprises are looking for strategies to improve their organizations and stay competitive in the market. However, when it comes to technological innovation, there’s one piece in particular that often blocks the way forward: the human element. While computers may still lack the creativity that sets the human mind apart, artificial intelligence (AI) has become an increasingly important part of modern computing, especially when it comes to processing and managing data.

This has never been more apparent than over the last year and a half. In the midst of a global pandemic, enterprises utilizing AI to reduce in-person hours spent on repetitive, day-to-day tasks, especially within the data center, weathered the storm far better than other organizations. AI was also a critical component when it came to implementing and managing remote workforces.

A study conducted by IBM shows that, as a result of the global pandemic, many enterprises accelerated the adoption of AI in the workplace. However, some have yet to implement AI in a significant way, citing barriers to adoption such as a lack of proper knowledge among the IT team and difficulties identifying where AI solutions will make the most impact within the digital environment.

If these challenges sound familiar, Dell’s AI solutions can help. Keep reading to find out how!

Determine Your AI Strategy in 5 Steps

The first step to implementing any new technology is building your strategy. According to Dell, there are five steps that are essential to every artificial intelligence project.

  1. Define your use case. How are you going to utilize AI? What are your goals? How will you measure success? Are your goals and implementation strategy feasible?
  2. Assess data. What data will be involved in your AI initiatives? Where does that data live? How will it be accessed and how does it need to be prepared for your project?
  3. Identify model. Determine which AI model will work best for your use case. Which tools are needed to create, train, and prove this model?
  4. Optimize software ecosystem. How will you move your model into production? What is your development process going to look like?
  5. Determine IT environment. Your desired outcomes will determine the type of deployment that will work best, whether that’s private, public, edge, or hybrid cloud. You should also determine your latency and throughput requirements, as well as any other specifications.

By following these five steps, you can be sure that you will be properly prepared when it comes time to actually implement your AI strategy.

Dell’s AI Solutions Can Help You Find Success

Once your plan is in place, it’s time to begin your implementation journey. Finding the right vendor to partner with is just as important as planning your strategy, so it’s something that should be carefully considered.

With solutions that support digital transformation and innovation efforts, Dell can assist you in all your AI initiatives, from the implementation phase to ongoing maintenance and security efforts.

As a Dell customer, you’ll also have access to Dell’s dedicated group of computer scientists, data scientists, engineers, and subject matter experts, all of whom are constantly pushing the boundaries and exploring new ways of utilizing these technologies.

Dell’s portfolio of AI and data solutions is also available in either direct or as-a-service management models, letting you choose the option that best suits your needs and budget.

If you’re looking for a partner to join you on your journey to AI implementation, look no further than Dell Technologies.

Are You Interested In Dell’s AI Solutions?

Dell is committed to helping enterprises transform the way they do business. To support this, innovation is at the forefront of all their solutions, enabling you to transform your infrastructure and accelerate the innovation you need to stay competitive. If you’re ready to make AI work for your business, contact WEI to learn more about how we can help architect the right solution to power your AI initiatives. We have expertise across all Dell systems and can help build the right solution to achieve your desired business outcomes.

NEXT STEPS: Many of our customers choose to run their business on Dell VxRail. Find out the TOP TEN reasons why in our tech brief about the benefits of VxRail below. 
