Evaluating Endpoint Security Platforms? Start Here.
Over the past six months, much has changed in Cisco AMP for Endpoints. We have added new capabilities to the product and increased our presence in the industry as a true name in next-generation endpoint security. Industry analysts have taken notice too: 2018 was the first year multiple analyst firms asked Cisco AMP for Endpoints to participate in their reports.
As trusted advisors to the security world, analysts are very close to the pulse of the security industry. In this year’s Endpoint Protection Platform (EPP) Magic Quadrant, Gartner defined what makes a best-in-class endpoint protection platform based on what they believe the market is looking for. They then picked the product leaders to participate in their Magic Quadrant. After the vendors were selected to participate, a date was provided by which the selected vendors were required to submit data. Gartner then spent nearly four months assessing all the submitted data before publishing the report.
Cisco’s AMP for Endpoints debuted as a Visionary in this year’s EPP MQ, affirming that the market has changed. Comparing the criteria Gartner chose year-over-year, it is evident that market needs have evolved. Gartner stated:
“This is a transformative period for the EPP market, and as the market has changed, so has the analysis profile used for this research.”
While the endpoint security market is rapidly evolving, Gartner’s Magic Quadrant remains a common starting point for organizations beginning to research their own needs, and it serves as a barometer of where the market has been over the last 6-12 months.
In addition to participating in the Gartner EPP MQ, Cisco also participated in both the 2018 Forrester Endpoint Suites Wave and the 2018 Forrester Endpoint Detection and Response (EDR) Wave.
To evaluate each solution, Forrester selected a set of specific criteria to define product performance. This method focuses on particular use cases. When referencing these reports, be sure to evaluate whether these use cases apply to your organizational needs and requirements.
For example, take the use case of “threat hunting”. Sure, it’s possible to evaluate vendors against limited elements that amount to threat hunting, such as searching for web shells in an environment – but is that the only example, and is it even the best one? Perhaps not. Scoring a vendor solution on a limited capability doesn’t lead to a complete evaluation.
Focusing on very specific examples can show individual strengths in products but cannot accurately be used to evaluate a product holistically. What happens if we use the same approach to evaluate cars? If we were testing a car’s acceleration, we would want to know how it performs from 0-100 mph, not just from 40-60 mph. That prompts the bigger question: if the criteria themselves are not complete, how complete can the results be?
Beyond the inherent subjectivity in assessment methodology, the segregation of EPP and EDR evaluations shows that even the latest analyst reports are still based on past, and possibly outdated, information. Looking at this year’s EPP and EDR Wave reports from Forrester, there are striking similarities. How many vendors appear in both? Nearly half, so even Forrester’s own reports highlight the evolution of this market. Endpoint security products can no longer be evaluated individually as EPP or EDR. This is the approach AMP for Endpoints takes: rather than focusing on being an EDR or an EPP solution, our vision is to be the most holistic endpoint security product, building a solution based on what the market and customers need and demand. Organizations are looking for a single endpoint security solution that provides the most important capabilities to address their use cases.
Though analyst reports have their shortcomings, as all time-stamped research does, they are still a valid source of information when researching products. Evaluations like Magic Quadrants and Waves don’t base their results on empirical testing, so they must be balanced with objective reports and cold, hard facts. This is where testing houses come in: they evaluate vendors based on how a technology performs in a given situation, offering a benchmark against which products can be compared.
Though analyst reports should be a piece of the decision-making process, there should always be a balance, a yin to a yang. Reports such as Gartner’s Magic Quadrant and Forrester’s Waves may help you understand the players in the endpoint security space, but their results need to be balanced with objective testing. In fact, Gartner states,
“The best way to determine a product’s fit for an organization is to run a thorough proof of concept (PoC) in a live environment using ‘real’ attack scenarios.”
(Gartner, Understand the Relative Importance of AV Testing in EPP Product Selection, Ian McShane, May 11, 2018)
We agree, and we recommend balancing industry analyst reports with an objective proof-of-concept deployment and the cold, hard facts of efficacy testing.