Once upon a time, it seemed no organization had its finger on the pulse of technology more than research consulting firm Gartner. Global corporations, technology companies, and the investment community have trusted Gartner for insights and analysis across a range of market sectors. But today, that leading position has eroded, and one of the company’s most widely known and used tools, the “Magic Quadrant,” has lost its relevance.
This is not a new issue. As Diginomica’s Den Howlett previously said, “The trouble with the grids is that they are fiction masquerading as fact,” pointing out that while the Magic Quadrant is research-based and extensively reviewed internally, it is, in the end, an opinion. Howlett continued, “As I like to say from time to time — opinions are like a**holes; we all have one, and most of them stink. And no more so than in the technology industry.”
I agree. A new model is long overdue.
Gartner’s Magic Quadrant (MQ) is a series of market research reports that rely on proprietary qualitative data analysis to illustrate market trends, direction, maturity, and a comparison of qualifying market vendors (more on this later).
Updated every one to two years, MQs include a visual snapshot that plots vendors on a two-dimensional matrix based on their completeness of vision and ability to execute. Each evaluated vendor is placed into one of four quadrants: Leaders, Challengers, Visionaries, and Niche Players.
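To visualize the mechanics, here is a minimal sketch in Python of how that two-axis placement works. The normalized scores and the 0.5 cutoff are assumptions for illustration only; Gartner’s actual scoring model is proprietary.

    # Hypothetical illustration of Magic Quadrant placement.
    # The 0.0-1.0 scores and the 0.5 threshold are assumptions for this sketch;
    # Gartner's real scoring model is proprietary and not published.
    def quadrant(completeness_of_vision: float, ability_to_execute: float) -> str:
        """Map two normalized axis scores to one of the four quadrants."""
        if ability_to_execute >= 0.5 and completeness_of_vision >= 0.5:
            return "Leader"          # high on both axes
        if ability_to_execute >= 0.5:
            return "Challenger"      # executes well, weaker vision
        if completeness_of_vision >= 0.5:
            return "Visionary"       # strong vision, weaker execution
        return "Niche Player"        # lower on both axes

    print(quadrant(0.8, 0.7))  # "Leader" -- the coveted top right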
The top right of Gartner’s MQ is most desirable — that coveted position of market dominance. That was yesterday. Today, all of us — Gartner customers and industry insiders — know better. A persistent lack of transparency, arbitrary qualifiers, and an outdated process have both undermined its credibility and hurt end users.
Antiquated Approach
There has long been criticism of the Gartner MQ for its proprietary and almost secretive methodology. In fact, Gartner has been sued twice over it. In 2009, ZL Technologies challenged the legitimacy of the Magic Quadrant rating system, alleging unfair competition, among other claims. That case was dismissed and was followed by another suit in 2014, which has yet to be decided.
The cases demonstrate how high these companies perceived the stakes to be. It used to be that inclusion in an MQ was a prerequisite for a pitch invitation or a prospect’s proof-of-concept demonstration. Is that still true? I’m not so sure. The market has matured enough to understand the inherent problems with Gartner’s approach.
In addition to being inherently subjective, Gartner requires robotic process automation (RPA) vendors to meet stringent criteria, including revenue, operating budget, and client-count thresholds, simply to be considered for, let alone included in, a Magic Quadrant.
If a vendor misses even one guideline, they are excluded. The problem with this is that these aren’t the same criteria that companies use to evaluate the technology. Many businesses don’t care about a vendor’s size or revenue numbers per se. They care about the quality, innovation, and applicability of products, the quality of customer support and professional services, and what real customers have to say about the product.
While a rigorous methodology is a must, the failure to see beyond categories and numbers is narrow-minded and does not serve the market well.
Outdated Methodology
Another criticism, and it’s an important one, is how Gartner slices and dices the market. Markets and technologies evolve. Business process management (BPM) gave way to RPA, which is itself morphing to include digital transformation, AI, and machine learning.
The RPA of today is not the RPA of two years ago. Other analyst firms are working to keep pace with the advancements. Informed and opinionated, Horses for Sources (HfS) casts a critical eye, not just on industry research but on the realities of RPA itself, which is not always as vendors or analysts portray it.
Several firms, including NelsonHall, Zinnov, and Everest, have already produced vendor landscape reports on process discovery and process mining within the context of automation. But Gartner still looks at all these separately. Why? These technologies should be evaluated in concert with one another. Convergence is real and applies to RPA as well as many other technologies. Not considering them together is archaic thinking. For a firm that is supposed to be focused on what’s next, Gartner is stuck in the past.
Hyperautomation: Buzz or Bust?
There is a lot of hype and discussion around hyperautomation, loosely defined as the combination of advanced technologies such as RPA, AI, machine learning, process discovery, process mining, and analytics to amplify process automation. It starts with RPA at its core and expands automation capability with additional tools.
Gartner itself has pontificated on its promise, naming hyperautomation among its Top 10 Strategic Technology Trends for 2020. At Kryon, we agree that it’s a great concept with an impressive set of tools and capabilities, and we’ve been innovating around this intersection of technologies for some time. So why are vendors in the RPA MQ still measured strictly on their RPA revenue and existing technology?
Many industry influencers aren’t sure about hyperautomation and its impact. There are points to be made on either side of the argument, but if you’re going to tout its attributes, as Gartner does, then apply it to the MQ produced for the sector. It just makes sense.
Evolve to Thrive
In light of questionable, outmoded methodologies and potential conflicts of interest, independent challengers to big-name analyst firms are emerging. A slew of new analyst firms is now in the market to deliver independent, curated user reviews and vendor ratings to help companies validate and understand a product’s efficacy before purchasing.
While this may not be the answer, even Gartner took note and, in 2015, purchased Nubera, a business app discovery network that helps enterprises make technology purchasing decisions using crowdsourced user reviews and other content. Yet the acquisition didn’t help Gartner capture more market share in the analyst industry — in fact, Gartner’s market share slipped six points afterward, showing that other firms are gaining traction.
Enterprises and other end users increasingly have other options, turning to smaller analyst firms with passionate, knowledgeable experts. HfS, for example, came in at number five on Enterprise Management 360’s list of the Top 10 Most Influential Tech Analyst Firms in 2018 and was described as “Another small and nimble tech analyst, HfS is highly regarded in the industry and has recently won many plaudits and awards.”
There is growing recognition that familiar names don’t matter as much as the value of the information and insights being shared. Analysts from 451 or Bloor Research, for example, are highly informed, independent, and passionate about technology. Toptal’s Pete Privateer recently reacted to the broadening market: “Gartner, Forrester, and IDC are the 800-pound gorillas, but 451 Research, HfS Research, CXP Group, Frost and Sullivan, Ovum, and countless others also play a significant role in shaping how customers perceive a company and its products.”
Gartner is falling prey to what befalls so many companies over time: complacency and an inability, or unwillingness, to evolve. It isn’t agile. It isn’t measuring itself by some of the same criteria it’s using to evaluate technology providers. The value of technology should be determined by its innovation and contributions, not numbers and arbitrarily assigned thresholds.
The Magic Quadrant used to be relevant. It never told the whole story, nor was it universally fair in its approach, but the omissions and conflicts are now too glaring to ignore. All of us — vendors, the investment community, and enterprises alike — are losing out because of it.
Design Your Criteria
There are likely several RPA vendors that could be the right fit for a buyer armed with the Magic Quadrant but that were left out of it for one reason or another. A new trailblazer with an exceptional product won’t be included yet; neither will vendors that focus on mid-market customers rather than multinational enterprises.
The best solution is to conduct your own research and define your own comparison benchmarks. Which company’s customers are truly satisfied? How long has the vendor been in business? Identify and assess the criteria that are important to you and your business. When the rubber meets the road, aren’t a company’s innovation, reputation, service, and cultural fit as important as (if not more important than) revenue, size, and existing client metrics?
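To make that concrete, here is a minimal sketch in Python of a do-it-yourself vendor scorecard. The criteria, weights, and ratings are hypothetical placeholders; the point is that you pick the dimensions and decide how much each one counts.

    # Hypothetical vendor scorecard -- criteria, weights, and ratings are
    # placeholders; substitute whatever matters to your organization.
    weights = {
        "product_innovation": 0.30,
        "customer_satisfaction": 0.25,
        "support_and_services": 0.20,
        "cultural_fit": 0.15,
        "pricing": 0.10,
    }

    vendors = {
        "Vendor A": {"product_innovation": 8, "customer_satisfaction": 9,
                     "support_and_services": 7, "cultural_fit": 8, "pricing": 6},
        "Vendor B": {"product_innovation": 6, "customer_satisfaction": 7,
                     "support_and_services": 9, "cultural_fit": 7, "pricing": 9},
    }

    def weighted_score(ratings: dict) -> float:
        """Combine 1-10 ratings into a single weighted score."""
        return sum(weights[criterion] * rating for criterion, rating in ratings.items())

    # Rank vendors by the criteria *you* chose, not someone else's thresholds.
    for name, ratings in sorted(vendors.items(), key=lambda v: weighted_score(v[1]), reverse=True):
        print(f"{name}: {weighted_score(ratings):.1f}")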
My advice is to assess the market based on what really matters to you and the success of your organization.