The newest vendor evaluation report methodology in the industry analyst world is Signal, from The Futurum Group. In a space packed with the Gartner Magic Quadrant, Forrester Wave, IDC MarketScape, Omdia Universe, Everest PEAK Matrix, Nucleus Value Index, Aragon Research Globe, and more, you’d be forgiven for reacting to a new entrant with caution at best or skepticism at worst.
But Signal came in hot, promising to disrupt the status quo and redefine vendor evaluations in the era of Artificial Intelligence. Futurum says Signal reports will …
To better understand the Signal methodology, I spoke with two leaders from Futurum:
Whether as authors or participants, O’Brien and Shimmin have extensive experience with evaluation methodologies – they’ve gone through more of these reports than either of them wants to count. But all of that blood, sweat, and tears will be forgotten noise if Signal delivers on its potential. As they answered a variety of questions – about the AI in the system, about vendor participation, about the role of G2’s data – their enthusiasm for making Signal successful was audible.
Futurum Signal Reports

Q: What motivated Futurum to launch Signal?
Dan: We approached this with a lot of empathy. From my experience on the vendor side [IBM], my teams participated in hundreds of evaluations a year, including about 50 MQs. I’ve been through the wringer and seen pretty much every flavor from the 40-some companies that offer vendor evaluations. Fundamentally, the existing models are slow, manual, backward-looking, and resource-intensive for vendors. Many of them lack distribution and a real audience. A typical report takes months to produce, and it’s often outdated by the time it publishes. The data used is often historical, and the process is grueling for vendors – consuming weeks of work and budget. At Futurum, we had the wild idea that we could leverage the great data that vendors make available to buyers in the market, rather than relying on data fed to analysts by the vendors. With the maturing of AI technology, we felt the timing was perfect to build something from the ground up that would be predictive, dynamic, and effortless — something that could be the “signal through the noise.”
Brad: From my perspective as a longtime analyst, the traditional model was simply broken. I’d have to block off six months of my life to shepherd one of these comparative reports, and the result would be a static artifact that was often irrelevant by the time it was released. It was the same for the vendors – a huge investment of time and energy for something that was, as a practitioner, of little comparative value over time. My big “aha” moment was realizing we could build a living, comparative entity that’s not just a snapshot in time. Instead of looking backward at the industry through a narrow lens, we wanted to look forward – predicting a vendor’s value proposition in the next six to 12 months. This is a massive shift, and AI is what makes it possible, allowing us to take in all the signals as they happen, almost in real time, and constantly update the analysis.
Q: I’m sure Signal was a group effort to develop. Tell me about the key stakeholders across Futurum who made it happen.
Dan: This was a true group effort, with a diverse crew bringing different perspectives. Daniel Newman [CEO & Chief Analyst] provided the high-level vision for how AI would disrupt the research industry and how we could be a part of that disruption.
I brought the perspective of a vendor who has been on the other side of these evaluations, so I understood what is out there – what works, and what doesn’t. Brad is the visionary behind the AI methodology, focusing on how we could do this differently and at scale. He really figured out how to use AI as a research partner, pointing it at the backward-looking work and freeing up analysts to focus on the predictive nature of the report.
Tiffani Bova [Chief Strategy & Research Officer] contributed heavily to the vision and execution and brought her research rigor from her years as a Gartner Fellow, ensuring we maintained a high standard of quality. Alex Smith [VP, Channels Research & Practice Operations] brought a lot of the thinking around the ecosystem and interoperability — the idea that a new product needs to work seamlessly with what a buyer already has. Deepak Surana [Chief Product Officer] runs our Futurum Intelligence Platform and provides the technical expertise and engineering needed to make this scalable based on all of our proprietary and partner data.
It was a cross-functional team that all brought their unique perspectives to the table.
Q: One of the most notable characteristics of Signal is that it will be predictive – it will be forward-looking, not outdated upon publication. Why is that important? What does that mean, practically speaking? How will you accomplish that?
Dan: Practically speaking, it’s necessary because buyers are making smart decisions today for products they’ll use months or even years from now. If you’re up and running on a new tool in six months, you’re pretty happy. But by the time you’re truly operational, the vendor’s roadmap is six months to a year ahead of what was on the table when you made your decision. So ultimately, you’re not just buying a product; you’re buying a roadmap. We need to help clients answer, “What decision can I make today that will make me look really smart two to three years from now?”
Brad: We accomplish this with “context engineering,” not just prompt engineering. We use a number of frontier models — not just one — to see patterns in a massive corpus of real-time data. This data includes everything from public news and financial filings to our own proprietary research and partner datasets, like G2 reviews. The AI then asks itself 34 predefined questions across our five core assessment areas. If it encounters a lack of context, it’s designed to recognize the missing information and go get it, rather than hallucinating an answer. The human analyst then provides the final layer of magic, either confirming the AI’s predictions or adjusting them with their expert insights into market dynamics.
Q: Is Signal predictive about the technology category as a whole, not just the vendors in it?
Dan: Yes. We’re trying to predict where the market is going based on buyer needs and how those needs align with vendor roadmaps. The analysis is predictive about the vendors’ likelihood of success, but it’s grounded in a forward-looking view of the entire market. This helps buyers and investors understand the trajectory of the market and the vendors within it.
Q: Which AI tools are you using to support Signal? What do you get from them? What are the characteristics of a “frontier” model?
Brad: We designed the system to be model-agnostic, so we’re using a number of frontier-scale models from names you know, like OpenAI’s ChatGPT and Google’s Gemini.
A frontier model is multimodal, meaning it can process text, images, and sound. Many frontier models have a massive context window of over a million tokens, allowing them to take in a huge amount of data in a single prompt. This enables them to see patterns in both their training data and the contextual data carried within the prompt. This is data that a human could easily miss, and it’s what makes the system so powerful.
Dan: Our proprietary data stack, combined with these powerful models, augmented by G2’s leading voice of the customer data, allows us to deliver insights with unprecedented accuracy and speed.

The Signal Radar visualizes each vendor across the methodology’s five parameters.
Q: Leveraging G2’s library of customer reviews is a shrewd move. How does that database contribute to the Signal methodology? Reviews are typically focused on present-day or historical circumstances. So they probably don’t help with the predictive modeling?
Dan: Our exclusive, multi-year partnership with G2 is a key differentiator for Signal. We’re the only analyst firm with this kind of deep, structured access to their data. G2 has more than 3 million verified reviews across over 2,000 software categories. This gives us an unmatched dataset of the voice of the customer.
Brad: While the reviews themselves are historical, the AI can see trends in them that can be predictive. For example, if we see a clear trend in reviews about a product’s usability improving or a specific feature becoming more important to users, the AI can extrapolate that forward. This tells us not just what’s happening now, but where the market is going, allowing us to predict where a vendor’s product will be in six months. A great example of this is when you look at how customers describe a product — if they say it’s a “pain in the *ss to use,” that’s a powerful signal that the company needs to invest in usability to remain competitive.
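The extrapolation Brad describes can be illustrated with a minimal sketch: fit a line to monthly scores distilled from review text and project it forward. Everything here is hypothetical – the function name, the 1–5 usability scale, and the sample data are illustrative, not Futurum’s actual model.

```python
from statistics import mean

def extrapolate_trend(scores, months_ahead=6):
    """Fit a least-squares line to monthly scores and project it forward."""
    n = len(scores)
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(scores)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, scores)) / sum(
        (x - x_bar) ** 2 for x in xs
    )
    intercept = y_bar - slope * x_bar
    # Project from the last observed month (n - 1) forward by months_ahead.
    return intercept + slope * (n - 1 + months_ahead)

# Hypothetical monthly usability scores distilled from review text (1-5 scale).
usability = [2.8, 3.0, 3.1, 3.3, 3.4, 3.6]
projected = extrapolate_trend(usability, months_ahead=6)
```

A steadily rising usability score projects well above its current level six months out – the kind of directional signal the AI can weigh even though every individual review is historical.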
Q: Where are the Futurum analysts in the loop? Where are they most influential?
Brad: The human analyst is like the bread of the sandwich, with the AI as the meat of the sandwich. At the outset, the analyst defines the market, identifies the vendors and products, and defines the rubrics. They set the weights for each of the five core assessment areas based on their expert knowledge of what’s important in the market today and where it’s heading. The analyst also sets axioms, which are hard truths about a vendor’s track record that act as guideposts for the AI. At the end, the analyst reviews the AI’s output. This is where they ensure the final product adheres to the Futurum “voice” and they correct any material factual errors. They can also challenge the AI’s conclusions and ask it to reassess its findings. This oversight ensures that the report remains accurate, credible, and grounded in human expertise.
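The analyst-set weights Brad mentions amount to a weighted rubric. The sketch below shows the idea in its simplest form; "Strategic Vision" and "Business Value" are named elsewhere in the interview, but the other three area names, the weights, and the scores are placeholders, not Signal’s real rubric.

```python
def weighted_score(area_scores, weights):
    """Combine per-area scores (0-10) using analyst-set weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(area_scores[area] * weights[area] for area in weights)

# "Strategic Vision" and "Business Value" appear in the interview; the other
# three area names, and all weights and scores, are illustrative placeholders.
weights = {
    "Strategic Vision": 0.25,
    "Business Value": 0.25,
    "Area C": 0.20,
    "Area D": 0.15,
    "Area E": 0.15,
}
vendor = {
    "Strategic Vision": 8.0,
    "Business Value": 7.0,
    "Area C": 6.5,
    "Area D": 9.0,
    "Area E": 7.5,
}
score = weighted_score(vendor, weights)
```

Because the weights are set by the analyst per market, the same vendor scores can rank differently in markets where, say, Strategic Vision matters more than near-term Business Value.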
Q: How will you guard against false or misleading claims made by vendors? The questionnaires and briefings of traditional evaluations help analysts identify hyperbole.
Brad: So much of a market is built on unstated truths about vendors. We capture this in the comprehensive data that the AI uses, which includes our analysts’ meeting notes and other ephemeral assets. The axioms that an analyst sets for each vendor are another critical guidepost. For example, if a vendor has a history of overpromising and underdelivering, that is captured in an axiom that the AI will use to be more skeptical of their forward-looking claims.
Dan: We have a deep understanding of these companies and their track records. We’ve seen when they make promises and deliver on them, and we’ve also seen when they don’t. We use this historical context as a risk-adjusted weighting to assess the probability of them executing on their forward-looking statements.
Q: Will Signal evaluate a vendor’s entire business? Or just that vendor’s capabilities in the specific market of the report?
Brad: It’s both, and that’s a key differentiator. The report focuses on the specific product or offering within a market. But when you buy that product, you are also buying the company behind it. Our five measures, especially Strategic Vision and Business Value, are tied to the overall health of the vendor. For example, a company might have a rock-solid product, but if they are on their third CEO in two years, that’s information a buyer needs to take into account.
Dan: As a buyer, you’re not just acquiring a piece of software; you’re inheriting a company’s leadership team, its business model, investors, and track record. A leading product might be great, but if the company behind it has a lot of internal turmoil, that’s a signal a buyer needs to take into account.
Q: How often will the formal Signal report update? What about the interface that Futurum clients can see?
Dan: The formal, public-facing reports will likely be refreshed at least quarterly. However, the insights our clients can see in the Futurum Intelligence Platform are updated much more dynamically. The AI is running 24/7, ingesting new data as it becomes available. If a major acquisition is announced or a significant product launch occurs, we can run a new evaluation in a matter of hours or days and provide updated insights to our clients.

The Signal Heat Map shows how all of the vendors (one per column) score in the methodology’s five parameters.
Q: When a Signal report makes a prediction that proves to be inaccurate, how will you address that?
Brad: The system is designed to be self-correcting. If the AI makes an error and hallucinates, it’s typically because of a lack of context. The model will become aware of that, and as new data comes into the platform, it will seek out the missing information and refine its answer until it reaches an acceptable level of confidence. This is a living evaluation engine that learns and adapts in real time. It’s not a static PDF that needs to be manually corrected; it’s the data that gets updated, which in turn fixes inaccurate predictions and hallucinations over time.
Q: The Signal process won’t include direct input from vendors via a lengthy questionnaire. Why take this approach? Already we’ve seen some vendors celebrate this idea, while others jeer it.
Dan: I think it’s a little uncomfortable for them to not feel like they’re influencing the report and driving information flow. Ultimately, though, Signal production would slow down by months if the vendors were part of the process. We took this approach to make the process effortless for vendors and to provide faster, more dynamic insights to buyers. The traditional 300-row Excel spreadsheet is a waste of vendor resources and a key reason why legacy reports are so slow. We believe that if a vendor is effectively engaging with the market and its target buyers … and if the Analyst Relations team is doing its job of engaging with analysts … the vendor’s information will be captured in the data we’re ingesting. For vendors who are apprehensive, our message is that if you’re doing the right things to enable your customers, partners, and the analyst community, the model will take care of the rest.
Q: The natural follow-up question: Say I’m Acme vendor, and I don’t agree with something that’s in the Signal report. What do I do? What kind of process will Futurum have for vendor review?
Dan: We’re not doing formal vendor reviews as part of the publishing process. Of course, we certainly want to get it right and we will quickly remedy any factual errors, but we’re not going to debate opinions. If a vendor wants to escalate, the process will be to have a conversation with the analyst. No resolution there? It’ll go to our Chief of Research. No resolution there? It’ll go to me, in an ombudsman-type role to look at our process, look at the data we use, see if there’s anything that we materially got wrong, and if so, make it right. And if there’s nothing factually wrong, we’ll have to agree to disagree.
Q: What about distribution of Signal reports? And options to promote them?
Dan: Our clients can access the Futurum Intelligence Platform alongside all our other research. For non-clients, anyone can sign up [here] to receive the first Signal report [on Data Intelligence Platforms]. Going forward, we’ll make the full version of Signal reports available to end-users via our ever-growing distribution lists and through our partnership with G2. And you’ll see a lot of the imagery in our social and broadcast media – the goal is that these are highly visible reports in the market. A vendor that wants to use a Signal report as part of its GTM campaigns can license it. We have three license options available – at $15K, $30K, and $50K. For $50K, you get the full report and all subsequent updates. For $30K, you get the market-level section of the report, plus your vendor section, which we will expand upon as part of your license to build out a deeper, more comprehensive write-up. For $15K, the “Spotlight Vendors” (vendors who don’t fully qualify for the report, but are featured as disruptors and up-and-comers to watch) get the market-level report plus a custom one-page write-up that highlights their company and solution.
Q: Think ahead a year … what does success look like for Signal?
Dan: One year from now, success means that we have disrupted a stagnant industry by providing an evaluation that is predictive and dynamic. It means we will have expanded our categories to cover the most meaningful segments of the market where buyers are investing heavily in their tech stacks. It means we have an engaged end-user base who trusts the Signal methodology to help them with their buying decisions. It means vendors are using our reports as a source of competitive intelligence to understand where the market is going and their relative strengths and gaps vs. their peer set. Overall, I would say that success for Signal means that the vendor evaluation landscape will look a lot different, because we’ve challenged established norms and shown the community that there is a better way.

The Comparison Bubble is effectively the medal podium of the report. The vendors will be spread out in their respective zones. There will be no incremental distinctions based on X,Y coordinates.
Interested in learning more about how Spotlight keeps a pulse on the analyst firm landscape? Contact our team today.