About the Platform (Middleware) Benchmark

Back in the day, discussing middleware was a simple conversation, because we all assumed that the middleware was a database. Benchmarking databases was also a rather obvious task: metrics were related to I/O performance, query speed, data storage size, backups, rollbacks etc. In today's world, a conversation about cloud/edge middleware is not that trivial, as the set of tools and even the feature set depend on many variables.

We recognize that writing this article was tricky business, as many metrics and vertical requirements are not exact science, which may leave our readers with healthy scepticism about the results and the motivation for this exercise. As one of the pioneers of Automation and Low Code, we tried, to the best of our knowledge, to synthesize our experience from working closely with our customers over the past decade. As in every conversation, common ground starts with a set of propositions that we have agreed to treat as true, or at least relevant, in the context of the conversation. We therefore invite everyone to comment on or participate in this conversation further. So please send us a note or ask for a call - we listen!


Platform Middleware Capabilities

For this benchmark exercise, we focused on a set of features that are typically required for a given vertical, and then looked at which middleware suite brings us as close as possible to the final goal - getting the solution out.

  • Rules Engine (how easy it is to express the business logic). For more information, please check our white paper: The Big Book of IoT Automation
  • Time Series Analysis (database and analytics on top).
  • Event Processing (events such as motion, door open/close etc.)
  • Stream Processing (capability to respond to streams as a data source)
  • Machine Learning / AI capabilities. See the Waylay solution Bring your own model
  • Data Visualization
  • APIs & serverless, how easy is to add additional APIs into your solution
  • Zero-touch automation (configure logic once, run always). From a provisioning perspective, operating a few hundred assets is not the same level of complexity as operating a few million.
  • Scalability
  • Explainability (is it clear what is going on once the application is in operations?)
  • Integration Simplicity (how easy is it to set up the tool chain and to create an application on top?). This metric reflects the overall cost and time (to market) to create an end-to-end solution
  • Data Ingestion Layer, responsible for protocol normalization and optionally payload transformation. See the link
  • LPWAN support (Sigfox, LoRa, NB-IoT). Normally this is done by setting up webhooks for data forwarding (and sometimes uplink config updates), so any solution that allows for webhooks should be fine. It often requires payload and TLS decoding (preferably with reusable libraries); a sketch of this webhook pattern follows this list. See the Waylay solution LPWAN integration
  • Device Management (firmware, identity, security etc.)
  • Edge Ready (can exactly the same tooling be used at the edge?). Check the Waylay solution TinyAutomator
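
To illustrate the webhook pattern mentioned under LPWAN support, here is a minimal sketch of an uplink endpoint in Node.js with Express. The route, the field names (device, data) and the payload layout are all hypothetical; every network server (the Sigfox backend, LoRa network servers etc.) defines its own callback format and security scheme:

        // Minimal sketch of an LPWAN uplink webhook; field names and payload
        // layout are hypothetical, not tied to any particular network server.
        const express = require('express');
        const app = express();
        app.use(express.json());

        app.post('/uplink', (req, res) => {
          const { device, data } = req.body; // device id + hex-encoded payload

          // Assumed payload layout: 2 bytes temperature (x100), 1 byte battery %
          const bytes = Buffer.from(data, 'hex');
          const reading = {
            device,
            temperature: bytes.readInt16BE(0) / 100,
            battery: bytes.readUInt8(2),
          };

          // Forward the normalized reading to the middleware ingestion API here
          console.log(reading);
          res.sendStatus(200);
        });

        app.listen(3000);

In practice you would also verify the callback's authenticity and keep per-device-type decoders in reusable libraries, as noted in the list above.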

Please note that middleware platforms are always complemented by other tools and applications in order to create end-to-end solutions.

Platform Requirements per Vertical


Please hover over the Requirements legend to view the middleware requirements for one vertical. On the radar chart, the scale for each metric goes from 0 to 100%, indicating the relative percentage of that feature required for a given vertical.

Platform Scores

How to read this chart?

Please hover over the Platform Scores legend to see the middleware score. On the radar chart, the scale for each metric goes from 0 to 100%, indicating the relative percentage of that feature being available for a given platform. If the value of a metric is above 0 but less than 100%, it means that additional effort and/or tools are required to bring it to 100% - and sometimes even that is not possible.


Platform Score per Vertical

How to read the results?

The result is computed as the total overlap between the vertical requirements and the middleware capabilities, summed over all metrics and presented as a percentage (0-100%). Please note that in most cases middleware platforms are complemented by other tools and applications in order to create end-to-end solutions anyway. This score therefore indicates how much work is still required in the middleware before customers can build the end solution on top (mobile app, customer-facing app etc.). Any score below 50-60% indicates that additional tools and applications will be required to close the automation gap first.


The formula is defined as a total score sum per vertical, using this calculation (finally presented in %):

        // For each metric of a given middleware:
        const PENALTY = 1 / num_metrics;
        const gap = industry_value - middleware_metric.value;
        if (industry_value === 1 && gap > 0.5) {
          // A must-have feature (requirement = 1) that is largely
          // missing is penalized rather than rewarded.
          score -= PENALTY * gap;
        } else {
          // Otherwise the overlap between requirement and capability adds up.
          score += industry_value * middleware_metric.value;
        }
        // compute the final percentage ...
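To make the calculation concrete, below is a self-contained sketch of the formula run on invented numbers. The normalization step is not spelled out above; dividing by the score of a perfect match is one plausible reading, and is marked as an assumption in the code:

        // Self-contained sketch of the score formula, on invented numbers.
        function verticalScore(industryValues, middlewareValues) {
          const numMetrics = industryValues.length;
          const PENALTY = 1 / numMetrics;
          let score = 0;
          for (let i = 0; i < numMetrics; i++) {
            const gap = industryValues[i] - middlewareValues[i];
            if (industryValues[i] === 1 && gap > 0.5) {
              score -= PENALTY * gap; // must-have feature largely missing
            } else {
              score += industryValues[i] * middlewareValues[i]; // overlap
            }
          }
          // Assumed normalization (not specified above): divide by the
          // score of a middleware that matches the requirements exactly.
          const maxScore = industryValues.reduce((s, v) => s + v * v, 0);
          return Math.round((score / maxScore) * 100);
        }

        // Hypothetical vertical requiring [1, 0.8, 0.4], middleware offering
        // [0.9, 0.5, 1.0]: the overlap yields a score of 94%.
        console.log(verticalScore([1, 0.8, 0.4], [0.9, 0.5, 1.0])); // 94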

Q&A

How was the total score computed?

The formula is given under Platform Score per Vertical above: for each metric, the overlap between the vertical requirement and the middleware capability (their product) is added to the score, except that a must-have metric (a requirement of 100%) that the middleware largely misses subtracts a penalty instead.

Why didn't you include feature X in the benchmark?

There are things like end-user management or UI widgets that are necessary to build final applications, but we didn't see them as middleware requirements, as these are often one-off features implemented by end customers. But we could be wrong - tell us.

Why did you skip vertical X?

That's a great question. For instance, you may argue that insurance or market automation are verticals. Yes, they are, but we are simply not active in them, hence we didn't include them. That doesn't mean we never will, but right now we simply don't understand either the vertical or the players in it well enough.

Why did you skip a competitor X for a vertical Y?

Another great question. For each vertical, first and foremost, you have vertical application providers, not middleware providers. They match 100% of what they think you need. If that is the case for you, there is no need to search any further.

Why did you skip a middleware competitor X?

We skipped them all, except the big two. We are not able to follow every competitor closely enough, and therefore we would risk under- or over-reporting each of them. The exceptions are AWS and Azure, which we follow very closely, as in most cases the customer's choices are: AWS or Azure, "do it myself", or vendor X/Y/Z. We mostly find ourselves facing the first (the big two) or the second (do it myself) as a competitor.

How are you sure that you got a metric for that competitor right?

We are not - we might be wrong. But we also listen to what our customers have tried in the past, and where the weak spots and our strengths are; otherwise, why would anyone select Waylay? An interesting observation is that a particular middleware vendor's weak spot is often compensated for by a skilled (and pricey) ISV, so in the end the customer gets a final solution that meets their needs (of course, time to market and price come on top).

Why didn't you include price/TCO in this score?

This is hard to calculate, even though vendors like AWS/Azure do (transparent) volume charging (and discounts). Unless you know the exact customer use case, it is hard to compute that cost. That's a big issue. Another is that the integration cost is inversely proportional to the `Integration Simplicity` metric, but that in turn depends on the time and integration/expert skills of a particular ISV. Still, it is safe to say that the bigger the gap in the final benchmark score, the more costly the end solution will be and the longer it will take to build. Integration is still one of the most profitable software businesses. Waylay cuts the integration (plumbing) cost close to zero, while allowing customers and ISVs to add value by creating business applications on top very fast.

Why did you put Kafka-Influx-Grafana in the list?

This is a shortcut for saying "do it yourself". Of course you can easily add Mongo or Redis to that list, but let us not forget you still need a security/access layer on top etc. The sky's the limit for what you can build this way - and so are the time and the price.

Didn't you also use similar components to build Waylay?

Yes indeed! But we have been doing it for a living (and for a long time), around our own patented automation technology, which brings all these things together in the most comprehensive and useful way.

Why is Node-RED in the list?

Many people ask us about the comparison with Node-RED, and that's the only reason we have put it here. Node-RED is a great tool to put in the edge gateway (for protocol and payload aggregation), and many gateways use it. We are not in this market (we are in edge automation), but that is another concept altogether.

Why didn't Waylay invest more in Device Management?

In our view, that battle has already been won. We see a few patterns: either Azure/AWS IoT, or sometimes players like Cumulocity, or people doing all sorts of custom things with OPC UA or MQTT bridges. Waylay also provides an MQTT broker and device identity management, and some customers are using them, but we stop short of building firmware - that is not our thing. Having said that, we invest heavily in the Data Ingestion Layer, which enables us to consume data directly from all the IoT platforms out there. In that sense, we can 'waylay' other platforms and seamlessly get data and digital twin information into our system, and the rest of the magic happens on the Waylay side.

Why did you score Edge Ready so low for AWS/Azure?

AWS and Azure have a very strong edge presence, no doubt about that. Having said that, most of their offerings are focused either on edge gateways (for protocol and payload aggregation and data push into the cloud), or on running ML models at the edge. In our view, and this gets constantly repeated in discussions with our customers, customers are looking for local edge automation processing, ideally with exactly the same framework paradigm (coding and logic reuse) in both cloud and edge solutions.

How are this article and approach different from platforms featured in Gartner categories and reports (RPA, Low Code, Automation...)?

We took a bottom-up approach, looking at the infrastructure required to build applications. If you look at any slide on cloud infrastructure from a distance, you will observe many things repeating: Lambdas, DynamoDB, SNS, S3, AWS Step Functions, EventBridge, Hubs, Azure Functions, Mongo, Elastic, Redis etc. So the question for us was: if all these things are required to build modern apps, what sets one offer apart from another? How do customers make choices in the end? That's what this article wants to answer. Without getting too much into the Gartner way of looking at the world, we tried the same but from another angle.

How can I get in touch with Waylay and discuss it further?

You can always reach out to us on this link. We’re happy to help.