Technical Series: Deployment Architecture and Performance
In our last post we discussed Decision Engine Performance and how SMARTS provides different engines optimized for their specific applications. In this post we will cover how deployment architecture choices impact performance.
SMARTS provides higher-level support for implementing your decision management system than a decision engine alone. In particular, it supports building decision services for microservices architectures as well as other service-oriented approaches.
Delivered in either repository-based or decision-based Docker containers or virtual appliances, SMARTS decision services add the following to the decision engine, among other features:
Support for Secure Service Invocations (typically JSON over HTTPS)
Decisions may be invoked through services in an authenticated context (using access tokens). Many users may invoke the service concurrently, using any client technology that can interact with services. Sparkling Logic provides Java, .NET Standard, Python 3, and Node.js SDKs to facilitate the client implementation.
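As a concrete illustration, an authenticated invocation can be built with nothing more than standard HTTP tooling. The endpoint URL, token, and payload below are hypothetical, and this sketches a plain JSON-over-HTTPS call rather than any of the SDKs mentioned above:

```python
import json
import urllib.request

def build_decision_request(url, token, payload):
    """Build an authenticated JSON-over-HTTPS request to a decision service.

    The URL and token handling here are illustrative, not a SMARTS SDK API.
    """
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # access-token authentication
        },
        method="POST",
    )

req = build_decision_request(
    "https://decisions.example.com/loan-approval",  # hypothetical endpoint
    "my-access-token",
    {"applicant": {"age": 42, "income": 85000}},
)
print(req.get_header("Authorization"))  # → Bearer my-access-token
```

Sending the request (e.g. with `urllib.request.urlopen`) would then return the decision result as JSON.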
Support for Horizontal and Vertical Scalability
Decision engines executing within the decision service leverage all cores available to them within the installation. Adding cores lets the engine support more concurrent executions, and the scalability is typically linear.
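To make the near-linear-scaling claim concrete, a back-of-the-envelope capacity estimate might look like this (the per-core rate is a made-up figure; measure it for your own decision logic):

```python
def estimated_throughput(cores: int, per_core_rps: float) -> float:
    """Decisions per second one service instance can sustain, assuming
    roughly linear vertical scaling across cores (an idealization)."""
    return cores * per_core_rps

# Hypothetical: each core sustains ~250 decisions/second.
for cores in (4, 8, 16):
    print(cores, "cores →", estimated_throughput(cores, per_core_rps=250.0), "rps")
```

Real scaling flattens somewhat as shared resources (memory bandwidth, GC) saturate, so the estimate is an upper bound to validate with load testing.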
You may also deploy multiple microservice installations behind a load balancer. You will typically do that using an orchestration technology together with the load balancer available in your environment (on-premises or cloud). SMARTS allows multiple instances, each leveraging its own replicated repository or a shared external repository, to implement the same set of services. Deployed behind a load balancer, they scale simply by adding more instances, and you can add on-demand instances (with replicated or delegated repositories) to cope with elastic loads. SMARTS automates the entire process of keeping those services in sync and updating them as you change the decision logic.
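A rough sizing sketch for such a load-balanced deployment, assuming you have measured the throughput a single instance sustains (all numbers hypothetical):

```python
import math

def instances_needed(peak_rps: float, per_instance_rps: float, spares: int = 1) -> int:
    """Instances to place behind the load balancer: enough for peak load,
    plus spare capacity so one instance can fail or be taken out."""
    return math.ceil(peak_rps / per_instance_rps) + spares

# Hypothetical: peak of 5,000 decisions/sec, each instance sustains 1,200.
print(instances_needed(5000, 1200))  # → 6 (5 for the load + 1 spare)
```

Elastic loads would then be handled by recomputing this as the observed peak moves and letting the orchestrator add or remove on-demand instances.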
Support for High Availability
SMARTS also lets you build redundancy into your decision services. Having multiple instances with replicated repositories removes single points of failure: if one instance is taken out, the remaining replicated instances continue to carry the load.
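On the client side, that redundancy can be exploited with a simple failover loop across the replicated instances. The endpoint names below are hypothetical, and the pattern is generic rather than a SMARTS SDK feature:

```python
def invoke_with_failover(endpoints, invoke):
    """Try each replicated instance in turn; succeed if any one is up.

    `invoke` is any callable that raises ConnectionError on failure.
    """
    last_error = None
    for endpoint in endpoints:
        try:
            return invoke(endpoint)
        except ConnectionError as exc:
            last_error = exc  # instance down: fall through to the next replica
    raise RuntimeError("all replicated instances failed") from last_error

# Simulated invocation: the first replica has been taken out of service.
def fake_invoke(endpoint):
    if endpoint == "https://decisions-1.example.com":
        raise ConnectionError("instance taken out")
    return {"decision": "approved", "served_by": endpoint}

result = invoke_with_failover(
    ["https://decisions-1.example.com", "https://decisions-2.example.com"],
    fake_invoke,
)
print(result["served_by"])  # → https://decisions-2.example.com
```

In practice the load balancer performs this routing for you; the loop only illustrates why replicated instances remove the single point of failure.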
Support for No-downtime Hot Swap of Decision Logic with Full Traceability
SMARTS provides multiple levels at which you can swap decision logic. At the highest level, your lifecycle manager can change the release of the decision logic being executed with one click, without IT intervention and with no downtime. SMARTS loads the new release and, if there is no problem, hot swaps it atomically. You can configure the fallback strategy for when the new release fails to load or has compilation errors: continue using the previous release and notify, stop providing the service, and so on. Of course, you can also hot swap your decision logic using other orchestration mechanisms, but those tend to involve IT.
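The swap-or-fall-back behavior can be sketched as a generic load-then-swap pattern (this is illustrative, not the SMARTS implementation; `load` stands in for loading and compiling a release):

```python
import threading

class HotSwapper:
    """Hot swap decision logic atomically; keep the previous release
    serving if the new one fails to load or compile."""

    def __init__(self, load, release):
        self._load = load
        self._lock = threading.Lock()
        self._active = load(release)
        self.active_release = release

    def swap(self, new_release, on_failure=None):
        try:
            candidate = self._load(new_release)   # load and compile first...
        except Exception as exc:
            if on_failure is not None:
                on_failure(exc)                   # notify; old release keeps serving
            return False
        with self._lock:                          # ...then swap atomically
            self._active = candidate
            self.active_release = new_release
        return True

    def evaluate(self, payload):
        with self._lock:
            engine = self._active
        return engine(payload)

# Hypothetical loader: one release has a compilation problem.
def load_release(release):
    if release == "2.0-broken":
        raise ValueError("compilation error in 2.0-broken")
    return lambda payload: {"release": release, "decision": "approved"}

service = HotSwapper(load_release, "1.0")
service.swap("2.0-broken")      # fails: release 1.0 keeps serving
service.swap("2.0")             # succeeds: swapped with no downtime
print(service.active_release)   # → 2.0
```

The key ordering is that the candidate is fully loaded before the swap, so in-flight invocations never see a half-loaded release.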
Support for Ready-To-Execute Decision Logic
SMARTS lets you specify when a decision service is declared ready to receive invocations. Typically, you only want that to be the case once the decision logic is actually loaded, compiled and cached in memory, so that the first invocations hitting the service do not pay the price of an update.
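The idea can be sketched as a readiness gate of the kind a load balancer's health probe would consult (the hook names below are hypothetical, not SMARTS APIs):

```python
class ServiceReadiness:
    """Report 'ready' only after the decision logic is loaded, compiled
    and cached, so early callers never pay the startup cost."""

    def __init__(self):
        self._ready = False

    def start(self, load, compile_logic, warm_cache):
        load()           # fetch the project release
        compile_logic()  # compile the decision logic
        warm_cache()     # prime in-memory caches, e.g. a sample invocation
        self._ready = True

    def is_ready(self):
        """What a load balancer's readiness probe would poll."""
        return self._ready

svc = ServiceReadiness()
print(svc.is_ready())   # → False: not yet accepting invocations
svc.start(lambda: None, lambda: None, lambda: None)
print(svc.is_ready())   # → True: safe to route traffic here
```

Until `is_ready()` flips, the orchestrator keeps routing invocations to already-warm instances.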
In addition to supporting all these performance-related features, SMARTS does so in a secure and auditable way. Decision services are configured to use read-only project releases, and the release used for any service invocation is returned to the invoker.
Finally, you should also focus on the business decision performance. We’ll discuss that topic in our next blog post.