Our Best Practices Series has focused, so far, on authoring and lifecycle management aspects of managing decisions. This post will start introducing what you should consider when promoting your decision applications to Production.
Make sure you always use release management for your decisions
Carole-Ann has already covered why you should always package your decisions in releases when you have reached important milestones in the lifecycle of your decisions: see Best practices: Use Release Management. This is so important that I will repeat her key points here, stressing their importance in the production phase.
You want to be 100% certain that what you have in production is exactly what you tested, and that it will not change by side effect. This happens more frequently than you would think: a user may decide to test variations of the decision logic in what he or she thinks is a sandbox, but that may in fact be the production environment.
You also want to have complete traceability, and at any point in time, total visibility on what the state of the decision logic was for any decision rendered you may need to review.
Everything that contributes to the decision logic should be part of the release: flows, rules, predictive and lookup models, etc. If your decision logic also includes assets that the decision management system does not manage, you open the door to potential execution and traceability issues. We, of course, recommend managing your decision logic fully within the decision management system.
Only use Decision Management Systems that allow you to manage releases, and always deploy decisions that are part of a release.
Make sure the decision application fits your technical environments and requirements
Now that you have the decision you will use in production in the form of a release, you still have a number of considerations to take into account.
It must fit into the overall architecture
Typically, you will encounter one or more of the following situations:
• The decision application is provided as a SaaS and invoked through REST or similar protocols (loose coupling)
• The environment is message or event driven (loose coupling)
• The environment relies mostly on micro-services, using an orchestration tool and a loosely coupled invocation mechanism
• It requires tight coupling between one (or more) application components at the programmatic API level
Your decision application will need to fit within these architectural choices with very low architectural impact.
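One way to keep that architectural impact low is to hide the invocation mechanism behind a thin abstraction, so the same application code works whether the decision runs in-process (tight coupling) or behind REST (loose coupling). Here is a minimal Python sketch; the client classes, URL, and payload fields are hypothetical, not any specific product's API:

```python
import json
import urllib.request


class DecisionClient:
    """Abstract invocation: callers do not care how the decision runs."""
    def decide(self, payload: dict) -> dict:
        raise NotImplementedError


class InProcessClient(DecisionClient):
    """Tight coupling: the decision logic is linked at the programmatic API level."""
    def __init__(self, decision_fn):
        self._fn = decision_fn

    def decide(self, payload):
        return self._fn(payload)


class RestClient(DecisionClient):
    """Loose coupling: the decision runs as a (SaaS) service behind REST."""
    def __init__(self, url):
        self.url = url

    def decide(self, payload):
        req = urllib.request.Request(
            self.url,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())


# Application code stays identical whichever client is injected:
def score(client: DecisionClient, application: dict) -> str:
    return client.decide(application)["decision"]
```

Swapping `InProcessClient` for `RestClient` (or a message-driven equivalent) then becomes a configuration choice rather than a rewrite, which also makes multi-environment deployment cheaper.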
One additional thing to be careful about is that organizations and applications evolve. We’ve seen many customers deploy the same decision application in multiple such environments, typically interactive and batch. You need to be able to do multi-environment deployments at low cost.
It must account for availability and scalability requirements
In loosely coupled environments, your decision application service or micro-service will need to cope with your high availability and scalability requirements. In general, this means configuring micro-services in such a way that:
• There is no single point of failure
○ replicate your repositories
○ have more than one instance available for invocation transparently
• Scaling up and down is easy
Ideally, the Decision Management System product you use has support for this directly out of the box.
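Out-of-the-box support is ideal, but the "no single point of failure" idea can be illustrated with a simple client-side failover sketch: several replica instances of the decision service are available, and the caller transparently moves to the next one when an instance is down. All names here are hypothetical:

```python
class FailoverClient:
    """Try each replica in turn so no single instance is a point of failure."""

    def __init__(self, replicas):
        # replicas: list of callables, e.g. one per deployed service instance
        self.replicas = replicas

    def decide(self, payload):
        last_error = None
        for replica in self.replicas:
            try:
                return replica(payload)
            except ConnectionError as exc:
                last_error = exc  # this instance is down; try the next one
        raise RuntimeError("all decision service replicas failed") from last_error
```

Scaling up or down then amounts to growing or shrinking the replica list; in practice a load balancer or service mesh plays this role rather than application code.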
It must account for security requirements
Your decision application may need to be protected. This includes:
• protection against unwanted access to the decision application in production (man-in-the-middle attacks, etc.)
• protection against unwanted access to the artifacts used by the decision application in production (typically repository access)
Make sure the decision applications are deployed the most appropriate way given the technical environment and the corresponding requirements. Ideally you have strong support from your Decision Management System for achieving this.
Leverage the invocation mechanisms that make sense for your use case
You will need to figure out how your code invokes the decision application once in production. Typically, you may invoke the decision application:
• separately for each “transaction” (interactive)
• for a group of “transactions” (batch)
• for a stream of “transactions” (streaming or batch)
Choosing the right invocation mechanism for your case can have a significant impact on the performance of your decision application.
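As a rough illustration of the three invocation styles, here is a Python sketch with a stand-in `decide` function; the threshold and field names are invented for the example:

```python
def decide(txn):
    """Stand-in for one decision service call (hypothetical logic)."""
    return "Approved" if txn["amount"] <= 10_000 else "Referred"


# Interactive: one call per transaction (lowest latency per request)
def decide_one(txn):
    return decide(txn)


# Batch: one call for a group of transactions (amortizes invocation overhead)
def decide_batch(txns):
    return [decide(t) for t in txns]


# Streaming: decide transactions lazily as they arrive from a stream
def decide_stream(stream):
    for txn in stream:
        yield decide(txn)
```

Batch and streaming invocations avoid paying per-call overhead (network round trips, marshalling) on every transaction, which is usually where the performance difference comes from.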
Manage the update of your decision application in production according to the requirements of the business
One key value of Decision Management Systems is that with them business analysts can implement, test and optimize the decision logic directly.
Ideally, this extends into the deployment of decision updates to production. Once the business analysts have updated, tested and optimized the decision, they will frequently request that it be deployed “immediately”.
Traditional products require going through IT phases, code conversion, code generation and uploads. With them, you deal with delays and the potential for new problems. Modern systems such as SMARTS do provide support for this kind of deployment.
There are some key aspects to take into account when dealing with old and new versions of the decision logic:
• updating should be an atomic operation, whether triggered by one click in the tool or one API call
• updating should be safe (if the newer one fails to work satisfactorily, it should not enter production or should be easily rolled back)
• the system should allow you to run old and new versions of the decision concurrently
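The key aspects above can be sketched as a small release registry: promotion is a single atomic reference swap, a release that fails validation never enters production, and rollback simply restores the previous release. This illustrates the principle only; it is not any particular product's deployment API:

```python
class DecisionDeployment:
    """Atomic promotion of decision releases, with easy rollback."""

    def __init__(self, initial_release):
        self._history = [initial_release]  # newest release last

    @property
    def active(self):
        return self._history[-1]

    def promote(self, release, smoke_test):
        # Safe update: the new release must pass validation before
        # entering production; otherwise the current release stays active.
        if not smoke_test(release):
            raise ValueError("new release failed validation; keeping current one")
        self._history.append(release)  # single atomic reference swap

    def rollback(self):
        # Easily undo the latest promotion if it misbehaves in production.
        if len(self._history) > 1:
            self._history.pop()
```

Running old and new versions concurrently (champion/challenger style) would extend this by routing a share of traffic to each active release instead of keeping a single pointer.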
In all cases, this remains an area where you want to strike the right balance between the business requirements and the IT constraints.
For example, it is possible that all changes are batched into one deployment per day because they are coordinated with other IT-centric system changes.
Make sure that you can update the decisions in Production in the most diligent way to satisfy the business requirement.
Track the business performance of your decision in production
Once you have a process to put decisions, in the form of releases, into production following the guidelines above, you still need to monitor their business performance.
Products like SMARTS let you characterize, analyze and optimize the business performance of a decision before it is put in production. It is important that you continue the same analysis once the decision is in production. Conditions may change: your decisions, while effective when first deployed, may no longer be as effective after those changes. By tracking the business performance of decisions in production, you can identify this situation early, analyze the reasons, and adjust the decision.
In a later installment in this series, we’ll tackle how to approach decision execution performance as opposed to decision business performance.
Let’s continue with our series on best practices for your decision management projects. We covered what not to do in rule implementation, and what decisions should return. Now, let’s take a step back, and consider how to think about decisions. In other words, I want to focus on the approaches you can take when designing your decisions.
Think about decisions as decision flows
The decision flow approach
People who know me know that I love to cook. To achieve your desired outcome, recipes give you step by step instructions of what to do. This is in my opinion the most natural way to decompose a decision as well. Decision flows are recipes for making a decision.
In the early phases of a project, I like to sit down with the subject matter experts and pick their brains on how they think about the decision at hand. Depending on the customer’s technical knowledge, we draw boxes using a whiteboard or Visio, or directly within the tool. We think about the big picture, and try to be exhaustive in the steps, and the sequencing of the steps, to reach our decision. In all cases, the visual aid allows experts who have no prior experience in decision management design to join in and contribute to the success of the project.
What is a decision flow
In short, a decision flow is a diagram that links decision steps together. These links could be direct links, or links with a condition. You may follow all the links that are applicable, or only take the first one that is satisfied. You might even experiment on a step or two to improve your business performance. In this example, starting at the top, you will check that the input is valid. If so, you will go through knock-off rules. If there is no reason to decline this insurance application, we will assess the risk level in order to rate it. Along the way, rules might cause the application to be rejected or referred. In this example, green ball markers identify the actual path for the transaction being processed. You can see that we landed in the Refer decision step. Heatmaps also show how many transactions flow to each bucket. 17% of our transactions are referred.
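As an illustration, the flow described above (validation, knock-off rules, risk assessment, with Approve/Refer/Decline outcomes) can be sketched as steps where each step names the link to follow next. The thresholds and field names are invented for the example:

```python
def validate(app):
    # Check the input is valid before doing anything else
    return "knockoff" if app.get("income") is not None else "reject"


def knockoff(app):
    # Knock-off rules: any hard reason to decline ends the flow here
    if app["income"] < 10_000:
        return "reject"
    return "assess_risk"


def assess_risk(app):
    # Assess the risk level in order to rate the application
    return "refer" if app["income"] < 30_000 else "approve"


# The decision flow: each step returns the name of the next step to take
STEPS = {"validate": validate, "knockoff": knockoff, "assess_risk": assess_risk}
TERMINAL = {"approve": "Approved", "refer": "Referred", "reject": "Declined"}


def run_flow(app, start="validate"):
    step = start
    while step not in TERMINAL:
        step = STEPS[step](app)
    return TERMINAL[step]
```

Because every transaction traces a concrete path through the steps, counting how many transactions land in each terminal bucket gives you exactly the kind of heatmap described above.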
Advantages of the decision flow approach
The advantage of using this approach is that it reflects the actual flow of your transactions. It mirrors the steps taken in real life. It makes it easy to retrace transactions with the experts and identify whether the logic needs to be updated. Maybe the team missed some exotic paths, or maybe the business changed and the business rules need to be updated. When the decision flow links to actual data, you can also use it as a way to work on your strategies to improve your business outcome. If a 17% referral rate is too high, you can work directly with business experts on the path that led to this decision and experiment to improve your outcome.
Think about decisions as dependency diagrams
A little background
In the early days of my career, I worked on a fascinating project for the French government. I implemented an expert system that helped them diagnose problems with missile guidance systems. The experts were certainly capable of laying out the series of steps to assess which piece of equipment was faulty. However, this is not how they were used to thinking. Conducting all possible tests upfront was not desirable: first, there was a cost to these tests; more importantly, every test could cause more damage to these very delicate pieces of engineering.
As was common back then in expert systems design, we thought in a “backward chaining” way. That means we reverse-engineered our decisions, collecting evidence along the way to narrow down the spectrum of possible conclusions.
If the system was faulty, it could be due to the mechanical parts or to the electronics onboard. If it was mechanical, there were 3 main components. To assess whether it was the first component, we could conduct a simple test. If the test was negative, we could move on to the second component. Etc.
In the end, thinking about dependencies was much more efficient than a linear sequence, for this iterative process.
The dependency diagram approach
Today, the majority of decision management systems might pale in sophistication compared to this expert system. But the approach taken by the experts back then is not so different from the intricate knowledge in the heads of experts today in a variety of fields. We see on a regular basis projects that seem better laid out in terms of dependencies. Or at least, it seems more natural to decompose them this way to extract this precious knowledge.
What is a dependency diagram
A dependency diagram starts with the ultimate decision you need to make. The links do not illustrate sequence, as they do in decision flows. Rather, they illustrate dependencies: which inputs or sub-decisions feed into the higher-level decision. In this example, we want to determine the health risk level of a member in a wellness program. Many different aspects feed into the final determination. From an objective perspective, we could look at obesity, blood pressure, diabetes, and other medical conditions to assess the current state. From a subjective perspective, we could assess aggravating or improving factors like activity and nutrition. For each factor, we would look at specific data points: height and weight determine BMI, which determines obesity.
Similarly to the expert system, there is no right or wrong sequence. Lots of factors help make the final decision, and they will be assessed independently. One key difference is that we do not diagnose the person here. We can consider all data feeds to make the best final decision. Branches are not competing in the diagram, they contribute to a common goal. The resulting diagram is what we call a decision model.
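The wellness example above can be sketched as plain functions, where each sub-decision depends only on its inputs and there is no imposed sequence. The BMI ≥ 30 and blood pressure ≥ 140 cut-offs are standard clinical thresholds used here purely for illustration, and the risk scale is invented:

```python
def bmi(height_m, weight_kg):
    # Data points: height and weight determine BMI
    return weight_kg / height_m ** 2


def obesity(height_m, weight_kg):
    # Sub-decision: BMI determines obesity
    return "obese" if bmi(height_m, weight_kg) >= 30 else "not obese"


def current_state(member):
    # Objective factors are assessed independently; order does not matter
    return [
        obesity(member["height_m"], member["weight_kg"]),
        "hypertensive" if member["blood_pressure"] >= 140 else "normal bp",
    ]


def risk_level(member):
    # The top-level decision depends on its sub-decisions, not on a sequence
    flags = current_state(member)
    adverse = sum(1 for f in flags if f in ("obese", "hypertensive"))
    return ["low", "medium", "high"][min(adverse, 2)]
```

Note how each branch contributes evidence toward the common goal rather than competing with the others, which is exactly the structure a decision model captures.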
Advantages of the dependency diagram approach
Dependency diagrams are wonderful ways to extract knowledge. As you construct your decision model, you decompose a large problem into smaller problems, for which several experts in their own domain can contribute their knowledge. When decisions are not linear, and the decision logic has not yet been documented, this is the right approach.
This approach is commonly used in the industry. The OMG has standardized the notation under the “DMN” label, which stands for Decision Model and Notation. This approach allows you to harvest knowledge and document source rules.
Choose the approach that is best for you
Decision flows are closest to an actual implementation. Dependency diagrams, or decision models, focus instead on knowledge, but they too feed straight into decision management systems. In the end, think about decisions in the way that best fits your team and project: the end result will translate into an executable decision flow either way.
Now that we covered some basics about what not to do when writing rules, I would like to focus on what to do. The first aspect that comes to mind is the very basics of making decisions: the decision you make.
It is a common misconception that decision services make only one decision. There is of course a leading decision, which sets a boolean, a string, or a numerical value. Examples are:
- approving a loan application
- validating an insurance claim
- flagging a payment transaction as fraudulent
A decision service typically makes that key decision, but it will often come with other sub-decisions. For example:
- risk assessment…
- best mortgage product…
- payment amount…
- likely type of fraud…
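A decision service response can therefore carry the leading decision together with its sub-decisions. Here is a hypothetical sketch; the field names, thresholds, and products are all invented for illustration:

```python
def decide_loan(application):
    """Return one leading decision plus the sub-decisions that come with it."""
    # Sub-decision: risk assessment
    risk = "low" if application["credit_score"] >= 700 else "high"

    # Leading decision: approve or decline the loan application
    decision = ("Approved"
                if risk == "low" and application["amount"] <= 500_000
                else "Declined")

    return {
        "decision": decision,          # the leading decision
        "risk_assessment": risk,       # sub-decision
        "best_product": "30-year fixed" if decision == "Approved" else None,
        "payment_amount": (round(application["amount"] / 360, 2)
                           if decision == "Approved" else None),
    }
```

Returning the sub-decisions alongside the main one gives downstream systems and auditors the full picture of why the leading decision was reached.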
Making decisions or not
In my career, I have seen projects in many different industries, applying to totally different types of decisions. Now and then, I have heard the desire to simplify the decision outcome. Just making a decision ‘in some cases’ is not enough though. Don’t assume that no decision equates to the opposite decision. This over-simplification is not good design. Your decision services need to make decisions.
Business rules often flag a negative outcome, such as a decline decision, a suspicion of fraudulent activity, etc. It might be tempting to only respond when the adverse action occurs. If the person is not declined or referred, then he or she is approved. Why state it explicitly, when it is an obvious truth?
Your decision service, in such a design, would return “Declined” when the applicant is declined, and nothing when he or she is not.
There is nothing wrong with flagging bad situations, but I personally believe strongly in affirmatively stating the positive outcome as well. Your service eventually determines that an application is declined; it should also respond that the application was approved, when applicable. It seems intuitive to focus on the negative outcome if that is what your decision service is designed to detect. But it is also possible that the applicant could not be vetted due to missing information.
Don’t leave any doubt as to what your decision service decides.
Why is it useful?
You could assume that ‘no bad decision’ is equivalent to a ‘good decision’ of course. I find it brittle though. When an unexpected situation arises, assuming you reached a positive outcome can hurt your business. If your decision logic failed in the middle, don’t let it go unnoticed.
Don’t infer from this statement that I do not trust business rules. Clearly, business rules will faithfully reproduce the decision logic you authored. I am concerned about data changing over time. Established decision logic can change over time too, and often will.
Do not take for granted that all possible paths are covered. Today, your rules might check that all the mandatory pieces of information are provided. Let’s assume your decision service requires a new piece of data tomorrow. If the business analyst in charge forgets to check for it up front, you will reach a ‘no decision’ state when this data is missing. You could end up with many approved applications that were invalid to start with.
In simple terms, I highly recommend preparing for the unexpected, by not allowing it to happen without notice. Always state clearly the decision you reach. Add a rule to set the status to “Approved” if the application is not “Declined” or “Referred”. If and when you encounter these no decision situations, just update your decision logic accordingly. Plus, you significantly increase your chances to catch these problems at QA time. You don’t want to scramble after deployment of your decision logic into Production. That is the 101 rule about making decisions.
Back from vacations, I am ready to tackle a new fun project, in addition to a handful of customer projects and an upcoming product launch. I have never taken the time to write best practices on rules authoring, despite the many requests. Let’s make the time! It seems that some best practices are totally obvious to us, old timers in the decision management industry. But there is very little literature out there to help business analysts get ready to design the best business rules.
Over the course of the series, my goal is to document some guidelines that all business analysts should know to excel at rules authoring. I will start with the easiest recommendations that we have rehashed for close to 20 years now.
So what should you consider when writing rules?
Regardless of the product you use to write rules, the syntax will likely include some basic constructs that can get you in trouble. I considered removing these constructs from our SMARTS product when we got started. There are a few use cases that benefit from them though, so we left them in… with the caveat that you should only use them if you really need them.
Do not use OR
A rule typically consists of a set of conditions, and a set of actions. These conditions are AND-ed or OR-ed. For example, you could check that:
– an applicant is a student
– and that he or she is less than 21 years old.
Alternatively, you could check that:
– an applicant is a student
– or that he or she is less than 21 years old.
Though both sentences make sense in English, we frown upon the second one in the industry. We highly recommend using only AND, and no OR, in your decision logic.
First of all, confusion stems from mixing and matching ANDs and ORs. It is like order of operations in math: it can give you a headache. Without parentheses, an expression can become very confusing. For example, if you check that:
– an applicant is a student
– or that he or she is less than 21 years old
– and that he or she is from California…
Would you expect the rule to fire for a 25-year-old student in Vermont? Furthermore, while each engine has a specific order of operations, it may not be consistent across products.
In addition to the confusion it creates, OR makes it difficult to track the business performance of each scenario. Rules provide a powerful capability to report on how many times they executed (fired), and possibly more decision analytics depending on the rules engine you use. If you bundle the State requirement with the Age requirement, you will not be able to take advantage of these analytics. This is an important aspect not to underestimate.
Why do we have OR then? I think it is mostly historical: it was there when we started with Expert Systems. There are also a few complex rules that check for alternatives, for which OR can be useful.
Most of the time, this can be handled more elegantly with the IN keyword. If your rule checks for the state of California or the state of Vermont, just check that the state is in the list California, Vermont.
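In Python terms, the membership test reads naturally and avoids the OR entirely (the state names are just examples):

```python
# Instead of: state == "California" or state == "Vermont"
ELIGIBLE_STATES = {"California", "Vermont"}


def rule_fires(applicant):
    # Membership test: clearer than chained ORs, and trivially extended
    # by adding a state to the list, without touching the rule itself.
    return applicant["state"] in ELIGIBLE_STATES
```

Keeping the list separate from the rule also means each rule still tests one business condition, so per-rule execution analytics stay meaningful.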
Do not use ELSE
Historically, rules have had the ability to include an ELSE statement. When the set of conditions is not correct, the set of actions in the ELSE statement is executed instead of the set of actions in the THEN statement.
Again, in English, that makes perfect sense. In rules, it is problematic. For example, if a rule checks that the applicant is at least 17 and in the state of California to be eligible, the else statement will execute for under-age applicants in California as well as for applicants of any age in any other state.
There are a few use cases in which the condition set is black & white. When it is true, something happens; when it is not true, something else happens.
In the majority of use cases, you have more than just one rule defining the behavior the rules look for. In that case, what you likely want is an ‘ELSE’ that is global to the rule set. I have seen rule sets that needed a complex negation of all of the exceptions identified by the other rules. This is not fun to write, and even less fun to maintain.
Alternatively, the best design here would use default actions if they exist in your system. A default action is typically defined at the rule set level and applies when none of the rules fire.
If your rules engine does not have a default action, I recommend using a 2-step process. First, execute the rules that deal with the exceptions. They will likely modify a status or log something (create an error, add a promotion, etc.). Second, in a separate rule set, deal with the situation in which none of the rules have fired by looking at the status they would have changed.
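The two-step process can be sketched as follows, with hypothetical exception rules: step one sets a status only when an exception fires, and step two, as a separate rule set, applies the default when no status was set:

```python
def run_exception_rules(txn):
    """Step 1: rules that flag exceptions set a status; most match none."""
    if txn["amount"] > 10_000:
        txn["status"] = "Referred"
    if txn.get("country") not in ("US", "CA"):
        txn["status"] = "Declined"
    return txn


def apply_default(txn):
    """Step 2 (separate rule set): act only when no exception rule fired."""
    if "status" not in txn:
        txn["status"] = "Approved"  # the default action
    return txn
```

This keeps each exception rule simple and positive, with no hand-written negation of every other rule's conditions.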
Do not use PRIORITIES
Here is another historical artifact. Priorities allow a rule to jump ahead of all the other rules and execute first. This may be needed when you leverage the Rete algorithm, which uses inference, so rules execute continuously, largely irrespective of their order.
But in most cases, you do not need inference. The order of the rules, as they appear in the rule set, dictates the rule execution order. If you tamper with priorities, the order of execution changes. This may create confusion, as the next business analyst on your project might miss this little detail and wonder for a long time why a rule that is true does not fire, or why it fires out of order.
The best thing to do if rules need to fire in a certain order is to re-order them so that they appear in the order they need to be considered. If your logic is more complex, go for simplicity of design: decompose your decision into steps. One rule set might identify all the products or offers the applicant is eligible for, while another step ranks them, prioritizes them, and selects the best.
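The eligibility-then-ranking decomposition might look like this sketch; the products, scores, and ranks are invented for illustration:

```python
PRODUCTS = [
    {"name": "Platinum Card", "min_score": 750, "rank": 1},
    {"name": "Gold Card",     "min_score": 680, "rank": 2},
    {"name": "Basic Card",    "min_score": 580, "rank": 3},
]


def eligible_products(applicant):
    """Step 1: one rule set identifies every product the applicant qualifies for."""
    return [p for p in PRODUCTS if applicant["score"] >= p["min_score"]]


def best_offer(applicant):
    """Step 2: a separate step ranks the candidates and selects the best."""
    candidates = eligible_products(applicant)
    return min(candidates, key=lambda p: p["rank"])["name"] if candidates else None
```

Neither step needs rule priorities: eligibility rules can run in any order, and the ranking is an explicit, visible step rather than a hidden execution-order trick.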
Let me quote Nicolas Boileau-Despréaux, whose words inspire my rules writing!
“What is conceived well is expressed clearly,
And the words to say it will arrive with ease.”
Long Term Care Group (LTCG) is a leading provider of business process outsourcing services for the insurance industry. They are the largest third party long term care insurance provider offering underwriting, policy administration, clinical services, as well as claims processing and care management for America’s largest insurance companies. Insurers rely on LTCG for these services due to LTCG’s deep expertise in long term care portfolios, which require specialized knowledge and processes. LTCG continually invests in the people, processes, and technology to maintain their leadership position in the industry.
Several years ago LTCG developed and implemented an automated claims adjudication process using Sparkling Logic SMARTS as the decision engine. Prior to this initiative more than 90,000 claims per month were processed manually by LTCG’s team of claims examiners. LTCG wanted to reduce the time their claims examiners needed to spend researching and making a claims decision in order to maintain the highest levels of customer satisfaction.
Long term care insurance is unique in that benefits are coordinated by care coordinators who create a plan of care to help policyholders leverage the benefits covered by their policy based on clinical guidelines that direct care needs over time. Due to the unique nature of long-term care needs, LTCG wanted to balance the use of technology with their emphasis on human touch to ensure the best possible care and coverage for policyholders.
The first automated claims adjudication system was developed in 6 months using an agile methodology and Sparkling Logic SMARTS. The Scrum team was able to iterate on the business rules and logic quickly thanks to the simplicity and power of the SMARTS user interface and software architecture.
Download the LTCG Case Study to learn more.
Digital Disruption + Risk Management
Digital Disruption is at the top of every banking and insurance CEO’s agenda in 2017: how to become the disrupter and avoid getting disrupted. Across all credit-driven financial services firms, the pressure is intense with new market players emerging in all realms creating new expectations from customers.
Credit Risk Management and Decisioning are emerging as key scenarios that are ripe opportunities for digital disruption for two primary reasons.
First, the impact of credit risk decision management and compliance is significant to the bottom line and incremental improvements to processes are no longer enabling lenders and insurers to keep pace.
McKinsey reports that, “In 2012, the share of risk and compliance in total banking costs was about 10 percent; in the coming year the cost is expected to rise to around 15 percent… banks are finding it increasingly difficult to mitigate risk…To expand despite the new pressures, banks need to digitize their credit processes.” Top performing firms not only need to eliminate inconsistent approaches to credit analysis that expose them to unnecessary risk. To leap frog, they need to develop a systematic approach based on the integration of new data sources and credit-scoring approaches rather than relying solely on the historical performance indicators.
Second, risk management is, by its very nature, a data-driven discipline well positioned to take advantage of the massive advancements in analytics technologies at the new levels of scale enabled by cloud computing. This is dramatically lowering the cost of all solutions related to credit risk management for small to mid-sized financial services institutions, including FinTech startups that can enter the market quickly with limited barriers to entry.
What is the Opportunity in 2017?
Banks and Insurers can manage increasingly complex data under a higher volume of business rules. At the same time, they can apply an agile management framework of rules and data to take advantage of market opportunities in real-time. This is now possible at a fraction of the cost and time to implement compared to even five years ago. Our partnership with firms like Equifax is paving the way for the next wave of digital disruption in the financial services industry in scenarios like credit risk management and fraud detection.
The Equifax Story
Equifax has offered their leading, cloud-based decision management solution called InterConnect to their global customers for many years. The InterConnect solution “automates account opening, credit-risk decisioning, cross-selling and fraud mitigation during the account acquisition process.”
In 2016, Equifax was looking for ways to help their customers capture new opportunities in their credit risk management and decisioning process by strengthening one of the core components of their InterConnect platform: the Rules Editor.
Equifax’s customers were looking for enhanced support in defining, testing and optimizing business rules. Even more importantly, they needed to rapidly seize competitive advantage through the agile implementation of new business rules and automated optimization strategies based on real-time results, as well as the development of test data for repeated use to enable greater consistency and scale.
Equifax turned to Sparkling Logic as a key partner to fulfill these requirements for InterConnect. Sparkling Logic’s decision management engine powers the enhanced Rules Editor. One specific strategy that was not previously possible was the testing and implementation of Champion and Challenger credit decisioning strategies.
Before Sparkling Logic, customers struggled to compare two or more decisioning strategies at the same time. With Challenger and Champion strategies now enabled in the enhanced Rules Editor, new strategies (“Challengers”) can be developed, tested, and deployed simultaneously with existing strategies (“Champions”). Winning strategies are immediately applied to new decisions after the initial test period. Additional revenue is now captured that would previously have been lost while waiting for one test after another to play out.
What’s Next? How do you replicate this model to leap frog your digital disruption strategy?
While your competitors are busy applying incremental improvements to their portfolio management strategies and using historical performance data to drive crediting decisions, you have the opportunity to leap frog. This is possible when you immediately capture available revenue opportunity by applying an automated decision management engine to your credit decisioning processes.
Largest P2P Lending Market in the World
Fintech is a hot topic around the globe and China is no exception. The Chinese peer-to-peer lending market is the largest in the world exceeding $150 Billion in 2015. The 2,595 Chinese P2P lending platforms, counted at the end of 2015, have cumulatively brokered 1.37 trillion yuan according to a report in China News. These numbers are particularly significant since they came from true peers, small investors with little institutional money powering the sector.
Challenges of Skyrocketing Growth
Although P2P lending in the US is heavily regulated, Chinese platforms operated without regulatory safeguards until 2016. This unregulated environment fueled growth but also resulted in a significant number of failed platforms (896 in 2015) and in some less than credible platforms defrauding unwary investors.
Another issue facing the industry is the lack of credit reporting agencies and FICO scores that exist in developed markets like the US. According to the PIIE (Peterson Institute for International Economics), Chinese lending platforms use alternative approaches such as reviewing bank statements to identify sources of borrowing that don’t turn up in credit records, verifying whether or not a borrower pays his or her phone bill, and in some cases, even sending employees to check on physical assets in person.
Decision Management Addresses Challenges of P2P Platforms
Recently Jin Xu, from Sparkling Logic’s Chinese partner, Xinshu Credit, presented at the Global Internet Finance Summit 2016 in Shanghai. Jin Xu discussed some of the challenges faced by P2P lending companies and how Sparkling Logic helps companies, such as Weshare Finance, address these challenges:
- Labor costs, especially for IT engineers, are rising in China. Decision management platforms, like SMARTS Decision Manager, reduce development time and time to market when compared to traditional systems developed using code.
- SMARTS enables business and risk analysts to manage lending decisions with minimal IT support, resulting in a less costly, more agile solution.
- As new fraud schemes continuously arise, SMARTS allows companies to rapidly respond in implementing fraud prevention measures.
- Most P2P lenders require external data to evaluate risk. SMARTS enables the implementation of pre-screening rules to avoid requesting unnecessary and costly external data for ineligible borrowers.
- As data accumulates, SMARTS predictive analytics capability allows companies to extract knowledge from historical data to improve lending decisions.
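To make the pre-screening idea above concrete, here is a minimal Python sketch of rules that reject clearly ineligible borrowers before any paid external data request is made. The field names and thresholds are invented for illustration; they are not taken from SMARTS or any actual lending platform.

```python
# Hypothetical pre-screening rules, evaluated before requesting costly
# external data. All fields and thresholds are illustrative only.

def prescreen(applicant: dict) -> tuple[bool, str]:
    """Return (eligible, reason). Only applicants that pass trigger
    the external data request downstream."""
    if applicant["age"] < 18:
        return False, "under minimum age"
    if applicant["monthly_income"] < 2000:
        return False, "income below minimum"
    if not applicant["phone_verified"]:
        return False, "phone number not verified"
    return True, "passed pre-screening"

eligible, reason = prescreen(
    {"age": 25, "monthly_income": 6500, "phone_verified": True}
)
print(eligible, reason)  # True passed pre-screening
```

In a decision management system these rules would be authored and maintained by analysts rather than hard-coded, but the ordering principle is the same: cheap checks first, expensive data calls only for applicants who survive them.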
Weshare Finance, a leading Chinese FinTech company, recently selected SMARTS to revamp its loan processing system. Weshare Finance was founded in March 2014 and is a standing council member of the Association of Internet Finance China. Weshare focuses on providing cash and installment services to individuals, with the motto “mobile inclusive makes life better”. Weshare Finance can make a remittance into a user’s bank account within 60 seconds, earning it the nickname of the handheld ATM of young people.
Modernization of Application Business Logic
Today, legacy modernization initiatives are everywhere. The need to modernize systems in order to transform businesses is often a requirement, and the stakes can be high. An enterprise may feel extreme business pressure to operate like its smaller, more agile competitors. Getting to that agile “state” can be a huge endeavor when a 7-10+ year-old legacy system has to be modernized as part of the process. There are a number of companies offering products, services, and methodologies related to legacy modernization; I’ve added some links to resources at the end of this post.
Modernizing a legacy application might include many facets – modernizing the user interface, architecture, API, database, and core business logic. In this post, I’m going to focus on one aspect of modernization – recreating core business logic in a new system or service.
You can think of “core business logic” as the decisions that a system makes as it processes transactions. For example, the core business logic of a legacy insurance underwriting application might include decisions such as whether or not the applicant is eligible for coverage, what the level of risk is, whether to approve, deny, or refer, and what to charge for premiums. Modernization of those decisions would involve recreating the business logic that’s in the legacy system, but on a modern, agile platform that can meet today’s business needs.
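The underwriting decisions just described can be sketched as explicit, readable rules rather than logic scattered through legacy code. This is a hypothetical Python illustration; the eligibility criteria, risk factors, and premium formula are all invented, not taken from any real underwriting system.

```python
# Illustrative underwriting decision: eligibility, risk level,
# approve/deny/refer, and premium. All rules are invented examples.

def underwrite(applicant: dict) -> dict:
    # Eligibility decision
    if applicant["age"] < 18 or applicant["age"] > 80:
        return {"decision": "deny", "reason": "not eligible for coverage"}
    # Risk-level decision
    risk = 1
    if applicant["smoker"]:
        risk += 2
    if applicant["prior_claims"] > 2:
        risk += 2
    # Approve / refer decision
    if risk >= 4:
        return {"decision": "refer", "risk": risk}
    # Premium decision
    premium = 500 + 150 * risk
    return {"decision": "approve", "risk": risk, "premium": premium}

print(underwrite({"age": 35, "smoker": False, "prior_claims": 0}))
# {'decision': 'approve', 'risk': 1, 'premium': 650}
```

In a legacy system, each of these four decisions is typically buried in different functions and files; organizing them explicitly is what makes the logic maintainable.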
According to James Taylor, from Decision Management Solutions, “By focusing on the decisions represented in the core business logic you are likely addressing the most costly to maintain aspect of a legacy system. Decision-making is often high-change with new regulations, new policies, competitive response and evolving consumer/market conditions driving a need to make changes. For example, the way a fee is calculated, the way an application is validated, the way a claim is checked for eligibility – these kinds of decisions, hidden in the core business logic of legacy systems, result in big maintenance bills and long periods where the system works incorrectly or inconsistently with the business need.”
With a focus on the core business logic, how do you approach modernization?
Code Conversion Legacy Modernization Approach
Often, organizations and their consulting partners take a “code-focused” approach to modernizing business logic functionality. Using various tools, the migration is accomplished by assessing the legacy software code, analyzing and uncovering core business logic, applying conversion tools to create modernized code, and finally, testing for functionality and errors.
Code Conversion – Limitations
Although this is a viable way to approach the modernization of business logic, it has two significant limitations:
Today’s platforms are different
Older system architectures are less functionally modular than today’s microservice architectures. Most legacy systems were built as monolithic applications or client-server systems. The business logic in these systems was likely based upon procedural or object-oriented design and scattered throughout functions or methods in COBOL, C, or C++ code. (Yes, today some systems developed in “modern languages” like Java and C++ are now considered legacy!) The concept of organizing business logic around decisions and services simply did not exist when these applications were developed.
Today’s preferred approach to core business logic is to define decisions in a decision management platform, where the decision rules will be organized according to how the enterprise thinks of the business problem. Decisions are then deployed to a decision service and integrated into modern architectures through a REST API.
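As a concrete illustration of the deployment pattern above, the following sketch builds the REST call that submits a transaction to a deployed decision service. The endpoint path and payload shape are assumptions made for illustration; they are not the actual SMARTS API.

```python
# Hypothetical REST invocation of a deployed decision service.
# The URL structure and payload are illustrative assumptions.
import json
import urllib.request

def build_request(base_url: str, payload: dict) -> urllib.request.Request:
    """Build the POST request that submits a transaction to the
    decision service for evaluation."""
    return urllib.request.Request(
        f"{base_url}/decision-services/underwriting/evaluate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The calling application would then execute the request and parse
# the returned decision, e.g.:
#   with urllib.request.urlopen(build_request(url, applicant)) as resp:
#       decision = json.load(resp)
```

The key architectural point is the loose coupling: the caller knows only the endpoint and the data contract, so the decision logic can be changed and redeployed without touching the calling systems.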
Legacy systems weren’t built for change
The legacy system has undoubtedly been enhanced a number of times over an extended period – incrementally. By incrementally, I mean that changes made to the legacy system were based on the delta between the system’s then-current state and some new functionality that more accurately reflected the business needs at that time.
If a legacy system had been in use for, say, a 7-year period, with 2 incremental updates per year, those 14 incremental changes in functionality were always constrained by the system’s previous state. Legacy systems simply weren’t built to support change and provide the ability to change in unexpected ways.
The result is that the code inside these legacy applications is complex – more complex than if it had been built from scratch to have the same functionality it has today. It’s not just spaghetti code; it’s spaghetti on top of spaghetti! Converting that functionality to a modern platform using code-conversion tools alone can bring all the extra, unneeded complexity into the new platform.
Decision Management – Data-Driven Legacy Modernization Approach
Today’s modern decision management platforms use data and analytics to enable the process of improving operational decisions. Looking at historical results from the legacy system, adjusting rules inside a decision, and then running tests comparing the results, is an effective way to transform legacy business logic into modern decision services.
Using insurance underwriting as an example, you could look at the applicants that were approved by the legacy system (i.e. legacy system historical results) and compare them to the applicants that would be approved using a baseline set of business rules in a decision management system. Analyzing mismatches between the two sets of results could drive the discovery of which rules are missing or need to be adjusted to produce matching results.
For example, you might discover that 25% of the differences in approval status are due to differences in risk level. This insight leads you to focus on adding and/or modifying your risk related rules. Once the legacy code and the rules are assigning the same risk level, the overall mismatches have been reduced by 25%. Repeating this analyze-improve step will reduce your mismatches until the results from the modernized business logic exactly match those from the legacy system.
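The analyze-improve loop described above can be sketched in a few lines: run the candidate rules over historical cases, count disagreements with the legacy system's recorded decisions, and use the mismatches to guide the next rule refinement. The rule, data fields, and cases below are invented for illustration.

```python
# Minimal sketch of the analyze-improve loop: compare legacy historical
# decisions against a baseline rule set and measure the mismatch rate.
# The rule and the sample data are illustrative only.

def baseline_approve(applicant: dict) -> bool:
    # Baseline approval rule in the new decision management system.
    return applicant["risk_level"] <= 3

def mismatch_rate(history: list[dict]) -> float:
    """Fraction of historical cases where the new rules disagree with
    the legacy system's recorded decision."""
    diffs = sum(
        1 for case in history
        if baseline_approve(case) != case["legacy_approved"]
    )
    return diffs / len(history)

history = [
    {"risk_level": 2, "legacy_approved": True},
    {"risk_level": 4, "legacy_approved": False},
    {"risk_level": 3, "legacy_approved": False},  # mismatch to investigate
]
print(mismatch_rate(history))  # one case in three disagrees
```

Each iteration, you would inspect the mismatching cases, adjust or add rules (here, perhaps the legacy system treated risk level 3 as a referral, not an approval), and rerun until the mismatch rate reaches zero.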
Decision Management – Benefits
The modernized business logic, expressed as business rules in a decision management platform, doesn’t follow the complex path by which the legacy system was created and updated over the years, but it does get the same results. It’s a simpler representation of the business logic that is easier to maintain and extend.
Also, using a decision management platform to hold and maintain the modernized business logic brings with it the standard agility benefits of decision management:
- Easy to change
- Understood and often managed by Business Analysts
- Change management control through versions and releases
- Highly scalable deployment
Summary and Resources
Using this “data-driven” approach to recreate the core business logic of a legacy system on a modernized platform can complement, and in many cases, replace a code conversion approach to modernization. At a minimum, in the large and complex world of legacy modernization, it can be an important part of the toolkit and methodology to achieve success.
Below, I’ve listed some resources for legacy modernization, and some for Decision Management. Please let me know about others we should include.
Legacy Modernization Resources:
- Software Modernization
- Taking a modern approach to modernizing legacy applications
- Approaches to application modernization
Decision Management Resources: