In this post, we present how Sparkling Logic continues its involvement in the DMN standard through SMARTS Pencil, the graphical tool business analysts use to model business decisions as diagrams.
DMN, a bit of history
The Decision Model and Notation (DMN) was formally introduced by the Object Management Group (OMG) as a v1.0 specification in September 2015. Its goal was to provide a common notation understandable by all the members of a team modeling their organization’s decisions.
The notation is based on a simple set of shapes organized in a graph. This allows the decomposition of a top-level decision into simpler sub-decisions, whose results must be available before the top-level decision can be made. These sub-decisions can themselves be decomposed, and so on, until the model is complete. In addition, the implementation of the decisions can be provided, notably in the form of decision tables (also a very common means of representing rules).
The normalization of the graphical formalism (the DMN graph) and of the way the business logic is implemented (e.g., decision tables) allows teams to talk about their decisions, using diagrams with a limited set of shapes.
Sparkling Logic was one of the early vendors to provide a tool to edit (and execute) these decision models: Pencil Decision Modeler. It was released in January 2015, before the standard was officially approved.
Since then, the DMN standard evolved significantly, by adding new diagram elements, new constructs and new language features, while clarifying some of the existing notions. It is now at version 1.3. And we didn’t rest on our laurels either: in SMARTS Ushuaia, we made Pencil Decision Modeler part of SMARTS, as a first-class feature and added full compliance to DMN 1.3! This post describes how SMARTS supports DMN 1.3.
Basics
DMN 1.3 still defines the building blocks which were in the original standard and which I mentioned in Talking about decisions.
As a recap:
- A Decision determines its output based on one or more inputs; these inputs may be provided by an input data element, or by another decision
- An input data is information used as input by one or more decisions, or by one or more knowledge sources
- A business knowledge model represents knowledge which is encapsulated, and which may be used by one or more decisions, or another business knowledge model. This knowledge may be anything which DMN does not understand (such as a machine learning algorithm, a neural network, etc.) or a DMN construct (called a “boxed expression”, see below)
- A knowledge source represents the authority for a decision, a business knowledge model, or another knowledge source: this is where the knowledge can be obtained (be it from a written transcription or from someone)
These blocks are organized in a graph and the links between them are called requirements.
What’s new in SMARTS’ DMN Support
More building blocks
In DMN 1.3, the following elements may also be added to a graph:
- A decision service exposes one or more decisions from a decision model as a reusable element (a service) which might be consumed internally or externally
- A group is used to group several DMN elements visually (with whatever semantics may be associated with the grouping)
- A text annotation is a shape which contains a label and can be attached to any DMN element
Custom types and variables
Input data, decision, and business knowledge model elements all have an associated variable of a given type (string, number, etc., or custom). A variable is a handle through which a decision implementation accesses the value passed by an input data element, or calculated by the implementation of a decision or a business knowledge model.
Custom types may be defined to group multiple properties under a single type name (with structure) or to allow variables which will hold multiple values (arrays).
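DMN defines custom types through its own item definitions, not in a programming language; purely as an illustration, here is how a structured custom type and an array-valued variable might be sketched in Python (all names here are invented for the example):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structured custom type: groups several properties
# under a single type name
@dataclass
class Address:
    street: str
    city: str
    state: str

# A variable may also hold multiple values (an array) of a custom type
@dataclass
class Customer:
    first_name: str
    last_name: str
    addresses: List[Address] = field(default_factory=list)  # array-valued

customer = Customer("Ada", "Lovelace",
                    [Address("1 Main St", "Palo Alto", "CA")])
print(customer.addresses[0].state)  # CA
```

The point is only the shape of the data: a named structure with typed properties, and a field that can hold many values.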
Boxed Expressions
A few constructs are available to provide an implementation for a decision or a business knowledge model; they are termed boxed expressions since such expressions are shown in boxes with a normalized representation. The following types of boxed expressions are available in DMN 1.3:
- Literal expression: this is a simple expression which can use the available variables to calculate a result
- Context: this is a set of entries, each combining a variable and a boxed expression. Each entry in the context can use the variables of the entries defined before it, which is like using “local variables” in some languages
- Decision table: this is a tabular representation where rows (called rules) provide the value of outputs (supplied in action columns), depending on the value of inputs (supplied in condition columns)
- Function: a function can be called using an invocation, by passing arguments to its parameters. The result of a function is the result of the execution of its body (which is an expression that can use the values of the passed parameters). A Business knowledge model can only be implemented by a function
- Invocation: this is used to call a function by name, by passing values to the function’s parameters
- List: this is a collection of values calculated from each of the boxed expressions in the list
- Relation: this is a vertical list of horizontal contexts, each with the same entries
In addition to these, SMARTS defines an additional boxed expression, called the rule set. This is a set of named rules, where each rule is composed of a condition (an expression evaluating inputs) and action (an expression providing some values to outputs).
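To make the decision-table idea concrete, here is a minimal sketch in plain Python (not DMN's FEEL syntax) of a table whose rules map input conditions to an output, using a first-hit policy; the rules and values are invented:

```python
# Each rule pairs a condition on the inputs with an output value.
# First-hit policy: the first rule whose condition matches wins.
rules = [
    (lambda age, risk: age < 21,                  "decline"),
    (lambda age, risk: risk == "high",            "refer"),
    (lambda age, risk: risk in ("low", "medium"), "approve"),
]

def decide(age, risk):
    for condition, output in rules:
        if condition(age, risk):
            return output
    return None  # no rule matched

print(decide(19, "low"))   # decline
print(decide(45, "high"))  # refer
print(decide(45, "low"))   # approve
```

A real decision table also declares its hit policy explicitly (first, unique, collect, etc.); the loop above simply hard-codes "first".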
Helping Industry Adoption
With SMARTS Ushuaia, decision models are first-class citizens. The full compliance with DMN 1.3 means that all the DMN elements and boxed expressions, as well as the ability to interchange diagrams with other tools, are part of the package.
As usual, any model can be tested and executed in the same context as your SMARTS decision – a decision is never made in isolation, and a model is never used in isolation either. And of course, you will benefit from the great tooling we provide.
Finally, we at Sparkling Logic strongly believe that decision management technologies should be put in the hands of all business analysts. This is why we are part of the DMN On-Ramp Group, whose mission is to provide a checklist to help customers find the DMN tool that suits their needs, educate and raise awareness about DMN, and help with DMN compliance. For a great presentation of the group, check out its overview.
About
Sparkling Logic is a Silicon Valley company dedicated to helping businesses automate and improve the quality of their operational decisions with a powerful digital decisioning platform, accessible to business analysts and ‘citizen developers’. Sparkling Logic’s customers include global leaders in financial services, insurance, healthcare, retail, utility, and IoT.
Sparkling Logic SMARTS™ (SMARTS for short) is a cloud-based, low-code, AI-powered business decision management platform that unifies the authoring, testing, deployment, and maintenance of operational decisions. SMARTS combines the highly scalable Rete-NT inference engine with predictive analytics, machine learning models, and low-code functionality to create intelligent decisioning systems.
Marc Lerman is VP of User Experience at Sparkling Logic. You can reach him at firstname.lastname@example.org.
If you envision modernizing or building a credit origination system, an insurance underwriting application, a rating engine, or a product configurator, Sparkling Logic can help. Our SMARTS digital decisioning platform automates decisions by reducing manual processing, accelerating processing time, increasing consistency, and freeing expert resources to focus on new initiatives. SMARTS also improves decisions by reducing risk and increasing profitability.
Tags: business rules • decision automation • decision management • decisioning • DMN • RPA • rule authoring • SMARTS
Regardless of its market, in order to thrive, a business must manage its daily operations through well-defined, executed, and controlled processes and decisions. The top performers automate both with business process management (BPM) and business decision management (BDM).
Generally speaking, BPM is the set of technologies that automate an organization’s processes, that is to say, the stages through which work passes: often data entry, transformation, and enrichment, followed by data activation, exploitation, and reporting. BDM, on the other hand, is the set of technologies that automate the multitude of operational decisions an organization makes – often hundreds to thousands, and sometimes millions, of times a day.
As is often the case in technology, there is no clear-cut boundary between BPM and BDM. Suppliers of both technologies operate in both markets, and companies combine them to maximize the benefit of each. The purpose of this article is to draw a line between the two technologies through an example from subscription-based businesses such as Amazon Prime, Netflix, and WSJ, and a second example from life, property, and casualty insurance.
Processes and decisions
Subscription-based businesses such as those listed above revolve around a billing relationship, with a starting subscription event, a continuous servicing or insuring process, and an eventual unsubscription event.
Figure 1: Simplified lifecycle of a subscription to a media service
Figure 1 presents a high-level view of the processes and decisions that take place behind the websites of Amazon Prime, Netflix, and WSJ. Figure 2 does the same for life, property, and casualty insurance. When you take a close look at the diagrams, you may wonder what is a process and what is a decision. In fact, both are taking place in these two examples.
Figure 2: Simplified lifecycle of a subscription to an insurance policy
The key difference
The processes are stable in the sense that they do not change: they are independent of each particular situation and, indeed, of the data. Whether it is a media company or an insurer, the company always starts with prospect acquisition, then customer activation, and then customer management.
The decisions, on the other hand, change often, depending on the situation captured through data. Take the subscription to a streaming media service (Figure 1). Ending the contract can be a tricky decision. Should the company end the contract on the date of the subscriber’s call to cancel? Or should it try to propose a discount to keep the subscriber? Should it recall subscribers who have not paid their last bill? A week or two weeks after the due date? These are the typical decisions that separate media companies from each other. Each has its own specific way of handling customers once the relationship has ended.
Now, take the subscription to a car insurance policy (Figure 2). The insurer, like any other insurer competing for the same policies, seeks to grow its business. But at the same time, it wants to reduce its risks. So the decisions are more complex than in the media case. The subscriber may not be eligible for various reasons, and third-party data may be needed to decide. The subscriber may also present different risks depending on age, car, and accident history. So the insurer has to make another series of decisions: compute a risk level, then a price that hedges the risk.
In short, the processes are what make a company belong to an industry; the decisions are what make it unique within that industry.
BPM, BDM, and standardization
Under the pressure of competition on the one hand and regulation on the other, many service companies have realized the importance of separating decisions from processes; until recently, they were nested. Leading this trend, the Object Management Group (OMG) has published two recommendations (DMN for decisions, BPMN for processes), thus accelerating the emergence of BPM and BDM as two different yet complementary technologies.
In practice, BPM technologies are appropriate when the problem is centered on a document that must be co-signed by different stakeholders. For example, a loan contract that passes from the Sales to Finance, then to the Legal department, then back to the Sales, and finally to the client.
On the other hand, BDM technologies are more appropriate when the problem is centered on operationalizing decisions. For instance, evaluating the eligibility for a loan, accepting or rejecting the application, and so on.
- Regardless of its market, in order to thrive, a business must manage its daily operations through well-defined, executed, and controlled processes and decisions
- Processes are what make a company belong to an industry; decisions are what make it unique within that industry
- BPM is the set of technologies that automate processes, that is to say the stages through which the organization passes. BDM is the set of technologies that automate the daily decisions
- In practice, BPM technologies are appropriate when the problem is centered on a document that must be co-signed by different stakeholders. BDM technologies are more appropriate when the problem is centered on operationalizing decisions
The statements in this article belong solely to the author. The article was not reviewed nor endorsed by any company or organization mentioned. You can send your comments to the author at email@example.com.
Back in my early product management days, I looked at several tools for requirement capture. I found quite a few good solutions for product requirements, but nothing I really liked for capturing source rules. When working on business rules or decision management projects, I leaned toward spreadsheets and Word documents. And then, DMN was created!
With the DMN standard (Decision Model and Notation), we finally have a notation that works with a powerful underlying methodology. I really like that the notation forces you, the business analyst, into thinking about the ultimate decision(s) in a structured way. Instead of thinking exhaustively about all the rules that exist in your business, the methodology encourages you to decompose your big decision into smaller sub-decisions. This iterative process is very friendly, and very easy to share with your colleagues.
In our upcoming webinar, on April 11, we will introduce the DMN methodology. We will illustrate actual use cases using our Pencil Decision Modeler.
Where do you start? Do you upload a predefined object model? Or do you develop it with your decision logic?
Object Model First
It is our experience that, in the vast majority of the projects, object models already exist. The IT organization defines and maintains them. This makes perfect sense, since the object model is the contract for the decision service. We need to know all features of the application before processing it. The invoking system also needs to know where to find the decision and all related parameters.
The object model, or data model, or schema, really defines the structure of the data exchanged with the decision service. Some sections and fields will play the role of input data. Some will be output. The business rules will determine or calculate those.
In our world, at Sparkling Logic, we call the object model the form. When you think about the application as data, the form represents the structure specifying what each piece of data means. For example, Customer Information is a section; and first name, last name and date of birth are fields in this section.
While business rules are based on these fields, the field definitions typically belong to the system. The system produces the transaction payload, aka the transaction data, and receives it back after the rules execute and produce the final decision.
To summarize, the ownership of the object model lies with the IT organization, since they are responsible for making the actual service invocation.
Modifying the Object Model
Does that mean that we cannot make changes to this object model? Absolutely not. Augmenting the object model with calculations and statistics is expected. The customer info will likely include a date of birth, but your business rules will likely refer to the age of the person. It is common practice to add an Age field, that is easily calculated using a simple formula. More fields could be added in the same fashion for aggregating the total income of all co-borrowers, or for calculating the debt to income ratio.
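As a sketch of such derived fields (the helper names and formulas below are illustrative, not part of any product):

```python
from datetime import date

def age(date_of_birth: date, as_of: date) -> int:
    # Whole years elapsed, adjusting if the birthday hasn't occurred yet
    years = as_of.year - date_of_birth.year
    if (as_of.month, as_of.day) < (date_of_birth.month, date_of_birth.day):
        years -= 1
    return years

def debt_to_income(total_monthly_debt: float, incomes: list) -> float:
    # Aggregate the income of all co-borrowers, then compute the ratio
    total_income = sum(incomes)
    return total_monthly_debt / total_income

print(age(date(1990, 6, 15), date(2024, 6, 14)))        # 33
print(age(date(1990, 6, 15), date(2024, 6, 15)))        # 34
print(debt_to_income(1500.0, [4000.0, 2000.0]))         # 0.25
```

Fields like these are computed once, from the raw payload, so every rule can refer to "age" or "debt to income ratio" rather than repeating the formula.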
In most systems, these calculations remain private to the decision service. As a result, the IT organization will not even know that they exist.
Quite a similar mechanism exists to add business terms to the form. Business terms complement the business concepts in the form with an additional vocabulary shared across projects. For example, you might want to define once and for all what your cut-off values are for a senior citizen. Your business term could even specify cut-off values per state. Your rules will not have to redefine those conditions; they can simply refer to the business term directly: ‘if the applicant is a senior citizen and his family status is single’. Each project leveraging that form will reuse the same terminology without having to specify it again and again.
Like calculations, business rules can use business terms, but IT systems will not see them.
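A business term like "senior citizen" with per-state cut-off values could be sketched like this (the cut-off ages are entirely made up for the example):

```python
# Hypothetical per-state cut-off ages for the "senior citizen" term
SENIOR_CUTOFF = {"CA": 62, "FL": 60}
DEFAULT_SENIOR_CUTOFF = 65

def is_senior_citizen(age: int, state: str) -> bool:
    return age >= SENIOR_CUTOFF.get(state, DEFAULT_SENIOR_CUTOFF)

# A rule can then refer to the term instead of restating the condition:
# "if the applicant is a senior citizen and his family status is single"
def qualifies(age: int, state: str, family_status: str) -> bool:
    return is_senior_citizen(age, state) and family_status == "single"

print(qualifies(63, "CA", "single"))  # True  (CA cut-off is 62)
print(qualifies(63, "NY", "single"))  # False (default cut-off is 65)
```

If the cut-off values change, only the term's definition changes; the rules that use it stay untouched.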
Occasionally, new variables need to be created. That’s okay: there is no issue with introducing intermediate calculations to simplify your business rules. Although these fields will be visible to IT, they can be ignored. As intermediate variables, the system might not even persist their values in the database of record.
When is the Object Model provided?
It is ideal to start your decision management projects with an established object model. Uploading your data is most definitely the very first step in your project implementation. This is true regardless of whether you have actual historical data, or are building data samples for unit testing your rules as you go.
The reason you want your object model established prior to writing rules is quite simple, frankly. Each time you modify the object model, rules that depend on the affected portions of the object model (or form in our case) will need refactoring.
Granted, some changes are not destructive. If that is your case, you can absolutely keep extending your object model happily.
Some changes only move sections within the form. As long as the type of the affected fields remains the same, your rules will not need rewriting. The only exception is rules that use a full path rather than a short name. If your rule says "age < 21", you will be okay wherever the age field is located. If your rule says "customer.age < 21", then you will have to modify it if age moves to a different section.
And finally some changes are quite intrusive. If you go from having one driver in the policy, to multiple drivers, all driver rules will have to account for the change in structure. You will have to decide if the age rule is applicable to all drivers, any driver in the policy, or only to the primary driver. This is where refactoring can become a burden.
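To illustrate why such a structural change is intrusive, here is a hypothetical sketch of the same age rule before and after a policy moves from one driver to a list of drivers; the three "after" variants are the choices the rule author must make:

```python
# Before: the policy holds a single driver
def age_rule_single(policy):
    return policy["driver"]["age"] >= 21

# After: the policy holds a list of drivers. The rule author must now
# decide whether the rule applies to all drivers, any driver, or only
# the primary one -- three different refactorings:
def age_rule_all(policy):
    return all(d["age"] >= 21 for d in policy["drivers"])

def age_rule_any(policy):
    return any(d["age"] >= 21 for d in policy["drivers"])

def age_rule_primary(policy):
    return policy["drivers"][0]["age"] >= 21  # assuming first = primary

policy = {"drivers": [{"age": 45}, {"age": 19}]}
print(age_rule_all(policy))      # False
print(age_rule_any(policy))      # True
print(age_rule_primary(policy))  # True
```

Every rule touching the driver has to make the same all/any/primary decision, which is what turns this kind of refactoring into a burden.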
The more established the object model is, the better suited you will be for writing rules.
One point I want to stress here too is that it is important for the IT team and the business analyst team to communicate and clearly set expectations on the fields of the object model. Make sure that:
- Values are clearly documented and agreed upon: CA versus California, for example
- You know which fields are used as input: if state appears in several addresses, know which one takes precedence for state requirements
Sorry for this quick tangent… but this is where we see most of the ‘rules fixing’ effort spent!
When do Rules own the Object Model?
It is rare, but it happens. We see it mostly for green field projects. When the database of record does not exist, and there is no existing infrastructure, new projects might have the luxury of defining their own object model. When there is none, all options are on the table: have data modelers define the object model, or proceed with capturing it as you capture your business rules.
In these cases, we see the DMN standard (Decision Model and Notation) leveraged more often than not. As business analysts capture their source rules in a tool like Pencil, its glossary gets assembled.
For those of you not familiar with DMN, let me summarize the approach. The decision model representation guides the business analyst through the decomposition of the decision logic. Let’s say that you want to calculate Premiums. You will need to establish the base rate, and the add-on rates. For the base rate, you will need to know details about the driver: age, risk level, and location. You will also need to know details about the car: make, model and year. Your work as a business analyst is to drill down over the layers of decisioning until you have harvested all the relevant rules.
The glossary is the collection of all the properties you encounter in this process, like age, risk level, location, model, make, year, etc. Input and output properties are named in the process. You can also organize these properties within categories. When you have completed this effort, your glossary will translate to a form, your categories to sections, your properties to fields. In this case, your harvesting covers both decision logic and object model.
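The glossary-to-form translation described above might be sketched as follows, using the premium example's own categories and properties:

```python
# Glossary harvested while decomposing the Premium decision:
# categories become sections, properties become fields.
glossary = {
    "Driver": ["age", "risk level", "location"],
    "Car": ["make", "model", "year"],
}

# Translate the glossary into a form structure (fields start empty
# and are later filled by transaction data)
form = {section: {field: None for field in fields}
        for section, fields in glossary.items()}

form["Driver"]["age"] = 32
print(sorted(form.keys()))       # ['Car', 'Driver']
print(list(form["Car"].keys()))  # ['make', 'model', 'year']
```

In this green-field scenario, harvesting the rules and harvesting the data model are the same exercise: each new property the analyst names lands in the glossary, and the glossary becomes the form.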
Besides minor additions like computations and variables, the object model is by and large owned and provided from the start by the IT organization. Only green field projects will combine rules and data model harvesting.
Let’s continue with our series on best practices for your decision management projects. We covered what not to do in rule implementation, and what decisions should return. Now, let’s take a step back, and consider how to think about decisions. In other words, I want to focus on the approaches you can take when designing your decisions.
Think about decisions as decision flows
The decision flow approach
People who know me know that I love to cook. Recipes give you step-by-step instructions to achieve your desired outcome. This is, in my opinion, the most natural way to decompose a decision as well. Decision flows are recipes for making a decision.
In the early phases of a project, I like to sit down with the subject matter experts and pick their brains on how they think about the decision at hand. Depending on the customer’s technical knowledge, we draw boxes on a whiteboard, in Visio, or directly within the tool. We think about the big picture, and try to be exhaustive in the steps, and in the sequencing of the steps, to reach our decision. In all cases, the visual aid allows experts who have no prior experience in decision management design to join in and contribute to the success of the project.
What is a decision flow?
In short, a decision flow is a diagram that links decision steps together. These links can be direct, or carry a condition. You may follow all the links that are applicable, or only take the first one that is satisfied. You might even experiment on a step or two to improve your business performance. In this example, starting at the top, you check that the input is valid. If so, you go through knock-off rules. If there is no reason to decline the insurance application, you assess its risk level in order to rate it. Along the way, rules might cause the application to be rejected or referred. In this example, green ball markers identify the actual path for the transaction being processed; you can see that we landed in the Refer decision step. Heatmaps also show how many transactions flow to each bucket: 17% of our transactions are referred.
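The flow just described can be sketched, very roughly, as a set of steps whose return values name the next step; the step names mirror the example, but the conditions inside them are invented:

```python
def validate(app):
    return "knock_off" if app.get("valid") else "reject"

def knock_off(app):
    # Knock-off rules end the flow early when a decline reason exists
    return "reject" if app.get("decline_reason") else "assess_risk"

def assess_risk(app):
    # Toy risk assessment: more than two accidents means high risk
    risk = "high" if app.get("accidents", 0) > 2 else "low"
    return "refer" if risk == "high" else "rate"

STEPS = {"validate": validate, "knock_off": knock_off,
         "assess_risk": assess_risk}
TERMINAL = {"reject", "refer", "rate"}

def run_flow(app):
    step = "validate"
    while step not in TERMINAL:
        step = STEPS[step](app)
    return step  # the bucket the transaction lands in

print(run_flow({"valid": True, "accidents": 3}))  # refer
print(run_flow({"valid": True, "accidents": 0}))  # rate
```

Counting how many transactions land in each terminal bucket is exactly what the heatmap in the diagram visualizes.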
Advantages of the decision flow approach
The advantage of this approach is that it reflects the actual flow of your transactions. It mirrors the steps taken in real life, and makes it easy to retrace transactions with the experts and identify whether the logic needs to be updated. Maybe the team missed some exotic paths; maybe the business changed and the business rules need to be updated. When the decision flow is linked to actual data, you can also use it to work on strategies for improving your business outcome. If a 17% referral rate is too high, you can work directly with business experts on the path that led to this decision and experiment to improve your outcome.
Think about decisions as dependency diagrams
A little background
In the early days of my career, I worked on a fascinating project for the French government. I implemented an expert system that helped them diagnose problems with missile guidance systems. The experts were certainly capable of laying out the series of steps to assess which piece of equipment was faulty. However, this is not how they were used to thinking. Conducting all possible tests upfront was not desirable. First, there was a cost to these tests. But more importantly, every test could cause more damage to these very delicate pieces of engineering.
As was common back then in expert systems design, we thought in a “backward chaining” way. That means we reverse-engineered our decisions, collecting evidence along the way to narrow down the spectrum of possible conclusions.
If the system was faulty, it could be due to the mechanical parts or to the onboard electronics. If it was mechanical, there were three main components. To assess whether the first component was at fault, we could conduct a simple test. If the test was negative, we could move on to the second component, and so on.
In the end, thinking about dependencies was much more efficient than a linear sequence, for this iterative process.
The dependency diagram approach
Today, the majority of decision management systems might pale in sophistication compared to that expert system. But the approach taken by those experts is not so different from the intricate knowledge in the heads of today’s experts in a variety of fields. We regularly see projects that seem better laid out in terms of dependencies; or at least, it seems more natural to decompose them this way to extract this precious knowledge.
What is a dependency diagram?
A dependency diagram starts with the ultimate decision you need to make. The links do not illustrate sequence, as they do in decision flows. Rather, they illustrate dependencies, showing which inputs or sub-decisions need to feed into the higher-level decision. In this example, we want to determine the health risk level of a member in a wellness program. Many different aspects feed into the final determination. From a concrete perspective, we could look at obesity, blood pressure, diabetes, and other medical conditions to assess the current state. From a subjective perspective, we could assess aggravating or improving factors like activity and nutrition. For each factor, we would look at specific data points: height and weight determine BMI, which determines obesity.
As in the expert system, there is no right or wrong sequence. Lots of factors help make the final decision, and they are assessed independently. One key difference is that we do not diagnose the person here; we can consider all data feeds to make the best final decision. Branches in the diagram are not competing; they contribute to a common goal. The resulting diagram is what we call a decision model.
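The BMI branch of that dependency diagram could be sketched as follows; the thresholds and the way the factors are combined are illustrative only:

```python
def bmi(height_m: float, weight_kg: float) -> float:
    # Body mass index: weight divided by height squared
    return weight_kg / (height_m ** 2)

def obesity_level(height_m, weight_kg):
    b = bmi(height_m, weight_kg)
    return "obese" if b >= 30 else "overweight" if b >= 25 else "normal"

def risk_level(height_m, weight_kg, blood_pressure, activity):
    # Each sub-decision is assessed independently, then combined:
    # every branch contributes to the common goal
    factors = [
        obesity_level(height_m, weight_kg) == "obese",
        blood_pressure == "high",
        activity == "sedentary",
    ]
    if sum(factors) >= 2:
        return "high"
    return "moderate" if any(factors) else "low"

print(risk_level(1.75, 95, "high", "active"))   # high
print(risk_level(1.70, 60, "normal", "active")) # low
```

Note how the code mirrors the diagram: height and weight feed BMI, BMI feeds obesity, and obesity joins the other factors in the top-level determination, with no prescribed evaluation order among the branches.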
Advantages of the dependency diagram approach
Dependency diagrams are wonderful ways to extract knowledge. As you construct your decision model, you decompose a large problem into smaller problems, for which several experts in their own domain can contribute their knowledge. When decisions are not linear, and the decision logic has not yet been documented, this is the right approach.
This approach is commonly used in the industry. OMG has standardized the notation under the “DMN” label, which stands for Decision Model and Notation. This approach allows you to harvest knowledge, and document source rules.
Choose the approach that is best for you
Decision flows are closest to an actual implementation. In contrast, dependency diagrams, or decision models, focus on knowledge; but they feed straight into decision management systems. In the end, think about decisions in the way that best fits your team and project: the end result will translate into an executable decision flow no matter what.
We recently had the pleasure of meeting with Simon Halloway from Bloor to give him an update on our decision management and business rule products. Bloor is an independent analyst and research company based in the UK and Simon is their Practice Leader on Process Management and Sensory Devices.
Simon’s focus area includes the intelligent automation of processes using sensors, so he was particularly interested to learn about how some of our customers use SMARTS to analyze sensor data to drive automated decisions.
We were happy to see that he wrote a report following our meeting. In the report, called “Sparkling Logic brings SMARTS to Decisions”, Simon covers how PENCIL Decision Modeler and SMARTS Decision Manager work together. He explains that decision models created in PENCIL can be executed and tested with data in SMARTS. PENCIL lets business analysts capture and document business rules and decisions using the Decision Model and Notation (DMN) standard, and the decision model can be tested and validated in SMARTS.
The report also highlights some of the unique features in SMARTS:
He concludes, “Sparkling Logic’s SMARTS is definitely a solution in Bloor’s view that should be considered if an organization is looking at decision management automation across any sector, whilst still providing all the necessary support for business rules management”.
Thanks Simon, we couldn’t agree more! Get a copy of the report here.
Who doesn’t? It is easy to use; most everyone has a copy on their computer; most everyone tracks work stuff (and sometimes personal stuff). I am guilty as charged when it comes to tracking everything… finance… kids’ school progress… even our scout outings and achievements… When it comes to work, I track data as well as logic in my many spreadsheets. I sometimes feel like I am a little obsessive, but I must admit, it is convenient.
The ‘problem’ with ease of use is that it encourages a proliferation of these corporate spreadsheets. I met an insurance company once that was dealing daily with tens of thousands of spreadsheets!
The question becomes:
- which one should I be looking at / editing?
- has it been approved?
- how to validate the content?
I am not saying that you should not use spreadsheets, since obviously I am a huge producer and consumer. My point is that we need to become smarter at managing these spreadsheets in the greater context of the business.
If the spreadsheet collects data, it is likely that the data was produced automatically by another system. Granted, there are times when we need to look for the information and compile it by hand, but if we are talking about thousands of records or more, I do hope that you have systems automating this process.
I’d like to focus more on the spreadsheets that you use to collect and document your business decisions. Phrased that way, does it make sense? Would it be more relevant to talk about your business requirements? I have seen a majority of companies use Excel for documenting their business requirements, especially when looking at business rules (because they are independent pieces of logic that don’t really fit any other document type). That is exactly why the DMN standard (Decision Model and Notation) is based on tables.
There are, I believe, two main use cases: those who use a spreadsheet to document the requirements, then continue managing the evolution of their business rules in a business rules management system; and those who keep tracking changes in the spreadsheet going forward, providing it to the rules writer (who may or may not be the same person, by the way).
Spreadsheets that document initial business requirements
The typical example would be a DMN decision model that you created by hand. Not that you need a standard to use a spreadsheet for rules requirements; any form of spreadsheet would do. I have seen business rules architects use spreadsheets for rules harvesting since I started in this industry, decades ago.
Back then, spreadsheets were used to collaborate with business users: they could read and edit the text of the business rule requirements and iterate until everybody agreed. Then the rules writer would translate the English version of the rules into the proper rules syntax. These days, more automated import capabilities have been introduced, but I still see a fair number of projects coded by hand from spreadsheet specifications.
The key in this process is that the spreadsheet plays a major role during rules harvesting, but it does not survive the implementation. It is typically thrown away once the business rules have been implemented. From that point on, the subject matter experts have direct access to the business rules and can tweak them themselves, without the need for a waterfall approach.
I am absolutely not condemning this approach; it is a great way to align the constituents before the actual implementation. If you are about to engage in such activities for your project, I would encourage you to look specifically at the DMN standard, as it embeds methodology in its format. Instead of starting with an empty spreadsheet, it guides you in selecting the columns that make sense, while focusing on your end goal (the decision). There are online videos that show how our own DMN tool, Pencil, works, but you can also follow the same guidelines by hand on your own whiteboard, or spreadsheet!
Spreadsheets that continue capturing changes in business rules
The other use case we see a lot involves spreadsheets that tend to be much bigger and that capture simple or complex assignments. Think about rating tables. In the insurance industry, these pricing schemes can be very complicated. Complexity here is measured in the number of segmentations and parameters you need to look at. For example, geography could be a parameter, segmented at a very fine-grained level, down to each zipcode (there are some 42,000 zipcodes in the US!). The pricing would also depend on many other factors: the number of accidents and violations, the age of the driver, the years of experience, the classification of the car, etc.
While spreadsheets are convenient for capturing, and sometimes auto-calculating, these values, they are significantly harder to validate. With a quick eye check or a quick calculation, you can verify that the rates grow as expected, but actually validating them against data is not that easy.
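To make this concrete, here is a minimal sketch of what an automated sanity check on a rating table could look like. The table layout and values are entirely hypothetical (a segment column, an accident count, and a rate that should grow with the accident count), but the idea, exporting the spreadsheet rows and asserting an expected property, applies broadly:

```python
from collections import defaultdict

# Hypothetical rating rows, as exported from a spreadsheet:
# (segment, accident count, rate). Rates should grow with accidents.
rating_rows = [
    ("CA", 0, 1.00),
    ("CA", 1, 1.25),
    ("CA", 2, 1.60),
    ("TX", 0, 0.90),
    ("TX", 1, 1.10),
    ("TX", 2, 1.05),  # suspicious: rate drops as accidents increase
]

def find_monotonicity_violations(rows):
    """Return (segment, accidents) pairs where the rate decreases."""
    by_segment = defaultdict(list)
    for segment, accidents, rate in rows:
        by_segment[segment].append((accidents, rate))
    violations = []
    for segment, pairs in by_segment.items():
        pairs.sort()  # order by accident count
        for (_, r1), (a2, r2) in zip(pairs, pairs[1:]):
            if r2 < r1:
                violations.append((segment, a2))
    return violations

print(find_monotonicity_violations(rating_rows))  # -> [('TX', 2)]
```

A few dozen lines like these can catch the kind of regression that a quick eye check over tens of thousands of rows will miss.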
The main difference from the previous use case is that you may not want to throw the spreadsheet away. When a pricing revision happens, you want to be able to make simple edits in the overall spreadsheet and get them approved independently of the implementation.
Some tools might give you the ability to import and export tables. I want to bring your attention to two potential issues that you need to consider ahead of your project implementation.
First, if the exported table does not match the format of the original spreadsheet, the business users will not be able to easily connect the dots. That is one thing to check early on.
The second important criterion is runtime performance. While importing a spreadsheet might be feasible, how large is yours? We have seen spreadsheets grow to tens of thousands of rows, and even hundreds of thousands of rows! While they could be turned into business rules, it is not a given that the implementation will execute in a reasonable amount of time.
While I am a big fan of business rules (of course), I recommend that you keep these spreadsheets as data that the rules can refer to. Keeping the large table in a database is an option, but it comes with a big runtime cost (avoid making external calls if you do not have to). What I would personally do is use lookup models. These are in-memory tables (as in ‘data’) that the rules can refer to. Because they are in memory, they are cached and indexed for performance. The beauty of keeping your spreadsheets as data is that the data can be refreshed without the translation phase we saw before. The business meaning of the spreadsheet is defined once and for all in the interface of the lookup model: this is where you define which columns are inputs, which are outputs, and what they mean if it is not a simple test or assignment.
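The lookup-model idea can be sketched in a few lines. This is not the actual SMARTS lookup model API, just a toy illustration of the principle: the spreadsheet stays data, loaded once into memory and indexed by its input columns, so the rules can query it without an external call. All column names here are hypothetical:

```python
class LookupModel:
    """A toy in-memory lookup table: rows indexed by their input columns."""

    def __init__(self, rows, inputs, output):
        self.inputs = inputs
        self.output = output
        # Build the index once, at load time, keyed by input-column values.
        self.index = {
            tuple(row[c] for c in inputs): row[output] for row in rows
        }

    def lookup(self, **key):
        """Return the output value for the given inputs, or None."""
        return self.index.get(tuple(key[c] for c in self.inputs))

# Hypothetical rating spreadsheet loaded as plain data.
rate_table = LookupModel(
    rows=[
        {"zipcode": "94105", "accidents": 0, "rate": 1.00},
        {"zipcode": "94105", "accidents": 1, "rate": 1.35},
    ],
    inputs=("zipcode", "accidents"),
    output="rate",
)

print(rate_table.lookup(zipcode="94105", accidents=1))  # -> 1.35
```

Refreshing the pricing then amounts to reloading the rows; the interface (which columns are inputs, which is the output) stays stable, so the rules that call the lookup never need to change.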
I don’t have an online video to show you yet, but we will host an insider webinar very soon. If you sign up for an evaluation, you will be invited, and/or will be able to watch the recording at your convenience.
Rex Keith, from Equifax, and I also presented at BBC last year. If you have access to the proceedings, there will be some information there too.