Enabling Operational Excellence

TURNING OPERATIONAL KNOWLEDGE & COMPLIANCE INTO A COMPETITIVE EDGE

We systemize tacit knowledge into explicit knowledge



Decision Analysis: Don’t Drown in Decisions

Consider the following examples of decisions arising in a regulated environment (with thanks to James Taylor):

A government agency in Europe has regulations relating to social benefits. The agency itself must make decisions like “Is this person eligible for this benefit?” and “How much benefit, and for how long, is this person entitled to?” that conform with those regulations.

An individual company might also have a decision like “Is this person entitled to paid leave to care for a sick child?” that depends both on company policy/practices and on regulations relating to social benefits (because those regulations might require companies of a certain size to pay for this kind of leave).

My Analysis

Indeed, these are operational business decisions. I’m not surprised, by the way, that they are in effect about money. Of course money entails decisions! But here are two important observations.

1. Regulations typically include a great many one-off behavioral rules (“one-off” meaning following no particular pattern). Examples might include:
    • Payment of benefits shall cease immediately upon the death of the beneficiary.
    • Payment of benefits shall not be made to any beneficiary living outside the country for more than 9 months of a fiscal year.
    • Payment of benefits shall not be made directly to any minor.
Is decision analysis suited to capturing and interpreting such one-off rules? Is it meaningful to have a decision such as “Should a benefit payment be made?” Shouldn’t such behavior be presumed automatic (but traceable and adaptable)?

A better option is simply to have a task or action such as “Make payment”. Violations of related business rules for any particular case produce exceptions, possibly addressed by appropriate messages and/or procedures invoked automatically. That’s how basic business behavior becomes autonomous, so the business can concentrate on matters requiring greater skill, experience, or intelligence.

If you’re not careful, everything becomes a decision. Where do you draw the line? Isn’t there a difference between simply being competent and being smart in doing business?
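To make the contrast concrete, here is a minimal sketch (all names hypothetical, not from any standard or product) of a plain “Make payment” task whose one-off behavioral rules are enforced automatically: each rule is checked at execution time, and a violation produces an exception carrying the rule statement itself, rather than being modeled as a separate decision.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Beneficiary:
    name: str
    is_deceased: bool
    is_minor: bool
    months_abroad_this_fiscal_year: int

class RuleViolation(Exception):
    """Raised when a one-off behavioral rule is not satisfied."""
    def __init__(self, rule_statement: str):
        super().__init__(rule_statement)
        self.rule_statement = rule_statement

# The one-off rules above, expressed as (predicate, statement) pairs.
PAYMENT_RULES = [
    (lambda b: not b.is_deceased,
     "Payment of benefits shall cease immediately upon the death of the beneficiary."),
    (lambda b: b.months_abroad_this_fiscal_year <= 9,
     "Payment of benefits shall not be made to any beneficiary living outside "
     "the country for more than 9 months of a fiscal year."),
    (lambda b: not b.is_minor,
     "Payment of benefits shall not be made directly to any minor."),
]

def make_payment(beneficiary: Beneficiary, amount: float) -> None:
    """A plain task, not a decision: the rules are checked automatically."""
    for predicate, statement in PAYMENT_RULES:
        if not predicate(beneficiary):
            # A violation becomes an exception, to be answered by an
            # appropriate message and/or automatically invoked procedure.
            raise RuleViolation(statement)
    print(f"Paid {amount:.2f} to {beneficiary.name} on {date.today()}")
```

Note that nothing here asks “Should a benefit payment be made?”; the task simply runs, and the rules keep it honest.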

Aside: Business vocabulary is, as always, important. For example, does “payment” mean “accrual of benefit”, “act of payment”, “amount of payment”, or something else? No small issue.

2. Consider the following (one-off) behavioral rule:

A non-citizen may cash a payment of benefits for a citizen only if married to that citizen and the citizen is not deceased.

What happens if the citizen dies or the couple divorces, and the non-citizen then tries to cash a payment? There’s no organizational decision involved there. There’s a personal decision … the non-citizen can decide to try to violate the rule … but we don’t really care about that. We only care about detecting the violation. We really don’t need or want an (operational business) decision here, do we? Again, if you have a decision for every possible kind of violation of every behavioral rule, you’ll very quickly drown in decisions. Don’t go there!
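Continuing the illustrative sketch above (reusing its hypothetical RuleViolation class and imports), detecting a violation of this rule is a simple predicate check at the point of the attempted transaction; no decision model is needed:

```python
@dataclass
class CashingAttempt:
    casher_is_citizen: bool
    married_to_beneficiary: bool
    beneficiary_deceased: bool

CASHING_RULE = (
    "A non-citizen may cash a payment of benefits for a citizen only if "
    "married to that citizen and the citizen is not deceased."
)

def check_cashing(attempt: CashingAttempt) -> None:
    """Detect a violation of the cashing rule; no decision model involved."""
    if attempt.casher_is_citizen:
        return  # the rule constrains non-citizens only
    if not attempt.married_to_beneficiary or attempt.beneficiary_deceased:
        raise RuleViolation(CASHING_RULE)  # detection, nothing more
```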


Point-of-Knowledge Architecture: True Business Agility, Incremental Development, In-Line Training, and Real-Time Compliance

Excerpted from Business Rule Concepts: Getting to the Point of Knowledge (4th ed., 2013), by Ronald G. Ross, 162 pp., http://www.brsolutions.com/b_concepts.php

Let me use an example to sketch the workings of business rules in smart architecture based on points of knowledge[1]. Refer to the Figure to visualize how the system works.

Aside: I have been using this same slide since 1994(!).

Suppose you have a process or procedure that can be performed to take a customer order.
  • An order is received.  Some kind of event occurs in the system.  It doesn’t really matter too much what kind of event this is; let’s just say the system becomes aware of the new order.
  • The event is a flash point — one or more business rules pertain to it.  One is:  A customer that has placed an order must have an assigned agent.
  • We want real-time compliance with business policy, so this business rule is evaluated immediately for the order.  Again, it doesn’t much matter what component in the system does this evaluation; let’s just say some component, service, or platform can do it.
  • Suppose the customer placing the order does not have an assigned agent.  The system should detect a fault, a violation of the business rule.  In other words, the system should become aware that the business rule is not satisfied by this new state of affairs.
  • The system should respond immediately to the fault.  In lieu of any smarter response, at the very least it should respond with an appropriate message to someone, perhaps to the order-taker (assuming that worker is authorized and capable).
What exactly should the error message say? Obviously, the message can include all sorts of ‘help’. But the most important thing it should say is what kind of fault has occurred from the business perspective. So it could start off by literally saying, “A customer that has placed an order must have an assigned agent.” We say the business rule statement is an error message (or better, a guidance message).

That’s a system putting on a smart face, a knowledge-friendly face, at the very point of knowledge. But it’s a two-way street. By flashing business rules in real time, you have an environment perfectly suited to rapidly identifying opportunities to evolve and improve business practices. The know-how gets meaningful mindshare. That’s a ticket to continuous improvement and true business agility.
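As a rough illustration only (the names are invented, not from the book), the flash-point flow might look like this: an event arrives, the rules pertinent to that kind of event are evaluated immediately, and any fault is answered with the rule statement itself as the guidance message.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Order:
    customer: str
    assigned_agent: str | None = None

@dataclass
class BusinessRule:
    statement: str                          # the rule in structured natural language
    is_satisfied: Callable[[object], bool]  # predicate over the event's subject

# Rules indexed by the kind of event they pertain to (their flash points).
FLASH_POINTS = {
    "order_received": [
        BusinessRule(
            statement="A customer that has placed an order must have an assigned agent.",
            is_satisfied=lambda order: order.assigned_agent is not None,
        ),
    ],
}

def on_event(kind: str, subject) -> list[str]:
    """Evaluate every rule pertinent to this flash point; return guidance messages."""
    return [rule.statement
            for rule in FLASH_POINTS.get(kind, [])
            if not rule.is_satisfied(subject)]

# Usage: a new order arrives with no assigned agent yet.
for message in on_event("order_received", Order(customer="Acme Co.")):
    print("Guidance:", message)
```

The point of the sketch is simply that the guidance message and the business rule statement are one and the same text.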

Smarter and Smarter Responses

Is it enough for the system simply to return a guidance message and stop there? Can’t it do more? Of course.

For the order-taking scenario, a friendly system would immediately offer the user a means to correct the fault (again assuming the user is authorized and capable). Specifically, the system should offer the user another procedure, pulled up instantaneously, to assign an appropriate agent. If successful, the user could then move on with processing the order.

This smart approach knits procedures together just-in-time based on the flash points of business rules. It dynamically supports highly variable patterns of work, always giving pinpoint responses to business events (not system events). In short, it’s exactly the right approach for process models any time that applying know-how is key — which these days, is just about always!

The Business Rules Manifesto (http://www.businessrulesgroup.org/brmanifesto.htm) says this: “Rules define the boundary between acceptable and unacceptable business activity.” If you want dynamic processes, you must know exactly where that boundary lies, and how to respond to breaches (at flash points) in real time.

Is that as smart as processes can get? Not yet. Over time, the business rules for assigning appropriate agents might become well enough understood to be captured and made available to the system. Then when a fault occurs, the system can evaluate the business rules to assign an agent automatically. At that point, all this decision-making gets tucked very neatly under the covers. Even if the business rules you can capture are sufficient for only routine assignments, you’re still way ahead in the game.

Smart architecture based on business rules is unsurpassed for incremental design, where improvement:
  • Focuses on real business know-how, not just better GUIs or dialogs.
  • Continues vigorously after deployment, not just during development.
  • Occurs at a natural business pace, not constrained to software release cycles.
The Manifesto says it this way:  “An effective system can be based on a small number of rules.  Additional, more discriminating rules can be subsequently added, so that over time the system becomes smarter.”  That’s exactly what you need for knowledge retention, as well as to move pragmatically toward the knowledge economy.  Business rules give you true agility.
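Here is a minimal sketch of that incremental path (all names hypothetical, extending the flash-point example above): the fault handler first tries whatever assignment rules have been captured so far, resolving routine cases automatically, and falls back to the just-in-time user procedure for everything else. Making the system smarter means adding entries to the rule list, not redesigning the process.

```python
# Captured over time: rules for assigning an appropriate agent.  Starting
# small is fine; each rule added later makes the system smarter in place.
AGENT_ASSIGNMENT_RULES = [
    lambda order: "agent-east" if order.customer.endswith("East") else None,
]

def handle_fault(order: Order) -> None:
    """Respond to the 'must have an assigned agent' fault, smartest path first."""
    for rule in AGENT_ASSIGNMENT_RULES:
        agent = rule(order)
        if agent is not None:
            order.assigned_agent = agent  # routine case: tucked under the covers
            return
    # No captured rule applies: fall back to the just-in-time user procedure.
    order.assigned_agent = input(f"Assign an agent for {order.customer}'s order: ")
```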


A Playful Riff on “Decide”

A person close to the DMN (Decision Model and Notation) standard recently wrote:

I can’t see how you can object to the idea that decisions can be automatic, or used for detection, unless you maintain that decisions can only be taken by people?

My Response

Putting theological questions aside, in the beginning there was man. Well, people. Well, animals and people. As far as science is currently aware, there is nothing else in the universe that can “decide” something. Well, let’s put quantum mechanics aside. How things get “decided” there is just plain weird. That’s not human scale anyway (as far as science is currently aware).

My point is that the concept “decide” makes absolutely no sense unless you acknowledge that “deciding” is a human concept. People decide stuff (or decide when things have been “decided”).

When Machines “Decide”

Can machines “decide” things? Of course. Can they often “decide” things better than humans? Of course. Can they often “decide” things instead of people? Of course. Would you call what machines do in such cases “deciding” if there were no people who could do the thing we call “deciding” in the first place? Of course not.

“To decide” is fundamentally a human characteristic. If you try to remove the “human” sense of “to decide” from the verb, it’s not how the average person would understand it. This sense comes across clearly in the real-world definition of “decide” [MWUD]: to dispel doubt on.

When Machines “Doubt”

Can machines “doubt”? I’ll let the philosophers decide that (yes, decide). I’ll just say this: I doubt (yes, doubt) it would be called “doubt” unless people experienced “doubt” in the first place.

So when you use the word “decide”, even for what machines are doing, use it for things that people would call “decide”. If you want to use the word “decide” for machines in some other way – for things that people wouldn’t call “decide” in the real world – then please, just plainly admit you’re in systemland, not in peopleland.

Continue Reading

Why Business Rules Will Always Remain in Structured Natural Language

I was reading a fascinating article in The Economist about how robots, including military drones and driverless automobiles, increasingly need ethical guidance[1]. What does that have to do with business rules, you ask? Read on …

In the next five years, software systems will begin to appear that bypass programming, going more or less straight from regulations, contracts, agreements, deals, certifications, warranties, etc. (written in English or another natural language) to executing code. Think about the economics of the equation! If for no other reason (and there are many others), you’ll quickly see why a snowballing migration to such platforms is inevitable. And these tools will do the same for business rules based on business policies.

I said ‘more or less’ above because the tool will have to make certain assumptions about the meaning of what it ‘reads’. For example, if I say “a person must not be married to more than one other person”, most of us would probably assume that means “at a given point in time”. But an automated tool could easily be held responsible for making the wrong interpretation. It should therefore err on the safe side and, at the very least, log all its reasoning.

That’s where the article comes in. Concerning robots that make liability-laden decisions, it contends that principles are needed …

“… to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car had an accident. In order to allocate responsibility, autonomous systems must keep detailed logs so that they can explain the reasoning behind their decisions when necessary.” [emphasis added]

That explanation better be in a form that humans (and lawyers too) can actually read. That means structured natural language. The article went on to make the following astute observation …

“This has implications for system design: it may, for example, rule out the use of artificial neural networks … decision-making systems that learn from example rather than obeying predefined rules.”

Right! Where there is social liability, there will always be natural language.

P.S. To vendors: If your meaning of ‘business rule’ doesn’t compel you toward this debate, then you’re simply not really doing ‘business rules’(!).
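As a purely illustrative sketch (invented names, not from the article or any product), a rule engine aligned with this argument would log, for every evaluation, the natural-language rule statement alongside the machine-readable outcome, so the reasoning trail stays readable to humans and lawyers alike:

```python
import json
from datetime import datetime, timezone

def evaluate_and_log(rule_statement, predicate, subject, log_path="decision_log.jsonl"):
    """Evaluate one rule and append a human-readable reasoning record."""
    satisfied = bool(predicate(subject))
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "rule_statement": rule_statement,  # structured natural language
        "subject": repr(subject),
        "satisfied": satisfied,
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
    return satisfied

# Usage: the polygamy example, with the assumed interpretation made explicit.
evaluate_and_log(
    "A person must not be married to more than one other person "
    "at a given point in time.",
    lambda spouses: len(spouses) <= 1,
    ["Alex"],
)
```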


[1] “Morals and the Machine”, The Economist, June 2, 2012, p. 15
