04.04.2016

# Using R Server in SQL Server 2016 to calculate the expected value for CRM leads (Monte Carlo method)



In this post I’m demonstrating a use case for the new R Server (formerly known as Revolution R Enterprise) in SQL Server 2016 to calculate the expected value for CRM leads using a Monte Carlo approach.

To keep things simple, let’s assume we have the following leads in our CRM:

For example, the chance of winning lead number 7, worth $1,000,000, is 10%. So what is the amount of incoming orders we can plan with (assuming the probability for each individual lead is correct)? A common approach is a weighted sum (the sum of probability times value), which is easy to calculate in T-SQL:

```sql
select sum([Probability]*[Value]) ExpectedValue from CRM
```

While this approach works well with a large number of leads of similar size, for the example above we have to realize that $100,000 of the $256,000 results from the relatively unlikely win of lead number 7. In fact, we can only win or lose this lead, which means a value of 0 or 1 million but nothing in between. So this approach may be misleading with skewed data.
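To see why the weighted sum can mislead for an all-or-nothing lead, consider lead number 7 in isolation. A minimal sketch (in Python, using the numbers from the example above):

```python
# Lead number 7 from the example: 10% chance of winning $1,000,000.
probability = 0.10
value = 1_000_000

# Its contribution to the weighted sum ("expected value"):
expected = probability * value

# But the actual outcomes are binary - we win the full value or nothing:
outcomes = [0, value]

print(expected)              # 100000.0
print(expected in outcomes)  # False - the "expected" amount can never occur
```

The expected value of $100,000 is an amount this lead can never actually produce, which is exactly the problem with skewed data.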

Another approach is to use a threshold and only count the leads with a probability above the threshold. The query would look somewhat like this:

```sql
select sum([Value]) ExpectedValue from CRM where [Probability]>=.7
```

Here we’re only counting leads with a probability of at least 70%. We just need to be careful not to mistake the threshold of 70% for a probability. It would be wrong to interpret the result as “with a probability of 70% we can expect incoming orders of at least $52,000”, because each lead is won or lost independently of the other leads.
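The independence point can be made concrete by enumerating every win/loss combination. The leads below are hypothetical (the post’s actual CRM table is not reproduced here), but they are chosen so that the three “likely” leads sum to $52,000, echoing the threshold result above:

```python
from itertools import product

# Hypothetical (probability, value) pairs - illustration only, not the
# actual CRM data from the post.
leads = [(0.9, 20_000), (0.8, 15_000), (0.7, 17_000), (0.1, 1_000_000)]

# Enumerate all 2^n win/loss combinations; each lead wins or loses
# independently, so a combination's probability is the product of the
# per-lead win/loss probabilities.
dist = {}
for wins in product([0, 1], repeat=len(leads)):
    p = 1.0
    total = 0
    for won, (prob, value) in zip(wins, leads):
        p *= prob if won else (1 - prob)
        total += value if won else 0
    dist[total] = dist.get(total, 0.0) + p

# Exact probability that the total reaches the threshold result of $52,000:
p_at_least = sum(p for total, p in dist.items() if total >= 52_000)
print(round(p_at_least, 4))  # 0.5536
```

Even though every lead counted toward the $52,000 had at least a 70% probability, the chance of actually reaching that total is only about 55% here: all three likely leads must be won together (or the long shot must come in).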

So what would be a more realistic method to estimate the expected value of the leads above? One idea is to simulate cases in which each lead is either converted into an order or not, at the individual probability of the lead. If we run, say, 100,000 such simulations, we can look at the distribution of the results to get a better understanding of the resulting total. This approach is called the Monte Carlo method. While we could implement it in T-SQL (for example, see an older blog post of mine about Monte Carlo in T-SQL), it is easier to do in R, and with the new R Server capabilities in SQL Server 2016 we can run the calculation directly in the database (see here for the basics of running R in T-SQL stored procedures).

Let’s start with the resulting procedure code before I go into more details:

```sql
EXEC sp_execute_external_script
  @language = N'R'
  , @script = N'
set.seed(12345)
eval <- function() {sum(ifelse(runif(min = 0, max = 1, n = nrow(mydata)) <= mydata$Probability, mydata$Value, 0))}
r <- replicate(100000, eval())
q <- quantile(r, probs = c(0.1, 0.5, 1, 2, 5, 10, 25, 50, 100)/100)
result <- data.frame(q)
result$quantile <- rownames(result)
'
  , @input_data_1 = N'select CRMID, Probability, Value from CRM'
  , @input_data_1_name = N'mydata'
  , @output_data_1_name = N'result'
WITH RESULT SETS ((
  [value] float
  , quantile nvarchar(10)
));
```

The R script (the part passed as @script) runs 100,000 random experiments on our input data. In each experiment, seven uniformly distributed random numbers are drawn, one per row of our dataset. A lead’s value is counted only if its random number falls below the lead’s probability, which happens less often the smaller that probability is. We then calculate quantiles of the 100,000 totals and return them as a SQL table.
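For readers who want to sanity-check the simulation outside SQL Server, the same Monte Carlo loop can be sketched in plain Python (the lead data here is hypothetical; in the procedure above it comes from the CRM table):

```python
import random

random.seed(12345)

# Hypothetical (probability, value) pairs standing in for the CRM rows.
leads = [(0.9, 20_000), (0.8, 15_000), (0.7, 17_000), (0.1, 1_000_000)]

def simulate_once():
    # Draw one uniform number per lead; the lead converts (its value is
    # counted) only if the draw falls below its win probability.
    return sum(value for prob, value in leads if random.random() < prob)

runs = 100_000
results = sorted(simulate_once() for _ in range(runs))

# Empirical quantiles: results[k] is (roughly) the (k/runs)-quantile.
for q in (0.005, 0.05, 0.10, 0.50):
    print(f"{q:>5.1%} quantile: {results[int(q * runs)]:>9,}")
```

This mirrors the `runif`/`ifelse`/`replicate`/`quantile` chain of the R script, just with Python’s standard library instead of vectorized R.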

Here is the result of this T-SQL statement:

How do we read this result? Here are some examples:

• Line 6: In 10% of the cases the value was below $52,000 and, consequently, in 90% of the cases the value was above $52,000
• Line 2: In 99.5% of the cases the value was above $15,000
• Line 5: In 95% of all cases the value was above $37,000

Or, in other words, at a confidence level of 90% we can expect incoming orders of at least $52,000 here. So this approach does not just give a single value; it lets you understand the expected result at a given confidence level.

Of course, T-SQL would not be a good choice for developing and testing even a small R script like the one above. Usually, when working with R, you follow a more interactive approach. I suggest developing the script in an interactive R tool like RStudio. In order to do so, I’m using some simple wrapper code to provide the data set from SQL Server, as shown below:

```r
library(RODBC)

# The connection string is an example - adjust server and database
# to your own environment.
con <- odbcDriverConnect("Driver={SQL Server};Server=localhost;Database=CRM_DB;Trusted_Connection=yes")
mydata <- sqlQuery(con, "select CRMID, Probability, Value from CRM")
odbcClose(con)

# From here on, the final R code (identical to the @script parameter above):
set.seed(12345)
eval <- function() {sum(ifelse(runif(min = 0, max = 1, n = nrow(mydata)) <= mydata$Probability, mydata$Value, 0))}
r <- replicate(100000, eval())
q <- quantile(r, probs = c(0.1, 0.5, 1, 2, 5, 10, 25, 50, 100)/100)
result <- data.frame(q)
result$quantile <- rownames(result)
```

Again, the last part of the code is the final R script, which we can copy over to our T-SQL procedure for production. The RStudio environment allows us to develop the script interactively. The surrounding code loads the data into a data frame with the same result as in our T-SQL sp_execute_external_script call. The data transfer used by SQL Server is much more efficient, but for testing purposes the ODBC call is sufficient.

For example, we can plot a histogram in R:

```r
hist(r, breaks = 50)
```

This shows the distribution of our cases. For example, there are no cases that end with a value between 500,000 and 1 million. Or we could plot the share of runs exceeding each value (an empirical survival curve) for the distribution:

```r
h <- hist(r, plot = F, breaks = 1000)
plot(x = h$breaks[-1], y = (100000 - cumsum(h$counts))/100000, type = "l", ylim = c(0, 1))
cumsum(h$counts)
```
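The curve plotted here is the share of runs whose total exceeds each value, i.e. an empirical P(X > t). The same quantity can be computed directly; a small Python sketch (the data is uniform stand-in noise, not the actual simulation results):

```python
import random

random.seed(1)

# Stand-in totals - in a real run these would be the 100,000
# Monte Carlo results from the simulation above.
r = [random.uniform(0, 100_000) for _ in range(10_000)]

def survival(values, threshold):
    """Share of runs exceeding the threshold: an empirical P(X > t)."""
    return sum(v > threshold for v in values) / len(values)

for t in (10_000, 50_000, 90_000):
    print(t, round(survival(r, t), 2))
```

Reading a confidence level off this curve is the same operation as reading the quantile table returned by the stored procedure, just in graphical form.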

## Conclusion

While R is mostly known for machine learning and advanced statistical calculations, it can also be useful for simple simulations like the one above, where we analyzed the distribution of leads in a CRM system and calculated an expected value at a given confidence level. Doing the same in T-SQL would require quite a lot of SQL code, which in turn would make the procedure harder to read and understand (compared to our short R script).

Another option would be to put the same logic in a CLR library, but then we would have to deploy the library separately instead of keeping the code in the database. Developing R code in SQL Server tools like Management Studio is not much fun, however; instead, we used a short wrapper to develop and test the code in an interactive R GUI like RStudio.
