How to use partial correlation in Python with NumPy

Partial correlation measures the strength of a relationship, or degree of association, between two variables while controlling for (removing) the effect of one or more other variables.
Partial correlation is used when the ordinary correlation coefficient would give misleading results because there is another, confounding, variable that is numerically related to both variables of interest. This misleading picture can be avoided by controlling for the confounding variable, which is done by computing the partial correlation coefficient.
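
For the common case of a single control variable, the partial correlation between two variables x and y, controlling for z, can be computed directly from the three pairwise (raw) correlations:

r_xy.z = (r_xy - r_xz * r_yz) / sqrt( (1 - r_xz^2) * (1 - r_yz^2) )

This is exactly the formula applied in the first code example below.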


Like the correlation coefficient, the partial correlation coefficient may take values in the range from -1 to 1, with exactly the same interpretation:
- a value of 1 indicates a perfect positive linear relationship,
- a value of -1 indicates a perfect negative linear relationship,
- a value of 0 indicates no linear relationship.



Partial correlation - removing the effect of other variables:



import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.stats as stats

# pip install pingouin
import pingouin as pg


# raw correlations
rmg = .7
rsg = .8
rms = .9

# partial correlations
rho_mg_s = (rmg - rsg*rms) / ( np.sqrt(1-rsg**2)*np.sqrt(1-rms**2) )
rho_sg_m = (rsg - rmg*rms) / ( np.sqrt(1-rmg**2)*np.sqrt(1-rms**2) )

print(rho_mg_s)
print(rho_sg_m)

OUT:
-0.07647191129018778
0.5461186812727504

Controlling for s nearly eliminates the relationship between m and g (the raw correlation of 0.7 drops to about -0.08), while the partial correlation between s and g stays moderately positive (about 0.55).

Partial correlation calculation - 3 datasets:



N = 76

# correlated datasets
x1 = np.linspace(1,10,N) + np.random.randn(N)
x2 = x1 + np.random.randn(N)
x3 = x1 + np.random.randn(N)

# let's convert these data to a pandas frame
df = pd.DataFrame()
df['x1'] = x1
df['x2'] = x2
df['x3'] = x3

# compute the "raw" correlation matrix
cormatR = df.corr()
print(cormatR)

# print out one value
print(' ')
print(cormatR.values[1,0])

# partial correlation
pc = pg.partial_corr(df,x='x3',y='x2',covar='x1')
print(' ')
print(pc)


Visualizing the matrices - correlation VS partial correlation:



fig,ax = plt.subplots(1,2,figsize=(6,3))

# raw correlations
ax[0].imshow(cormatR.values,vmin=-1,vmax=1)
ax[0].set_xticks(range(3))
ax[0].set_yticks(range(3))

# add text 
for i in range(3):
    for j in range(3):
        ax[0].text(i,j,np.round(cormatR.values[i,j],2), horizontalalignment='center')

        
        
# partial correlation matrix (df.pcorr() is added to pandas DataFrames by pingouin);
# each pairwise value controls for all the remaining columns
partialCorMat = df.pcorr()
ax[1].imshow(partialCorMat.values,vmin=-1,vmax=1)
ax[1].set_xticks(range(3))
ax[1].set_yticks(range(3))

for i in range(3):
    for j in range(3):
        ax[1].text(i,j,np.round(partialCorMat.values[i,j],2), horizontalalignment='center')


plt.show()

[Output figure: the raw correlation matrix (left) and the partial correlation matrix (right), plotted side by side]



NumPy Statistics: Exercise-9 with Solution

Write a NumPy program to compute cross-correlation of two given arrays.

Sample Solution:

Python Code:

import numpy as np
x = np.array([0, 1, 3])
y = np.array([2, 4, 5])
print("\nOriginal array1:")
print(x)
print("\nOriginal array2:")
print(y)
# np.correlate computes the (unnormalized) cross-correlation of the two arrays
print("\nCross-correlation of the said arrays:\n", np.correlate(x, y))

Sample Output:

Original array1:
[0 1 3]

Original array2:
[2 4 5]

Cross-correlation of the said arrays:
 [19]
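
For reference, the single number above is the unnormalized cross-correlation at zero lag (the dot product of the two arrays). A minimal sketch of how this relates to the lag-by-lag cross-correlation, the covariance matrix, and the normalized Pearson correlation coefficient might look like this:

import numpy as np

x = np.array([0, 1, 3])
y = np.array([2, 4, 5])

# unnormalized cross-correlation at every lag (a sliding dot product)
print(np.correlate(x, y, mode="full"))   # [ 0  5 19 14  6]

# covariance matrix of the two arrays
print(np.cov(x, y))

# normalized (Pearson) correlation coefficient matrix
print(np.corrcoef(x, y))                 # off-diagonal entries are about 0.93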




Cross correlation is a way to measure the degree of similarity between a time series and a lagged version of another time series.

This type of correlation is useful to calculate because it can tell us if the values of one time series are predictive of the future values of another time series. In other words, it can tell us if one time series is a leading indicator for another time series.

This type of correlation is used in many different fields, including:

Business: Marketing spend is often considered to be a leading indicator for future revenue of businesses. For example, if a business spends an abnormally high amount of money on marketing during one quarter, then total revenue is expected to be high x quarters later.

Economics: The consumer confidence index (CCI) is considered to be a leading indicator for the gross domestic product (GDP) of a country. For example, if CCI is high during a given month, the GDP is likely to be higher x months later.

The following example shows how to calculate the cross correlation between two time series in Python.

Example: How to Calculate Cross Correlation in Python

Suppose we have the following time series in Python that show the total marketing spend (in thousands) for a certain company along with the total revenue (in thousands) during 12 consecutive months:

import numpy as np

#define data 
marketing = np.array([3, 4, 5, 5, 7, 9, 13, 15, 12, 10, 8, 8])
revenue = np.array([21, 19, 22, 24, 25, 29, 30, 34, 37, 40, 35, 30]) 

We can calculate the cross correlation for every lag between the two time series by using the ccf() function from the statsmodels package as follows:

import statsmodels.api as sm

#calculate cross correlation
sm.tsa.stattools.ccf(marketing, revenue, adjusted=False)

array([ 0.77109358,  0.46238654,  0.19352232, -0.06066296, -0.28159595,
       -0.44531104, -0.49159463, -0.35783655, -0.15697476, -0.03430078,
        0.01587722,  0.0070399 ])

Here’s how to interpret this output:

  • The cross correlation at lag 0 is 0.771.
  • The cross correlation at lag 1 is 0.462.
  • The cross correlation at lag 2 is 0.194.
  • The cross correlation at lag 3 is -0.061.

And so on.

Notice that the correlation between the two time series becomes less and less positive as the number of lags increases. This tells us that marketing spend during a given month is quite predictive of revenue one or two months later, but not predictive of revenue beyond more than two months.

This intuitively makes sense – we would expect that high marketing spend during a given month is predictive of increased revenue during the next two months, but not necessarily predictive of revenue several months into the future.
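
Since the rest of this page works with NumPy, here is a minimal sketch of the underlying idea using NumPy only: shift one series against the other and compute the Pearson correlation of the overlapping parts. The lagged_corr helper below is just for illustration, and its per-lag normalization (and direction of shift) differ from statsmodels' ccf(), so the numbers will not match the output above exactly; the sketch only illustrates what "correlation at lag k" means.

import numpy as np

marketing = np.array([3, 4, 5, 5, 7, 9, 13, 15, 12, 10, 8, 8])
revenue = np.array([21, 19, 22, 24, 25, 29, 30, 34, 37, 40, 35, 30])

def lagged_corr(x, y, lag):
    # Pearson correlation between x in month t and y in month t + lag
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

for lag in range(4):
    print(lag, round(lagged_corr(marketing, revenue, lag), 3))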

Additional Resources

How to Calculate Autocorrelation in Python
How to Calculate Partial Correlation in Python
How to Calculate Point-Biserial Correlation in Python