Using the family of functions we defined a couple of days ago, of the form $f_n\left(x\right) = \sum^n_{k=0} c_{k,n} x^k e^{-\frac{x^2}{2}}$, we can produce some nice, numerically stable methods for generating the basis and taking the derivative in spectral fashion. First, the code, so that you can verify it works.
import numpy as np
import matplotlib.pyplot as plt
def basis_gen(t, N):
    # Orthonormal basis via the recurrence
    # f_k = sqrt(1 - 1/k) f_{k-2} + (-1)^k sqrt(2/k) x f_{k-1},
    # seeded with f_0 = pi^(-1/4) exp(-x^2/2) and f_1 = -sqrt(2) x f_0.
    w = np.exp(-(t**2)/2)/np.sqrt(np.sqrt(np.pi))
    B = np.zeros(shape=(N, len(w)))
    B[:2, :] = np.vstack((w, -np.sqrt(2)*t*w))
    for k in range(2, N):
        B[k, :] = np.sqrt(1 - (1/k))*B[k-2, :]
        # the sign of the x*f_{k-1} term alternates with the parity of k
        if k % 2:
            B[k, :] -= t*np.sqrt(2/k)*B[k-1, :]
        else:
            B[k, :] += t*np.sqrt(2/k)*B[k-1, :]
    return B
def differential_matrix(N):
    # Derivative operator in coefficient space: superdiagonal entries
    # (-1)^k sqrt((k+1)/2) at (k, k+1), antisymmetrized, then negated.
    H = np.diag((0.5 - np.arange(N-1) % 2)*np.sqrt(np.arange(1, N)*2), k=1)
    H -= H.T
    return -H
dt = 0.001
N = 101
t = np.arange(-8, 8, dt)
B = basis_gen(t, N)
D = differential_matrix(N)

f = np.sin(t)
a = np.linalg.lstsq(B.T, f, rcond=None)[0]
a[-3:] = 0  # zero the top coefficients before differentiating (see the note below)

plt.figure()
plt.plot(t, B.T@a, label='input function')
plt.plot(t, B.T@(D@a), label='derivative')
plt.legend()
plt.show()
The basis generation follows neatly from our prior recurrence relation, but the differential matrix was more complicated. Largely, it follows from:$$\frac{d}{d x}\sum^n_{k=0} c_{k,n}x^k e^{-\frac{x^2}{2}} = \sum^n_{k=0} c_{k,n} \left(k x^{k-1} - x^{k+1}\right) e^{-\frac{x^2}{2}}$$Noting that the $x^{k+1}$ terms sum to $x \cdot f_n\left(x\right)$ and rearranging, we see:$$\frac{d f_n\left(x\right)}{dx} + x f_n\left(x\right) = \sum^n_{k=0} c_{k,n} k x^{k-1} e^{-\frac{x^2}{2}}$$Using the identity $f_{n}\left(x\right) = \left(-1\right)^{n} \cdot \sqrt{\frac{2}{n}} \cdot x \cdot f_{n-1}\left(x\right) + \sqrt{1-\frac{1}{n}} \cdot f_{n-2}\left(x\right)$ we can calculate the $x \cdot f_{n}\left(x\right)$ term, and eventually, via some careful elimination, produce:$$\frac{d f_n\left(x\right)}{dx} = \left(-1\right)^{n} \frac{1}{\sqrt{2}} \left(\sqrt{n} \cdot f_{n-1} + \sqrt{n+1} \cdot f_{n+1}\right)$$

I also want to touch on one more trick. If we didn't have this algebra available to us, we could note that $\frac{d}{dx}\left(x^n e^{-\frac{x^2}{2}}\right) = n x^{n-1} e^{-\frac{x^2}{2}} - x^{n+1} e^{-\frac{x^2}{2}}$. We could easily construct a differentiation matrix $D$ that differentiates a Vandermonde-type matrix, $V$, of these weighted monomials. In our case we note that our family of functions, $F$, can be found by multiplying such a matrix by its associated matrix of coefficients, $VC = F$. Therefore, $VDC$ would be equal to the matrix of derivatives of our functions. This is inefficient, though, and hard to work with. Instead, what we want is a differentiation matrix $H$ such that $VCH$ is equal to the derivatives of our functions. As $VDC=VCH$, and since $C$ is invertible, we can see $H = C^{-1}DC$, which we can then numerically evaluate if needed, or even use to work out the analytic result.
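If we go that route, here is a minimal sketch of the numerical evaluation, assuming the conventions of the code above; the names n_max, C, D_mono, and H are purely local to this illustration, and it reuses differential_matrix from earlier.

import numpy as np

n_max = 8  # keep the order small; the monomial coefficients grow quickly

# C[k, n] = c_{k,n}: column n holds the monomial coefficients of f_n,
# filled in with the same recurrence basis_gen uses.
C = np.zeros((n_max, n_max))
C[0, 0] = np.pi**-0.25
C[1, 1] = -np.sqrt(2)*C[0, 0]
for n in range(2, n_max):
    sign = 1 if n % 2 == 0 else -1
    C[1:, n] = sign*np.sqrt(2/n)*C[:-1, n-1]  # x*f_{n-1} shifts every power up by one
    C[:, n] += np.sqrt(1 - 1/n)*C[:, n-2]

# D differentiates in the weighted-monomial basis:
# d/dx (x^k e^{-x^2/2}) = k x^{k-1} e^{-x^2/2} - x^{k+1} e^{-x^2/2}
D_mono = np.zeros((n_max, n_max))
for k in range(1, n_max):
    D_mono[k-1, k] = k    # the k x^{k-1} piece
    D_mono[k, k-1] = -1   # the -x^{k+1} piece (the top power falls off the truncation)

H = np.linalg.solve(C, D_mono @ C)  # H = C^{-1} D C, without forming the inverse

# The last column of H is corrupted by the truncation; away from it, the
# numerical similarity transform matches the analytic tridiagonal matrix.
print(np.allclose(H[:, :-1], differential_matrix(n_max)[:, :-1]))  # expect True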
There is one key note: the last coefficient should be zero whenever a differentiation is undertaken. This padding is needed because the powers of $x$ involved increase by one with every differentiation, and the truncated matrix has nowhere to send the top power. I would also strongly recommend being careful with using the inner product trick to fit coefficients: if your sample range isn't large enough, you won't approximate the integral over the whole of the reals well!
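To make that caveat concrete, here is a minimal sketch of the inner-product fit, reusing B, f, a, and dt from the demo above; it leans on the basis being orthonormal over the reals, and a_ip is just a local name for this illustration.

# a_n ≈ ∫ f_n(x) f(x) dx, approximated by a Riemann sum over the sample window.
a_ip = B @ f * dt

# Compare against the least-squares fit order by order. Expect the
# disagreement to concentrate in the high orders: f_n carries significant
# mass out to roughly |x| = sqrt(2n+1), about 14 for n = 100, well past our
# sample cutoff of 8, so the windowed sum misrepresents those inner products.
plt.figure()
plt.semilogy(np.abs(a_ip - a))
plt.xlabel('coefficient index')
plt.ylabel('|inner product fit - least squares fit|')
plt.show()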