I tried running both incremental PCA (fitted in 100 batches) and randomized PCA on the MNIST (digit classification) dataset to reduce its dimensionality from 784 to 154. But when I calculate the mean squared error between the two reduced sets, I get an error of 39650.73953377666. Is it normal to have such a huge error between the two datasets?
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA, IncrementalPCA
from sklearn.metrics import mean_squared_error
import numpy as np

# Load MNIST (70,000 samples, 784 features)
mnist = fetch_openml('mnist_784', version=1)
X_mnist, y_mnist = mnist["data"], mnist["target"]

# Randomized PCA, fitted on the full dataset at once
rnd_pca = PCA(n_components=154, svd_solver="randomized")
X_reduced_mnist = rnd_pca.fit_transform(X_mnist)

# Incremental PCA, fitted batch by batch
n_batches = 100
inc_pca = IncrementalPCA(n_components=154)
for batch in np.array_split(X_mnist, n_batches):
    inc_pca.partial_fit(batch)
X_reduced = inc_pca.transform(X_mnist)

mean_squared_error(X_reduced_mnist, X_reduced)
# 39650.73953377666
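For context, the projected coordinates themselves are only defined up to the sign of each principal component (and, when eigenvalues are nearly equal, the order of components can swap), so two perfectly reasonable fits can disagree wildly under an elementwise MSE. A minimal sketch of an alternative check, reusing the fitted models above, is to compare each model's reconstruction error against the original data via inverse_transform:

# Sketch: compare how well each model reconstructs the original data,
# rather than comparing the projected coordinates directly.
# (Continues from rnd_pca, inc_pca, X_mnist, X_reduced_mnist, X_reduced above.)
X_recovered_rnd = rnd_pca.inverse_transform(X_reduced_mnist)
X_recovered_inc = inc_pca.inverse_transform(X_reduced)

# Reconstruction MSE of each model against the original 784-dim data
print(mean_squared_error(X_mnist, X_recovered_rnd))
print(mean_squared_error(X_mnist, X_recovered_inc))

If the two reconstruction errors come out close, both models are capturing a similar amount of variance even though their coordinate systems differ.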
question from:
https://stackoverflow.com/questions/65886674/huge-mean-squared-error-between-reduced-datasets-obtained-after-incremental-pca