Increasing the number of input features leads to the curse of dimensionality. In K-NN, distance calculations lose their discriminative power in high-dimensional spaces: as the number of dimensions grows, data points become sparse and the distances between them concentrate around similar values. This makes it hard for K-NN to identify genuinely nearest neighbors, which reduces prediction accuracy. Higher dimensionality also raises computational and storage costs, potentially slowing K-NN down. Options (A), (C), and (D) are incorrect because they describe improvements or advantages that do not occur. Consequently, K-NN suffers reduced prediction accuracy when faced with numerous input variables due to the curse of dimensionality.
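The distance-concentration effect described above can be demonstrated empirically. A minimal sketch (the function name `distance_spread` and the use of uniform random data are assumptions for illustration): it measures the relative contrast between the farthest and nearest random points from a query point, which shrinks as dimensionality grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_spread(dim, n_points=500):
    """Relative contrast (max - min) / min of Euclidean distances
    from a random query point to n_points uniform random points.

    When this ratio approaches 0, "nearest" and "farthest" neighbors
    are nearly indistinguishable -- the curse of dimensionality.
    """
    points = rng.random((n_points, dim))   # uniform points in the unit hypercube
    query = rng.random(dim)                # a random query point
    dists = np.linalg.norm(points - query, axis=1)
    return (dists.max() - dists.min()) / dists.min()

# Contrast collapses as the dimension increases
for dim in (2, 10, 100, 1000):
    print(f"dim={dim:5d}  relative contrast={distance_spread(dim):.3f}")
```

In low dimensions the nearest point is many times closer than the farthest, so K-NN's neighbor ranking is meaningful; in high dimensions the contrast is a small fraction, so all points look roughly equidistant.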