This study investigates the performance of the Mish activation function in a CNN-BiGRU model for intrusion detection, comparing it with the widely used ReLU activation. The results show that Mish consistently outperforms ReLU across multiple datasets, enhancing the model's accuracy and effectiveness in identifying cyber threats. The findings contribute valuable insights into the role of activation functions in deep learning for cybersecurity applications.
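The abstract does not reproduce the activation formulas; as an illustrative sketch only (not the paper's implementation), Mish is standardly defined as x · tanh(softplus(x)), which, unlike ReLU, is smooth and lets small negative values pass through rather than clipping them to zero:

```python
import numpy as np

def relu(x):
    """Standard ReLU: zeroes out all negative inputs."""
    return np.maximum(0.0, x)

def mish(x):
    """Mish (Misra, 2019): x * tanh(softplus(x)).

    Smooth and non-monotonic; preserves a small negative signal
    where ReLU outputs exactly zero, which can aid gradient flow.
    """
    return x * np.tanh(np.log1p(np.exp(x)))

# Compare the two on a few sample inputs.
xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print("relu:", relu(xs))
print("mish:", np.round(mish(xs), 4))
```

In a framework such as PyTorch or TensorFlow, swapping Mish for ReLU in the CNN-BiGRU layers would amount to replacing the activation argument or module; the study's reported gains concern that substitution, not the formula itself.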