This study examines the performance of the Mish activation function in a CNN-BiGRU model for intrusion detection, comparing it with the commonly used ReLU function across multiple datasets. Results indicate that Mish outperforms ReLU, enhancing the model's accuracy and effectiveness in identifying cyber threats. The paper contributes to the understanding of activation functions in deep learning models, particularly in security applications, emphasizing the need for further investigation in this domain.
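For reference, Mish is defined as f(x) = x · tanh(softplus(x)), a smooth, non-monotonic alternative to ReLU that permits small negative outputs. A minimal NumPy sketch of the two activations compared in the study (function names are illustrative, not taken from the paper's code):

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)).

    softplus(x) = ln(1 + e^x) is computed via logaddexp(0, x)
    for numerical stability at large |x|.
    """
    return x * np.tanh(np.logaddexp(0.0, x))

def relu(x):
    """ReLU activation: max(0, x)."""
    return np.maximum(0.0, x)
```

Unlike ReLU, which zeroes all negative inputs, Mish is differentiable everywhere and preserves a small negative signal, which is the property often credited for the accuracy gains reported here.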