Fiber-optic communication systems form the backbone of the worldwide telecommunication networks, carrying more than 99% of the global data traffic. The upcoming deployment of 5G/6G networks, online services such as 4K/8K HDTV, the development of the Internet of Things, which connects billions of active devices, and high-speed optical access networks impose progressively higher requirements on the underlying optical network infrastructure. With current network infrastructures approaching almost unsustainable levels of data traffic, network operators and system suppliers are looking for ways to respond to these demands while also maximizing the returns on their investments. Researchers are therefore examining solutions that allow vendors and operators to replace or install as few network components as possible while maintaining, or even improving, the quality of service.

To improve the performance of optical fiber systems, it is important to mitigate the detrimental impact of linear and, most importantly, nonlinear transmission impairments that cap the systems’ throughput. Numerous “conventional” digital signal processing algorithms have been proposed and studied for improving the performance of optical links, i.e., for so-called optical channel equalization (another name for impairment mitigation). Very recently, however, a different paradigm for channel equalization has started to emerge, driven by breakthroughs in artificial intelligence in general and machine learning in particular. The main contributing factor has been the emergence of efficient algorithms and hardware for training deep artificial neural networks.

In a recent work (JSTQE 28(4), article number 7600223, 2022), Pedro and co-authors underline that, despite the recognized advantages and benefits of using neural networks in optical transmission equalization, there are still many challenges, hidden pitfalls, and dilemmas that can seriously hinder the success of a neural network in performing the desired task. The authors provide an overview of typical misunderstandings and misinterpretations that occur when applying neural network-based methods to channel equalization in coherent optical communications, and they present recommendations and direct solutions to the aforementioned difficulties. These results can foster new concepts and techniques in communications, allowing researchers to avoid known pitfalls in neural network-based equalizer design, so that future investigators can engineer more efficient devices. Because the questions raised in this work concern the design of machine learning solutions for signal processing in general, they are expected to be of interest not only to optical communication experts but also to the broader scientific community, including computer scientists, physicists, applied mathematicians, and others.
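To make the notion of a neural network-based channel equalizer more concrete, the sketch below trains a small feed-forward network that maps a sliding window of received complex symbols to an estimate of the central transmitted symbol. It is an illustrative toy example only, not the method discussed in the paper: the QPSK source, the artificial memory-plus-nonlinearity channel, the window length TAPS, and the layer sizes are all arbitrary assumptions made to keep the snippet self-contained and runnable (here in PyTorch).

# Minimal, illustrative sketch of a feed-forward neural-network equalizer.
# All parameters and the toy channel below are assumptions for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

TAPS = 11          # assumed window of neighbouring symbols (2*5 + 1)
N_SYMBOLS = 4096   # size of the synthetic training set

# Toy QPSK data passed through an artificial memory + cubic nonlinearity;
# this stands in for the (far more complex) fiber channel.
tx = (torch.randint(0, 2, (N_SYMBOLS, 2)).float() * 2 - 1) / 2 ** 0.5
tx_c = torch.complex(tx[:, 0], tx[:, 1])
rx_c = tx_c + 0.3 * torch.roll(tx_c, 1)                  # toy inter-symbol interference
rx_c = rx_c + 0.1j * rx_c * rx_c.abs() ** 2              # toy Kerr-like nonlinearity
rx_c = rx_c + 0.05 * (torch.randn(N_SYMBOLS) + 1j * torch.randn(N_SYMBOLS))  # noise

# Build (window of received symbols) -> (central transmitted symbol) pairs.
half = TAPS // 2
windows = torch.stack([rx_c[i - half:i + half + 1] for i in range(half, N_SYMBOLS - half)])
features = torch.cat([windows.real, windows.imag], dim=1)   # shape: (N, 2*TAPS)
targets = tx[half:N_SYMBOLS - half]                         # shape: (N, 2), i.e. I/Q

# Small fully connected network: 2*TAPS real inputs -> 2 real outputs (I/Q).
model = nn.Sequential(
    nn.Linear(2 * TAPS, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)   # regress the central symbol
    loss.backward()
    optimizer.step()

print(f"final training MSE: {loss.item():.4f}")

In a realistic setting, the toy channel above would be replaced by measured or simulated fiber transmission data, which is precisely the stage at which the design pitfalls and misinterpretations discussed by the authors come into play.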