Over the past 40 years, neurobiology and computational neuroscience have shown that a deeper understanding of visual processing in humans and non-human primates can lead to important advances in computational theories and systems of perception. One of the main difficulties in designing automatic vision systems is developing a mechanism that can recognize, or simply find, an object with the ease of the primate visual system, despite all the variations that may occur in a natural scene. In primates, the area of the brain dedicated to analyzing visual information is the visual cortex, which performs a wide variety of complex tasks by means of simple operations. These seemingly simple operations are applied across several layers of neurons organized into a hierarchy, with the layers representing increasingly complex and abstract intermediate processing stages.

In this Research Topic we propose to bring together current efforts in neurophysiology and computer vision in order 1) to understand how the visual cortex encodes an object, from a starting point where neurons respond to lines, bars, or edges, up to a representation at the top of the hierarchy that is invariant to illumination, size, location, viewpoint, and rotation, and robust to occlusion and clutter, and 2) to explore how the design of automatic vision systems can benefit from that knowledge to come closer to human accuracy, efficiency, and robustness to variation.
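To make the hierarchical motif described above concrete, the following is a minimal NumPy sketch of alternating "simple" (oriented filtering) and "complex" (local max-pooling) stages, loosely in the spirit of Hubel-and-Wiesel-inspired models such as HMAX. The filters, layer sizes, and pooling parameters here are illustrative assumptions, not values from any published model.

# Minimal sketch of a simple/complex cell hierarchy.
# All numbers and filters below are illustrative assumptions.
import numpy as np

def simple_layer(image, filters):
    """'Simple cells': convolve the input with small oriented filters,
    producing position-specific responses."""
    h, w = image.shape
    fh, fw = filters[0].shape
    out = np.zeros((len(filters), h - fh + 1, w - fw + 1))
    for k, f in enumerate(filters):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(image[i:i + fh, j:j + fw] * f)
    return np.maximum(out, 0)  # rectification: keep positive responses

def complex_layer(responses, pool=2):
    """'Complex cells': take the max over local neighborhoods,
    making responses tolerant to small shifts of the stimulus."""
    k, h, w = responses.shape
    out = responses[:, :h - h % pool, :w - w % pool]
    out = out.reshape(k, h // pool, pool, w // pool, pool)
    return out.max(axis=(2, 4))

# First 'simple' stage: crude oriented edge detectors (assumed for illustration).
filters = [
    np.array([[1., -1.], [1., -1.]]),   # vertical edge
    np.array([[1., 1.], [-1., -1.]]),   # horizontal edge
]

image = np.random.rand(16, 16)        # stand-in for a natural image patch
s1 = simple_layer(image, filters)     # local, position-specific edge responses
c1 = complex_layer(s1)                # position-tolerant pooled responses
print(s1.shape, c1.shape)             # (2, 15, 15) -> (2, 7, 7)

Stacking further simple/complex pairs on top of c1 repeats the same motif, yielding the increasingly abstract and increasingly shift-tolerant representations the paragraph describes; modern convolutional networks are built around essentially the same alternation of filtering and pooling.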