Abstract:
Dolphinlike sonar signals are used to obtain synthetic aperture sonar (SAS) images of complicated, three-dimensional (3-D) objects from transmitter-receiver positions above the plane of effective target rotation. This geometry allows 3-D information to be incorporated into the SAS image. The 3-D information can be represented as a set of spatially registered 2-D (x,y) pixel maps, where each map corresponds to a different focal point in elevation (z). Similar spatially registered environmental maps are used for sensor fusion in the mammalian superior colliculus and in the optic tectum of reptiles and fish. The constant-z maps can be combined to improve the 2-D representation of a 3-D object. For example, the image at a given (x,y) position can be focused by selecting pixels from the constant-z map that maximizes a local focus criterion at that position. Alternatively, a surface-recognition criterion yields the z coordinate of a 3-D object's surface at a given (x,y) point. The resulting 3-D representation of the object's surface corresponds to a visionlike representation. These representations can be combined with other spatially registered maps from the visual domain for bionic sensor fusion.
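The per-pixel combination of constant-z maps described above can be sketched as a focus-stacking operation. The following is a minimal illustrative sketch, not the paper's implementation: the focus criterion used here (local energy of a discrete Laplacian, a common sharpness measure) is an assumed stand-in for whatever criterion the authors employ, and the synthetic input stack is invented for demonstration.

```python
import numpy as np

def focus_measure(img, win=3):
    """Illustrative local focus criterion: windowed energy of a
    discrete Laplacian (high where the image is sharply focused)."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    sq = lap ** 2
    # Sum the squared Laplacian over a win x win neighborhood.
    out = np.zeros_like(sq)
    r = win // 2
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            out += np.roll(np.roll(sq, dx, axis=0), dy, axis=1)
    return out

def fuse_stack(stack):
    """stack: array of shape (nz, nx, ny) holding the spatially
    registered constant-z pixel maps.

    Returns a fused 2-D image (each pixel taken from the map that
    maximizes the local focus criterion there) and the per-pixel z
    index, which serves as the surface's z coordinate at (x, y)."""
    scores = np.stack([focus_measure(s) for s in stack])
    z_map = np.argmax(scores, axis=0)
    fused = np.take_along_axis(stack, z_map[None], axis=0)[0]
    return fused, z_map

# Synthetic demonstration stack: each slice is "in focus" (shows
# high-frequency checkerboard detail) over half the scene and
# defocused (flat) over the other half.
nx = ny = 16
cb = (np.indices((nx, ny)).sum(axis=0) % 2).astype(float)
s0 = np.where(np.arange(ny)[None, :] < 8, cb, 0.5)
s1 = np.where(np.arange(ny)[None, :] >= 8, cb, 0.5)
fused, z_map = fuse_stack(np.stack([s0, s1]))
```

In this toy case, the recovered z_map assigns interior pixels of the left half to slice 0 and of the right half to slice 1, mimicking how the surface-recognition criterion would report a different elevation z for different (x,y) regions of a 3-D object.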