The visual system is the part of the central nervous system which gives organisms the ability to process visual detail, as well as enabling the formation of several non-image-forming photoresponse functions. It detects and interprets information from visible light to build a representation of the surrounding environment. The visual system carries out a number of complex tasks, including the reception of light and the formation of monocular representations; the buildup of a binocular perception from a pair of two-dimensional projections; the identification and categorization of visual objects; assessing distances to and between objects; and guiding body movements in relation to the objects seen. The psychological process of interpreting visual information is known as visual perception, a lack of which is called blindness. Non-image-forming visual functions, independent of visual perception, include the pupillary light reflex (PLR) and circadian photoentrainment.
This article mostly describes the visual system of mammals, humans in particular, although other "higher" animals have similar visual systems (see bird vision, vision in fish, mollusc eye, and reptile vision).
System overview
Mechanical
Together, the cornea and lens refract light to form a small inverted image on the retina. The retina transduces this image into electrical pulses using rods and cones. The optic nerve then carries these pulses through the optic canal. Upon reaching the optic chiasm, the nerve fibers from the nasal half of each retina decussate (cross to the opposite side of the brain). The fibers then branch and terminate in three places.
Neural
Most fibers end in the lateral geniculate nucleus (LGN). Before the LGN forwards the pulses to the primary visual cortex (V1), it gauges the range of objects and tags every major object with a velocity tag. These tags predict object movement.
The LGN also sends some fibers to V2 and V3.
V1 performs edge detection to understand spatial organization (initially, about 40 milliseconds in, focusing on even small spatial and color changes; then, about 100 milliseconds in, upon receiving the translated LGN, V2, and V3 information, it also begins focusing on global organization).
V2 both forwards pulses to V1 (directly and via the pulvinar) and receives them from it. The pulvinar is responsible for saccades and visual attention. V2 serves much the same function as V1; however, it also handles illusory contours, determining depth by comparing left and right pulses (2D images), and distinguishing foreground from background. V2 connects to V1 through V5.
V3 helps process 'global motion' (direction and speed) of objects. V3 connects to V1 (weak), V2, and the inferior temporal cortex.
V4 recognizes simple shapes and receives input from V1 (strong), V2, V3, the LGN, and the pulvinar. V5's outputs include V4 and its surrounding area, as well as the eye-movement motor cortices (the frontal eye field and the lateral intraparietal area).
V5's functionality is similar to that of the other visual areas; however, it integrates local object motion into global motion at a complex level. V6 works in conjunction with V5 on motion analysis. V5 analyzes self-motion, whereas V6 analyzes the motion of objects relative to the background. V6's primary input is V1, with V5 additions. V6 houses the topographical map for vision. V6 outputs to the region directly around it (V6A). V6A has direct connections to arm-moving cortices, including the premotor cortex.
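Restated compactly, the connectivity described in the paragraphs above looks roughly as follows. This is an illustrative simplification of the text only; real cortical connectivity is denser and largely reciprocal, and the names are simply the areas already mentioned above.

```python
# Connections among early visual areas, as summarized in the overview above
# (simplified and not exhaustive; most of these connections are reciprocal).
SENDS_TO = {
    "LGN": ["V1", "V2", "V3"],
    "V1":  ["V2", "V3", "V4", "V5", "V6"],   # V4 and V6 receive strong/primary V1 input
    "V2":  ["V1", "V3", "V4", "V5"],         # "V2 connects to V1 through V5"
    "V3":  ["V1", "V2", "inferior temporal cortex"],
    "V5":  ["V4", "V6", "frontal eye field", "lateral intraparietal area"],
    "V6":  ["V6A"],
    "V6A": ["premotor cortex"],
}

for source, targets in SENDS_TO.items():
    print(f"{source} -> " + ", ".join(targets))
```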
The inferior temporal gyrus recognizes complex shapes, objects, and faces or, in conjunction with the hippocampus, creates new memories. The pretectal area consists of seven unique nuclei. The anterior, posterior, and medial pretectal nuclei inhibit pain (indirectly), aid in REM sleep, and aid the accommodation reflex, respectively. The Edinger-Westphal nucleus mediates pupillary constriction and, since it provides parasympathetic fibers, aids in convergence of the eyes and lens adjustment. The nuclei of the optic tract are involved in smooth pursuit eye movements and the accommodation reflex, as well as REM sleep.
The suprachiasmatic nucleus is the region of the hypothalamus that halts production of melatonin (indirectly) at first light.
Structure
- The eye, especially the retina
- The optic nerve
- The optic chiasma
- The optic tract
- The lateral geniculate body
- The optic radiation
- The visual cortex
- The visual association cortex
These are divided into anterior and posterior pathways. The anterior visual pathway refers to structures involved in vision before the lateral geniculate nucleus. The posterior visual pathway refers to structures after this point.
Eye
Light entering the eye is refracted as it passes through the cornea. It then passes through the pupil (controlled by the iris) and is further refracted by the lens. The cornea and lens act together as a compound lens to project an inverted image onto the retina.
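As a rough worked example (a simplified "reduced eye" model with approximate textbook values, not figures from this article), the cornea contributes about 43 diopters and the relaxed lens about 17 diopters, giving a combined power on the order of 60 D, which places the image of a distant object roughly the eye's axial length behind the cornea:

```latex
% Reduced-eye sketch; the numbers are approximate and for illustration only.
P_{\text{eye}} \approx P_{\text{cornea}} + P_{\text{lens}} \approx 43\,\mathrm{D} + 17\,\mathrm{D} = 60\,\mathrm{D}
% For a distant object, the image forms at the posterior focal distance,
% using the refractive index of the eye's media (n' \approx 1.336):
d_{\text{image}} \approx \frac{n'}{P_{\text{eye}}} \approx \frac{1.336}{60\,\mathrm{D}} \approx 22\,\mathrm{mm}
```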
Retina
The retina consists of a large number of photoreceptor cells which contain particular protein molecules called opsins. In humans, two types of opsins are involved in conscious vision: rod opsins and cone opsins. (A third type, melanopsin, found in some of the retinal ganglion cells (RGCs) and part of the body clock mechanism, is probably not involved in conscious vision, as these RGCs do not project to the lateral geniculate nucleus but to the pretectal olivary nucleus.) An opsin absorbs a photon (a particle of light) and transmits a signal to the cell through a signal transduction pathway, resulting in hyperpolarization of the photoreceptor.
Rods and cones differ in function. Rods are found primarily in the periphery of the retina and are used to see at low levels of light. Cones are found primarily in the center (or fovea) of the retina. There are three types of cones that differ in the wavelengths of light they absorb; they are usually called short or blue, middle or green, and long or red. Cones are used primarily to distinguish color and other features of the visual world at normal levels of light.
In the retina, the photoreceptors synapse directly onto bipolar cells, which in turn synapse onto ganglion cells of the innermost layer, which then conduct action potentials to the brain. A significant amount of visual processing arises from the patterns of communication between neurons in the retina. About 130 million photoreceptors absorb light, yet roughly 1.2 million axons of ganglion cells transmit information from the retina to the brain. The processing in the retina includes the formation of center-surround receptive fields of bipolar and ganglion cells in the retina, as well as convergence and divergence from photoreceptor to bipolar cell. In addition, other neurons in the retina, particularly horizontal and amacrine cells, transmit information laterally (from a neuron in one layer to an adjacent neuron in the same layer), resulting in more complex receptive fields that can be either indifferent to color and sensitive to motion or sensitive to color and indifferent to motion.
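Center-surround receptive fields of this kind are commonly modeled as a difference of Gaussians. The sketch below is an illustrative model only (the kernel size, widths, and stimuli are arbitrary choices, not values from the article); it shows the characteristic behavior that uniform illumination produces almost no response while a small centered spot does:

```python
import numpy as np

def difference_of_gaussians(size=21, sigma_center=1.0, sigma_surround=3.0):
    """Build an ON-center receptive field: excitatory center minus inhibitory surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround

def response(image, kernel):
    """Linear response: dot product of the receptive field with an image patch."""
    return float(np.sum(image * kernel))

kernel = difference_of_gaussians()
uniform = np.ones((21, 21))                       # uniform illumination
spot = np.zeros((21, 21)); spot[10, 10] = 1.0     # small bright spot on the center

print("uniform field response:", round(response(uniform, kernel), 3))  # close to 0
print("centered spot response:", round(response(spot, kernel), 3))     # clearly positive
```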
Mechanism of generating visual signals: The retina adapts to changes in light through the use of the rods. In the dark, the chromophore retinal has a bent shape called cis-retinal (referring to a cis conformation in one of the double bonds). When light interacts with the retinal, it changes conformation to a straight form called trans-retinal and breaks away from the opsin. This is called bleaching because the purified rhodopsin changes from violet to colorless in the light. At baseline in the dark, rhodopsin absorbs no light and the photoreceptor continuously releases glutamate, which inhibits the bipolar cell. This inhibits the release of neurotransmitters from the bipolar cells to the ganglion cell. When light is present, glutamate secretion ceases, so the bipolar cell is no longer inhibited from releasing neurotransmitters to the ganglion cell, and therefore an image can be detected.
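The sign inversion in this pathway (light reduces glutamate release, which in turn disinhibits the bipolar cell) can be summarized as a few lines of logic. This is only a cartoon of the paragraph above, not a biophysical model:

```python
def on_pathway_signals_brain(light_present: bool) -> bool:
    """Cartoon of the pathway described above: does the bipolar cell end up
    releasing neurotransmitter onto the ganglion cell?"""
    # In the dark the photoreceptor releases glutamate; light shuts this off.
    photoreceptor_releases_glutamate = not light_present
    # Glutamate inhibits the bipolar cell; removing it disinhibits the cell.
    bipolar_inhibited = photoreceptor_releases_glutamate
    return not bipolar_inhibited

print(on_pathway_signals_brain(light_present=False))  # False: dark, no signal forwarded
print(on_pathway_signals_brain(light_present=True))   # True: light, signal forwarded
```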
The final result of all this processing is five different populations of ganglion cells that send visual (image-forming and non-image-forming) information to the brain:
- M cells, with large center-surround receptive fields that are sensitive to depth, indifferent to color, and rapidly adapt to a stimulus;
- P cells, with smaller center-surround receptive fields that are sensitive to color and shape;
- K cells, with very large center-only receptive fields that are sensitive to color and indifferent to shape or depth;
- another population that is intrinsically photosensitive; and
- a final population that is used for eye movements.
A 2006 University of Pennsylvania study calculated the approximate bandwidth of the human retina to be about 8,960 kilobits per second, whereas the guinea pig retina transfers at about 875 kilobits per second.
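Taking these figures at face value, a back-of-the-envelope calculation (illustrative arithmetic only, not part of the cited study) gives the average rate implied per ganglion-cell axon:

```python
# Rough per-axon rate implied by the figures quoted above (illustration only).
retina_rate_kbit_s = 8960      # estimated human retinal output, kilobits per second
ganglion_axons = 1.2e6         # approximate number of ganglion-cell axons (see above)

bits_per_second_per_axon = retina_rate_kbit_s * 1000 / ganglion_axons
print(f"~{bits_per_second_per_axon:.1f} bits per second per axon")  # about 7.5
```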
In 2007, Zaidi and co-researchers on both sides of the Atlantic, studying patients without rods and cones, discovered that the novel photoreceptive ganglion cells in humans also have a role in conscious and unconscious visual perception. The peak spectral sensitivity was 481 nm. This shows that there are two pathways for sight in the retina: one based on classic photoreceptors (rods and cones) and the other, newly discovered, based on photoreceptive ganglion cells, which act as rudimentary visual brightness detectors.
Photochemistry
The functioning of a camera is often compared with the workings of the eye, mostly because both focus light from external objects in the field of view onto a light-sensitive medium. In the case of the camera, this medium is film or an electronic sensor; in the case of the eye, it is an array of visual receptors. Within this simple geometrical similarity, based on the laws of optics, the eye functions as a transducer, as does a CCD camera.
In the visual system, retinal, technically called retinene₁ or "retinaldehyde", is a light-sensitive molecule found in the rods and cones of the retina. Retinal is the fundamental structure involved in the transduction of light into visual signals, i.e. nerve impulses in the ocular system of the central nervous system. In the presence of light, the retinal molecule changes configuration and as a result a nerve impulse is generated.
Optic nerve
Information about the image captured by the eye is transmitted to the brain along the optic nerve. Different populations of ganglion cells in the retina send information to the brain through the optic nerve. About 90% of the axons in the optic nerve go to the lateral geniculate nucleus in the thalamus. These axons originate from the M, P, and K ganglion cells in the retina (see above). This parallel processing is important for reconstructing the visual world; each type of information goes through a different route to perception. Another population sends information to the superior colliculus in the midbrain, which assists in controlling eye movements (saccades) as well as other motor responses.
A final population of photosensitive ganglion cells, containing melanopsin for photosensitivity, sends information via the retinohypothalamic tract (RHT) to the pretectum (pupillary reflex), to several structures involved in the control of circadian rhythms and sleep such as the suprachiasmatic nucleus (SCN, the biological clock), and to the ventrolateral preoptic nucleus (VLPO, a region involved in sleep regulation). A recently discovered role for photoreceptive ganglion cells is that they mediate conscious and unconscious vision, acting as rudimentary visual brightness detectors, as shown in rodless, coneless eyes.
Optic chiasm
The optic nerves from both eyes meet and cross at the optic chiasm, at the base of the hypothalamus of the brain. At this point the information coming from both eyes is combined and then splits according to the visual field. The corresponding halves of the field of view (right and left) are sent to the left and right halves of the brain, respectively, to be processed. That is, the right side of primary visual cortex deals with the left half of the field of view from both eyes, and similarly for the left brain. A small region in the center of the field of view is processed redundantly by both halves of the brain.
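The routing described above can be restated as a simple rule: fibers from the nasal half of each retina cross at the chiasm while temporal fibers do not, and because the optics invert the image, each visual hemifield falls on the nasal retina of one eye and the temporal retina of the other. The sketch below is only an illustration of that rule (the function name and string encoding are invented for this example):

```python
def hemisphere_for(eye: str, visual_hemifield: str) -> str:
    """Which hemisphere processes a given half of the visual field seen by a given eye.
    The optics invert the image, so a hemifield on the same side as the eye lands on
    that eye's nasal retina; only nasal fibers cross at the optic chiasm."""
    retinal_half = "nasal" if visual_hemifield == eye else "temporal"
    opposite = {"left": "right", "right": "left"}
    return opposite[eye] if retinal_half == "nasal" else eye

for eye in ("left", "right"):
    for field in ("left", "right"):
        print(f"{eye} eye, {field} visual field -> {hemisphere_for(eye, field)} hemisphere")
```

Both eyes' views of the right visual field end up in the left hemisphere, and vice versa, matching the description above.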
Optic tract
Information from the right visual field (now on the left side of the brain) travels in the left optic tract. Information from the left visual field travels in the right optic tract. Each optic tract terminates in the lateral geniculate nucleus (LGN) in the thalamus.
Lateral geniculate nucleus
The lateral geniculate nucleus (LGN) is a sensory relay nucleus in the thalamus of the brain. In humans and other primates from the catarrhines onward (including Old World monkeys and apes), the LGN consists of six layers. Layers 1, 4, and 6 correspond to information from the contralateral (crossed) fibers of the nasal retina (temporal visual field); layers 2, 3, and 5 correspond to information from the ipsilateral (uncrossed) fibers of the temporal retina (nasal visual field). Layer one (1) contains M cells, which correspond to the M (magnocellular) cells of the optic nerve of the opposite eye and are concerned with depth or motion. Layers four and six (4 and 6) of the LGN also connect to the opposite eye, but to the P cells (color and edges) of the optic nerve. By contrast, layers two, three, and five (2, 3, and 5) of the LGN connect to the M cells and P (parvocellular) cells of the optic nerve from the eye on the same side of the brain as the respective LGN. In between the six layers are smaller cells that receive information from the K cells (color) in the retina.
Spread out, the six layers of the LGN have the area of a credit card and about three times its thickness. The LGN is rolled up into two ellipsoids about the size and shape of two small birds' eggs. The neurons of the LGN then relay the visual image to the primary visual cortex (V1), which is located at the back of the brain (posterior end) in the occipital lobe, in and close to the calcarine sulcus. The LGN is not just a simple relay station; it is also a center for processing, receiving reciprocal input from cortical and subcortical layers and reciprocal innervation from the visual cortex.
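The layer-by-layer assignment just described can be restated compactly as data. This is an illustrative summary of the standard primate arrangement (layers 1 and 2 magnocellular, 3 through 6 parvocellular, K cells in the intercalated zones), not code from any source:

```python
# Which eye (relative to the LGN's own side) and which cell class each LGN layer carries,
# as described in the text above. Koniocellular (K) cells lie between the layers.
LGN_LAYERS = {
    1: ("contralateral eye", "M (magnocellular)"),
    2: ("ipsilateral eye",   "M (magnocellular)"),
    3: ("ipsilateral eye",   "P (parvocellular)"),
    4: ("contralateral eye", "P (parvocellular)"),
    5: ("ipsilateral eye",   "P (parvocellular)"),
    6: ("contralateral eye", "P (parvocellular)"),
}

for layer, (eye, cell_class) in LGN_LAYERS.items():
    print(f"layer {layer}: {eye}, {cell_class}")
```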
Optic radiation
The optic radiations, one on each side of the brain, carry information from the thalamic lateral geniculate nucleus to layer 4 of the visual cortex. The P layer neurons of the LGN relay to V1 layer 4Cβ. The M layer neurons relay to V1 layer 4Cα. The K layer neurons in the LGN relay to the cytochrome oxidase-rich patches called blobs in layers 2 and 3 of V1.
There is a direct correspondence from an angular position in the field of view of the eye, all the way through the optic tract to a nerve position in V1. At this juncture in V1, the image path ceases to be straightforward; there is more cross-connection within the visual cortex.
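This point-by-point correspondence (retinotopy) is often approximated with a complex-logarithm mapping; the sketch below uses that standard approximation (it is not taken from this article, and the constants a and k are arbitrary illustrative values). It shows the cortical magnification of the central field: equal steps of eccentricity occupy less and less cortex.

```python
import numpy as np

def v1_position(eccentricity_deg, polar_angle_deg, a=0.6, k=15.0):
    """Complex-log approximation of the retinotopic map: a visual-field location
    (eccentricity, polar angle) -> a schematic cortical position in millimetres.
    The parameters a and k are illustrative, not measured values."""
    z = eccentricity_deg * np.exp(1j * np.deg2rad(polar_angle_deg))
    w = k * np.log(z + a)
    return w.real, w.imag

for ecc in (0.5, 1, 2, 4, 8, 16):
    x, _ = v1_position(ecc, 0.0)
    print(f"{ecc:>4} deg eccentricity -> {x:6.1f} mm along the cortical map")
```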
Visual cortex
The visual cortex is the largest system in the human brain and is responsible for processing the visual image. It lies at the rear of the brain, above the cerebellum. The region that receives information directly from the LGN is called the primary visual cortex (also called V1 or the striate cortex). Visual information then flows through a cortical hierarchy. These areas include V2, V3, V4, and area V5/MT (the exact connectivity depends on the species of the animal). These secondary visual areas (collectively termed the extrastriate visual cortex) process a wide variety of visual primitives. Neurons in V1 and V2 respond selectively to bars of specific orientations, or to combinations of bars. These are believed to support edge and corner detection. Similarly, basic information about color and motion is processed here.
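Orientation-selective responses of the kind described for V1 are commonly modeled with Gabor filters (a standard textbook model rather than anything specified in this article; the sizes and wavelength below are arbitrary). The sketch shows the key property: a strong response to a bar at the preferred orientation and a weak response to the orthogonal one.

```python
import numpy as np

def gabor(size=31, wavelength=8.0, orientation_deg=0.0, sigma=4.0):
    """Gabor filter (Gaussian-windowed sinusoid), a common model of a V1 simple cell.
    orientation_deg is the orientation of the bar the filter prefers."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    theta = np.deg2rad(orientation_deg)
    perp = -xx * np.sin(theta) + yy * np.cos(theta)   # coordinate across the preferred bar
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * perp / wavelength)

def bar(size=31, orientation_deg=0.0, half_width=2):
    """A bright bar through the image center with its long axis at orientation_deg."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    theta = np.deg2rad(orientation_deg)
    dist = np.abs(-xx * np.sin(theta) + yy * np.cos(theta))
    return (dist <= half_width).astype(float)

cell = gabor(orientation_deg=0.0)
print("preferred bar: ", round(float(np.sum(cell * bar(orientation_deg=0.0))), 2))
print("orthogonal bar:", round(float(np.sum(cell * bar(orientation_deg=90.0))), 2))
```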
Heider et al. (2002) found that neurons in V1, V2, and V3 can detect stereoscopic illusory contours; they found that stereoscopic stimuli subtending up to 8° can activate these neurons.
Visual association cortex
As visual information passes forward through the visual hierarchy, the complexity of the neural representations increases. Whereas a V1 neuron may respond selectively to a line segment of a particular orientation in a particular retinotopic location, neurons in the lateral occipital complex respond selectively to complete objects (e.g., a figure drawing), and neurons in the visual association cortex may respond selectively to human faces or to a particular object.
Along with this increasing complexity of neural representation may come a level of specialization of processing into two distinct pathways: the dorsal stream and the ventral stream (the two-streams hypothesis, first proposed by Ungerleider and Mishkin in 1982). The dorsal stream, commonly referred to as the "where" stream, is involved in spatial attention (covert and overt) and communicates with regions that control eye movements and hand movements. More recently, this area has been called the "how" stream to emphasize its role in guiding behaviors to spatial locations. The ventral stream, commonly referred to as the "what" stream, is involved in the recognition, identification, and categorization of visual stimuli.
However, there is still much debate about the degree of specialization within these two pathways, since they are in fact heavily interconnected.
Horace Barlow proposed the efficient coding hypothesis in 1961 as a theoretical model of sensory coding in the brain.
The default mode network is a network of brain regions that are active when an individual is awake and at rest. The visual system's default mode can be monitored during resting-state fMRI: Fox et al. (2005) found that "the human brain is intrinsically organized into dynamic, anticorrelated functional networks", in which the visual system switches from the resting state to attention.
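Anticorrelated networks of this kind are typically identified by correlating fMRI time courses between regions. The toy sketch below uses synthetic signals and is illustrative only (it is not the method of Fox et al.): a region in the same network correlates positively with a seed time course, while a region in an opposed network correlates negatively.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 300, 600)                 # a few minutes of "scan" time, arbitrary units
slow = np.sin(2 * np.pi * 0.01 * t)          # a slow shared fluctuation (~0.01 Hz)

visual_seed    = slow + 0.3 * rng.standard_normal(t.size)    # seed region time course
same_network   = slow + 0.3 * rng.standard_normal(t.size)    # region in the same network
anticorrelated = -slow + 0.3 * rng.standard_normal(t.size)   # region in an opposed network

print("seed vs same network:  ", round(np.corrcoef(visual_seed, same_network)[0, 1], 2))
print("seed vs anticorrelated:", round(np.corrcoef(visual_seed, anticorrelated)[0, 1], 2))
```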
In the parietal lobe, the lateral and ventral intraparietal cortex are involved in visual attention and saccadic eye movements. These regions lie in the intraparietal sulcus.
Development
Infancy
Newborn infants have limited color perception. One study found that 74% of newborns can distinguish red, 36% green, 25% yellow, and 14% blue. After one month, performance "improved somewhat." Infants' eyes do not yet have the ability to accommodate. Pediatricians are able to perform non-verbal testing to assess the visual acuity of a newborn, detect nearsightedness and astigmatism, and evaluate eye teaming and alignment. Visual acuity improves from about 20/400 at birth to approximately 20/25 at 6 months of age. All of this happens because the nerve cells in the retina and brain that control vision are not yet fully developed.
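For reference, the acuity figures above can be put on the logMAR scale used clinically; this is a standard conversion (log10 of the Snellen denominator over the numerator), not a calculation from the article:

```python
import math

def snellen_to_logmar(numerator, denominator):
    """Convert a Snellen fraction (e.g. 20/400) to logMAR, the base-10 logarithm
    of the minimum angle of resolution in arcminutes."""
    return math.log10(denominator / numerator)

print("20/400 (birth):   logMAR", round(snellen_to_logmar(20, 400), 2))  # about 1.30
print("20/25 (6 months): logMAR", round(snellen_to_logmar(20, 25), 2))   # about 0.10
```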
Childhood and adolescence
Depth perception, focus, tracking, and other aspects of vision continue to develop throughout early and middle childhood. Recent studies in the United States and Australia provide some evidence that the amount of time school-aged children spend outdoors, in natural light, may have some impact on whether they develop myopia. The condition tends to get somewhat worse through childhood and adolescence, but stabilizes in adulthood. More prominent myopia (nearsightedness) and astigmatism are thought to be inherited. Children with this condition may need to wear glasses.
Adulthood
Eyesight is often one of the first senses affected by aging. A number of changes occur with aging:
- Over time the lens becomes yellowed and may eventually become brown, a condition known as brunescence or brunescent cataract. Although many factors contribute to yellowing, lifetime exposure to ultraviolet light and aging are two main causes.
- The lens becomes less flexible, diminishing the ability to accommodate (presbyopia).
- While a healthy adult pupil typically ranges from 2 to 8 mm in diameter, the range narrows with age, trending towards a moderately small diameter (see the short calculation after this list).
- On average tear production declines with age. However, there are a number of age-related conditions that can cause excessive tearing.
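As a rough illustration of what the 2-8 mm range above implies (simple geometry, not a figure from the article): the light admitted scales with pupil area, i.e. with the square of the diameter.

```python
import math

def pupil_area_mm2(diameter_mm):
    """Area of a circular pupil of the given diameter, in square millimetres."""
    return math.pi * (diameter_mm / 2) ** 2

wide, narrow = 8.0, 2.0   # extremes of the healthy adult range quoted above
ratio = pupil_area_mm2(wide) / pupil_area_mm2(narrow)
print(f"a fully dilated pupil admits about {ratio:.0f}x more light than a constricted one")  # ~16x
```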
Other functions
Balance
Along with proprioception and vestibular function, the visual system plays an important role in the ability of an individual to control balance and maintain an upright posture. When these three sources of information are isolated and balance is tested, it has been found that vision is the most significant contributor to balance, playing a bigger role than either of the other two intrinsic mechanisms. The clarity with which an individual can see the environment, the size of the visual field, the individual's susceptibility to light and glare, and poor depth perception all play important roles in providing a feedback loop to the brain on the body's movement through the environment. Anything that affects any of these variables can have a negative effect on balance and on maintaining posture. This effect has been seen in research involving elderly subjects compared to young controls, in glaucoma patients compared to age-matched controls, in cataract patients before and after surgery, and even with something as simple as wearing safety goggles. Monocular vision (one-eyed vision) has also been shown to negatively impact balance, as seen in the previously referenced cataract and glaucoma studies, as well as in healthy children and adults.
According to Pollock et al. (2010), stroke is the main cause of specific visual impairment, most frequently visual field loss (homonymous hemianopia, a visual field defect). Nevertheless, evidence for the efficacy of cost-effective interventions aimed at these visual field defects is still inconsistent.
Clinical significance
Cataracts
This is clouding of the lens. Although it may be accompanied by yellowing, clouding and yellowing can occur separately.
Presbyopia
The lens becomes inflexible (known as a decrease in accommodation), tending to remain fixed at long-distance focus.
Glaucoma
Glaucoma causes a loss of vision that begins at the edge of the field of vision and progresses inward. It may result in tunnel vision. It typically involves the outer layers of the optic nerve, sometimes as a result of a buildup of fluid and excessive pressure in the eye.
Other animals
Different species are able to see different parts of the light spectrum; for example, bees can see into the ultraviolet, while pit vipers can accurately target prey with their pit organs, which are sensitive to infrared radiation. The eye of a swordfish can generate heat to better cope with detecting its prey at depths of 2000 feet. Certain one-celled micro-organisms, the warnowiid dinoflagellates, have eye-like ocelloids, with structures analogous to the lens and retina of the multicellular eye. The armored shell of the chiton Acanthopleura granulata is also covered with hundreds of aragonite crystalline eyes, termed ocelli, which are capable of forming images.
History
In the second half of the 19th century, several key ideas about the nervous system were established, such as the neuron doctrine and brain localization, which hold, respectively, that the neuron is the basic unit of the nervous system and that functions are localized to specific regions of the brain. These would become tenets of the fledgling neuroscience and would support further understanding of the visual system.
The notion that the cerebral cortex is divided into functionally distinct cortices, now known to be responsible for capacities such as touch (somatosensory cortex), movement (motor cortex), and vision (visual cortex), was first proposed by Franz Joseph Gall in 1810. Evidence for functionally distinct areas of the brain (and, specifically, of the cerebral cortex) mounted throughout the 19th century with discoveries by Paul Broca of the language center (1861), and Gustav Fritsch and Eduard Hitzig of the motor cortex (1871). Based on selective damage to parts of the brain and the functional effects this would produce (lesion studies), David Ferrier proposed that visual function was localized to the parietal lobe of the brain in 1876. In 1881, Hermann Munk more accurately located vision in the occipital lobe, where the primary visual cortex is now known to be.