Sensornye Sistemy (Sensory Systems), 2021, Vol. 35, No. 3, pp. 179–198

An overview of the visual acuity assessment. 1. Primary measures and various notations

G. I. Rozhkova 1*, M. A. Gracheva 1, G. V. Paramei 2

1 Institute for Information Transmission Problems RAS
127051 Moscow, B. Karetny, 19/1, Russia

2 Department of Psychology, Liverpool Hope University, Hope Park
L16 9JD Liverpool, UK

* E-mail: gir@iitp.ru

Received 17.03.2021
Revised 8.04.2021
Accepted for publication 25.04.2021


Abstract

The paper presents a brief overview and analysis of various approaches to the assessment of visual acuity (VA). It aims to explicate the reasons for the disparity of expert opinions on the methods of assessing VA by reflecting on the advantages and shortcomings of individual methods and the corresponding VA notations. Over time, the methods and procedures of VA assessment have grown in number, diversity and complexity. In parallel, the number of professionals assessing VA for different purposes has also been increasing. This state of affairs has resulted in certain differences between views on the interpretation of VA measurement outcomes and in the introduction of different VA notations. Currently, however, reaching a consensus among experts is becoming crucial, since numerous research projects require cooperation of professionals from different areas and also involve international collaboration. In search of common ground for such a consensus and for unified terminology and notation, it is reasonable to appraise the crux of the VA assessment problem as well as the rationales for the great variety of current viewpoints on solving it in practice. An overview of contemporary approaches to VA assessment indicates that there exists a single primary measure of VA that can be obtained by direct measurement and expressed in the base unit of spatial metrics – the minimum angle of resolution (MAR, αm) – defined as the minimum angle at which two points are just perceived as separate. One further measure can be gauged directly in place of MAR – the critical spatial frequency (Fc) – when gratings of varying spatial frequency are employed to estimate the maximum (critical) spatial frequency above which the periods of a grating can no longer be distinguished. It is reasonable to consider all other VA measures as secondary, or derived, since they are calculated as functions of αm. The introduction of various secondary measures, such as the Snellen fraction, decimal units, logMAR, visual efficiency (VE), visual acuity rating (VAR), and others, was driven by the demand for alternative VA notations that are more convenient and comprehensible than αm for practitioners who assess VA in applied areas for various purposes, such as screening, diagnostics, monitoring, rehabilitation, disability determination, population statistics, design of new VA tests, etc. We conclude that, in view of the substantial differences in purposes, requirements and criteria among experts in various areas, the quest for a unique measure of VA that would be optimal in all contexts is probably unresolvable, as is the establishment of an ultimate, “gold standard”, practical method of VA assessment.

Key words: human vision, visual acuity, scales of measurement, MAR, Snellen fraction, decimal units, logMAR, VAR

INTRODUCTION: ISSUES IN ASSESSMENT OF VISUAL ACUITY

In a broad sense, visual acuity (VA) is understood as the ability to detect and recognize small objects and to discern their elements. A colloquial term implying good VA is clear vision. It is hardly possible to ascertain when the necessity of VA assessment was first realized and put into practice. It is known, though, that in ancient Persia more than a thousand years ago there existed ‘the Test’, or ‘the Riddle’, used to gauge warriors’ eyesight by viewing the constellation of Ursa Major (the Great Bear) on a clear night (Bohigian, 2008). ‘The Test’ was to discern a double star in the constellation’s handle: if one was able to see with the naked eye Alcor, the faint companion of Mizar, one passed the Test. Adopted by nomadic Arabs as ‘the Arabic Test’, it was also used in antiquity: in the Roman army, passing this test was required to become an archer.

Over the past centuries, the methods and procedures developed for gauging vision have greatly increased in number and complexity in response to the considerable diversification of human activities that require satisfactory VA and for which its assessment became obligatory. It was a long way from testing the ability of an ancient warrior to discern double stars in the night sky to the modern routine VA assessment by means of special test stimuli, either presented on charts or generated on displays of computerized setups. Over the last six decades, there has been a significant increase in the number of researchers, experts and practitioners who assess VA for different purposes and are accustomed to different methods as well as to different VA notations. The diversity of aims and purposes – and, accordingly, of the methods used – has resulted in a considerable loss of consensus among those performing VA assessment and analysis. Currently, however, such a consensus has become crucial, since many projects involve collaboration of professionals from various national institutions as well as international cooperation.

Calls for precise VA measurement and for standardization of VA assessment have appeared in the literature regularly since the 1950s (e.g., Ogle, 1953; Sloan, 1980; Lovie-Kitchin, 1988; Elliott, Sheridan, 1988; Siderov, Tiu, 1999; Lovie-Kitchin, Brown, 2000; Beck et al., 2003; Rosser et al., 2003; Koskin, 2009; Rozhkova, Malykh, 2017; Rozhkova, 2018). Unexpectedly, the task of coordinating different approaches to VA assessment turned out to be far from simple and, perhaps, not fully resolvable. To find common ground for a consensus and for unification of the terminology, it seems reasonable to begin with an analysis of the crux of the VA assessment task and the rationale behind the great variety of existing views on its implementation in practice.

Regular procedures of VA assessment in an optometrist’s office, by viewing charts with letters of different sizes, are familiar to everyone. However, a universal and generally accepted method for quantitative assessment of VA has not been found so far. Moreover, progress in academic and clinical research makes it less and less likely that such a generic method can be arrived at.

Despite considerable efforts, abundant theoretical and practical problems remain unresolved with regard to the best way of performing VA measurements. Along with the obvious complexity of some optical, physiological and psychological problems related to visual function, one of the most likely reasons for this unfortunate situation is the significant variety of objectives pursued by professionals working in different areas. The issues of VA assessment methods and measures, which we consider below, can be categorized based on the answers to the following key questions:

– Who requires “a good vision”?

– For what purpose does VA need to be assessed?

– What is a proper representation, or notation, of the VA assessment outcome?

– How will the VA data be used in practice?

WHO? In everyday life, “clear vision” is crucial for virtually any everyday human function, such as self-care, orientation in space, successful communication, efficient education, professional performance, competitive sports, etc. It is apparent that proper information about one’s visual ability is required both by the persons themselves and by the people interacting with them – parents, teachers, doctors, professionals, or designers of visually perceived products (books, movies, TV broadcasts, social media), etc. Notably, from a functional viewpoint, different types of visual behavior imply different visual tasks, which vary in the required accuracy and speed. These visual tasks include detection and recognition of individuals and objects, determining their location and properties (size, shape, direction of movement), forecasting incidents and accidents (falls, collisions, fatal errors in recognizing dangerous objects), reading and writing, etc. It is apparent that “good vision” could have different meanings in different instances, and a quantitative characterization of VA may require a variety of tests.

WHAT FOR? The answer to this question depends on the “users” of the outcomes of VA assessment – their purposes and the targeted populations and individuals. For infants and children, monitoring of VA is required to ensure appropriate conditions for normal maturation of the visual system and appropriate vision care in kindergartens and schools, and later in colleges and universities. Outcomes of population-based surveys serve, among other things, for providing optimal lighting conditions at workplaces and places of study. In individuals, monitoring of VA is required for detecting visual impairment, its appropriate diagnosis and effective vision correction, for certification of vision loss, or for ascertaining whether the requirements for safe performance of vision-demanding activities (e.g., driving) are met.

WHAT IS A PROPER REPRESENTATION, OR NOTATION, OF VA? We assume that professionals must clearly understand the VA notation they prefer, and that the notation they use serves their professional objectives well.

HOW WILL THE VA DATA BE USED IN PRACTICE? Information on VA is used for numerous purposes, to name just a few. The management of any institution or company has to take VA measures into account when designing optimal workplace lighting conditions to prevent visual discomfort, visual strain, or asthenopia. In the production of textbooks, the letter font type and size are supposed to be aligned with the population VA data of potential readers of various ages. At schools, teachers are expected to consider individual students’ VA for their optimal seating in the classroom. Specialist committees that certify visual impairment determine the degree of a claimant’s disability and eligibility for the corresponding social benefits. National statistical offices analyze changes in population VA caused by various ecological and/or socioeconomic factors. In each case, the employed tool and the conditions of its administration determine the VA notation. However, to be optimal, the choice of the VA notation should be guided by the aims and objectives of the professionals.

The aim of the present overview is to reflect on the principles and procedures underlying VA assessment within a general framework of information processing in the human visual system. Various factors are considered that affect the outcomes of VA measurement, as well as certain preferences in the choice of measurement tools dictated by everyday needs and by the tasks on the agenda of various professional communities.

The impetus for this paper was a recent ardent debate that raised criticism of certain VA notations. The dispute’s pinnacle was the Editorial “The good (logMAR), the bad (Snellen), and the ugly (BCVA, number of letters read) of visual acuity measurement” in Ophthalmic and Physiological Optics by David B. Elliott (2016), its Editor-in-Chief. Notably, the Editorial compared various currently available charts that imply different ways of obtaining VA outcomes, i.e., it focused on the VA assessment tools while leaving the matter of VA notations aside. The debate also revealed that some critics of certain VA notations committed the fallacy of confusing the following concepts: units of measurement; scales of measurement; specification of the reference levels in the measuring tools; scaling of the primary measurement outcomes, etc.

By approaching the matter of VA assessment from these different conceptual vantage points, we strive to present arguments that one can hardly arrive at a “gold standard” of VA measurement or the “best” VA notation. Moreover, the choice of the most appropriate VA notation may vary depending on the theoretical approach, conditions, and objectives of the VA assessment, namely:

– screening, monitoring, functional correction, survey;

– target population (infants, teenagers, school and university students; adults; healthy observers or individuals with visual impairment);

– specific setting (clinic, field study, or laboratory);

– resources available for academic and clinical projects, field work, epidemiological studies, or routine practice (funds; experts, practitioners and supporting workforce; equipment; time constraints);

– social significance of the obtained VA data (for medical, educational or economic purposes).

An auxiliary aim of the present paper is to provide a tutorial for those at the beginning of their research on visual acuity. The definitions and conceptual clarifications are intended to help embed the issues of VA assessment in a broader theoretical context.

PRACTICAL ISSUES OF VISUAL ACUITY ASSESSMENT WITHIN A FRAMEWORK OF INFORMATION PROCESSING IN THE HUMAN VISUAL SYSTEM

Innumerable studies of vision employing various neurophysiological, psychophysical and behavioral methods have accumulated evidence about the inherent basis of the functioning of the human visual system, namely, that it has multiple specific mechanisms underlying performance in different visual tasks. Moreover, in the functioning of the visual system, visual processing is interrelated with the operation of the oculomotor and accommodation systems that optimize viewing conditions.

Generally, a comprehensive characterization of an individual’s visual abilities would require a thorough and extended investigation employing many tests, resulting in dozens of critical parameters, indices and scores – the multidimensional quantitative foundation of vision diagnostics.

In accordance with the general understanding of VA as the ability to detect and recognize small objects, vision scientists define VA as a measure of the accuracy of spatial analysis. In practice, a great variety of test stimuli (also termed targets, test images, symbols, signs, or optotypes) and different visual tasks are employed for VA assessment, which inevitably gives rise to the problem of converting between the obtained measures.

The other issue is that the different visual stimuli and designs employed in VA tests implicate processing by different modules of the visual system. VA assessment, an unfolding process, depends on many conditions. Impairment of vision, and hence a VA measurement outcome, can be determined by many factors affecting any of the stages of visual processing. The implicated stages are: optical image formation in the eye; transformation of the optical image into a neural retinal image on the eye fundus (more precisely, a set of neural images carrying selected information about different features of the retinal optical image); its transmission to the brain up the visual pathway; subsequent transformation and processing of the neural images within different visual brain areas; formation of a percept; analysis and recognition of the viewed object.

The main stages of visual processing are illustrated in Fig. 1. At the input end, a test stimulus passes through the eye optics and activates the photoreceptor layer of the retina – the two initial processing stages that are common to any manner of assessing VA (except the specific interference method). At the output end, the observer’s responses to the test stimuli can be of two main types: either subjective (a verbal judgment, pointing, pressing a key, etc.; e.g., Bach, 1996, 2007; Radner, 2017), or objective (ERG (Tehrani et al., 2015), recordings of eye movements (Wolin, Dillman, 1964; Hyon et al., 2010) or of brain activity (VEPs; e.g., Zheng et al., 2020), which accompany visual processing without necessarily requiring an explicit action from the participant).

Fig. 1.

An illustration of the information processing flow in the human visual system during VA assessment.

In relation to VA assessment, it is important to bear in mind that, between the input and the output, different test images activate quite different neural pathways underlying the processing of different image features. Various neurophysiological and psychophysical studies, as well as clinical data from patients with certain visual brain injuries, provide evidence that a localized impairment can have a very selective effect on stimulus recognition. For example, a left occipito-parietal ischemic infarction may cause literal alexia, i.e., a condition that affects the patient’s ability to recognize individual letters or numerals, while other visual recognition abilities remain unaffected.

The above implies that the outcome of a VA measurement using a given set of test stimuli characterizes only the functional state of the pathways activated by these stimuli. In other words, results indicating, say, “normal” VA could testify only to normal functioning of the particular subnetwork of visual pathways that subserves the given type of test stimuli. In the general case, the structures forming this specific subnetwork could either belong to one functional unit – a module – or include several modules. As indicated above, the eye optics and the retinal photoreceptors are structures common to the processing of all types of visual stimuli. This implies that impairment of the eye optics and/or a retinal pathology could affect VA measures regardless of the type of test stimuli. One important exception is the specific case of grating stimuli created on the retina by means of the coherent interference technique, which bypasses the eye optics and thus enables assessment of the so-called “retinal VA”. By comparing the outcomes of usual VA assessment by means of naturally projected stimuli with the values of retinal VA in patients, one can disentangle the impacts of optical and neuronal (retinal and post-retinal) impairments. Localizing the structure (or structures) upstream in the visual system that cause VA loss is impossible without resorting to various other diagnostic methods of examination.

PARADIGMS AND PROCEDURES FOR THE VISUAL ACUITY ASSESSMENT

Accuracy required for VA assessment varies significantly depending on both the visual task and the assessment aim. The granularity of accuracy of VA assessment can be captured in terms of the measurement scales varying in power and precision: ordinal, interval or ratio (Stevens, 1946).

For everyday purposes, it is often sufficient to qualify VA using a coarse categorization in common terms: low (poor, weak), normal (good, fine), and excellent (supernormal, perfect). Such categorization is based on the individual’s visual abilities compared to other people from a representative population. The implicated visual abilities comprise the individual’s speed of visual search, dexterity (e.g., reaching and grasping, shooting, driving), and navigation accuracy, as well as presence/absence of asthenopia symptoms (visual discomfort, visual stress, visual fatigue, etc.). This type of VA notation, i.e., low–good–excellent, implies ranking the vision quality on an ordinal scale. It is apparent that such values are rather vague, fuzzy and subjective: for gauging the quality of vision, an assessor is guided by his/her (tacit) reference points and an (implicit) “yardstick”.

In clinical practice, however, precision of VA assessment is crucial for functional correction and monitoring of vision development, or for stimulation of visual function during recovery from an injury: the higher the precision of the measurement, the earlier the anticipated VA changes (positive or negative) can be detected, which ensures sooner and more perceptible effects of a timely, patient-tailored treatment (therapeutic intervention). Clinical cases require accurate measurement of VA by means of standardized test procedures and presentation of the results in a quantitative form (in conventional units familiar to professionals).

Quantitative clinical methods of VA assessment are based on estimating either the smallest angular size of the test stimuli detected/recognized by the testee, or the smallest test image components that can be resolved, i.e., can be seen as distinct. Different examination paradigms imply solving different visual tasks (Table 1). Some examples of the test stimuli are shown in Fig. 2.

Table 1.

Characteristics of the typical examination paradigms for VA assessment

| Examples of the test stimuli | Visual task | Instruction for the observer | Parameter to be measured |
|---|---|---|---|
| Single-point targets; crumbs; single thin lines | Detection | To detect and indicate, or to grasp, the smallest target on a uniform background | Threshold diameter; threshold line width (angular size) |
| Two-point targets; Teller gratings; Gabor patches | Resolution | To distinguish one-point and two-point targets; to indicate the position of a periodic structure | Threshold angular distance between the points, or critical spatial frequency |
| Tumbling E; Lea symbols; letters | Recognition | To identify and name the presented optotype from a given set | Angular size of the critical parameter |
| Vernier stimuli | Detection of a shift between the two well-seen stimulus halves (hyperacuity) | To detect the shift and determine its direction (left/right) | Threshold angular size of the shift |
Fig. 2.

Various types of the test stimuli for assessment of visual acuity and hyperacuity.

In this paper, we mainly consider the clinical VA measures conventionally termed resolution VA and recognition VA; detection acuity and Vernier acuity have narrower fields of application. We would also like to underscore that most paradigms of VA assessment imply not just one visual task but a composition of tasks (cf. Heinrich, Bach, 2013, on distinguishing resolution and recognition in tests with modified Landolt C optotypes).

The notion of resolution is broadly used in physics and engineering. Originally, in physics, the term resolution was coined to characterize the quality of an optical device – its ability to produce a clear image of the finest structural elements of a test pattern. Quantitative assessment of the quality of an optical device is accomplished by estimating its resolution threshold. The standard method is to estimate the minimum angular distance, αm, between two light test points at which the two luminance maxima and the luminance minimum between them – corresponding to test points sufficiently well separated in the image produced by the device in question – can still be detected (either visually or by a light-sensitive instrument).

Lord Rayleigh (1879) proposed a criterion for calculating the resolution threshold based on analysis of the light distribution in the two-point images created by an optical device. It is well known that, due to diffraction, the optical image of each light point consists of a central light spot and a series of dark and light annuli. The pattern of diffraction depends on the aperture size and the wavelength of light (with longer wavelengths being diffracted at a greater angle than shorter ones). According to Rayleigh, the resolution threshold β corresponds to the distance between the two nearby test points that is equal to the radius of the central spot. This angular radius is given by β = 1.22λ/D in radians (where λ is the wavelength and D is the aperture size, in the same units). For such a separation, in the case of two light sources with incoherent radiation, the light level at the midpoint between the two maxima is equal to about 3/4 of the maximum. The application of the Rayleigh criterion to the human eye seems problematic for several reasons: e.g., calculation of the effective aperture from the pupil diameter, the required correction for the optical properties of the eye media, etc. However, for realistic pupil sizes one can obtain plausible coarse approximations of the resolution threshold αm.
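As a rough numerical illustration only – a sketch with assumed values (λ = 555 nm, a 3-mm pupil), not parameters taken from the cited sources – the Rayleigh threshold can be computed as follows:

```python
import math

def rayleigh_threshold_arcmin(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh resolution threshold beta = 1.22*lambda/D (radians), converted to arcmin."""
    beta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(beta_rad) * 60  # radians -> degrees -> arcmin

# Assumed illustrative values: 555 nm (near peak photopic sensitivity), 3 mm pupil.
print(rayleigh_threshold_arcmin(555e-9, 3e-3))  # ~0.78 arcmin, of the same order as 1'
```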

Another way of assessing the quality of optical devices is to estimate the highest spatial frequency of a grating that can be reproduced by the device in question – the critical spatial frequency, Fc. Evidently, in real conditions it is much easier to measure Fc than αm. This is illustrated schematically in Fig. 3, inspired by the representation of the regular hexagonal lattice of photoreceptors in the foveal region of the human retina by Curcio et al. (1990). Fig. 3 shows images of two-point tests (a) and a test grating (b) projected onto a regular mosaic of photosensors. Variation in the light sensitivity of individual photosensors is represented by cell lightness (in grayscale). In any real case, the sensitivity of each photosensor varies due to internal noise, as well as a result of exposure to preceding stimuli. In turn, this variability of photosensor sensitivity means that the visibility of a light test significantly depends on the position of its projection onto the photosensor array. The effect of photosensor variability is stronger for single light points than for gratings, since the visibility (detectability) of the latter gains an advantage from summation of responses from multiple photosensors.

Fig. 3.

Schematic illustration of the projection of two-point test stimuli (a) and a test grating (b) onto a hexagonal array of photosensors varying in sensitivity. Light distribution in the optical images is represented by red; differences in light sensitivity of photosensors are represented in a grayscale (where lighter hexagons indicate higher sensitivity).

It is worth noting that in physics the broadly accepted measure of the quality of an optical device is not the resolution threshold but the resolving power, the value reciprocal to the resolution threshold, 1/αm. The term was introduced to reconcile the physics terminology with the intuitive idea of better quality, whereby better (higher) optical quality implies higher resolving power but a lower (smaller) resolution threshold. Unfortunately, the term resolution is often used without this clarification, which leads to confusion. In many publications, the term “high resolution” implies, in fact, high resolving power. For instance, the term resolution is used as a synonym of resolving power in computer science. It reflects the potential either of generating fine-grained images on a given display screen consisting of discrete elements (pixels), or of printing fine pictures with a printer that produces images from tiny dye dots. Accordingly, display resolution is characterized by the pixel density, the number of pixels per inch (ppi) in a line. Similarly, printer resolution is expressed in dots per inch (dpi), i.e., the number of dots that can be printed in a 1-inch line segment.

In ophthalmology and vision science, both notations, the resolution threshold (αm) and its reciprocal value 1/αm, are used. However, in these disciplines the term resolving power has not come into broad use, and the measure 1/αm is termed visual acuity (VA). When using the term visual resolution, researchers quite often mean the resolution threshold, in contrast to the resolving power meant in optics and computer science.

PRIMARY MEASURE OF VISUAL ACUITY AND HISTORICALLY FIRST VISUAL ACUITY NOTATION

Over time, the following procedures proved to be most feasible for the assessment of VA:

(1) determining the minimum size of the test stimuli that can be recognized at the chosen viewing distance;

(2) determining the maximum viewing distance at which certain test stimuli can be recognized.

In both cases, the outcome implies the same spatial metric and is expressed in the same measure – the minimum angular size of the smallest elements comprising the test images that is sufficient for satisfactory recognition of these elements. In vision science, this angle is termed the minimum angle of resolution, MAR.

It so happened that the term MAR came to be used by ophthalmologists not only as an acronym but also as a mathematical quantity in algebraic formulae. For most mathematicians, such a denotation violates the conventional rule of using one letter for one quantity, which sometimes causes misunderstanding and lengthens formulae. (However, it is rather pointless to object to the use of the MAR denotation, since software developers nowadays routinely use even longer identifiers in their code.)

When periodic gratings are employed as the test stimuli, another critical value – the highest discernible spatial frequency (critical frequency, Fc) – is determined instead of the MAR. It is apparent that, ideally, Fc can be calculated from αm: at threshold, one period of the just resolvable grating has to span 2αm, i.e., one dark and one light bar; hence, a visual angle of 1° (60') corresponds to 60'/2αm periods, or Fc = 30/αm cpd (cycles per degree).
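For illustration, a minimal conversion sketch between the two primary measures (the function names are ours, introduced only for this example):

```python
def mar_to_fc(mar_arcmin: float) -> float:
    """Critical spatial frequency (cpd) from MAR (arcmin): one cycle spans 2*MAR."""
    return 30.0 / mar_arcmin

def fc_to_mar(fc_cpd: float) -> float:
    """MAR (arcmin) from the critical spatial frequency (cpd)."""
    return 30.0 / fc_cpd

print(mar_to_fc(1.0))  # 30 cpd for the standard eye (MAR = 1')
print(mar_to_fc(0.5))  # 60 cpd
```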

In the history of the development of VA metrics, professionals initially pursued the approaches of physics by treating the human visual system as an optical device (Colenbrander, 2008). Vision scientists of the 18th and 19th centuries likewise applied approaches, notions and notations that had been developed in physics for optics research.

The Dutch physiologist Frans Cornelis Donders was the first to introduce the notion of VA, in 1861, and to develop the standard procedure and the measurement unit for VA assessment (Pfeiffer, 1936). Donders assumed that calculation of an observer’s VA should be accomplished in the same way as calculation of the resolving power of an optical device, i.e., as the reciprocal of the resolution threshold. Adapting this approach to human vision, Donders introduced the notion of a “standard eye” which, at threshold, can recognize letters as small as 5' in height (without errors or with a sufficiently high probability of a correct response). The VA of the standard eye was proposed to be taken as the measurement baseline.

Donders’ protocol of VA assessment implied comparison of the testee’s threshold letter size with the threshold letter size of 5' accepted for the standard eye. Thus, in the course of the measurement, the examiner determined linear magnification (M) of the test letters (in relation to the 5' size) required for the observer to provide the standard level of letter recognition. The VA value, $v$, was calculated as the inverse of the magnification value: $v$ = 1/M.

Thus, Donders introduced the first correct quantitative VA notation and the first unit for the VA measurement that is compatible with the modern concepts in metrology: “Unit of measurement – real scalar quantity, defined and adopted by convention, with which any other quantity of the same kind can be compared to express the ratio of the two quantities as a number.” (International Vocabulary of Metrology. 2008. Basic and General Concepts and Associated terms, p. 6).

Donders based the proposed way of estimating VA on the premise that, for normal human vision, the resolution threshold is equal to 1 arcmin, i.e., for the standard eye αst = 1'. Considering that the Donders test letters were drawn within a 5 × 5 matrix, with a stroke width of 1/5 of the letter size, the threshold size of the letters for the standard eye can be taken as 5' × 5' = 5αst × 5αst. If an individual observer’s threshold letters turned out to be, say, M · 5' high, it is apparent that the corresponding value of MAR, αi, was M times larger than αst, i.e., αi = Mαst. Therefore, the magnification M is equal to αi/αst and, for the VA value, $v$, one can use the following expressions:

$v = 1/M = \alpha_{st}/\alpha_{i} = 1'/\alpha_{i}.$

The expression $v$ = 1'/αi for VA was introduced by Monoyer (1875) and termed the decimal notation. In the present paper we denote it as Vd. If the examination outcome is αi = αst = 1', the testee’s VA is characterized as Vd = 1.0, i.e., equal to that of the standard eye.
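A hedged worked example of the Donders/Monoyer calculation (illustrative numbers only): if the testee’s threshold letters must be twice as large as the standard 5' letters, then M = 2, αi = 2', and Vd = 0.5.

```python
ALPHA_ST_ARCMIN = 1.0  # resolution threshold of the Donders standard eye, arcmin

def decimal_va_from_magnification(m: float) -> float:
    """Decimal VA as the reciprocal of the required letter magnification M."""
    return 1.0 / m

def decimal_va_from_mar(alpha_i_arcmin: float) -> float:
    """Decimal VA as alpha_st / alpha_i = 1' / alpha_i."""
    return ALPHA_ST_ARCMIN / alpha_i_arcmin

print(decimal_va_from_magnification(2.0))  # 0.5
print(decimal_va_from_mar(2.0))            # 0.5 (same result, since alpha_i = M * alpha_st)
```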

What are potential ranges of the MAR and corresponding VA values in humans? Apparently, these ranges are limited by the minimum and maximum sizes of the test stimuli that can be projected onto the retina and perceived as discernible visual images. The maximum test size is ultimately limited by the size of the visual field, while the minimum size is limited by the optics of the eye (diffraction, optical aberrations, intraocular scattering, etc.; for reviews, see Westheimer, 1970; 2001; 2010; Artal, 2014), by properties of the retina (the size and packing density of the photoreceptors; structure of the retinal neural networks), and by processing of retinal signals upstream the visual system.

To estimate a theoretical lower limit of VA, consider the central area of the binocular visual field onto which the largest test image can be projected (Fig. 4). Its extent varies somewhat in different directions and spans about 100° in angular terms. However, a test image should occupy no more than about one third of this area, i.e., about 33°, leaving a blank space around it. The critical detail, 1/5 of that test extent, would be about 400', which corresponds to MAR = 400' and Vd = 1'/400', or about 0.0025.
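The arithmetic behind this lower-bound estimate can be sketched as follows (a rough calculation under the assumptions stated above):

```python
FIELD_DEG = 100.0                                       # approximate extent of the central binocular field
test_extent_deg = FIELD_DEG / 3.0                       # test image occupies about one third (~33 deg)
critical_detail_arcmin = test_extent_deg * 60.0 / 5.0   # critical detail is 1/5 of the optotype size
decimal_va = 1.0 / critical_detail_arcmin               # Vd = 1'/MAR
print(critical_detail_arcmin, decimal_va)               # ~400 arcmin, ~0.0025
```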

Fig. 4.

Combined visual fields of the left and right eyes with the projected optotype (tumbling E) of the maximum size that can be presented binocularly for VA assessment.

To find out an upper limit of the VA, one needs to consider the following factors (Westheimer, 2010):

– the transfer function of the eye’s optics (optical transfer function, OTF), which determines the quality of the image projected onto the retina and its highest spatial frequency;

– the discrete structure of the retina, which limits the accuracy of image sampling through the size and density of the photoreceptors in the light-sensitive layer;

– neural processing of the retinal image upstream the visual system, up to the perceptual level.

With regard to the OTF of the typical human eye, the highest spatial frequency that can be reproduced unambiguously, Fc, is about 60 cpd, i.e., 1 cycle per 1 arcmin (Campbell, Green, 1965). Thus, the minimum resolution threshold should be no less than 0.5' (half a period of the spatial grating), and decimal Vd, accordingly, not higher than 2.0, if it is solely the OTF that determines the maximum VA. However, certain built-in factors of the visual system exert their own influence on the OTF-based VA: the size of the retinal photoreceptors and their packing density can reduce the upper limit of VA values, whereas neural processing at higher levels of the visual system and eye movements can improve the OTF-based VA (Duncan, Boynton, 2003).

In popular ophthalmology literature, one can find a rather simplified approach to visual resolution assessment, the so-called “receptor theory”, which invokes the limitation imposed by the size of the photoreceptors in the fovea of the human eye – its “grains” or “pixels”. The “receptor theory”, proposed by Helmholtz (1867; English translation: 1924, v. 2, p. 33) and treated as quite plausible up to the middle of the past century (e.g., Hecht, Mintz, 1939; Polyak, 1941, p. 430), is based on the seemingly reasonable assumption that if two test points project onto two adjacent cones, they cannot be distinguished from a larger single object projecting onto the same adjacent cones. For discerning the two points, at least one unstimulated, or less stimulated, cone is required in between. This simplified model of the resolution threshold is often used as an illustration despite a vast amount of knowledge on visual optics, morphology and physiology providing evidence of the model’s flaws.

The real picture behind visual processing is very complicated. Firstly, under natural viewing conditions it is impossible to stimulate two single photoreceptors in the configuration comprising two activated photosensors and one silent in between: even in the case of an infinitely small test point (a Dirac pulse), the optical point image (the point spread function) is smeared over tens of photoreceptors due to diffraction and the aberrations of the eye optics. Moreover, the stimulation pattern is constantly drifting over the retina because of eye micromovements. Reliable stimulation of single photoreceptors became possible only in artificial conditions, with the development of adaptive optics that compensates for the eye’s aberrations and counteracts eye micromovements (Roorda, Williams, 1999; Roorda et al., 2002; Porter et al., 2006; Artal, 2014).

Secondly, neural processing at the post-receptoral stages of the visual pathway was shown to provide significantly better resolution and higher VA than implied by the 1' threshold adopted by Donders for the standard eye on the basis of letter test stimuli (Artal et al., 2004). In this case the so-called Vernier acuity, or hyperacuity, is implicated, i.e., the well-known fact that it is possible to detect a misalignment of just a few arcsec between two thin lines or fine segments if they are sufficiently long (Westheimer, 2010). It is apparent that such a considerable gain in the precision of visual analysis of fine stimulus details is achieved due to aggregate higher-level processing of visual inputs from many photoreceptors activated by stimuli extending beyond a single point (Westheimer, 1975; 2010).

With regard to the constraints posed by the discrete structure of the retina, the highest spatial frequency that can be successfully coded and reproduced in the visual pathway is limited by Shannon’s sampling theorem (Jerri, 1977). In the human fovea, the photoreceptor mosaic can be considered as resembling a regular hexagonal lattice (Curcio et al., 1990; Putnam et al., 2005). For such spatial geometrical organization, the highest spatial frequency that can be resolved unambiguously is given by the Nyquist frequency based on two-dimensional sampling theory (Snyder, Miller, 1977; Miller, Bernard, 1983):

$F_N = \pi P/(180\sqrt{3}\,r),$
where FN is the spatial frequency in cpd, P is the posterior nodal distance (16.7 mm for a standard human eye), and r is the photoreceptor spacing (in the same length units as P). Taking into account the data of Polyak (1941) and Curcio et al. (1990), for the minimum size of the foveal receptors (about 1.5 µm) the calculation yields FN of about 100 cpd, which is larger than the theoretical OTF limit (60 cpd).
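A sketch of this calculation with the values quoted above (P = 16.7 mm, r ≈ 1.5 µm); the exact result depends on the assumed receptor spacing:

```python
import math

def nyquist_cpd(nodal_distance_m: float, receptor_spacing_m: float) -> float:
    """Nyquist frequency (cpd) for a regular hexagonal photoreceptor lattice:
    F_N = pi * P / (180 * sqrt(3) * r), with P and r in the same length units."""
    return math.pi * nodal_distance_m / (180.0 * math.sqrt(3) * receptor_spacing_m)

print(nyquist_cpd(16.7e-3, 1.5e-6))  # ~112 cpd, i.e., on the order of 100 cpd as quoted above
```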

Rozhkova and Matveev (2007, p. 145) present data providing evidence that VA values of 3.0 and somewhat higher (in decimal units), corresponding to FN ≥ 90 cpd, were indeed encountered, although seldom, especially in 11–14-year-old children. In that investigation, VA was assessed using tumbling E optotypes.

VISUAL ACUITY NOTATIONS SUGGESTED FOR VARIOUS PURPOSES

In designing the first letter chart for VA assessment (in 1862), Herman Snellen, Donders’ successor, proposed a VA notation that, in essence, was equivalent to the Donders notation but, in addition, included information on the viewing distance Do at which VA was measured (Pfeiffer, 1936; Colenbrander, 2008; Cole, 2014). Based on the threshold letter height hi estimated for the individual testee at the viewing distance Do, Snellen suggested expressing VA as the ratio of the viewing distance, Do, to the distance, Di, at which the letters of height hi subtend the same angle as the standard letters of height hs subtend at Do, i.e., at which they would be at threshold for the standard eye: Vs = Do/Di (Fig. 5). One can easily see that Do and Di are the distances at which the threshold letters hs and hi for the standard and the testee’s eye, respectively, are of equal angular size. Based on the denotations in Fig. 5, where tanαms = hs/Do = hi/Di and tanαmi = hi/Do, one obtains Vs = hs/hi = Do/Di = αms/αmi (since αms and αmi are small and tanα ≈ α).

Fig. 5.

The Snellen notation of VA. The top graph represents a below-standard visual acuity and the bottom graph an above-standard visual acuity. The distances Do and Di are the reference and the testee’s threshold distances, respectively, providing equal angular sizes of 1' for hs at Do and hi at Di.

Historically, the numerator and denominator of the Snellen fraction are expressed either in feet or meters for far vision, and occasionally, for near vision, either in inches or centimeters. Thus, for normal vision of the standard eye (MAR = 1'), the fractions 20/20 and 6/6 are different expressions corresponding to one and the same viewing distance expressed either in feet or meters (20 ft ≈ 6 m); the fraction 14/14 corresponds to normal near vision at the viewing distance expressed in inches (14 inch ≈ 35 cm).

When the Snellen fraction is converted to decimal form, one obtains a value identical to the VA in the Donders (decimal) notation, since both can be expressed as αms/αmi (see Fig. 5). Thus, the Snellen VA notation is quantitatively equivalent to the Donders notation (and to the equivalent Monoyer decimal notation), but is more informative, since it also contains an indication of the viewing distance.
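As an illustration of this equivalence (a sketch; the helper names are ours): a Snellen fraction of 20/40 corresponds to decimal VA 0.5 and MAR = 2'.

```python
def snellen_to_decimal(numerator: float, denominator: float) -> float:
    """Decimal VA from a Snellen fraction Do/Di (both distances in the same units)."""
    return numerator / denominator

def decimal_to_mar(decimal_va: float) -> float:
    """MAR in arcmin from decimal VA (MAR = 1'/Vd)."""
    return 1.0 / decimal_va

print(snellen_to_decimal(20, 40))                   # 0.5 (equivalently 6/12)
print(decimal_to_mar(snellen_to_decimal(20, 40)))   # 2.0 arcmin
```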

Knowing the viewing distance is essential, since, in general, VA depends on it (and not only in cases of refractive anomalies: cf. Heron et al., 1995; Rozhkova et al., 2004, 2005). However, in practice, the examiner regularly performs long series of measurements at a fixed, standard viewing distance (e.g., in examinations of schoolchildren or in population surveys). It is therefore often reasonable to indicate the viewing distance only once for the whole assessment series and to use the corresponding decimal values of the Snellen fractions (or their denominators) in further statistical analysis of the dataset.

For more than 150 years the Snellen notation of VA was prevalent, especially in Anglo-Saxon countries, and until recently it remained broadly used for vision examination by clinicians conducting routine measurements at standard far and near distances. However, in the 20th century, significant progress in vision science and medical technology created the need to introduce other VA notations.

The extensive development, since the 1960s, of the concept of spatial frequency and of Fourier analysis in visual psychophysics and neurophysiology, as well as theoretical analysis of experimental findings on pattern recognition, resulted in an increased use of the critical spatial frequency, Fc, of test gratings as the measure of VA (Campbell, Green, 1965; Campbell, Robson, 1968; Teller, 1979, 1997; Anderson, Thibos, 1999; a.o.). Actually, gratings had been used for VA measurements more than a century ago: for instance, at the end of the 19th century, Wertheim (1894) constructed small gratings (grids) from thin wire and estimated the largest distance at which the grating structure could be resolved. In effect, by this means Wertheim was measuring Fc.

The advancements of many recent investigations have made the use of Fc as the VA measure more compelling than other VA measures and notations. One can outline at least the following advantages of employing Fc.

– It is appealing that the Fc measure links the assessment of VA with the contrast sensitivity function (CSF), since Fc corresponds to the highest spatial frequency discerned at the highest contrast level. Ideally, Fc can be estimated in the course of assessing the full range of the CSF; however, many researchers are interested only in the range of optimal spatial frequencies, thus limiting their CSF measurements to spatial frequencies substantially lower than Fc.

– In cross-sectional studies, Fc can be estimated in observers over the entire lifespan, from the neonatal age onwards (Teller, 1979; 1997; Vital-Durand et al., 1996; Woodhouse et al., 2007; Sturm et al., 2011), thus enabling investigation of age dynamics of VA in different populations. As the uniform and most unambiguous measure, Fc also provides an opportunity to monitor VA in individuals in longitudinal studies.

– The values of Fc, ranging from low to high, correspond to an intuitive interpretation of low and high vision quality.

– Graphic representation of the Fc scale for VA as a frequency-modulated pattern gives the most adequate and direct representation of resolving power, which increases with increasing VA.

– Using the Fc scale for measuring VA provides an opportunity of a direct comparison of the human eye and optical devices with regard to their resolving power.

Significant advances in knowledge of visual system functioning, on the one hand, and in technology, on the other, instigated attempts to facilitate the association of VA scores with visual performance and quality of life. New forms of VA measures were also developed with the intention of making them more suitable for statistical analysis of VA data and for comparison of different methods and procedures of VA assessment. For different purposes, various notations were proposed, discussed and modified, such as logMAR, Visual Efficiency (VE), Visual Acuity Score (VAS), Visual Acuity Rating (VAR), letter-by-letter scores of the Early Treatment of Diabetic Retinopathy Study Group (ETDRS) and others (Snell, Sterling, 1925; Ogle, 1953; Bailey, Lovie, 1976; Ferris et al., 1982; Holladay, 1997; Carkeet, 2001; Bourne et al., 2003; Kaido et al., 2007; Plainis et al., 2007; Gregory et al., 2010).

Each of these notations was developed in response to a certain need of VA assessment and was supposed to meet the requirements of a certain group of professionals, achieving ease and efficiency in using VA data in the specific form. Each notation has its advantages and shortcomings and is tailored to a specific area of application, being beneficial for some purposes but not always for others. Given the various purposes of assessing VA, one would anticipate that different VA notations will be used in parallel, depending on the specific aims and objectives of individual investigations. Unexpectedly though, an unwarranted competition arose between different notations, which culminated in proclaiming an alleged “gold standard” among the tools for VA assessment and claiming the corresponding “best notation” (Elliott, 2016). To elucidate the foundations of VA assessment, it is reasonable to scrutinize the different VA notations from various viewpoints.

The VE notation, visual efficiency, was introduced by Snell and Sterling in 1925. These researchers proceeded from the assumption that each increase of MAR by 1' would reduce the efficiency of visual system functioning (in a broad sense) to 84% of the previous level. In the literature, there are two different formulae approximating this relationship (Snell, Sterling, 1925; Westheimer, 1979):

$\mathrm{VE} = 0.2^{(\mathrm{MAR} - 1)/9}$
$\mathrm{VE} = 100\exp(-0.179\,[\mathrm{MAR} - 1'])$
where MAR is expressed in arcmin; as written, the first expression yields VE as a fraction of unity (multiplied by 100 to give percent), while the second yields VE directly in percent.
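The two approximations can be compared numerically; a minimal sketch (the first expression is multiplied by 100 here so that both are in percent):

```python
import math

def ve_snell_sterling(mar_arcmin: float) -> float:
    """Snell-Sterling visual efficiency, percent: 100 * 0.2**((MAR - 1)/9)."""
    return 100.0 * 0.2 ** ((mar_arcmin - 1.0) / 9.0)

def ve_exponential(mar_arcmin: float) -> float:
    """Exponential approximation, percent: 100 * exp(-0.179 * (MAR - 1))."""
    return 100.0 * math.exp(-0.179 * (mar_arcmin - 1.0))

for mar in (1.0, 2.0, 5.0, 10.0):
    print(mar, round(ve_snell_sterling(mar), 1), round(ve_exponential(mar), 1))
# MAR = 2' gives about 83.6% by both formulae (the ~84% step mentioned above)
```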

The logMAR notation was introduced by Bailey and Lovie (1976), Australian optometrists. The notation reflects the principles implemented in the Bailey-Lovie VA letter chart. Each chart line has five equi-legible letters; from line to line, the letter size follows a geometric progression (a constant size ratio of 1.26, corresponding to 0.1 steps on a logarithmic scale); the intervals between letters and between lines are proportional to the letter sizes, keeping contour interaction constant and enabling letter-by-letter scoring, thus ensuring a similar difficulty of the visual task at every size. As Cole (2014, p. 1) vividly put it, the chart of Bailey and Lovie “took the ophthalmic world by storm. Their principles for the design of a visual acuity chart covered all the bases so that letter size is the only significant variable in measuring visual acuity”. Thus, the core idea behind the logMAR notation is to use the logarithmic function of MAR instead of the directly estimated MAR value. Remarkably, Green had already developed a letter chart with a logarithmic scale in 1868 (Green, 1868; see also Green, 1905); the multiplier of the size progression was also the same as in the Bailey-Lovie chart, 1.26, if one advances from the smallest letters (Green, however, indicated the value 0.79 = 1/1.26, since he proceeded from the largest letters at the top of his chart). Unfortunately, at that time, the idea of logarithmic scaling was not yet appreciated.

The logMAR notation has been keenly promoted since its inception, becoming widely used in the following decades and now dominant in both academic research and clinical practice. The main attractive feature of logMAR is that it transforms a geometric progression of MAR values into an arithmetic progression. LogMAR charts became very popular for VA assessment as they can provide equal accuracy over the whole range of measurements. (However, this is an advantage of the chart design, not of the logMAR notation.)
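For reference, a minimal numerical sketch (ours, not from the cited sources) of how the 0.1 logMAR step relates to the 1.26 size ratio and to MAR:

```python
import math

def logmar_from_mar(mar_arcmin: float) -> float:
    """logMAR is the decimal logarithm of MAR expressed in arcmin."""
    return math.log10(mar_arcmin)

step_ratio = 10 ** 0.1        # letter-size ratio between adjacent chart lines
print(round(step_ratio, 3))   # 1.259, usually rounded to 1.26
print(logmar_from_mar(1.0))   # 0.0   (standard eye, MAR = 1')
print(logmar_from_mar(2.0))   # ~0.301 (decimal VA 0.5)
print(logmar_from_mar(0.5))   # ~-0.301 (decimal VA 2.0; note the negative value)
```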

The ETDRS letter scores are based on the number of letters read correctly on charts with the logMAR design. They were proposed as a result of the experience accumulated in working with such charts, to make the scores easier to collect, compare and interpret. In designing the chart, the ETDRS Group adopted all of the principles of the Bailey-Lovie chart, although with some modifications (Ferris et al., 1982). The ETDRS chart has firmly entrenched the logMAR principles. By now, various VA charts have been developed on the same principles, i.e., with a proportional design and a multiplier of 1.26 for letter sizes at the reference levels, which led to the coining of the term “logMAR design”.

The VAR notation, the visual acuity rating score, was proposed as an alternative way of denoting VA on a logarithmic scale while counteracting one of the main inconveniences of the logMAR notation – its negative values over a significant part of the VA range (Bailey, Lovie-Kitchin, 2013). Specifically, the logMAR zero value corresponds to MAR = 1.0' (standard vision); for better-than-standard visual acuities (MAR < 1.0'), logMAR values become negative, which is counterintuitive. As pointedly noted by Colenbrander (2008, p. 65), logMAR is a measure of vision loss rather than a measure of visual acuity, so an increase in MAR (or logMAR) means a decrease in vision.

The formula for calculating VAR is a simple transformation of the logMAR scale:

$\mathrm{VAR} = 100 - 50\,\mathrm{logMAR}.$
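A brief numerical illustration (a sketch only): VAR = 100 for the standard eye (logMAR = 0), 50 for decimal VA 0.1 (logMAR = 1), and above 100 for better-than-standard acuities.

```python
import math

def var_from_logmar(logmar: float) -> float:
    """Visual acuity rating: VAR = 100 - 50 * logMAR."""
    return 100.0 - 50.0 * logmar

def var_from_decimal(decimal_va: float) -> float:
    """VAR from decimal VA, using logMAR = -log10(Vd)."""
    return 100.0 + 50.0 * math.log10(decimal_va)

print(var_from_logmar(0.0))   # 100.0 (standard eye)
print(var_from_decimal(0.1))  # 50.0  (logMAR = 1.0)
print(var_from_decimal(2.0))  # ~115  (better-than-standard vision stays positive)
```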

Basic information useful for comparison of the VA notions and VA notations discussed herein is presented in Table 2.

Table 2.

Approximate correspondence of VA values in various VA notations and ranges of VA values reflecting the vision quality conventionally adopted in ophthalmology, optometry and vision science

Table 2 presents the ranges of VA values, in different VA notations, aligned with the categorized quality of vision according to experts’ conventions, as reflected in some handbooks (e.g., Somov, 1989; Bondarko et al., 1999; Shamshinova, Volkov, 1999; Colenbrander, 2001, 2008; Lennie, Van Hemel, 2002; Rozhkova, Matveev, 2007). Notably, the correspondence between the values presented in Table 2 can be considered only tentative. One needs to bear in mind that the conversion of outcomes obtained with different tools, methods and testing protocols is problematic, since for each method of measurement and each VA notation, the outcome is determined by different physical parameters of the (threshold) test stimuli.

For example, Wesemann (2003) compared VA estimates (in decimal notation) obtained using three tests – the Landolt test, the Freiburg Visual Acuity & Contrast Test (FrACT), and the Bailey-Lovie chart. He found that VA values obtained with the Landolt test were lower by 0.5 units compared to the FrACT values, but the difference was within the German DIN tolerance limits; in comparison, VA values obtained using the Bailey-Lovie chart were on average 0.9 units lower than the Landolt test estimates. A comparison of VA scores based on grating and non-grating optotypes can be found in (Thorn, Schwartz, 1990; Stiers, 2003; 2004; Strasburger et al., 2018), and a general discussion of the dependence of VA on the employed measuring tools and assessment protocol can be found in (Cole, 2014).

We mention these findings in passing since the issue of comparability of the values (expressed in identical units) obtained by individual VA assessment tools is beyond the scope of the present paper.

We consider it more insightful to determine the relational functions between the currently used VA measures. Crucially, this opens a new avenue in the quest for the “ground truth” measure of VA assessment. In our view, a VA measure is supposed to reflect the intuitive understanding that higher VA values reflect better spatial vision, and that all VA values are ≥ 0, since even the smallest degree of spatial vision is certainly positive in comparison with blindness. Predicated on this understanding, the critical spatial frequency, Fc, was used as the benchmark for measuring VA. Fc is inversely proportional to MAR, Fc = 30/MAR; so, unlike MAR values, Fc values increase for better visual acuities. In Fig. 6 we present the conventionally used VA measures as functions of Fc, signposting the quality of observers’ functional vision. In addition, at the bottom of Fig. 6, vertical bars of increasing density illustrate the increasing frequency of spatial gratings (cpd), the measurement tool that reflects an increase in VA, or the quality of functional vision.

Fig. 6.

Comparative characteristics of different VA measures as functions of the critical spatial frequency Fc (Fc = 30/MAR) that signpost the quality of functional vision: a – VA measures based on resolving power (positively correlate with intuitive understanding of VA); b – VA measures based on resolution thresholds (negatively correlate with intuitive understanding of  VA).

The two parts of Fig. 6 present functions for two principally different types of VA measures (and corresponding notations) in their relationship to Fc. Specifically, Fig. 6a shows ascending functions corresponding to the three VA measures reflecting resolving power – decimal (1/MAR), VE, and VAR. Each function shows an increase of the VA scores with increasing quality of vision, as indicated by increasing Fc. The decimal measure, Vd = 1/MAR, which is directly proportional to Fc, has a linear positive relationship with Fc. In comparison, the VAR and VE functions resemble a logarithmic relationship; although similar, they slightly differ in their slopes at the lower and upper ends of the Fc range. The larger differences in the slopes of the VAR and VE functions, compared to 1/MAR (a linear function of Fc), are associated with an overestimation or underestimation of certain VA changes by practitioners guided by VAR and VE scores. In particular, focusing on visual impairments, practitioners are inclined to pay greater attention to VA improvement within the range of poor vision and hence might overestimate this improvement; conversely, they pay less attention to an improvement within the range of good or excellent vision and might underestimate it.

Fig. 6b, in contrast, presents two descending functions that are based on resolution thresholds and correspond to the MAR and logMAR measures. We reiterate that the thresholds are negatively related to the vision quality: the higher the threshold, the poorer the vision quality, or the greater the vision loss. The MAR and logMAR scores are thus inverted in relation to the quality of spatial vision.
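The functional relationships shown in Fig. 6 can be reproduced directly from the formulas given above; a minimal sketch (with VE taken from the exponential approximation) is the following:

```python
import math

def measures_from_fc(fc_cpd: float) -> dict:
    """Express the common VA measures as functions of the critical spatial frequency Fc."""
    mar = 30.0 / fc_cpd  # resolution threshold, arcmin
    return {
        "MAR": mar,
        "decimal": 1.0 / mar,  # Vd = Fc / 30
        "logMAR": math.log10(mar),
        "VAR": 100.0 - 50.0 * math.log10(mar),
        "VE_percent": 100.0 * math.exp(-0.179 * (mar - 1.0)),
    }

for fc in (3, 10, 30, 60):  # from poor vision up to the OTF limit
    print(fc, measures_from_fc(fc))
```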

Inverted scores might be as good as direct ones; it is essential, however, to take their properties into account. The shortcomings of the logMAR notation were discussed by Rozhkova (2017) in her paper in Russian and are briefly summarized here. They are of different kinds: some are related to the metrological properties of logMAR as a scale of measurement; others concern the opaque relationship between logMAR values around zero, characteristic of the majority of the population, and the VA values in other notations.

– The zero value of logMAR, which corresponds to the conventional normal vision (the standard eye with MAR = 1'), precludes using a ratio measurement scale, required by the principles of metrology, for comparison of the vision quality across various cases, since division by zero is impossible.

– Neither can other logMAR scale values be adequately compared via ratios, in particular because it would be problematic to interpret a negative ratio of outcomes in cases where a negative and a positive logMAR value are to be related.

– It is also difficult to fathom a meaningful relationship between logMAR values and VA values obtained using the other notations for observers with normal and near-normal vision (cf. Table 2). Specifically, in the decimal notation, VE and VAR, small changes in functional vision are reflected by commonsensically small changes of VA values over the whole VA range, whereas in the logMAR notation, around the logMAR zero, some such small changes in functional vision may change the sign of the VA value, prompting a spurious impression that these changes in vision quality are more significant than those in other ranges.

– During preschool and school age, most children with normal and excellent vision quality have negative VA values on the logMAR scale (see the blue parts of the histograms in Fig. 7a–c). From a psychological viewpoint, negative VA estimates, being associated with impairments, might appear deceptively disturbing for parents.

Fig. 7.

An illustration of the relationship between the logMAR function and the decimal VA measures, presented as histograms of the data obtained for schoolchildren of three age groups (adapted from Rozhkova, 2017, Fig. 3).

– Moreover, on the widespread logMAR charts (with 0.1 logMAR steps between the lines), the number of reference levels in the negative scale range is too small for timely detection of significant changes in a child’s vision quality.

To illustrate the latter point, let us inspect Fig. 7. The top graphs (Fig. 7a–c) present histograms of VA values in decimal units, based on the data obtained without optical correction for binocular far vision in three age groups of primary schoolchildren, using specially elaborated charts with tumbling-E optotypes and reference-level steps of 0.2 decimal units (Rozhkova et al., 2001; Rozhkova, Matveev, 2007). The bottom graph, Fig. 7d, shows the logMAR function and, in the inset, the data for the 7-year-old children replotted in a modified form (with the order of bins inverted to align the direction of vision-quality changes with the logMAR scale). When the black bar, conventionally normal vision (decimal VA = 1), is aligned with logMAR = 0 (corresponding to MAR = 1'), it becomes apparent that, in the logMAR notation, more than 70% of the tested 7-year-old children have negative VA values (blue bins). Note also that in this negative logMAR range there are only four reference levels (with 0.1 logMAR steps). Comparing the three decimal VA histograms in Fig. 7a–c, one can see that the proportion of children with negative logMAR values (blue bins in the graphs) noticeably increases with age. It is problematic to capture these age dynamics properly using only the four logMAR reference levels present in most VA charts based on the logMAR notation; the numerical sketch below illustrates how sparsely the 0.1 logMAR steps sample this range.
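A minimal numerical sketch of this sampling problem, assuming the 0.2-unit decimal levels of the charts described above and a logMAR chart with 0.1 steps:

import math

# Decimal reference levels above 1.0 and their logMAR equivalents:
# every decimal value above 1.0 maps to a negative logMAR value.
for vd in (1.0, 1.2, 1.4, 1.6, 1.8, 2.0):
    print(f"decimal {vd:.1f}  ->  logMAR {math.log10(1.0 / vd):+.2f}")

# A chart with 0.1 logMAR steps covers the same range with only a few lines,
# e.g. 0.0, -0.1, -0.2, -0.3, hence the coarse sampling of good-to-excellent vision.
logmar_lines = [round(-0.1 * k, 1) for k in range(4)]
print("logMAR reference lines over the same range:", logmar_lines)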

Like many authors comparing various VA notations, we deliberately did not separate the VA notations per se from their embodiment in the measuring tools developed for practical use, although this would be required for a comprehensive analysis. Such a task, however, is not viable in a brief overview. It seemed more important to us to offer researchers a cautionary piece of advice: to abstain from conclusions inferred from limited experience and from uncritical reliance on an authority’s opinion.

When assessing the advantages of a certain VA notation for practical use, the researcher considers and analyzes the experimental data obtained with the existing tools. Ideally, however, it would be necessary to investigate a representative sample of tools covering various optotypes, designs and notations. The sensitivity and specificity of the testing method, and the reliability of its outcomes, depend on many factors and conditions, in particular on the parameters of the employed tool. Currently, the tools for VA assessment are predominantly charts. When comparing VA notations, it is important to bear in mind that the design of a VA chart and the VA notation used for reporting numerical outcomes are actually independent. This evident fact was clear to the author of the first chart with a logarithmic (proportional) design but with decimal notation for the reference levels (Green, 1868). Unfortunately, in the 20th century a groundless belief appeared and spread that the chart design implies the use of a particular VA notation for the outcomes of measurement.

However, we can point to at least one counterexample – the chart elaborated by Kholina, which combined a proportional design with decimal notation for the reference levels (Kholina, 1930). Her chart contained 32 lines for VA values varying from 0.1 to 2.0 with logarithmic steps of approximately 0.04–0.05 logMAR (the multiplier of the geometric progression for the optotype size being close to 10^(1/24) ≈ 1.1), i.e. the corresponding levels were: 0.1; 0.11; 0.12; 0.13; 0.15; …; 0.8; 0.9; 1.0; 1.1; 1.2; 1.35; 1.50; 1.65; 1.8 and 2.0.

Fig. 8 shows the actual sets of reference levels used in two charts with proportional (logarithmic) designs but different notations. The decimal notation is used in practically all Russian charts (Golovin, Sivtsev, 1926; Roslyakov, 2001; Gracheva et al., 2019; Stulova et al., 2019), including that of Kholina, whereas the logMAR notation is used in the ETDRS charts predominantly employed in the USA and Europe (Ferris et al., 1982; Plainis et al., 2007). As is well known, in the ETDRS chart the intervals between neighboring reference levels are equal to 0.1 logMAR, which is equivalent to multiplying the letter size by about 1.26 (26% steps).

Fig. 8.

Reference levels used in two charts with proportional designs but different VA notations: the chart of Kholina (1930) with decimal notation of VA (red: Vd = 1/MAR) and the ETDRS chart with logMAR notation of VA (blue: Vl = logMAR).

Comparing the two presented distributions of reference levels, one can conclude that, owing to its greater density, the chart of Kholina could provide better measurement accuracy over the whole range of VA values (see the sketch of the two level sets below).
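The two sets of levels shown in Fig. 8 can be approximated, up to rounding, by the following sketch. The "Kholina-like" progression assumes an exactly geometric series between 0.1 and 2.0, whereas the published chart uses slightly rounded values; the "ETDRS-like" set assumes the standard 14 lines from logMAR 1.0 to logMAR -0.3.

import math

def proportional_levels(v_min, v_max, n_lines):
    # Decimal reference levels forming a geometric progression from v_min to v_max.
    ratio = (v_max / v_min) ** (1.0 / (n_lines - 1))     # constant optotype-size multiplier
    return [round(v_min * ratio ** k, 3) for k in range(n_lines)], ratio

# Kholina-like chart: 32 lines spanning decimal 0.1-2.0 (idealized geometric progression).
kholina, r_k = proportional_levels(0.1, 2.0, 32)

# ETDRS-like chart: one line per 0.1 logMAR, i.e. a size multiplier of 10**0.1 ~ 1.26;
# 14 lines span logMAR 1.0 (decimal 0.1) to logMAR -0.3 (decimal ~2.0).
etdrs = [round(10 ** (-(1.0 - 0.1 * k)), 3) for k in range(14)]

print(f"Kholina-like: {len(kholina)} levels, multiplier {r_k:.3f} "
      f"({math.log10(r_k):.3f} logMAR per line)")
print(f"ETDRS-like:   {len(etdrs)} levels, multiplier {10 ** 0.1:.3f} (0.100 logMAR per line)")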

It is important to emphasize that, in all cases, the set of reference levels can be varied independently of the VA notation. It is therefore puzzling that, to date, the set of reference levels used in a certain chart is considered to be tied to the VA notation. A trivial reason for linking the reference levels with a certain notation is probably the wish to simplify the numerical outcomes by using “round” values for practical purposes. From this viewpoint, the simplest and most widespread way (often used implicitly in practice and noted in some guidelines) is to record the reference line numbers corresponding to the testee’s thresholds: “He sees the 5th line”, or “She sees the 3rd line and 3 letters in the 4th line”. During an examination session, it is sufficient to make such recordings, which contain the information required for practical purposes, while leaving the task of data interpretation and analysis for the future or to other researchers. Notably, many optometrists employ this scoring of VA by line numbers since it is simple and understandable to every participant of the measuring procedure.

Closing this overview, we cannot but remark that, with time, the task of optimizing the sets of reference levels in printed charts is becoming less relevant, since in the near(est) future these will surely be supplanted by computerized procedures for VA assessment, shifting the task to developing optimal algorithms for varying stimulus sizes. Such algorithms were already emerging in the second half of the 20th century (Taylor, Creelman, 1967; Watson, Pelli, 1983) and continue to be improved at present. Moreover, global computerization will enable both standardization and diversification of the VA procedure, raising quite different issues and approaches to its execution – novel paradigms, principles, optotypes, and protocols.
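As a toy illustration of such adaptive control of stimulus size (not a reproduction of PEST or QUEST themselves), a simple one-up/one-down staircase might look as follows; the observer model and all parameters here are assumptions made only for the sketch.

import random

def staircase_estimate(true_logmar=0.1, start_logmar=1.0, step=0.1, n_trials=40, seed=1):
    # Minimal 1-up/1-down staircase over optotype size (in logMAR units).
    # This only illustrates adaptive size control; PEST and QUEST use far more
    # sophisticated step-size and Bayesian placement rules.
    rng = random.Random(seed)
    level, last_correct, reversals = start_logmar, None, []
    for _ in range(n_trials):
        # Simulated observer: nearly always correct above threshold, at chance (25%) below.
        correct = rng.random() < (0.95 if level >= true_logmar else 0.25)
        if last_correct is not None and correct != last_correct:
            reversals.append(level)
        level += -step if correct else step      # shrink the optotype after a correct answer
        last_correct = correct
    tail = reversals[-6:]
    return sum(tail) / len(tail) if tail else level

print(f"estimated threshold ~ {staircase_estimate():.2f} logMAR")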

Another factor that will probably change the practice of VA assessment for the better is the envisaged transition from sporadic VA screening to frequent monitoring. In the latter case, short intervals between measurement sessions would make the employment of tools such as VA charts problematic, since frequent presentation of the same chart may lead to memorization of its elements/lines, which would bias the outcomes of the retest.

In computer-aided methods, the notion of a reference test level actually becomes unnecessary, since the test stimulus size can be modified arbitrarily after each stimulus presentation. In these cases, the crucial feature is the display resolution as the factor limiting the possible increments/decrements of the optotype sizes. This is illustrated in Fig. 9, which shows feasible stimulus sizes calculated on the basis of the screen resolution (pixel sizes) for three different display brands at a viewing distance of 5 m (Terekhin et al., 2015; Rozhkova, Malykh, 2017). The points on the graph inside the three gray bars correspond to varying the stimulus size pixel by pixel. It is clearly seen that such one-pixel steps do not essentially limit the number of stimulus size gradations in the range of the lowest VA values but could be critical in the range of high VA values.
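The feasible levels plotted in Fig. 9 can be approximated by the sketch below, under the assumption that the critical detail of the tumbling-E optotype (its stroke/gap) is rendered as a whole number of pixels and that MAR equals the angular size of this detail; the exact rendering rules of the cited software may differ.

import math

def feasible_va_levels(pixel_mm, distance_m=5.0, va_min=0.1, va_max=2.0):
    # Decimal-VA / logMAR values attainable when the critical detail of the optotype
    # spans an integer number n of pixels at the given viewing distance.
    one_px_arcmin = math.degrees(math.atan(pixel_mm / (distance_m * 1000.0))) * 60.0
    levels, n = [], 1
    while 1.0 / (n * one_px_arcmin) >= va_min:
        mar = n * one_px_arcmin
        if 1.0 / mar <= va_max:
            levels.append((n, round(1.0 / mar, 2), round(math.log10(mar), 2)))
        n += 1
    return levels

# Pixel sizes quoted in the caption of Fig. 9; printing the first few levels just
# below decimal VA = 2.0 shows how the step density differs between displays.
for name, px in (("Samsung Galaxy S6", 0.044), ("iPhone 5", 0.078), ("Sony VAIO", 0.18)):
    print(name, feasible_va_levels(px)[:4])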

Fig. 9.

The sets of feasible reference levels of VA (points) in decimal (left scale) and logMAR (right scale) notations, calculated for tumbling-E optotype presentation on a smartphone (S), an iPhone (P) and a notebook (N) at an observation distance of 5 m. S – Samsung Galaxy S6, pixel size 0.044 mm; P – iPhone 5, pixel size 0.078 mm; N – Sony VAIO, pixel size 0.18 mm.

In developing proper and convenient computer-aided methods of VA measurement, the scientific and the technical components of the elaborated setup become equally important.

CONCLUDING REMARKS

More than 40 years ago, Westheimer (1979, p. 327) wrote in his paper on VA measurement and scaling (italics added by the authors):

“We base the discussion on the premise that it is possible to arrive at a single value for the visual acuity of an eye under a given set of conditions. It is a threshold, α, the minimum angle of resolution (MAR), and is to be measured in minutes of arc. The method of determining the threshold is immaterial for this discussion, and so is the nature of the target. The arguments apply equally to Snellen letter, checkerboard, and grating targets, and to evoked potentials, optokinetic, or forced-choice psychophysical responses. What concerns us here is the scale in which to place a set of visual acuity measurements expressed in minutes of arc.”

In the present paper we share this approach, apart from a reservation about Westheimer’s stance that the method of determining the threshold is “immaterial for this discussion”. In our opinion, the task of interrelating the results of different measuring procedures that determine the MAR is not yet resolved and, in principle, cannot be solved in a uniform way. As was pointedly worded by Pirenne (1962, p. 175) in his frequently cited statement, “[t]here are in fact as many different “visual acuities” as there are types of test objects”. This problem, however, was not addressed here and will be considered in more detail in the second part of our overview.

The undertaken analysis of different VA measures and notations shows that the advantages, shortcomings and applicability of a given notation should be assessed by considering multiple factors: the essence of the VA notation in question; the methods implying its use in direct form (i.e., without recalculation); intuitive comprehension of the notation; the ease of its understanding for both testees and the examiner; the notation’s applicability for assessing individuals of different age groups; insights from population-based surveys, etc.

We hope that this paper provides convincing arguments in favor of a wider use of the maximal resolvable spatial frequency – the critical frequency, Fc – as a measure of VA. This metric is close both to the optical standards and to one’s intuitive understanding of VA: the better the VA, the higher Fc; in addition, the relationship between Fc and VA can be compellingly presented graphically.

The Donders and Monoyer notations, as well as the Snellen fraction in its decimal form, are all proportional to Fc and thus equally suited for expressing VA. Visual efficiency, VE, is similar to Fc and to the Donders and Monoyer notations in that it corresponds to one’s intuitive concepts. The nonlinear dependence of VE on Fc implies that, in the low-vision range, a given increase (or decrease) in VA has a greater impact than in the range of good and excellent vision.

There are some cautious signs that the appeal of logMAR has passed its peak: recently, a tendency has appeared to return to the Snellen and decimal notations of VA. As remarked by Tsou and Bressler (2017, p. 1): “…many ophthalmologists do not understand non-Snellen formats, such as logarithm of the Minimum Angle of Resolution (logMAR) or Early Treatment Diabetic Retinopathy Study (ETDRS) letter scores. As a result, some journals, since at least 2013, have instructed authors to provide approximate Snellen equivalents next to non-Snellen visual acuity values.”

We conclude with a suggestion of an “inclusive practice”: let each researcher and practitioner use the method of VA assessment and the VA notation that is most appropriate for addressing their research question or purpose in a clinical setting, and for collecting and analyzing the data required for these – provided that the researcher and the practitioner profoundly understand the theoretical and methodological bases of VA measurement and take into account the metric properties of the chosen notation.

References

  1. Altman D.G., Gore S.M., Gardner M.J., Pocock S.J. Statistical guidelines for contributors to medical journals. Annals of Clinical Biochemistry. 1992. V. 29 (1). P. 1–8. https://doi.org/10.1177/000456329202900101

  2. Anderson R.S., Thibos L.N. Sampling limits and critical bandwidth for letter discrimination in peripheral vision. Journal of the Optical Society of America A. 1999. V. 16 (10). P. 2334–2342. https://doi.org/10.1364/JOSAA.16.002334

  3. Artal P. Optics of the eye and its impact in vision: a tutorial. Advances in Optics and Photonics. 2014. V. 6 (3). P. 340–367. https://doi.org/10.1364/AOP.6.000340

  4. Artal P., Chen L., Fernández E.J., Singer B., Manzanera S., Williams D.R. Neural compensation for the eye’s optical aberrations. Journal of Vision. 2004. V. 4 (4). P. 281–287. https://doi.org/10.1167/4.4.4

  5. Bach M. The Freiburg Visual Acuity test – Automatic measurement of visual acuity. Optometry and Vision Science. 1996. V. 73 (1). P. 49–53. https://doi.org/10.1097/00006324-199601000-00008

  6. Bach M. The Freiburg Visual Acuity Test – Variability unchanged by post-hoc re-analysis. Graefe’s Archive for Clinical and Experimental Ophthalmology. 2007. V. 245 (7). P. 965–971. https://doi.org/10.1007/s00417-006-0474-4

  7. Bailey I.L., Lovie J.E. New design principles for visual acuity letter charts. American Journal of Optometry and Physiological Optics. 1976. V. 53 (11). P. 740–745. https://doi.org/10.1097/00006324-197611000-00006

  8. Bailey I.L., Lovie-Kitchin J.E. Visual acuity testing. From the laboratory to the clinic. Vision Research. 2013. V. 90. P. 2–9. https://doi.org/10.1016/j.visres.2013.05.004

  9. Beck R.W., Moke P.S., Turpin A.H., Ferris III, F.L., SanGiovanni J.P., Johnson C.A., Birch E.E., Chandler D.L., Cox T., Blair C., Kraker, R.T. A computerized method of visual acuity testing: adaptation of the early treatment of diabetic retinopathy study testing protocol. American Journal of Ophthalmology. 2003. V. 135 (2). P. 194–205. https://doi.org/10.1016/S0002-9394(02)01825-1

  10. Bohigian G.M. An ancient eye test–using the stars. Survey of Ophthalmology. 2008. V. 53 (5). P. 536–539. https://doi.org/10.1016/j.survophthal.2008.06.009

  11. Bondarko V.M., Danilova M.V., Krasil’nikov N.N., Leushina L.I., Nevskaya A.A., Shelepin Yu.E. Prostranstvennoe zrenie [Spatial vision]. St. Petersburg: Nauka. 1999. 218 p. (in Russian).

  12. Bourne R.R.A., Rosser D.A., Sukudom P., Dineen B., Laidlaw D.A.H., Johnson G.J., Murdoch J.E. Evaluating a new logMAR chart designed to improve visual acuity assessment in population-based surveys. Eye. 2003. V. 17. P. 754–758. https://doi.org/10.1038/sj.eye.6700500

  13. Campbell F.W., Green D.G. Optical and retinal factors affecting visual resolution. The Journal of Physiology. 1965. V. 181 (3). P. 576–593. https://doi.org/10.1113/jphysiol.1965.sp007784

  14. Campbell F.W., Robson J.G. Application of Fourier analysis to the visibility of gratings. The Journal of Physiology. 1968. V. 197 (3). P. 551–566. https://doi.org/10.1113/jphysiol.1968.sp008574

  15. Carkeet A. Modeling logMAR visual acuity scores: effects of termination rules and alternative forced-choice options. Optometry and Vision Science. 2001. V. 78. P. 529–538. https://doi.org/10.1097/00006324-200107000-00017

  16. Cole B.L. Measuring visual acuity is not as simple as it seems. Clinical and Experimental Optometry. 2014. V. 97. P. 1–2. https://doi.org/10.1111/cxo.12123

  17. Colenbrander A. Measuring vision and vision loss. In Tasman W., Jaeger E.A. (Eds.), Duane’s clinical ophthalmology. Vol. 5 (Ch. 51). Philadelphia, PA: Lippincott Williams & Wilkins. 2001. P. 2–42.

  18. Colenbrander A. The historical evolution of visual acuity measurement. Visual Impairment Research. 2008. V. 10 (2–3). P. 57–66. https://doi.org/10.1080/1388235080263240

  19. Curcio C.A., Sloan K.R., Kalina R.E., Hendrickson A.E. Human photoreceptor topography. The Journal of Comparative Neurology. 1990. V. 292 (4). P. 497–523. https://doi.org/10.1002/cne.902920402

  20. Duncan R.O., Boynton G.M. Cortical magnification within human primary visual cortex correlates with acuity thresholds. Neuron. 2003. V. 38 (4). P. 659–671. https://doi.org/10.1016/S0896-6273(03)00265-4

  21. Elliott D.B. The good (logMAR), the bad (Snellen) and the ugly (BCVA, number of letters read) of visual acuity measurement. Ophthalmic and Physiological Optics. 2016. V. 36 (4). P. 355–358. https://doi.org/10.1111/opo.12310

  22. Elliott D.B., Sheridan M. The use of accurate visual acuity measurements in clinical anti-cataract formulation trials. Ophthalmic and Physiological Optics. 1988. V. 8 (4). P. 397–401. https://doi.org/10.1111/j.1475-1313.1988.tb01176.x

  23. Ferris F.L., Kassoff A., Bresnick G.H., Bailey I.L. New visual acuity charts for clinical research. American Journal of Ophthalmology. 1982. V. 94 (1). P. 91–96. https://doi.org/10.1016/S0896-6273(03)00265-4

  24. Golovin S.S., Sivtsev D.A. Tablitsy dlya issledovaniya ostroty zreniya [Charts for measurement of visual acuity]. Moscow: Gosidatel’stvo; 1926 (in Russian).

  25. Gracheva M.A., Kazakova A.A., Pokrovskiy D.F., Medvedev I.B. Tablitsy dlya otsenki ostroty zreniya: analiticheskii obzor, osnovnye terminy [Visual acuity charts: analytical review, basic terms]. Annals of the Russian Academy of Medical Sciences. 2019. V. 74 (3). P. 192–199 (in Russian). https://doi.org/10.15690/vramn1142

  26. Green J. On a new series of test-letters for determining the acuteness of vision. Transactions of the American Ophthalmological Society. 1868. V. 1 (4–5). P. 68–71.

  27. Green J. Notes on the clinical determination of the acuteness of vision, including the construction and graduation of optotypes, and on systems of notation. Transactions of the American Ophthalmological Society. 1905. V. 10 (3). P. 644–654.

  28. Gregory N.Z., Feuer W.F., Rosenfeld P.J. Novel method for analyzing Snellen visual acuity measurements. Retina. 2010. V. 30 (7). P. 1046–1050. https://doi.org/10.1097/IAE.0b013e3181d87e04

  29. Hecht S., Mintz E.U. The visibility of single lines at various illuminations and the retinal basis of visual resolution. Journal of General Physiology. 1939. V. 22 (5). P. 593–612. https://doi.org/10.1085/jgp.22.5.593

  30. Heinrich S.P., Bach M. Resolution acuity versus recognition acuity with Landolt-style optotypes. Graefe’s Archive for Clinical and Experimental Ophthalmology. 2013. V. 251 (9). P. 2235–2241. https://doi.org/10.1007/s00417-013-2404-6

  31. Helmholtz H.L.F. Handbuch der physiologischen Optik. Leipzig: L. Voss. 1867 (in German).

  32. Helmholtz H.L.F. Helmholtz’ treatise on physiological optics. Trans. From third German ed., ed. by J.P.C. Southall, published by Optical Society of America. 3 vols. Menasha, WI: G. Banta Co. 1924–1925.

  33. Heron G., Furby H.P., Walker R.J., Lane C.S., Judge O.J.E. Relationship between visual acuity and observation distance. Ophthalmic and Physiological Optics. 1995. V. 15 (1). P. 23–30. https://doi.org/10.1046/j.1475-1313.1995.9592788.x

  34. Holladay J.T. Proper method for calculating average visual acuity. Journal of Refractive Surgery. 1997. V. 13 (4). P. 388–391. https://doi.org/10.3928/1081-597X-19970701-16

  35. Hyon J.Y., Yeo H.E., Seo J.M., Lee I.B., Lee J.H., Hwang J.M. Objective measurement of distance visual acuity determined by computerized optokinetic nystagmus test. Investigative Ophthalmology & Visual Science. 2010. V. 51 (2). P. 752–757. https://doi.org/10.1167/iovs.09-4362

  36. International Vocabulary of Metrology – Basic and General Concepts and Associated Terms (3rd ed.), Joint Committee for Guides in Metrology. 2008. P. 6. http://www.bipm.org/utils/common/documents/jcgm/JCGM_200_2008.pdf (accessed on 08.01.2021)

  37. Jerri A.J. The Shannon sampling theorem – its various extensions and applications: A tutorial review. Proceedings of the IEEE. 1977. V. 65 (11). P. 1565–1596. https://doi.org/10.1109/PROC.1977.10771

  38. Kaido M., Dogru M., Ishida R., Tsubota K. Concept of functional visual acuity and its applications. Cornea. 2007. V. 26. P. S29–S35. https://doi.org/10.1097/ICO.0b013e31812f6913

  39. Kholina A. Novaya tablitsa dlya issledovaniya ostroty zreniya [A new chart for assessment of visual acuity]. Russkii oftal’mologicheskii zhurnal. 1930. V. 11 (1). P. 42–47 (in Russian).

  40. Koskin S.A. Sistema opredeleniya ostroty zreniya v tselyakh vrachebnoi ekspertizy [A system of determining visual acuity for medical expertise]. MD thesis. St. Petersburg. 2009. 48 p. (in Russian).

  41. Lennie P., Van Hemel S.B. (Eds.). Visual impairments: Determining eligibility for social security benefits. Washington, DC: National Academies Press. 2002. https://doi.org/10.17226/10320

  42. Lovie-Kitchin J.E. Validity and reliability of visual acuity measurements. Ophthalmic and Physiological Optics.1988. V. 8 (4). P. 363–370. https://doi.org/10.1111/j.1475-1313.1988.tb01170.x

  43. Lovie-Kitchin J.E., Brown B. Repeatability and intercorrelations of standard vision tests as a function of age. Optometry and Vision Science. 2000. V. 77 (8). P. 412–420. https://doi.org/10.1097/00006324-200008000-00008

  44. Miller W.H., Bernard G.D. Averaging over the foveal receptor aperture curtails aliasing. Vision Research. 1983. V. 23 (12). P. 1365–1369. https://doi.org/10.1016/0042-6989(83)90147-5

  45. Monoyer F. Échelle typographique décimale pour mesurer l’acuité visuelle. Gazette Médicale de Paris. 1875. V. 21. P. 258–259 (in French).

  46. Ogle K.N. On the problem of an international nomenclature for designating visual acuity. American Journal of Ophthalmology. 1953. V. 36 (7). P. 909–921. https://doi.org/10.1016/0002-9394(53)92172-2

  47. Pfeiffer R.L. Frans Cornelis Donders Dutch physiologist and ophthalmologist. Bulletin of the New York Academy of Medicine. 1936. V. 12 (10). P. 566–581.

  48. Pirenne M.H. Visual acuity. In H. Davson (Ed.), The eye. Vol. 2: The visual process. New York/London: Academic Press. 1962. P. 175–195. https://doi.org/10.1016/B978-1-4832-3089-4.50018-2

  49. Plainis S., Tzatzala P., Orphanos Y., Tsilimbaris M.K. A modified ETDRS visual acuity chart for European-wide use. Optometry and Vision Science. 2007. V. 84 (7). P. 647–653. https://doi.org/10.1097/OPX.0b013e3180dc9a60

  50. Polyak S.L. The retina: The anatomy and the histology of the retina in man, ape, and monkey, including the consideration of visual functions, the history of physiological optics, and the histological laboratory technique. Chicago: University of Chicago Press. 1941. 607 p.

  51. Porter J., Queener H., Lin J., Thorn K., Awwal A.A.S. (Eds.). Adaptive optics for vision science: principles, practices, design, and applications. New York: Wiley. 2006. 595 p.

  52. Putnam N.M., Hofer H.J., Doble N., Chen L., Carroll J., Williams D.R. The locus of fixation and the foveal cone mosaic. Journal of Vision. 2005. V. 5 (7). P. 632–639. https://doi.org/10.1167/5.7.3

  53. Radner W. Reading charts in ophthalmology. Graefe’s Archive for Clinical and Experimental Ophthalmology. 2017. V. 255 (8). P. 1465–1482. https://doi.org/10.1007/s00417-017-3659-0

  54. Rayleigh L. Investigations in optics, with special reference to the spectroscope. Philosophical Magazine. 1879. V. 8 (49): 261–274. https://doi.org/10.1080/14786447908639684

  55. Roorda A., Williams D.R. The arrangement of the three cone classes in the living human eye. Nature. 1999. V. 397 (6719). P. 520–522. https://doi.org/10.1038/17383

  56. Roorda A., Romero-Borja F., Donnelly W.J., Queener H., Hebert T.J., Campbell M.C. Adaptive optics scanning laser ophthalmoscopy. Optics Express. 2002. V. 10 (9). P. 405–412. https://doi.org/10.1364/OE.10.000405

  57. Roslyakov V.A. Novye tablitsy dlya izmereniya ostroty zreniya [New charts for visual acuity assessment]. Rossiiskii oftal’mologicheskii zhurnal. 2001. (1). P. 36–38 (in Russian).

  58. Rosser D.A., Cousens S., Murdoch I.E., Fitzke F.W., Laidlaw D.A. How sensitive to clinical change are ETDRS logMAR visual acuity measurements? Investigative Ophthalmology & Visual Science. 2003. V. 44 (8). P. 3278–3281. https://doi.org/10.1167/iovs.02-1100

  59. Rozhkova G.I. LogMAR dlya ostroty zreniya khuzhe, chem loshadinaya sila dlya moshchnosti elektricheskoi lampochki [LogMAR is worse for visual acuity than horsepower for electric lamp]. Sensornye sistemy [Sensory Systems]. 2017. V. 31 (1). P. 31–43 (in Russian).

  60. Rozhkova G.I. Est’ li real’nye osnovanija sčitat’ tablicy ETDRS “zolotym standartom” dlja izmerenij ostroty zrenija? [Are there true reasons to consider ETDRS charts as a “golden standard” for measuring visual acuity?] Russian Military Medical Academy Reports. 2018. V. 37 (2). P. 120–123 (in Russian).

  61. Rozhkova G.I., Malykh T.B. Sovremennye aspekty standartizacii vizometrii [Current issues of standardization in visometry]. Aviakosmicheskaya i ekologicheskaya meditsina. 2017. V. 51 (6). P. 5–16 (in Russian). https://doi.org/10.21687/0233-528X-2017-51-6-5-16

  62. Rozhkova G.I., Matveev S.G. Zrenie detei: problemy otsenki i funktsional’noi korrektsii [Vision in children: Problems of the assessment and functional correction]. Moscow: Nauka, 2007. 315 p. (in Russian).

  63. Rozhkova G.I., Tokareva V.S., Vaschenko D.I., Vasiljeva N.N. Vozrastnaya dinamika ostroty zreniya u shkol’nikov. I. Binokulyarnaya ostrota zreniya dlya dali [Age dynamics of visual acuity in schoolchildren. I. Binocular visual acuity for far distance]. Sensornye sistemy [Sensory Systems]. 2001. V. 15 (1). P. 47–52 (in Russian).

  64. Rozhkova G.I., Tokareva V.S., Nikolaev D.P., Ognivov V.V. Osnovnye tipy zavisimosti ostroty zreniya ot rasstoyaniya u cheloveka v raznom vozraste po rezul’tatam diskriminantnogo analiza [Main types of dependence of visual acuity on the distance in individuals of different age based on discriminant analysis results]. Sensornye sistemy [Sensory Systems]. 2004. V. 18 (4). P. 330–338 (in Russian).

  65. Rozhkova G.I., Podugolnikova T.A., Vasiljeva N.N. Visual acuity in 5–7-year-old children: Individual variability and dependence on observation distance. Ophthalmic and Physiological Optics. 2005. V. 26. P. 66–80. https://doi.org/10.1111/j.1475-1313.2004.00263.x

  66. Shamshinova A.M., Volkov V.V. Funktsional’nye metody issledovaniya v oftal’mologii [Functional methods of investigation in ophthalmology]. Moscow: Nauka. 1999. 416 p. (in Russian).

  67. Siderov J., Tiu A.L. Variability of measurements of visual acuity in a large eye clinic. Acta Ophthalmologica Scandinavica. 1999. V. 77. P. 673–676. https://doi.org/10.1034/j.1600-0420.1999.770613.x

  68. Sloan L.L. Needs for precise measures of acuity: Equipment to meet these needs. Archives of Ophthalmology. 1980. V. 98 (2). P. 286–290. https://doi.org/10.1001/archopht.1980.01020030282008

  69. Snell A.C., Sterling S. The percentage evaluation of macular vision. Transactions of the American Ophthalmological Society. 1925. V. 23. P. 204–227.

  70. Snellen H., Graham C.H. Probebuchstaben zur Bestimmung der Sehschärfe [Test letters for determining visual acuity]. Utrecht: Van de Weijer. 1862. 19 p. (in German).

  71. Snyder A.W., Miller W.H. Photoreceptor diameter and spacing for highest resolving power. Journal of the Optical Society of America. 1977. V. 67 (5). P. 696–698. https://doi.org/10.1364/JOSA.67.000696

  72. Somov E.E. Metody oftal’moergonimiki [Methods of ophthalmoergonomics]. Leningrad: Nauka, 1989. 157 p. (in Russian).

  73. Stiers P., Vanderkelen R., Vandenbussche E. Optotype and grating visual acuity in preschool children. Investigative Ophthalmology & Visual Science. 2003. V. 44 (9). P. 4123–4130. https://doi.org/10.1167/iovs.02-0739

  74. Stiers P., Vanderkelen R., Vandenbussche E. Optotype and grating visual acuity in patients with ocular and cerebral visual impairment. Investigative Ophthalmology & Visual Science. 2004. V. 45 (12). P. 4333–4339. https://doi.org/10.1167/iovs.03-0822

  75. Stevens S.S. On the theory of scales of measurement. Science. 1946. V. 103 (2684). P. 677–680. https://doi.org/10.1126/science.103.2684.677

  76. Strasburger H., Bach M., Heinrich S. P. Blur unblurred – A mini tutorial. i-Perception. 2018. V. 9 (2). P. 1–15. https://doi.org/10.1177/2041669518765850

  77. Stulova A.N., Semenova N.S., Akopyan V.S. Otsenka ostroty zreniya: vzglyad v proshloe i sovremennye tendentsii [Visual acuity assessment: historical overview and current trends]. Vestnik oftal’mologii. 2019. V. 135 (6). P. 141–146 (in Russian). https://doi.org/10.17116/oftalma2019135061141

  78. Sturm V., Cassel D., Eizenman M. Objective estimation of visual acuity with preferential looking. Investigative Ophthalmology & Visual Science. 2011. V. 52 (2). P. 708–713. https://doi.org/10.1167/iovs.09-4911

  79. Taylor M.M., Creelman C.D. PEST: Efficient Estimates on Probability Functions. The Journal of the Acoustical Society of America. 1967. V. 41 (4A). P. 782–787. https://doi.org/10.1121/1.1910407

  80. Tehrani N.M., Riazi-Esfahani H., Jafarzadehpur E., Mirzajani A., Talebi H., Amini A., Mazloumi M., Roohipoor R., Riazi-Esfahani M. Multifocal electroretinogram in diabetic macular edema; correlation with visual acuity and optical coherence tomography. Journal of Ophthalmic & Vision Research. 2015. V. 10 (2). P. 165–171. https://doi.org/10.4103/2008-322X.163773

  81. Teller D. The forced-choice preferential looking procedure: A psychophysical technique for use in human infants. Infant Behavior & Development. 1979. V. 2. P. 135–153. https://doi.org/10.1016/S0163-6383(79)80016-8

  82. Teller D. The first glances: The vision of infants. Investigative Ophthalmology & Visual Science. 1997. V. 38 (11). P. 2183–2203.

  83. Terekhin A.P., Gracheva M.A., Rozhkova G.I., Lebedev D.S. Svidetel’stvo 2015616714 Rossiiskaya Federatsiya. [Certificate of the state registration of a computer program. Interactive program for visual acuity assessment based on the accurate threshold measurement using three optotypes “Tip-Top”]; the applicant and copyright holder: A.A. Kharkevich FGBU IPPI RAN (RU). No. 2014619697; submitted: 26.09.2014; published: 19.06.2015 (in Russian). iitp.ru/ru/patents/1293.htm

  84. Thorn F., Schwartz F. Effects of dioptric blur on Snellen and grating acuity. Optometry and Vision Science. 1990. V. 67 (1). P. 3–7. https://doi.org/10.1097/00006324-199001000-00002

  85. Tsou B.C., Bressler N.M. Visual acuity reporting in clinical research publications. JAMA Ophthalmology. 2017. V. 135 (6). P. 651–653. https://doi.org/10.1001/jamaophthalmol.2017.0932

  86. Vital-Durand F., Ayzac L., Pinzaru G. Acuity cards and the determination of risk factors in 6-8 months infants. In F. Vital-Durand, J. Atkinson, O.J. Braddick (Eds.), Infant vision. New York: Oxford University Press. 1996. P. 185–200.

  87. Watson A.B., Pelli D.G. QUEST: A general multidimensional Bayesian adaptive psychometric method. Perception & Psychophysics. 1983. V. 33 (2). P. 113–120. https://doi.org/10.3758/BF03202828

  88. Wertheim T. Über die indirekte Sehschärfe [On indirect visual acuity]. Zeitschrift für Psychologie & Physiologie der Sinnesorgane. 1894. V. 7. P. 172–187 (in German).

  89. Wesemann W. Die Grenzen der Sehschärfe, Teil 6: Welche Sehschärfe erreicht der Mensch? [The limits of visual acuity, Part 6: What is the best visual acuity possible in a human being?]. Optometrie. 2003. (2). P. 42–47 (in German).

  90. Westheimer G. Image quality in the human eye. Optica Acta. 1970. V. 17 (9). P. 641–658. https://doi.org/10.1080/713818355

  91. Westheimer G. Visual acuity and hyperacuity. Investigative Ophthalmology & Visual Science. 1975. V. 14 (8). P. 570–572.

  92. Westheimer G. Scaling of visual acuity measurements. Archives of Ophthalmology. 1979. V. 97 (2). P. 327–330. https://doi.org/10.1001/archopht.1979.01020010173020

  93. Westheimer G. Updating the classical approach to visual acuity. Clinical and Experimental Optometry. 2001. V. 84 (5). P. 258–263. https://doi.org/10.1111/j.1444-0938.2001.tb05035.x

  94. Westheimer G. Visual acuity and hyperacuity. In M. Bass, C. DeCusalis, J.M. Enoch, V. Lakshminarayanan, G. Li, C.A. MacDonald, V.N. Mahajan, E. Van Stryland (Eds.), Handbook of optics, 3rd ed. Vision and vision optics. 2010. V. 3. P. 41–417. New York: McGraw-Hill.

  95. Williams M.A., Moutray T.N., Jackson A.J. Uniformity of visual acuity measures in published studies. Investigative Ophthalmology & Visual Science. 2008. V. 49 (10). P. 4321–4327. https://doi.org/10.1167/iovs.07-0511

  96. Wolin L.R., Dillman A. Objective measurement of visual acuity: Using optokinetic nystagmus and electro-oculography. Archives of Ophthalmology. 1964. V. 71 (6). P. 822–826. https://doi.org/10.1001/archopht.1964.00970010838008

  97. Woodhouse J.M., Morjaria S.A., Adler P.M. Acuity measurements in adult subjects using a preferential looking test. Ophthalmic and Physiological Optics. 2007. V. 27 (1). P. 54–59. https://doi.org/10.1111/j.1475-1313.2006.00454.x

  98. Zheng X., Xu G., Zhang K., Liang R., Yan W., Tian P., Jia Y., Zhang S., Du C. Assessment of human visual acuity using visual evoked potential: A review. Sensors (Switzerland). 2020. V. 20 (19). P. 1–26. https://doi.org/10.3390/s20195542

There are no supplementary materials.