
Evidence-based diagnosis: a handbook of clinical prediction rules. John Attia, Senior Lecturer, Clinical Epidemiology, Centre for Clinical Epidemiology and Biostatistics, University of Newcastle, Newcastle, New South Wales, Australia. Australian Prescriber, date published: 1 October 2003.

The visual system operates over a luminance range of about 1:100,000,000,000,000, that is, 14 log units. At the lower end of this range the visual system trades color perception and good visual acuity for very high sensitivity to low light levels. Photopic (cone) threshold is almost 4 log units above rod threshold, and approximately the next 2 log units are called the mesopic range; it is here that both the rods and cones contribute to vision.
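As a quick arithmetic check on the figures above, the 1:100,000,000,000,000 ratio corresponds to 14 log units. A minimal Python sketch; the thresholds are expressed relative to rod threshold and are illustrative, keyed only to the approximate numbers quoted in the text:

```python
import math

# The ~14 log-unit operating range of human vision (values illustrative).
range_ratio = 100_000_000_000_000       # a 1 : 10^14 luminance range
log_units = math.log10(range_ratio)     # -> 14.0 log units

cone_threshold = 4.0                    # ~4 log units above rod threshold
mesopic_top = cone_threshold + 2.0      # mesopic band spans roughly the next 2 log units

print(f"total range: {log_units:.0f} log units")
print(f"mesopic band: ~{cone_threshold:.0f} to ~{mesopic_top:.0f} log units above rod threshold")
```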

The reader will notice that just inside the photopic range, rod saturation begins. Rod saturation means that the rods' output no longer increases as luminance increases; they are already responding as vigorously as they can. Color vision begins in the mesopic range, once the cones are stimulated, and both color vision and visual acuity reach their best in the photopic region. However, as luminance continues to increase to very high levels, visual performance deteriorates.

When the light energy is high enough it can cause retinal damage. From this description it would appear that the sensitivity differences between rods and cones explain the entire 14 log unit range of visual sensitivity. The time course of receptor sensitivity increasing in darkness and the role of the photopigments are understood, but there is more to the story. Although we do not yet fully understand the whole story, more of it can be found in "More on sensitivity".

Such information can be used to develop X-factors in price cap regulation, rewarding (or punishing) companies. Alternatively, if the benchmarking team is the regulator, it may want to publish the rankings or efficiency scores to inform the public, putting pressure on managers of poorly performing utilities to improve their firms' performance. In both cases, the accuracy and robustness of the inefficiency estimates are very important, because they may have significant financial or social impacts.
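As a sketch of how such an X-factor enters a price cap, here is a minimal CPI-X (or RPI-X) calculation; the function name and all numbers are hypothetical and not taken from any regulator's formula:

```python
def capped_price(prev_price: float, inflation: float, x_factor: float) -> float:
    """CPI-X price cap: the allowed price grows with inflation minus the
    efficiency offset X derived from benchmarking."""
    return prev_price * (1 + inflation - x_factor)

# Hypothetical numbers: 3% inflation, 2% required efficiency gain,
# so the allowed price rises by only 1%.
p1 = capped_price(100.0, 0.03, 0.02)
print(round(p1, 2))  # 101.0
```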

In particular, if the estimated inefficiency scores or rankings are sensitive to the benchmarking method, a more detailed analysis is required to justify the adopted model.

Tests for mutual consistency are becoming standard. Following the work of others, we suggest three levels of sensitivity tests; issues include model specification (e.g., cost versus production models). To check the robustness of performance rankings, researchers have begun to compare results from different methodologies: using correlation matrices, or verifying whether different models identify the same sets of utilities as the most efficient and least efficient firms.
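A minimal sketch of such a consistency check, assuming hypothetical efficiency scores from two unnamed methods (all utility names and values are invented): it computes the Spearman rank correlation between the two sets of scores and checks whether both methods flag the same most-efficient firms.

```python
# Hypothetical efficiency scores for six utilities from two methods
# (e.g., a frontier model and a regression-based model); values are made up.
scores_a = {"U1": 0.92, "U2": 0.81, "U3": 0.77, "U4": 0.95, "U5": 0.60, "U6": 0.70}
scores_b = {"U1": 0.88, "U2": 0.79, "U3": 0.80, "U4": 0.97, "U5": 0.55, "U6": 0.68}

def ranks(scores):
    """Rank utilities from most to least efficient (1 = best)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {u: r for r, u in enumerate(ordered, start=1)}

def spearman(scores_x, scores_y):
    """Spearman rank correlation between two score sets (no ties assumed)."""
    rx, ry = ranks(scores_x), ranks(scores_y)
    n = len(rx)
    d2 = sum((rx[u] - ry[u]) ** 2 for u in rx)
    return 1 - 6 * d2 / (n * (n**2 - 1))

def top_k_overlap(scores_x, scores_y, k=2):
    """Which firms do both methods place among the k most efficient?"""
    top = lambda s: set(sorted(s, key=s.get, reverse=True)[:k])
    return top(scores_x) & top(scores_y)

print(spearman(scores_a, scores_b))   # ~ 0.943: the two methods rank firms similarly
print(top_k_overlap(scores_a, scores_b))
```

A high rank correlation together with agreement on the extreme (best and worst) firms is the kind of evidence that would let the benchmarking team treat the scores as robust to the choice of method.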

Clearly, if efficiency scores are to be of any use for managerial incentives or as elements in regulatory mechanisms, stakeholders need to be confident that the scores reflect reality, and are not just artifacts of model specification, sample selection, treatment of outliers, or other steps in the analytic process.

Thus, benchmarking teams are performing sensitivity tests. After the three levels of tests, the benchmarking team should have a good sense of the consistency of different methods. If the results pass the sensitivity tests, the benchmarking team can start to analyze scores and rankings and explore the potential determinants of inefficiencies across firms and over time.

The utilities can be divided into groups by various factors, such as region, population density, regulatory environment, ownership structure, and vintage, and their efficiency scores compared across those groups.
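A minimal sketch of this kind of group comparison, using invented region labels and scores, computing the mean efficiency per group:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (region, efficiency score) pairs for a set of utilities.
utilities = [
    ("north", 0.82), ("north", 0.76), ("south", 0.91),
    ("south", 0.88), ("west", 0.64), ("west", 0.71),
]

# Collect scores by group, then average within each group.
by_group = defaultdict(list)
for region, score in utilities:
    by_group[region].append(score)

group_means = {g: round(mean(s), 3) for g, s in by_group.items()}
print(group_means)  # {'north': 0.79, 'south': 0.895, 'west': 0.675}
```

The same pattern extends to any of the grouping factors named above (ownership structure, regulatory environment, vintage), or to comparisons over time.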