Vector Measures (Mathematical Surveys, Number 15)



Measurement locates an object on a sub-region of this abstract parameter space, thereby reducing the range of possible states. This reduction of possibilities amounts to the collection of information about the measured object. In recent decades a new wave of philosophical scholarship has emerged that emphasizes the relationships between measurement and theoretical and statistical modeling. The central goal of measurement, according to this view, is to assign values to one or more parameters of interest in the model in a manner that satisfies certain epistemic desiderata, in particular coherence and consistency.

A central motivation for the development of model-based accounts is the attempt to clarify the epistemological principles underlying aspects of measurement practice. For example, metrologists employ a variety of methods for the calibration of measuring instruments, the standardization and tracing of units, and the evaluation of uncertainties (for a discussion of metrology, see the previous section). Traditional philosophical accounts such as mathematical theories of measurement do not elaborate on the assumptions, inference patterns, evidential grounds or success criteria associated with such methods.

As Frigerio et al. note, mathematical theories of measurement tend to equate measurement with the construction of a scale. By contrast, model-based accounts take scale construction to be merely one of several tasks involved in measurement, alongside the definition of measured parameters, instrument design and calibration, object sampling and preparation, error detection and uncertainty evaluation, among others. Other, secondary interactions may also be relevant for the determination of a measurement outcome, such as the interaction between the measuring instrument and the reference standards used for its calibration, and the chain of comparisons that trace the reference standard back to primary measurement standards (Mari). Although measurands need not be quantities, a quantitative measurement scenario will be supposed in what follows.

Two sorts of measurement outputs are distinguished by model-based accounts: instrument indications and measurement outcomes (JCGM). As proponents of model-based accounts stress, inferences from instrument indications to measurement outcomes are nontrivial and depend on a host of theoretical and statistical assumptions about the object being measured, the instrument, the environment and the calibration process. Measurement outcomes are often obtained through statistical analysis of multiple indications, thereby involving assumptions about the shape of the distribution of indications and the randomness of environmental effects (Bogen and Woodward). Measurement outcomes also incorporate corrections for systematic effects, and such corrections are based on theoretical assumptions concerning the workings of the instrument and its interactions with the object and environment.

Systematic corrections involve uncertainties of their own, for example in the determination of the values of constants, and these uncertainties are assessed through secondary experiments involving further theoretical and statistical assumptions. Moreover, the uncertainty associated with a measurement outcome depends on the methods employed for the calibration of the instrument. Calibration involves additional assumptions about the instrument, the calibrating apparatus, the quantity being measured and the properties of measurement standards (Rothbart and Slayden; Franklin; Baird).

Finally, measurement involves background assumptions about the scale type and unit system being used, and these assumptions are often tied to broader theoretical and technological considerations relating to the definition and realization of scales and units. These various theoretical and statistical assumptions form the basis for the construction of one or more models of the measurement process.

Measurement is viewed as a set of procedures whose aim is to coherently assign values to model parameters based on instrument indications. Models are therefore seen as necessary preconditions for the possibility of inferring measurement outcomes from instrument indications, and as crucial for determining the content of measurement outcomes. As proponents of model-based accounts emphasize, the same indications produced by the same measurement process may be used to establish different measurement outcomes, depending on how the measurement process is modeled.

Models are said to provide the necessary context for evaluating various aspects of the goodness of measurement outcomes, including accuracy, precision, error and uncertainty (Boumans; Mari). Model-based accounts diverge from empiricist interpretations of measurement theory in that they do not require relations among measurement outcomes to be isomorphic or homomorphic to observable relations among the items being measured (Mari). Indeed, according to model-based accounts, relations among measured objects need not be observable at all prior to their measurement (Frigerio et al.).

Instead, the key normative requirement of model-based accounts is that values be assigned to model parameters in a coherent manner. The coherence criterion may be viewed as a conjunction of two sub-criteria: (i) coherence of model assumptions with relevant background theories or other substantive presuppositions about the quantity being measured; and (ii) objectivity, i.e., the mutual consistency of measurement outcomes across different measuring instruments, environments and models. The first sub-criterion is meant to ensure that the intended quantity is being measured, while the second sub-criterion is meant to ensure that measurement outcomes can be reasonably attributed to the measured object rather than to some artifact of the measuring instrument, environment or model.

Taken together, these two requirements ensure that measurement outcomes remain valid independently of the specific assumptions involved in their production, and hence that the context-dependence of measurement outcomes does not threaten their general applicability. Besides their applicability to physical measurement, model-based analyses also shed light on measurement in economics. Like physical quantities, values of economic variables often cannot be observed directly and must be inferred from observations based on abstract and idealized models.

The nineteenth-century economist William Jevons, for example, measured changes in the value of gold by postulating certain causal relationships between the value of gold, the supply of gold and the general level of prices (Hoover and Dowell; Morgan). Taken together, these models allowed Jevons to infer the change in the value of gold from data concerning the historical prices of various goods.

The ways in which models function in economic measurement have led some philosophers to view certain economic models as measuring instruments in their own right, analogously to rulers and balances (Boumans; Morgan). Marcel Boumans explains how macroeconomists are able to isolate a variable of interest from external influences by tuning parameters in a model of the macroeconomic system. This technique frees economists from the impossible task of controlling the actual system.

As Boumans argues, macroeconomic models function as measuring instruments insofar as they produce invariant relations between inputs (indications) and outputs (outcomes), and insofar as this invariance can be tested by calibration against known and stable facts. Another area where models play a central role in measurement is psychology.

The measurement of most psychological attributes, such as intelligence, anxiety and depression, does not rely on homomorphic mappings of the sort espoused by the Representational Theory of Measurement (Wilson). Instead, psychologists rely on statistical models connecting latent attributes to observable responses. These models are constructed from substantive and statistical assumptions about the psychological attribute being measured and its relation to each measurement task. For example, Item Response Theory, a popular approach to psychological measurement, employs a variety of models to evaluate the validity of questionnaires.

One of the simplest models used to validate such questionnaires is the Rasch model (Rasch). New questionnaires are calibrated by testing the fit between their indications and the predictions of the Rasch model, and assigning difficulty levels to each item accordingly. The model is then used in conjunction with the questionnaire to infer levels of English language comprehension (outcomes) from raw questionnaire scores (indications) (Wilson; Mari and Wilson). Psychologists are typically interested in the results of a measure not for their own sake, but for the sake of assessing some underlying and latent psychological attribute.
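The logistic form of the dichotomous Rasch model, and the calibration-by-inversion idea described above, can be sketched in a few lines. This is an illustrative sketch only: the function names and the closed-form calibration (from a known ability level and an observed proportion correct) are simplifying assumptions, not the estimation procedure of any actual testing program.

```python
import math

def rasch_prob(ability: float, difficulty: float) -> float:
    """Rasch model: probability that a person of the given ability
    answers an item of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def calibrate_difficulty(ability: float, prop_correct: float) -> float:
    """Invert the model: given the proportion of correct responses from
    persons of known ability, back out the item's difficulty level."""
    # difficulty = ability - logit(observed proportion correct)
    return ability - math.log(prop_correct / (1.0 - prop_correct))
```

A person whose ability equals an item's difficulty answers correctly with probability 0.5; calibration assigns each new item the difficulty that reproduces the observed response rate.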

It is therefore desirable to be able to test whether different measures, such as different questionnaires or multiple controlled experiments, all measure the same latent attribute. A construct is an abstract representation of the latent attribute intended to be measured.

Constructs are denoted by variables in a model that predicts which correlations would be observed among the indications of different measures if they are indeed measures of the same attribute. Several scholars have pointed out similarities between the ways models are used to standardize measurable quantities in the natural and social sciences. Others have raised doubts about the feasibility and desirability of adopting the example of the natural sciences when standardizing constructs in the social sciences.

As Anna Alexandrova points out, ethical considerations bear on questions about construct validity no less than considerations of reproducibility. Such ethical considerations are context sensitive, and can only be applied piecemeal. Examples of Ballung concepts (loose clusters of features lacking a single unifying structure) are race, poverty, social exclusion, and the quality of PhD programs. Such concepts are too multifaceted to be measured on a single metric without loss of meaning, and must be represented either by a matrix of indices or by several different measures, depending on which goals and values are at play (see also Cartwright and Bradburn). In a similar vein, Leah McClimans argues that uniformity is not always an appropriate goal for designing questionnaires, as the open-endedness of questions is often both unavoidable and desirable for obtaining relevant information from subjects.

Rather than emphasizing the mathematical foundations, metaphysics or semantics of measurement, philosophical work in recent years tends to focus on the presuppositions and inferential patterns involved in concrete practices of measurement, and on the historical, social and material dimensions of measuring. In the broadest sense, the epistemology of measurement is the study of the relationships between measurement and knowledge.

Central topics that fall under the purview of the epistemology of measurement include the conditions under which measurement produces knowledge; the content, scope, justification and limits of such knowledge; the reasons why particular methodologies of measurement and standardization succeed or fail in supporting particular knowledge claims; and the relationships between measurement and other knowledge-producing activities such as observation, theorizing, experimentation, modelling and calculation. In pursuing these objectives, philosophers are drawing on the work of historians and sociologists of science, who have been investigating measurement practices for a longer period (Wise and Smith; Latour).

The following subsections survey some of the topics discussed in this burgeoning body of literature. A topic that has attracted considerable philosophical attention in recent years is the selection and improvement of measurement standards. Generally speaking, to standardize a quantity concept is to prescribe a determinate way in which that concept is to be applied to concrete particulars; a measurement standard, correspondingly, is both an abstract specification and a concrete artifact or procedure that embodies it. This duality in meaning reflects the dual nature of standardization, which involves both abstract and concrete aspects.



In Section 4 it was noted that standardization involves choices among nontrivial alternatives, such as the choice among different thermometric fluids or among different ways of marking equal duration. Appealing to theory to decide which standard is more accurate would be circular, since the theory cannot be determinately applied to particulars prior to a choice of measurement standard. Conventionalists responded by treating the choice of standard as a stipulation. A drawback of this solution is that it supposes that choices of measurement standard are arbitrary and static, whereas in actual practice measurement standards tend to be chosen based on empirical considerations and are eventually improved or replaced with standards that are deemed more accurate.

A new strand of writing on the problem of coordination has emerged in recent years, consisting most notably of the works of Hasok Chang and Bas van Fraassen. These works take a historical and coherentist approach to the problem. Rather than attempting to avoid the problem of circularity completely, as their predecessors did, they set out to show that the circularity is not vicious.

Chang argues that constructing a quantity-concept and standardizing its measurement are co-dependent and iterative tasks. The pre-scientific concept of temperature, for example, was associated with crude and ambiguous methods of ordering objects from hot to cold. Thermoscopes, and eventually thermometers, helped modify the original concept and made it more precise.

With each such iteration the quantity concept was re-coordinated to a more stable set of standards, which in turn allowed theoretical predictions to be tested more precisely, facilitating the subsequent development of theory and the construction of more stable standards, and so on. From either vantage point, coordination succeeds because it increases coherence among elements of theory and instrumentation. It is only when one adopts a foundationalist view and attempts to find a starting point for coordination free of presupposition that this historical process erroneously appears to lack epistemic justification. The new literature on coordination shifts the emphasis of the discussion from the definitions of quantity-terms to the realizations of those definitions.

A realization is a procedure or artifact that makes the definition of a unit or quantity concretely available (JCGM). Examples of metrological realizations are the official prototypes of the kilogram and the cesium fountain clocks used to standardize the second. Recent studies suggest that the methods used to design, maintain and compare realizations have a direct bearing on the practical application of concepts of quantity, unit and scale, no less than the definitions of those concepts (Tal forthcoming; Riordan). As already discussed above (Sections 7 and 8), measurement and theory develop in tandem. On the historical side, the development of theory and measurement proceeds through iterative and mutual refinements.

On the conceptual side, the specification of measurement procedures shapes the empirical content of theoretical concepts, while theory provides a systematic interpretation for the indications of measuring instruments. This interdependence of measurement and theory may seem like a threat to the evidential role that measurement is supposed to play in the scientific enterprise.

After all, measurement outcomes are thought to be able to test theoretical hypotheses, and this seems to require some degree of independence of measurement from theory. This threat is especially clear when the theoretical hypothesis being tested is already presupposed as part of the model of the measuring instrument. To cite an example from Franklin et al.: there would seem to be, at first glance, a vicious circularity if one were to use a mercury thermometer to measure the temperature of objects as part of an experiment to test whether or not objects expand as their temperature increases.

Nonetheless, Franklin et al. argue that the circularity can be broken. The mercury thermometer could be calibrated against another thermometer whose principle of operation does not presuppose the law of thermal expansion, such as a constant-volume gas thermometer, thereby establishing the reliability of the mercury thermometer on independent grounds. To put the point more generally, in the context of local hypothesis-testing the threat of circularity can usually be avoided by appealing to other kinds of instruments and other parts of theory.

A different sort of worry about the evidential function of measurement arises on the global scale, when the testing of entire theories is concerned. As Thomas Kuhn argues, scientific theories are usually accepted long before quantitative methods for testing them become available. The reliability of newly introduced measurement methods is typically tested against the predictions of the theory rather than the other way around.

Hence, Kuhn argues, the function of measurement in the physical sciences is not to test the theory but to apply it with increasing scope and precision, and eventually to allow persistent anomalies to surface that would precipitate the next crisis and scientific revolution. Note that Kuhn is not claiming that measurement has no evidential role to play in science. For the logical positivists, who distinguished a theoretical language from an observational one, the theory-ladenness of measurement was correctly perceived as a threat to the possibility of a clear demarcation between the two languages.

Contemporary discussions, by contrast, no longer present theory-ladenness as an epistemological threat but take for granted that some level of theory-ladenness is a prerequisite for measurements to have any evidential power. Without some minimal substantive assumptions about the quantity being measured, such as its amenability to manipulation and its relations to other quantities, it would be impossible to interpret the indications of measuring instruments and hence impossible to ascertain the evidential relevance of those indications.

This point was already made by Pierre Duhem (see also Carrier). Moreover, contemporary authors emphasize that theoretical assumptions play crucial roles in correcting for measurement errors and evaluating measurement uncertainties. Indeed, physical measurement procedures become more accurate when the model underlying them is de-idealized, a process which involves increasing the theoretical richness of the model (Tal). The blurring of the boundary between measurement and theoretical inference is especially clear when one attempts to account for the increasing use of computational methods for performing tasks that were traditionally accomplished by measuring instruments.

As Margaret Morrison and Wendy Parker (forthcoming) argue, there are cases where reliable quantitative information is gathered about a target system with the aid of a computer simulation, in a manner that satisfies some of the central desiderata for measurement, such as being empirically grounded and backward-looking.

Such information does not rely on signals transmitted from the particular object of interest to the instrument, but on the use of theoretical and statistical models to process empirical data about related objects. For example, data assimilation methods are customarily used to estimate past atmospheric temperatures in regions where thermometer readings are not available. These estimations are then used in various ways, including as data for evaluating forward-looking climate models.

Two key aspects of the reliability of measurement outcomes are accuracy and precision. Consider a series of repeated weight measurements performed on a particular object with an equal-arms balance. On the error-based conception, the accuracy of these measurements is the closeness of their results to the true value of the object's weight, while their precision is the closeness of the results to one another (JCGM). Though intuitive, the error-based way of carving the distinction raises an epistemological difficulty.

It is commonly thought that the exact true values of most quantities of interest to science are unknowable, at least when those quantities are measured on continuous scales. If this assumption is granted, the accuracy with which such quantities are measured cannot be known with exactitude, but only estimated by comparing inaccurate measurements to each other. And yet it is unclear why convergence among inaccurate measurements should be taken as an indication of truth.

After all, the measurements could be plagued by a common bias that prevents their individual inaccuracies from cancelling each other out when averaged. In the absence of cognitive access to true values, how is the evaluation of measurement accuracy possible? Answering this question requires distinguishing among different senses of the term "measurement accuracy"; at least five have been identified: metaphysical, epistemic, operational, comparative and pragmatic (Tal). Under the uncertainty-based conception, accuracy is not defined by comparison with the true value. Instead, the accuracy of a measurement outcome is taken to be the closeness of agreement among values reasonably attributed to a quantity given available empirical data and background knowledge.

Thus construed, measurement accuracy can be evaluated by establishing robustness among the consequences of models representing different measurement processes. Under the uncertainty-based conception, imprecision is a special type of inaccuracy. In the example of the equal-arms balance, the imprecision of the weight measurements is the component of inaccuracy arising from uncontrolled variations in the indications of the balance over repeated trials.
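The distinction between imprecision and bias-driven inaccuracy can be made concrete with a small simulation of the balance example. All numbers below (true weight, bias, noise level) are arbitrary illustrative choices, not data from any actual instrument.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

TRUE_WEIGHT = 100.0  # grams; unknowable in practice
BIAS = 0.5           # a common systematic offset shared by every trial
NOISE_SD = 0.05      # spread of uncontrolled variations across trials

# Repeated indications from a balance whose trials share a common bias.
readings = [TRUE_WEIGHT + BIAS + random.gauss(0.0, NOISE_SD) for _ in range(100)]

imprecision = statistics.stdev(readings)              # spread among trials
inaccuracy = statistics.mean(readings) - TRUE_WEIGHT  # offset from true value

# High precision (small spread) coexists with poor accuracy (offset ~ BIAS):
# averaging more readings shrinks the spread but leaves the bias untouched.
```

This is exactly the worry raised above: convergence among the readings (low imprecision) says nothing about the shared bias, which averaging cannot remove.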

Other sources of inaccuracy besides imprecision include imperfect corrections to systematic errors, inaccurately known physical constants, and vague measurand definitions, among others (see Section 7). Paul Teller raises a different objection to the error-based conception of measurement accuracy, targeting its assumption that quantities have definite true values. Teller argues that this assumption is false insofar as it concerns the quantities habitually measured in physics, because any specification of definite values or value ranges for such quantities involves idealization and hence cannot refer to anything in reality.

Removing these idealizations completely would require adding an infinite amount of detail to each specification. As Teller argues, measurement accuracy should itself be understood as a useful idealization, namely as a concept that allows scientists to assess coherence and consistency among measurement outcomes as if the linguistic expression of these outcomes latched onto something in the world.

The author is also indebted to Joel Michell and Oliver Schliemann for useful bibliographical advice, and to John Wiley and Sons Publishers for permission to reproduce an excerpt from Tal.

Measurement in Science
First published Mon Jun 15

Contents: Overview; Quantity and Magnitude: A Brief History; Operationalism and Conventionalism; Realist Accounts of Measurement; Information-Theoretic Accounts of Measurement; Model-Based Accounts of Measurement; The Epistemology of Measurement.

Overview
Modern philosophical discussions about measurement—spanning from the late nineteenth century to the present day—may be divided into several strands of scholarship.

The following is a very rough overview of these perspectives: Mathematical theories of measurement view measurement as the mapping of qualitative empirical relations to relations among numbers or other mathematical entities. Information-theoretic accounts view measurement as the gathering and interpretation of information about a system.

Quantity and Magnitude: A Brief History
Although the philosophy of measurement formed as a distinct area of inquiry only during the second half of the nineteenth century, fundamental concepts of measurement such as magnitude and quantity have been discussed since antiquity.

Bertrand Russell similarly stated that measurement is any method by which a unique and reciprocal correspondence is established between all or some of the magnitudes of a kind and all or some of the numbers, integral, rational or real.

Operationalism and Conventionalism
Above we saw that mathematical theories of measurement are primarily concerned with the mathematical properties of measurement scales and the conditions of their application.

The strongest expression of operationalism appears in the early work of Percy Bridgman, who argued that we mean by any concept nothing more than a set of operations; the concept is synonymous with the corresponding set of operations.

Realist Accounts of Measurement
Realists about measurement maintain that measurement is best understood as the empirical estimation of an objective property or relation.

Information-Theoretic Accounts of Measurement
Information-theoretic accounts of measurement are based on an analogy between measuring systems and communication systems.

Model-Based Accounts of Measurement
In recent decades a new wave of philosophical scholarship has emerged that emphasizes the relationships between measurement and theoretical and statistical modeling. Indications may be represented by numbers, but such numbers describe states of the instrument and should not be confused with measurement outcomes, which concern states of the object being measured.

As Luca Mari puts it, any measurement result reports information that is meaningful only in the context of a metrological model, such a model being required to include a specification for all the entities that explicitly or implicitly appear in the expression of the measurement result.

Collections of fed female G. have been made at Rekomitjie over several decades. Because these collections were made using a single sampling system, run at approximately the same time each day, the change in the numbers collected offers an indication of the extent of the decline in tsetse abundance over recent decades. Catches were made for 3 hours in the afternoon during the period of peak tsetse activity [42]. Each collection team comprised two hand net catchers and an ox, operating within 2 km of the research station.

Each team operated at least m from other teams, in areas chosen to maximise catches in accord with seasonal changes in the distribution of tsetse between vegetation types [ 43 ]. In the s, it was usual for each team to take enough tubes to collect a maximum of about 50 flies each day. This quota was set in consideration of the minimum expected catch at that time and has been maintained at this level ever since, even though it has proved impossible to meet the quota in the last two decades. Daily records are available from for the number of catching teams employed, and for the catch of each team.

The monthly averages of the number of flies caught per team per day are taken as indices of fly abundance. Prior to , tsetse catches regularly reached the upper limit of 50 flies; thereafter, this hardly ever occurred. Fitting the model only to catch data for the period after ensured that there was no truncation of data used in the fitting procedure. Tsetse females give birth, approximately every 7 to 12 days [ 28 ], to a single larva, which immediately burrows into the ground and pupates, emerging 30 to 50 days later as a full-sized adult [ 44 ].

Female adult flies can live for up to days [ 45 ]. As quantified by researchers in the laboratory and field, larviposition and pupal emergence rates are dependent on temperature, as are the mortality rates of both pupae and adults [ 26 — 28 , 30 ]. Preliminary analyses suggested that the inclusion of temperature-dependent mortality rates was sufficient to model the observed decline. In response to suggestions from peer reviewers, we also reanalysed the data using models that included functions for temperature-dependent larviposition and pupal emergence rates.

Hargrove [29, 30], using data from mark-recapture experiments on Antelope Island, Lake Kariba, Zimbabwe, showed that for G. adult mortality increases with temperature, as captured in Eq 1. In Eq 1, T1 is not a parameter but a constant, set to 25 to ensure that a2 is in a convenient range. Within a moderate range of temperatures, mortality is comparatively low; as temperatures depart from this range, the mortality rises sharply, resulting in a U-shaped curve.
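The text describes, but does not reproduce, the exact form of Eq 1. The sketch below uses an assumed two-exponential form purely to illustrate how a smooth U-shaped temperature-dependent mortality with a fixed reference constant T1 = 25 can be coded; the functional form and all parameter values are placeholders, not the published fit.

```python
import math

T1 = 25.0  # reference constant (deg C), not a fitted parameter

def adult_mortality(temp_c, a1=0.03, a2=0.25, a3=0.3):
    """Illustrative U-shaped daily mortality rate (placeholder form).

    a1 sets the baseline mortality near T1; the two exponential terms
    make mortality rise sharply as temperature departs from the moderate
    range in either direction (heat stress above T1, cold stress below).
    """
    dev = temp_c - T1
    return a1 * (math.exp(a2 * dev) + math.exp(-a3 * dev)) / 2.0
```

With these placeholder values, mortality is lowest near 25 and climbs steeply towards both temperature extremes, matching the qualitative shape described above.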

The effects of temperature on pupal development and mortality rates in the laboratory are supported by work showing similar effects in the field [44, 47–49]. Lastly, ovarian dissection data from marked and released G. provide estimates of larviposition rates. The time taken for a female tsetse to produce her first larva is longer than for subsequent larvae. As initial starting estimates for the parameters in the model described in Eqs 5–7, we used the published [26, 28, 30] fitted values for larviposition and pupal emergence rates, as described above (Eqs 3 and 4).

For adult and pupal mortality, we fitted the functions in Eqs 1 and 2 to published data—described above and elsewhere [27, 29, 30]—using nonlinear least squares regression. It was not necessary to vary all parameters in the ODE model to get a reasonable fit to the bioassay catch data; varying the adult mortality parameter a1 proved sufficient at first. For model fitting, therefore, we first allowed only this parameter to vary while keeping all other parameter values fixed.

For pupal mortality, it was only necessary to fit b1, b3, and b5 in the ODE model. As a preliminary to the data fitting procedure, the model was run for 5 years prior to the start of the first month of available temperature data in October, using the average monthly temperatures from October to September, because we did not know initial starting values for numbers of pupae relative to the numbers of fed female adults caught. We fitted the model to the data using maximum likelihood, assuming the data were Poisson distributed.
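Under the Poisson assumption, maximum likelihood fitting amounts to minimising a negative log-likelihood comparing observed catches to model-predicted mean catches. The sketch below shows that objective together with a toy one-parameter grid search; the scaling parameter and function names are illustrative stand-ins, not the paper's actual parameterisation or optimiser.

```python
import math

def poisson_nll(observed_counts, predicted_means):
    """Negative log-likelihood of observed catch counts given the
    model-predicted mean catches, assuming Poisson-distributed data.
    The log(k!) term is included via lgamma for completeness."""
    nll = 0.0
    for k, lam in zip(observed_counts, predicted_means):
        nll -= k * math.log(lam) - lam - math.lgamma(k + 1)
    return nll

def fit_scale_by_grid(observed_counts, base_prediction, grid):
    """Toy one-parameter fit: scale the model's predicted means by s
    and pick the s on the grid minimising the Poisson NLL."""
    best_s, best_nll = None, float("inf")
    for s in grid:
        nll = poisson_nll(observed_counts, [s * p for p in base_prediction])
        if nll < best_nll:
            best_s, best_nll = s, nll
    return best_s
```

The same objective can be handed to any general-purpose optimiser; the grid search here simply makes the likelihood surface easy to inspect.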

For each set of parameters, we first estimated parameter values using the stochastic simulated annealing algorithm [50] and then used updated parameter estimates in a final fit using Nelder-Mead [51]. A penalty was incurred for implausibly large estimates of a1. A penalty was also incurred for model fits in which, on average, there were fewer than 50 tsetse between January and December because, during that period, sampling teams consistently obtained their quota of 50 flies in an afternoon.
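The two-stage scheme (a stochastic global search refined by a local search) and the use of penalties can be illustrated with a deliberately small pure-Python analogue. The objective, the penalty cap, and the one-dimensional shrinking-step refinement standing in for Nelder-Mead are all hypothetical placeholders; the paper's actual penalty thresholds are not reproduced above.

```python
import math
import random

random.seed(0)  # reproducible illustration

def objective(x):
    """Stand-in for the model's negative log-likelihood (minimum at x = 1.3)."""
    return (x - 1.3) ** 2

def penalised(x, cap=2.0):
    """Add a large penalty when the parameter exceeds a cap, mirroring the
    paper's penalties on implausible estimates. The cap is a placeholder."""
    return objective(x) + (1e6 if x > cap else 0.0)

def anneal_then_refine(f, lo=0.0, hi=5.0):
    """Stage 1: crude simulated annealing over [lo, hi].
    Stage 2: shrinking-step local refinement (a 1-D stand-in for Nelder-Mead)."""
    x = random.uniform(lo, hi)
    temp = 1.0
    for _ in range(2000):
        cand = min(hi, max(lo, x + random.gauss(0.0, 0.5)))
        # accept downhill moves always; uphill moves with Boltzmann probability
        if f(cand) < f(x) or random.random() < math.exp(-(f(cand) - f(x)) / temp):
            x = cand
        temp *= 0.995  # cooling schedule
    step = 0.1
    while step > 1e-6:
        moved = False
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x, moved = cand, True
        if not moved:
            step /= 2.0
    return x
```

In practice one would use library implementations of both stages; this sketch only shows how a penalised objective threads through a global-then-local fit.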

A peer reviewer noted that these confidence intervals allow for no uncertainty in the fixed parameters. Although there is considerable seasonal and interdecadal variation in temperature (Fig 1A), our analyses indicate an increase of c. This increase is not even across the year, being greatest in November, when temperatures are already highest.

During this month, mean daily temperatures increased by c. (estimated using time-series linear regression). We used four temperature-dependent functions, with starting parameters estimated from fits to published data for pupal and adult mortality, larviposition, and pupal emergence rates (Fig 3, Table 1), in an ODE model of tsetse population dynamics (Eqs 5–7).
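The warming trend itself comes from ordinary time-series linear regression, which can be sketched as follows; the monthly series below is synthetic, with a built-in 0.03 degrees C per year trend, since the Rekomitjie records are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1960, 2018)
# Synthetic mean November temperature: baseline + linear warming + noise.
temps = 30.0 + 0.03 * (years - 1960) + rng.normal(0, 0.5, years.size)

# Time-series linear regression: slope is the warming rate in degrees C per year.
slope, intercept = np.polyfit(years, temps, 1)
warming_per_decade = 10 * slope
```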


See Table 1 for fitted parameter estimates of the mortality functions. The observed decline in catches of fed female G. suggests, if these catches scale roughly with the population density of tsetse around Rekomitjie throughout the study period, a steady decline in numbers over the last 27 years. Data, on a log base 2 scale, from to , are average numbers caught by hand net, per afternoon, using an ox bait; fitted parameters are provided in Table 1. To simulate this decline, the model was run using mean monthly temperatures between October and June. Fixed and fitted parameter estimates for each function are summarised in Table 1.
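The structure of such an ODE model can be sketched as a two-compartment system of pupae P and adult females A with temperature-dependent rates, including the 5-year spin-up described in the Methods. All functional forms and coefficients below are illustrative placeholders, not the fitted values of Eqs 5–7 or Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

def monthly_temp(t_days):
    # Illustrative seasonal cycle (degrees C); the study used observed monthly means.
    return 26.0 + 5.0 * np.sin(2 * np.pi * t_days / 365.25)

# Illustrative temperature-dependent rates (per day).
def pupal_mortality(T):
    return 0.002 * np.exp(0.1 * max(T - 25.0, 0.0))

def adult_mortality(T):
    return 0.04 * np.exp(0.1 * max(T - 25.0, 0.0))

def emergence_rate(T):
    # Pupal duration shrinks with temperature (roughly 30-50 days).
    return 1.0 / (25.0 + 25.0 * np.exp(-0.15 * (T - 20.0)))

def larviposition(T):
    # One larva every ~9 days, half female; temperature dependence omitted here.
    return 0.5 / 9.0

def rhs(t, y):
    P, A = y
    T = monthly_temp(t)
    dP = larviposition(T) * A - (emergence_rate(T) + pupal_mortality(T)) * P
    dA = emergence_rate(T) * P - adult_mortality(T) * A
    return [dP, dA]

# 5-year spin-up from an arbitrary initial state, then read off the population.
sol = solve_ivp(rhs, (0, 5 * 365), [100.0, 100.0])
P_end, A_end = sol.y[:, -1]
```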

From until the mids, fitted model numbers of tsetse fluctuated between about 50 and and then declined from c. In addition, the fitted parameters a1 and a2 for adult mortality as a function of temperature (Eq 1) were similar to estimates from fits to the published mark-recapture data (S2 Fig, Table 1). Prior to the s, mortality was higher than this in October–December in 14 months over 30 years.

In the 27 years since , the pupal mortality rate was higher than this in 31 months during the hot-dry season. Adult and pupal mortalities in the fitted model were both above these levels in October and November more frequently in later years. This is consistent with the idea that increasing temperatures during the hot-dry season are primarily responsible for the observed decline in numbers of tsetse at Rekomitjie since the s, and particularly in recent years. Indeed, increases in mean daily temperatures have been most pronounced in November, when temperatures are already highest (Fig 2).

The results of the preliminary analyses using constant values for the larviposition and pupal emergence rate parameters are presented in S1 Text. The model with no temperature-dependent parameters did not provide a good fit to the data and had a higher AIC than the model in which adult temperature-dependent mortality was included (S1 Text). While there are statistical models relating climate change to changes in vector populations [8, 18–20], mechanistic models that relate climate change to data on the population dynamics of an important vector of human and animal pathogens are much less common.
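The AIC comparison behind this statement works as follows: with k fitted parameters and minimised Poisson negative log-likelihood NLL, AIC = 2k + 2·NLL, and the model with the lower AIC is preferred. The counts and predictions below are invented simply to show the mechanics.

```python
import numpy as np

def poisson_nll(counts, lam):
    # Poisson negative log-likelihood (dropping the constant log(y!) term).
    lam = np.asarray(lam, dtype=float)
    return float(np.sum(lam - counts * np.log(lam)))

def aic(nll, k):
    return 2 * k + 2 * nll  # AIC = 2k - 2 ln L = 2k + 2 NLL

counts = np.array([60, 55, 40, 20, 10, 5])          # declining catches
flat = np.full(6, counts.mean())                     # no temperature effect, k = 1
trend = np.array([62, 50, 38, 22, 11, 6])            # temperature-dependent model, k = 3

aic_flat = aic(poisson_nll(counts, flat), k=1)
aic_temp = aic(poisson_nll(counts, trend), k=3)      # lower AIC: preferred model
```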



Our mechanistic model, incorporating the effects of temperature on mortality, larviposition, and emergence rates, was sufficient to explain the observed decline in numbers of tsetse. Hargrove and Williams [52] similarly found that temperature was an indispensable factor in modelling tsetse population growth on Antelope Island, Lake Kariba, Zimbabwe. They had access to a wide range of meteorological variables but found that, once temperature had been included in their model, the addition of further candidate variables (including rainfall, humidity, saturation deficit, and cloud cover) did not improve the fit to the data.

    At Rekomitjie, over the whole study period, we had data only on temperature and rainfall. Nonetheless, the Antelope Island study suggests that we were unlikely to be missing other important climatological variables. Our results provide evidence that locations such as the Zambezi Valley in Zimbabwe may soon be too hot to support populations of G. Similarly, G.

    There are several biologically feasible reasons to expect increased tsetse mortality at high temperatures. Tsetse are poikilotherms, and their metabolic rate increases with temperature: adult tsetse therefore utilise their blood meal more rapidly at elevated temperature and must feed more frequently.

But feeding is a high-risk activity, and increased feeding rates will likely result in increased mortality [54, 55]. At very high temperatures, tsetse also reduce their activity and seek out cooler resting sites: this behaviour reduces their metabolic rate but also reduces their chances of feeding.


In consequence, female tsetse have reduced fat levels and produce progressively smaller pupae as temperatures increase [58, 59]. This has a knock-on effect on pupal mortality, because smaller pupae can suffer very high mortality at elevated temperatures [47]. As temperatures increase, rates of pupal fat consumption increase linearly with temperature, whereas pupal duration decreases exponentially. Reduced fat levels at adult emergence prejudice the chances of a teneral fly finding its first meal before its fat reserves are exhausted and it starves, or suffers excess mortality as a consequence of taking additional risks in attempting to feed [60].
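The fat-budget argument can be made concrete: fat used during the pupal period is the consumption rate times the pupal duration, so a linearly rising rate can outpace an exponentially shrinking duration over the relevant temperature range. All coefficients below are invented for illustration only.

```python
import numpy as np

def consumption_rate(T):
    # Fat use per day, rising linearly with temperature (toy coefficients).
    return 0.01 + 0.004 * (T - 24.0)

def pupal_duration(T):
    # Pupal period in days, falling exponentially with temperature (toy coefficients).
    return 50.0 * np.exp(-0.08 * (T - 24.0))

T = np.linspace(24, 34, 11)
fat_used = consumption_rate(T) * pupal_duration(T)
fat_at_emergence = 1.0 - fat_used  # starting from a notional 1 unit of fat
```

With these toy numbers, total fat use rises monotonically across 24 to 34 degrees C, so reserves at emergence fall, consistent with the observed reduction in fat levels.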

    The rate at which fat is used by teneral flies increases with temperature, exacerbating the above problems for the fly. Few studies of vector-borne disease have been able to show a clear link between climate change and a change in either vector or pathogen population dynamics and subsequent disease burden [ 8 ]. Studies are frequently confounded by other environmental, ecological, and sociological factors, or the necessary empirical data are too difficult to collect. Although we acknowledge that this study presents only a first step in linking the effects of climate change to changes in the risk of acquiring a trypanosome infection, it suggests that climate change is already having effects on the density of disease vectors.

    If these effects extend across the Zambezi Valley, then transmission of trypanosomes is likely to have been greatly reduced in this region. Conversely, rising temperatures may have made some higher, and hence cooler, parts of Zimbabwe more suitable for tsetse and led to the emergence of new disease foci.

There is a pressing need to quantify the magnitude and spatial extent of these effects on tsetse and trypanosomiasis. While there are no data on the annual incidence of HAT from Zimbabwe to compare with the long-term data on tsetse populations, in the last 20 years cases have been reported from the vicinity of Makuti [61], at a relatively high altitude of c.

    We are unaware of trypanosomiasis being reported from this area prior to the s. This circumstantial evidence of the emergence of HAT in cooler regions of Zimbabwe raises the possibility of the resurgence of tsetse populations, and then of T. Because tsetse dispersal is thought to arise through random movement [ 62 ], such a resurgence would come about where diffusion took tsetse to areas that are now climatically more suitable than they were in the recent past.

    Tsetse populations could only become established if, in addition, there were sufficient numbers of host animals and suitable vegetation to support tsetse. Hwange National Park in Zimbabwe and Kruger National Park in South Africa are examples of such areas, where suitable hosts and habitat for tsetse are abundant, and where tsetse did occur in the 19th century. HAT is one of several vector-borne diseases for which detecting human cases is difficult even in countries with relatively strong health systems.

    In Uganda, for instance, it is estimated that for every reported case of Rhodesian HAT, another 12 go undetected [ 63 ]. In these circumstances, prospects for gathering data to detect or predict the impact of climate change on HAT seem poor. Models to predict where vectors are abundant, supported by xenomonitoring of tsetse populations for pathogenic trypanosomes [ 64 ], seem a more likely means of assessing the impact of climate change. In general, if the temperature increase seen at Rekomitjie is reflected more broadly in the region, large areas that have hitherto been too cold for tsetse will become climatically more favourable and could support the flies if adequate hosts were available there [ 65 ].

In any region where the climate becomes more suitable for tsetse, there must, however, be adequate vegetation cover to provide shelter for the flies. The clearing of land for agricultural development, which is occurring at an accelerating pace in many parts of Africa [66], will reduce vegetation cover and the densities of wild hosts, in what has been termed the autonomous control of tsetse [67]. Gething and colleagues [68] demonstrated that any future predicted changes in malaria due to climate would likely be orders of magnitude smaller than changes due to control and other anthropogenic factors.

    These species have very similar physiology to the savanna species of East and Southern Africa, including G. Over the past decade, c.


First, Courtin and colleagues [70] describe a km shift southwards in the northern limit of tsetse, which they attribute to drought, rising temperatures, and increased human density. Regions where tsetse appear to be absent include areas in which sleeping sickness occurred in the s. The authors attribute this change to increased densities of humans, anthropogenic destruction of tsetse habitat, and climate change. Medical surveys conducted between and did not detect any cases of HAT north of the 1, mm isohyet, and comparison of the 1, mm isohyets for the periods — and — shows that the isohyet had shifted south.

    For tsetse-infested areas of West Africa, Courtin and colleagues suggested that it is difficult to disentangle the effects that changes in land cover, host populations, rainfall, and temperature have on tsetse populations and sleeping sickness [ 70 , 71 ]. Studies are further confounded by the impact of large-scale medical interventions that have led to a decline in the annual incidence of Gambian HAT across Africa [ 72 ].

    With such interpretive problems, there is a need for more studies of the present sort in which long-term measurements of tsetse abundance are made in wilderness areas where there is little change in land cover and host populations. A limitation of our study is that the estimated confidence intervals for model-fitted parameters, such as those for adult mortality, are underestimates, in part because they incorporate only the uncertainty resulting from fitting the model with fixed values for other parameters and do not incorporate uncertainty in those fixed parameters.

Another limitation is that we did not have sufficient data to test the predictive power of our fitted model. Our deterministic model does, however, provide a good fit to the available data for the change in tsetse abundance. Such models are less satisfactory for assessing if and when a population will actually go extinct, because they predict that populations go to zero only as time goes to infinity.

    Ideally, therefore, future modelling should adopt stochastic approaches to predictions about tsetse extinction, but these would require detailed knowledge of population dynamics at very low density, such as the probability that male and female tsetse will meet in sparse populations. Unfortunately, our current knowledge of dynamics relates only to populations that are dense enough for convenient study. Nonetheless, present modelling does raise the possibility of the extinction of the Rekomitjie tsetse populations, particularly if temperatures increase further. Future research could also make use of the fitted model to make spatially explicit predictions about tsetse population dynamics for other regions of Zimbabwe and East and Southern Africa under future-predicted climate change scenarios.
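A stochastic counterpart to the deterministic model can be sketched as a simple daily birth-death simulation, which, unlike the ODE, yields a finite extinction probability. The rates below are illustrative; a fuller model would make the birth rate density-dependent to capture mate-finding in sparse populations.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(N0, birth_p=0.03, death_p=0.035, days=2000):
    """Daily binomial births and deaths; returns True if the population went extinct."""
    N = N0
    for _ in range(days):
        births = rng.binomial(N, birth_p)
        deaths = rng.binomial(N, death_p)
        N = N + births - deaths
        if N <= 0:
            return True
    return False

# Extinction probability estimated from repeated runs at low initial density,
# with deaths slightly outpacing births (as at sustained high temperatures).
runs = 200
p_ext = sum(simulate(20) for _ in range(runs)) / runs
```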

Also showing woodland cover and loss, as estimated by Hansen and colleagues [23]. Effect of varying pupal temperature-dependent mortality parameters on fitted parameter estimates.

The authors are grateful to Mr William Shereni, Director, Tsetse Control Division, Zimbabwe, for his continued support of our work and for making available the resources at Rekomitjie Research Station.

Abstract

Background: Quantifying the effects of climate change on the entomological and epidemiological components of vector-borne diseases is an essential part of climate change research, but evidence for such effects remains scant, and predictions rely largely on extrapolation of statistical correlations.

Methods and findings: The model we developed incorporates the effects of temperature on mortality, larviposition, and emergence rates and is fitted to a year time series of tsetse caught from cattle.

Conclusions: The model suggests that the increase in temperature may explain the observed collapse in tsetse abundance and provides a first step in linking temperature to trypanosomiasis risk.

Author summary

Why was this study done?

Tsetse flies are blood-feeding insects that transmit pathogens causing fatal diseases of humans and livestock across sub-Saharan Africa.

    The birth and death rates of tsetse are influenced by environmental conditions, particularly temperature. Since , mean daily temperatures at Rekomitjie, a research station in the Zambezi Valley of Zimbabwe, have risen by c. These increases in temperature may have impacted tsetse populations and the diseases they transmit.

What did the researchers do and find?

Since the s, wild tsetse have been caught regularly from cattle for insecticide tests conducted at Rekomitjie. A mathematical model of tsetse population change, which included temperature-dependent rates for births and deaths, suggests that the decline in tsetse is related to rising temperatures.

What do these findings mean?

Our findings provide a first step in linking the effects of increasing temperatures to the distribution of diseases caused by tsetse.

If the effect extends across the Zambezi Valley, then tsetse-borne disease is likely to have been reduced across the region. Conversely, rising temperatures may have made some higher, cooler areas more suitable, leading to the emergence of new disease foci.

Introduction

Tsetse flies (Glossina spp.)

Methods

The methods for the production of data for tsetse and climate were not guided by an analysis plan for the present study.


Temperature data and analysis

Daily records of rainfall and minimum and maximum temperature have been kept at Rekomitjie since .

Tsetse data

Sampling of tsetse at Rekomitjie, in pursuit of various ecological and behavioural studies, has suggested a decline in tsetse abundance in the last two decades.

Modelling tsetse population dynamics

Tsetse females give birth, approximately every 7 to 12 days [28], to a single larva, which immediately burrows into the ground and pupates, emerging 30 to 50 days later as a full-sized adult [44].



Fig 2. Increases in mean daily temperature between and , calculated for each month of the year.
Modelling changes in the G.
Fig 4. Observed (points) and modelled (line) changes in numbers of G.

Supporting information

S1 Fig. Study site location.
S2 Fig. Adult temperature-dependent mortality.

S1 Table. Sensitivity analysis.
S1 Text. Results of alternative model fits.

References

1. Shaw A. The economics of African trypanosomiasis. In: The Trypanosomiases. Wallingford: CABI.
Cecchi G, Mattioli RC. Global geospatial datasets for African trypanosomiasis management: a review.
Swallow B. Impacts of trypanosomiasis on African agriculture.
5. Measuring the costs of African animal trypanosomosis, the potential benefits of control and returns to research. Agric Systems.
6. The regional impacts of climate change: an assessment of vulnerability. Intergov Panel Clim Chang.
7. Climate change and vector-borne diseases: a regional analysis. Bull World Health Organ.
8. Early effects of climate change: do they include changes in vector-borne disease?
9. Climate change and the resurgence of malaria in the East African highlands.
Association between climate variability and malaria epidemics in the East African highlands. Proc Natl Acad Sci.
Emerg Infect Dis.
