Tuesday, 17 September 2013

Use of Psychological Tests and Other Appraisal Techniques



Perhaps no topic in the counseling literature has generated as much controversy as the question of when, how, or if psychological tests and interventions should be used. Like many emotionally charged issues, this one has generated more heat than light, along with more than its share of misunderstanding and confusion.
The use of tests in counseling must be considered part of the overall diagnostic process. That is, they are part of the counselor's total effort to understand clients, and, out of that understanding, to be helpful. Thus, we can examine the use of tests within this general framework: do tests help us to understand clients better and to be more helpful?

Much of the controversy about the use of tests tends to sidestep this question and instead focuses on a wide range of feelings about the tests themselves. Opinions have been widely expressed to the effect that tests are good, bad, unfair, immoral, prejudicial, useless, or infallible. In this case, of course, the answer is "none of the above." A psychological "test" or inventory is simply a sample of behavior, taken under more or less standard conditions, from which we infer other behavior.

Giving or using tests in counseling is no more indicative of an "evaluative" attitude than is making any other kind of observation and drawing any other kind of inference. If we set up a structured interview to ask a client to answer a specific set of questions or to complete a Personal Data sheet, we have created something that closely parallels a psychological measurement situation.
Like questionnaires or structured interviews, tests are really only devices designed to make observations under conditions that will permit us to make comparisons between one person and others or about the same person at two or more different times. That is why we standardize instructions, time limits, and other aspects of the testing procedure.
Observation and Inference
To understand the use of tests and to comprehend fully both their advantages and their potential dangers and limitations, we must understand the basic processes of observation and inference. We mentioned these processes earlier and pointed out that they are involved in the overall activity of diagnosis and our search for understanding. Indeed, observation and inference, part of systematic inquiry in any field, are the basic elements in any empirical science.

Observation. If we want to understand any phenomenon, we try to make arrangements to observe it. When we are systematic about our observations, we generally want to insure their accuracy by comparing the perceptions of two or more observers or by making more than one observation. We do so in any situation where accuracy is important. Certainly, any intervention in the life of another human being should be based on careful, accurate observations.
Tests and inventories with standardized instructions for administration and scoring permit us to make careful observations. They tend to be objective in the sense that two or more people obtain similar results or scores from the same observation. They also tend to be reliable in the sense that two or more administrations of the test to the same person will yield similar results. Later we will discuss these properties more fully.
Inference. The potential pitfalls in using standardized tests that are properly constructed and administered lie not in the observations we make, but in the inferences that we draw from those observations.
Almost invariably after we make a set of observations about an individual, whether we use tests, interviews, questionnaires, or simply our own unsystematic perceptions, we then attempt to generalize from that set of observations in order to enlarge our understanding. That extension of understanding is the process of inference, and it is one that is fraught with many dangers. As extensions of meaning that go beyond a set of specific observations, all inferences are, in effect, guesses that cannot be fully supported by data. While they may be useful at times and even necessary, they must always be recognized for what they are. As we pointed out earlier, they must be constantly seen as tentative approximations.
Most of the misuse of tests comes from the loose, illogical, and erroneous employment of inferences. The careful construction, standardization, and administration of a psychological test or inventory is a necessary but never a sufficient condition for drawing accurate inferences about people. We must always remember that inferences are based upon two sets of factors: 1) a set of observations as measured by scores and test results, and 2) a set of assumptions that the observer makes about the individual, the observations, and, indeed, the larger world. We frequently use test results to infer some psychological construct, that is, something we have invented to explain behavior.
When we see a student "burning the midnight oil" in preparation for a big test, we are likely to explain that behavior by saying that he is "highly motivated." Motivation is a construct that we invent to explain purposeful, goal-directed behavior. Similarly, we invent constructs like "anxiety" or "defensiveness" or "intelligence" to explain other behavior patterns. Some constructs, like intelligence, are termed traits.
A trait is a presumed tendency in an individual to behave in a certain way in different situations. When we invent traits, we make assumptions about human behavior in terms of its consistency and resistance to change. For this reason trait concepts are essentially intrapsychic.
When we infer from a performance on a test that a person has a certain trait, we have made certain assumptions about that particular individual and about human personality generally. For example, when we use an "aptitude test" and from it infer some level of "scholastic aptitude" or "mechanical ability," we are making a set of assumptions. Like other psychological constructs, aptitudes are never measured directly. After we observe a performance, we infer an aptitude. The performance that we observe directly is almost always learned. The content of the items, the ability to follow directions, and even the skills involved in using pencil and paper are achieved. Hence, we measure achievement and infer aptitude.
When we move beyond the observation to generalize about aptitude, we are making an inferential leap. Our willingness to make that leap, if indeed we are aware of making it at all, is based on a set of assumptions about the individual. In the case of aptitude, our inferential leap is based on certain assumptions about the test and what it measures. In order for this "aptitude test" to be useful we assume that subjects on whom observations are made have had reasonable opportunities to learn the required material. Moreover, if we are to compare the scores of the subjects we further need to assume that these opportunities have been essentially equal for the tested individuals.
Such an assumption may be reasonable when we compare subjects who have had relatively homogeneous learning histories, family backgrounds, cultural experiences, and linguistic exposures, but when comparisons are made about groups or individuals without such homogeneous histories and backgrounds, our assumption about equal opportunity is tenuous, if not invalid.
Most of the misuse of methods of psychological assessment arises from just this kind of inferential leap. It is people who misuse tests. The tests themselves are merely tools that have no intrinsic moral or intellectual properties. The misuse of tests and other methods for examining human behavior arises primarily from lazy and sloppy thinking by the users.
In this respect our thinking about intelligence has been remarkably unintelligent. Efforts have been made for many years to develop so-called "culture-free" tests of intelligence. Few quests could be more futile or self-defeating. The nature of the behavior that we label "intelligent" is very largely defined by a society and its values. Perhaps the only completely abstract yet defensible definition of the term intelligence is the overall ability to adapt to the environment. A university professor parachuted into the Amazon jungle may be less able to adapt and survive than a football player who recently failed college algebra, or for that matter a semi-illiterate street urchin who has already demonstrated an ability to survive in an urban jungle.
The nature of environmental demands and the ways in which these demands are interpreted by a given culture tend to determine what kinds of behavior will be termed intelligent. Hence, "intelligent" is primarily a value term a culture applies to behavior. As long as there are group and individual differences among human beings, there will be people eager to apply value judgments, in terms of good and bad, stupid and intelligent, superior and inferior, to the behavior of other people.
Hunt (1961) concluded on the basis of his extensive research that the assumption of a simple, fixed, genetically determined intelligence is simply not compatible with the evidence. Instead, he concluded that intellectual development, and the explanatory construct of intelligence, is something that grows out of the interaction between the developing child and the environment. When we measure intelligence, therefore, we obtain a picture of a person that is relevant to a single point in his or her development.
In a similar vein, the professional counselor does not use such tests to classify or categorize people, nor to set artificial limits on their potential for growth. Rather, tests are better used as aids in determining the kinds of experiences and environments that will best nurture and support intellectual growth and development.
In counseling, tests and inventories can be utilized for two general purposes. The first, concerning the creation and testing of the counselor's own hypotheses, has been dealt with earlier. The second involves the direct interpretation of results to give clients better information about themselves and their potential for growth in certain areas.
Perhaps the most important fact to consider in making test interpretations to clients is that such information should never constitute an end in itself. Test results must be seen as tools to be used to facilitate the attainment of some significant counseling goal. For this reason, test results should always be interpreted in the context of and in relation to something that is of real concern to the client. Test interpretations should always be integrated into this larger context of the client's best interests. This should be done in a way that involves the client actively in the interpretation of the meaning of the information and its application to his or her problems and aspirations. For this reason we do not set up "test interpretation interviews" merely to recite a set of test results that may have little relevance to the client's immediate needs. Similarly, we do not assign a standard battery of tests simply to have the client take them. Instruments, or tests, should be carefully chosen and thoroughly interpreted in close collaboration with the client and in conjunction with the client's felt needs and goals.
Interpretation of results to clients involves considerable technical understanding of several crucial concepts in psychological measurement.
Validity
The concept of "validity" involves the degree to which a given test or instrument actually measures what it claims to measure. A number of closely related concepts in psychological measurement relate to various aspects of validity. These include predictive validity, concurrent validity, and construct validity. Unfortunately, the term validity is often used very loosely, so that confusion arises about what a given claim for the validity of any specific measure really means (Messick, 1980).
Validity is concerned with the usefulness of psychological test information for specific purposes. As we noted earlier, users of tests typically attempt to draw inferences from their results about various aspects of an individual's present or future behavior. Validity information is important because it tells the user what kinds of inferences about the individual can be attempted, given the nature of the test involved. Even when a test or instrument is considered valid for a particular purpose, as we have already noted, a complex set of other assumptions may be involved (Messick, 1980).
Validity is usually discussed in terms of how the set of scores in question relates to some other measurement of the same trait being tested. For example, we may measure the validity of a college aptitude test by relating it to the freshman grade-point averages of the same individuals who took the test. In this example, we call the external measure of the freshman grade-point averages the criterion. Our measure of the relationship between scores and grade-point averages is a measure of criterion-related validity (Betz & Weiss, 1975).
We can consider two aspects of criterion-related validity. Concurrent validity refers to test scores and criterion results that are obtained at the same time. In our example, a group of college freshmen would be given the test and their present grade-point averages would be used as the criterion.
A second aspect of validity is predictive validity. In this situation the scores are obtained prior to the criterion data. In our example, the test might be given to a group of college-bound high-school seniors and the criterion information gathered by obtaining their grade-point averages at the end of their first year in college.
We can see that it is possible for a given test to have a degree of concurrent validity without necessarily having predictive validity. For example, we might design a test around material learned in crucial first-year college courses. This test might have high concurrent validity with first-year G.P.A. (grade point average), but because all high school seniors might receive low scores, it might have little predictive validity.
In most counseling situations we are primarily interested in either the concurrent or predictive validity of the instrument we use. Usually these validities are expressed in terms of a correlation coefficient between test scores and the criterion scores. A high correlation or relationship between test scores and an external criterion has meaning only to the extent to which we believe the criterion is related to the underlying construct that we wish to predict. In other words, the measure of validity must be related to our assumptions about how the trait or construct relates to performance on the criterion. In the example we used, it is much more accurate to term a test validated against a college grade-point average criterion a "college aptitude test" or a "test of academic ability" than it is to term it an "intelligence test."
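Since these validities are expressed as correlation coefficients, the computation itself is simple. Below is a minimal sketch, with invented test scores and freshman grade-point averages, of how a predictive-validity coefficient might be estimated; the numbers are illustrative only:

```python
# Hedged illustration: estimating predictive validity as the correlation
# between aptitude-test scores and a later GPA criterion.
# All values below are invented for the example.
from statistics import correlation  # available in Python 3.10+

test_scores = [420, 510, 560, 610, 640, 700, 730]   # high-school seniors' scores
freshman_gpa = [2.1, 2.4, 2.9, 2.8, 3.2, 3.5, 3.4]  # criterion, one year later

validity = correlation(test_scores, freshman_gpa)
print(f"Criterion-related (predictive) validity: r = {validity:.2f}")
```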
Sometimes we are interested in examining the validity of a test in situa­tions where external criteria are unavailable or inappropriate. When we use an achievement test in high-school American History, for example, we may be more interested in how well the test covers the material taught in the course than in how the scores relate to any external measure.
Called content validity, it refers to the degree to which a particular test or set of tests adequately measures knowledge and academic content. Our concern is with the degree to which the items used in the test represent an adequate sample of the total body of knowledge or behaviors within the area in question. Then we can decide whether the test result is a fair indication of the individual's level of mastery of the subject or behavior. Obviously, such an inference is also dependent upon a variety of other assumptions.
Content validity often depends on the way test items are selected. They might be drawn from frequently used textbooks, the writings of noted experts, or a pool of items taken from a large sample of teacher-made tests. The final product thus represents a sample from a domain of valued information. Sometimes content validity is indirectly evaluated by assessing the degree of homogeneity and internal consistency among test items.
The final and often the most confusing aspect of validity, called construct validity, may be regarded as the most comprehensive and most important aspect of the whole validity question (Messick, 1980). Construct validity relates to the total pattern of relationships that exist between the scores on a particular test or instrument and all other logically related variables. Construct validation is a process of studying the empirical network of relationships that connect results with other measures and methods of observation. The relationships may be studied either by looking for a convergent pattern of results from tests purporting to measure the same construct, or by looking for divergent patterns from tests assumed to measure different constructs. We might, for example, infer that a given test measures "creativity" by noting that it has high correlations with other tests of creative thinking and low correlations with measures of general intelligence.
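The convergent/divergent logic can be sketched in a few lines. In this hypothetical example, a new "creativity" test is correlated both with another creativity measure and with a general-intelligence measure; all scores are invented:

```python
# Hedged sketch of a convergent/divergent check for construct validity.
from statistics import correlation

new_creativity = [12, 15, 9, 20, 17, 11, 18]              # new test (invented)
other_creativity = [14, 16, 10, 19, 18, 12, 17]           # same construct
general_intelligence = [104, 98, 110, 101, 95, 120, 99]   # different construct

convergent = correlation(new_creativity, other_creativity)
divergent = correlation(new_creativity, general_intelligence)
print(f"convergent r = {convergent:.2f}, divergent r = {divergent:.2f}")
# Construct validity is supported when the convergent correlation is
# substantially higher than the divergent one.
```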
Unfortunately, construct validity may be difficult for the average counselor or test user to assess directly or even to understand thoroughly. To understand fully the construct validity of a given test or instrument might require the user to read not only the publisher's manual, but also many reviews and research studies relating to the test.
Clearly, many widely used psychological tests and inventories have limited evidence of construct validity. Two instruments having similar names and purporting to measure the same underlying construct frequently will fail to produce similar results. Until the counselor has carefully covered the relevant literature about a particular instrument or construct, a healthy degree of skepticism about its claims for construct validity is usually warranted.
Reliability
Another major concept in using tests is reliability. Reliability relates to the consistency of a measurement. The two major aspects of reliability are 1) consistency over time, which is usually measured by "test-retest reliability," and 2) consistency between two similar sets of items, which is sometimes called "parallel forms" or "split-half reliability." These two aspects of reliability are not the same, and one cannot be substituted for the other since they tap very different kinds of consistency.
Test-retest reliability is the aspect of reliability with which counselors are most frequently concerned. When the underlying construct that a given test or instrument purports to measure is deemed stable over a considerable period of time, we would expect that two successive measurements on the same individual would yield very similar results. Discrepancies between the two measurements can be attributed to error. Such errors might be introduced by a variety of factors, such as an illness or distraction on the part of the subject during one administration of the test, guessing at the answers, misunderstanding the instructions, or a lack of clarity in some of the items, all of which reduce reliability.
We can check out this kind of problem by giving the test to the same population of individuals at two different times. The resulting test-retest reliability is usually expressed with a correlation coefficient between the two distributions of scores. The time interval between the two administrations is a relevant factor in evaluating the test's consistency. Usually such administrations are separated by periods ranging from a few days to a month.
Sometimes test-retest reliabilities are not appropriate. Unless we are confident that the underlying construct itself is stable, we cannot always safely attribute inconsistent results to error. A mood scale, for example, might be expected to fluctuate over time and indeed would be invalid if it failed to do so. Moreover, measures of achievement and cognitive development cannot help but change over considerable periods of time and be affected by important personal experiences.
Thus, in some situations we may want to examine another kind of consistency measure. We can look at the internal consistency of a test by comparing the scores obtained on the two halves of the total pool of items. This technique, called a "split-half reliability," tells us essentially whether two randomly divided halves of the same item pool measure the same thing. A split-half reliability is expressed in terms of a correlation coefficient between the two sets of scores. A similar technique is to construct two parallel forms of the same test and then correlate their results.
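A minimal sketch of the split-half technique follows, with invented item responses. The two half scores are correlated, and the Spearman-Brown formula (a standard correction) steps the half-test correlation up to estimate the reliability of the full-length test:

```python
from statistics import correlation

# Each inner list is one person's item scores (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 0],
]

odd_half = [sum(person[0::2]) for person in responses]   # items 1, 3, 5, 7
even_half = [sum(person[1::2]) for person in responses]  # items 2, 4, 6, 8

r_half = correlation(odd_half, even_half)
split_half = (2 * r_half) / (1 + r_half)  # Spearman-Brown correction
print(f"half-test r = {r_half:.2f}, split-half reliability = {split_half:.2f}")
```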
Like virtually all other forms of measurement, psychological tests and inventories are never completely reliable. The degree of imprecision or measurement error must be considered when such scores are interpreted. In communicating results this imprecision can be accounted for by using what is called a "band interpretation" rather than a "point interpretation." A band interpretation takes it for granted that, because of the unreliability factor, the same subject would probably not get exactly the same score on a later test.
By converting the reliability coefficient to what is called a "standard error of measurement," we can determine the width of the band within which subsequent scores can be expected to fall about two times out of three. For example, if the standard error of measurement for a given test is plus or minus five points, someone who scored 85 on the test will probably score between 80 and 90 about two thirds of the time on later testings. Thus, we would interpret the test in terms of a band of scores falling between 80 and 90.
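The arithmetic behind such a band can be shown directly. A standard formula computes the standard error of measurement as the score standard deviation times the square root of one minus the reliability coefficient; the SD and reliability below are assumed values chosen to reproduce the five-point band in the example:

```python
import math

score = 85
sd = 15              # standard deviation of the score distribution (assumed)
reliability = 0.89   # reliability coefficient (assumed)

sem = sd * math.sqrt(1 - reliability)  # roughly 5 points with these values
low, high = score - sem, score + sem
print(f"SEM = {sem:.1f}; about 2/3 of retest scores expected between "
      f"{low:.0f} and {high:.0f}")
```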

Normative and Criterion-Referenced Scores
We usually do not interpret raw scores to a client simply because, as pure numbers, they have almost no real meaning. Instead, we generally interpret the raw scores by comparing them to some kind of reference base. When we wish to compare the test performance of an individual with the performance of some specific group of people, we use a normative reference base, that is, a group with some common set of characteristics. These might be, for example, college graduates, or members of a given occupation.
In that case we compare the individual's score with the other scores of the norm group chosen as most appropriate. Then we place the individual's score within that distribution of scores by converting the raw score to a standard score, Z score, or percentile score. Such a score tells us where the individual's score falls within the group's distribution.
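A minimal sketch of that conversion, using an invented norm-group distribution, is shown below; the percentile step assumes the norm scores are roughly normally distributed:

```python
from statistics import NormalDist, mean, stdev

norm_group = [48, 52, 55, 57, 60, 61, 63, 66, 70, 74]  # invented norm scores
raw_score = 66

z = (raw_score - mean(norm_group)) / stdev(norm_group)
percentile = NormalDist().cdf(z) * 100  # assumes roughly normal distribution
print(f"Z = {z:.2f}, approximately the {percentile:.0f}th percentile")
```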
When we choose a norm group for this kind of interpretation, we are making a judgment about the appropriateness and meaning of such a comparison. If the norm group used is not comparable to the individual in such important ways as prior educational opportunities, cultural and linguistic background, or other factors that affect test performance, the comparison may be quite misleading.
A second type of reference base is called a criterion-referenced score. Sometimes we really do want to know how an individual's performance compares to some absolute criterion that is independent of how other people perform. For example, experience might indicate that to succeed in a specific job the worker must be able to read sixth-grade-level material at two hundred words per minute with at least 80% comprehension. This reading level then becomes the criterion against which the performance indicated by the individual's test score is compared. In many kinds of educational and vocational counseling situations, criterion reference bases, if they are available, are more meaningful than comparisons to general population norms or to some group that is clearly inappropriate.
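In code, a criterion-referenced comparison is simply a check against the absolute standard rather than against other people's scores; the client's measured values here are invented:

```python
words_per_minute = 215   # measured reading speed (invented)
comprehension = 0.83     # measured comprehension (invented)

meets_criterion = words_per_minute >= 200 and comprehension >= 0.80
print("Meets the job's reading criterion:", meets_criterion)
```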

Interpretation of Results
There are a few relatively simple cautions that counselors should keep in mind when interpreting scores to clients:
1. Test scores should be interpreted in the context of all available information regarding the client. Information regarding the cultural background, health, motivation, and educational and linguistic skills of clients, among other variables, is essential in establishing the meaning of test scores.
2. Predictions from test scores obtained through actuarial or "expectancy tables" are always based on groups rather than specific individuals, so that such predictions should always be made in the third person plural: "For people with scores like these . . ." (a sketch of such a table lookup follows this list).
3. Success in any endeavor is determined by a complex set of factors that certainly include motivation and self-control, as well as aptitude. Aptitude may be necessary for success, but is almost never sufficient.
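As a sketch of the expectancy-table lookup mentioned in point 2, the score bands and success rates below are entirely hypothetical; the point is that the output is a statement about a group, not an individual:

```python
# Hypothetical expectancy table: score band -> proportion of past
# students with scores in that band who succeeded.
expectancy_table = {
    (0, 39): 0.25,
    (40, 59): 0.50,
    (60, 79): 0.70,
    (80, 100): 0.85,
}

def group_success_rate(score: int) -> float:
    """Return the historical success rate for the band containing score."""
    for (low, high), rate in expectancy_table.items():
        if low <= score <= high:
            return rate
    raise ValueError("score outside table range")

rate = group_success_rate(72)
print(f"For people with scores like these, about {rate:.0%} succeeded.")
```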
In some counseling situations, then, tests can be useful. However, the nature of inferences that can legitimately be made about clients from such data, as from any other source, is quite limited. It is highly unlikely that any client can be adequately described or understood through the use of such data alone. Tests and inventories are most useful when the information is combined skillfully and carefully with other kinds of data in a total process of diagnosis that is continuous, tentative, and testable.

Behavioral Assessment
We have already noted that one of the problems inherent in diagnostic and assessment activities is the use of inferences that go beyond the actual observations made by the counselor and that often involve assumptions that are difficult to test. We also noted that in such situations counselors must be aware that such inferences are only tentative hypotheses that must be rigorously tested if they are to avoid serious errors. Behaviorism is an approach to the study of human behavior that tries to limit the use of inferences about people in an effort to reduce some of these possible errors. We will discuss behavioral approaches to counseling in further detail in Chapter 10.
Behavioral approaches attempt to describe, explain, predict, and control human behavior by studying the relationships between rewarding events in the stimulus world or environment and the frequency and types of specific, observable responses on the part of the individual. In this process the need for inferences about supposed inner states of the individual is sharply reduced or eliminated. Behavioral approaches also tend to focus upon the contingencies between behavior and the reinforcements or rewards supplied by the environment. The behavioral view is that these contingencies determine the ways an individual behaves in a given environmental setting.
Behavioral assessment procedures are aimed at discovering basic functional relationships between an individual's behavior and environmental factors and stimuli, so behavioral assessment involves performing a functional analysis of the individual's transactions with the environment. The S-O-R-K-C model (Kanfer & Saslow, 1965) is one way of describing a functional analysis. In this model of assessment activity several sets of crucial variables are identified and denoted by the symbols S-O-R-K-C. Stimulus factors (S) include physical and social events occurring in the environment. Organismic variables (O) include the biological states of the individual, which might include hunger, thirst, various deprivations, or sensory conditions. Responses (R) include actions in the cognitive, motor, and physiological systems of the individual. Contingency factors (K) describe the relationships between behavior and consequences, another term for which is "schedule of reinforcements." Consequences (C) are the positive or negative events that follow the emission of specific responses.
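One way to picture the model is as a record structure for a single observation. The field names and the sample entry below are our own hypothetical illustration, not Kanfer and Saslow's notation beyond the five letters:

```python
from dataclasses import dataclass

@dataclass
class SORKCRecord:
    stimulus: str      # S: physical/social event in the environment
    organismic: str    # O: biological state of the individual
    response: str      # R: cognitive, motor, or physiological action
    contingency: str   # K: schedule relating behavior to consequences
    consequence: str   # C: event that follows the response

record = SORKCRecord(
    stimulus="teacher assigns group work",
    organismic="child missed breakfast",
    response="violent temper tantrum",
    contingency="intermittent (attention follows some tantrums)",
    consequence="removed from class; receives one-to-one attention",
)
print(record)
```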
Behavioral assessment approaches stress two important principles. The first is the principle of direct observation. By direct observation or sampling we mean that the specific symptoms or problematic behaviors are considered and focused upon directly. They are not attributed to some deeper, underlying and, therefore, hidden cause or disorder that must be discovered and understood before the presenting symptoms can be treated. In a behavioral assessment the presenting symptoms or problematic behaviors are the problem.
The second crucial principle is that all of the terms or concepts used in the assessment must be operationalized in as rigorous a way as possible. Terms used to describe the client should always be expressed as observable behaviors. Every attempt should be made to avoid or eliminate inferences about events or processes that are internal and unobservable.
Behavioral assessment is an integral and ongoing part of treatment. The total process involves five specific phases (Keefe, Kopel, & Gordon, 1978): 1) problem definition, 2) functional analysis, 3) treatment procedures, 4) information-sharing session with the client, and 5) evaluation of treatment.
Problem definition is accomplished first by pinpointing the presenting problem in terms of the actual behaviors involved. The complaints and problems of the client are redefined in terms of actual responses. For example, feelings of depression may be translated into such specific responses as slowed speech, staying in bed all day, frequent crying, refusal to accept social invitations, or the inability to eat or sleep. We term these crucial responses "target behaviors" and address our treatment plan to changing them.
Each problem-relevant response pattern is assessed in terms of its frequency, duration, intensity, and appropriateness, as well as the general situational factors associated with its onset. At the same time, a general history is taken that specifies the time of first occurrence of the problem and any precipitating circumstances, such as changes in environment, use of drugs or alcohol, or illness or disease.
Next, a careful review of general environmental factors is undertaken. The purpose of this review is to identify factors of time, place, and situation, as well as social factors of reward, attention, sympathy, or punishment in response to problematic behaviors. The purpose of this phase of the assess­ment process is to identify all possible target behaviors and environmental events that may be modifiable through treatment.
The second stage in the process of behavioral assessment involves the conduct of a functional analysis of the relationship between possible target behaviors and potentially controlling events in the environment. This measurement and functional analysis is focused around the task of identifying possible contingencies between target behaviors and environmental circumstances. In the functional analysis the antecedent events and situational circumstances that immediately precede the problematic behaviors are carefully recorded. Similarly, the circumstances and events which immediately follow these behaviors are charted. For example, the functional analysis attempts to specify the possible ways in which the client may be rewarded for the problematic behavior. Hence, such possible social reinforcements as attention, sympathy, and reduction of work are carefully explored.
The outcome of the functional analysis is to make a final selection of target behaviors, establish their frequency of occurrence, and conceptualize the functional relationship or contingencies between the target behaviors and modifiable events in the environment. We call the frequency of occurrence of a target behavior the "baseline level" and measure change due to treatment from that point. For example, in working with a disturbed child we might observe a baseline frequency of four violent temper tantrums per week.
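Tracking change against that baseline is straightforward; in this sketch the weekly tantrum counts are invented to continue the example above:

```python
baseline_per_week = 4                  # baseline frequency from observation
treatment_weeks = [4, 3, 3, 2, 1, 1]   # tantrums in successive treatment weeks

for week, count in enumerate(treatment_weeks, start=1):
    change = count - baseline_per_week
    print(f"week {week}: {count} tantrums ({change:+d} from baseline)")
```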
The third step in the assessment process is to design a set of treatment procedures. As part of this phase, an assessment is made of the client's motivation and willingness to cooperate in treatment and the resources available to the client in the environment. These might include the willingness of a spouse, roommate, supervisor, or co-worker to assist in the treatment process, or the degree of flexibility possible in the client's schedule.
At this point, the counselor also attempts to understand the client's style of perceiving and solving problems and to develop a "menu" of possible positive reinforcers that may be used with this particular client. Often the client is asked to choose a set of rewards that can be connected to progress in treatment.
The next phase of the assessment process involves an information-sharing session with the client. In this session the counselor reviews and discusses the description of the problem obtained in the early phases of assessment and listens to the client's perceptions. Then, a treatment plan is proposed that is mutually acceptable. This kind of collaborative exchange may be repeated several times during the course of treatment. The counselor and client discuss progress in terms of changes in the frequency and duration of target behaviors compared with the baseline or beginning frequency. When these changes indicate that a change in the treatment plan is needed, such changes are discussed and initiated. These might include changes in choice of reinforcers or in other aspects of the treatment plan. These conferences may also include people who are assisting in the treatment plan, or reports from such people may also be part of these conferences. Progress in terms of changes in target behaviors is generally charted by the client, counselor, and other helpers in careful and formal ways.
The final phase of the total behavioral assessment process involves a careful evaluation of treatment. This is generally done in terms of specific changes in the target behaviors from baseline levels. Attention is also given to maintaining the changes after treatment and insuring that changes transfer from the treatment situation to other relevant situations in the client's life.
Behavioral assessment, then, is based on a search for events in the client's environment that influence specific target behaviors. It involves generating enough information about the client and the environment to allow the design of a treatment plan that will modify appropriately the crucial contingencies or functional relationships between environmental circumstances and client behavior.
Behavioral assessment procedures tend to minimize the use of inferences that cannot be operationalized. However, at times even behavioral assessment may deal with so-called "internal responses," or with such internal constructs as motivation. Therefore, even when performing behavioral assessments, the injunction to make the assessment procedure continuous, tentative, and testable should always be remembered.

Cognitive Assessment
Cognitive psychology, which has studied changes in human cognitive functioning across the life span, also offers a useful set of tools for examining individual needs and resources. Its theoretical formulations derived from such study can help us understand crucial differences in the ways individuals think and process information about themselves and their environments.
Out of cognitive psychology has come what is called "the constructivist view of human behavior." Essentially, this view asserts that human beings do not just respond to the stimuli that confront them directly in the environment. They organize, categorize, and process these stimuli into their own idiosyncratic "construction." It is to this unique interpretation of the stimulus world that they then actually respond.
In this view, the individual is not simply a passive receiver of external information, but rather an active, dynamic "maker of meaning." From this constructivist view of human behavior, the interaction of an individual with his or her environment depends on the way that individual construes and interprets the information available from the environment.
In studying information processing, cognitive psychology has focused primarily upon two cognitive functions: differentiation and integration. Differentiation refers to the way in which people pull out and categorize bits of information from a large mass of data. Individuals who pull out many bits of information and have available a number of categories with which to organize them are said to be "complex" thinkers. Integration refers to the ability of the individual to combine seemingly unrelated and disparate bits of information into a general concept or principle that can reconcile and relate the various parts. Again, good integrators are termed high-level, complex thinkers.
Cognitive developmental psychology has attempted to describe and explain the growth processes through which individuals acquire higher-level cognitive functioning over the life span. In the process it has articulated a number of cognitive developmental frameworks to describe sequences of cognitive growth. These frameworks posit the existence of specific cognitive stages, each of which is characterized by particular tendencies and styles of reasoning, judgment, decision-making, and even interpersonal relating. Each of these stage-specific cognitive styles is seen to be qualitatively different from earlier and later stages of development.
It is important to note here that the stages used in these cognitive developmental frameworks are very different from the chronological stages described earlier. In these cognitive developmental schemas, stages are arranged in hierarchical order. They are not normative because there is no assumption or expectation that all, or even most, members of the society will pass through them eventually. Indeed, within such hierarchical frameworks

Reference
Blocher, D. H. (1987). The Professional Counselor. New York: Macmillan Publishing Company.
