This guide is for readers who will not read the entire dissertation, and for those who prefer to read ‘strategically’ rather than simply start from page 1 and read straight through.
The central section of this project is 1.6, and this section gives a fair idea of what the project is all about. All the sections preceding section 1.6 prepare the way for it, and all those that follow, except for the very last, are included in order to answer some of the questions it raises.
The reader should also pay particular attention to section 1.1 because it is crucial to understanding the purpose of the project, a purpose which would otherwise be difficult to guess.
PART 1 – INSTRUCTIONAL DESIGN
1.1 The Nature of this Project
The purpose of the project is explained. It is described as a small part of a larger process of technological development.
1.2 Further Narrowing of the Topic
The subject area is defined in slightly more detail. The project is concerned with instruction in which the learner is quite strictly controlled by the instructional procedure (i.e. ‘spoonfeeding’) and with intellectual learning rather than any other type of progress.
1.3 A Stipulative Definition of ‘Knowledge’
In this section the word ‘knowledge’ is given a stipulative definition to be used for convenience throughout the project, which supports the attempt to be theoretically uncontroversial.
1.4 The Assumed Basis
Two simple premises are introduced in this section and are then recast using the word ‘knowledge’ as defined in the previous section.
1.5 Previous Approaches to Instructional Design
This section provides a selective history of instructional design methods. An emerging consistency with the principles introduced in the previous section is demonstrated.
1.6 An Extended Approach to Instructional Design
The final section in part 1 describes an extended approach to instructional design and explains how this approach is an improvement over previous approaches of a similar type, insofar as it is more consistent with the two principles introduced in section 1.4.
PART 2 – KNOWLEDGE DESIGN
2.1 An Introduction to Part 2
Simply introduces Part 2.
2.2 The Knowledge Designer's Representations of Knowledge
The logical status of the human knowledge designer's representations of human knowledge is explained. This use of the construct is supported by historical precedent.
2.3 Constraints on Knowledge Design
The use of evaluative constructs, and constraints upon them, in design processes is discussed and a comparison is made between the design of computer programs and the design of human knowledge.
2.4 Human Limitations Vs Electronic Limitations
Several important differences between present day computer programming and human knowledge design are discussed in order to underline both the complexity of human knowledge design and the uniqueness of the problems involved in it.
2.5 Design Without Complete Knowledge of the System
The important problem of designing knowledge for use by a partially known system is solved so that human knowledge design does not rely on any great scientific understanding for its development.
2.6 More on Research
The potential uses of two techniques in human knowledge design are discussed: observing the performance of human experts, and computer simulation.
2.7 Further Topics
In this final section a number of additional possibilities are briefly mentioned in order to emphasize the continuity of the project with possible future work, and to provoke thought on the part of the reader.
This project is not intended as some kind of evaluation of a theory in psychology. It does not fit into the ‘grand strategy’ of basic research. There is no comparison of hypotheses, no rallying of evidence, no speculative theorizing and almost no discussion of experiments.
This project aims to make a worthwhile contribution to the application of research on human memory and cognition to education of all kinds. The project has a very specific role in the overall development of this area of applied science. This role may be understood in the context of the ‘science of design’. The ‘science of design’, a title put forward by H. A. Simon, seems to have made little progress since he demonstrated its central importance in 1969. The idea has not been widely discussed by psychologists and so a little explanation seems sensible.
Simon describes various approaches to the design of very complex systems and plans. (The design of instructional materials and procedures may be seen as just such a complex problem of design.) The ‘state of the art’ at the time was illustrated by Simon's description of a design method that was originally intended for use in the development of major roads. This method is essentially an iterative one: a large selection of preliminary designs is developed to a limited extent. The most promising of these are then selected for further development work. When this development has been done the best of the designs are selected for further development, and so on until a clear favourite emerges.
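The iterative winnowing just described can be rendered as a short selection loop. The sketch below is purely illustrative and mine, not Simon's: the ‘designs’ are stand-in numbers, and the development and scoring functions are hypothetical placeholders for the far richer operations a real design process would involve.

```python
import random

def winnow(designs, develop, score, keep=3, rounds=4):
    """Iteratively develop a pool of designs and retain the most promising.

    `develop` refines a design by one limited step; `score` rates a design.
    Returns the surviving designs after the final round, best first.
    """
    pool = list(designs)
    for _ in range(rounds):
        pool = [develop(d) for d in pool]   # limited further development
        pool.sort(key=score, reverse=True)  # rate the developed designs
        pool = pool[:keep]                  # select the best for more work
    return pool

# Toy stand-ins: a 'design' is a number, development nudges it upward,
# and the score is the value itself (all purely illustrative).
random.seed(0)
candidates = [random.random() for _ in range(20)]
survivors = winnow(candidates, develop=lambda d: d + 0.1, score=lambda d: d)
```

The point of the sketch is only the shape of the process: a large preliminary pool, repeated cycles of limited development and selection, and a small set of favourites at the end.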
This project is intended as a small, but careful, step in a large process of this kind. It is not an attempt to carry out a full cycle of the process, or to select a set of promising approaches to ‘Instructional Design’. Rather it is an attempt to explore one particular approach to the design of instructional materials and procedures.
The previous section concluded with the statement that this project is an attempt to develop one particular approach to the design of instructional materials and procedures. The approach which will be extended may be called ‘spoonfeeding’. The term ‘spoonfeeding’ has a colloquial meaning and this meaning is used below.
Spoonfeeding generally involves a detailed determination of the capabilities the learner must develop. When, for example, French language is taught in the traditional way the vocabulary and grammar that the learner must master is usually listed in exhaustive detail in the text book.
Another characteristic of spoonfeeding is that the learner's activity is organized for him/her. The learner is usually expected to sit back and obey instructions. For example, Skinner-inspired programmed learning materials give learners information in a specially organized form and according to a strict regimen which makes learners relatively passive. The name ‘spoonfeeding’ seems especially apt in this case.
The opposite approach to spoonfeeding is one in which learners are presented with material which they must reorganize themselves in order to learn. The learner is often not given the solution to a problem but is given the problem and expected to solve it for himself. As Olson (1976) pointed out, in many cases we have no choice but to allow learning to proceed in this manner:
...much of the knowledge most worth having – making discoveries, speaking convincingly, writing effectively, and various social and ethical skills – cannot be taught explicitly because the algorithm underlying them (if indeed there are such algorithms) are not known. Many that are known are too complex to communicate easily...
The kind of learning that will be discussed in this dissertation is intellectual learning, as opposed to ‘moral’ learning, perceptual-motor learning, or any other kind of non-intellectual progress. A good example of the kind of learning with which this dissertation is concerned is learning basic arithmetic and algebra. The reader may feel that the ideas set out below can be applied to other types of learning, and this may be so but it is not claimed here.
One of the principles underlying this project is that applied science should be theoretically uncontroversial. It should involve systematic exploitation of relatively gross and reliable phenomena. In order to achieve a less controversial position, the word ‘knowledge’ will be used in a special sense within this dissertation.
‘Knowledge’ is here taken to be that which remains after learning and bestows new abilities upon the learner. This use is intended to be neutral with respect to such issues as whether knowledge is a trace or a change and whether memory is active or passive.
‘Knowledge’ is intended to include both facts and procedures. Thus knowledge includes both semantic and episodic memories. Gagné (1962) described a type of learning which he called ‘productive learning’. By this he meant the acquisition of capabilities of performing classes of tasks implied by names like ‘binary numbers’, ‘musical notation’, and ‘solving linear equations’ rather than tasks requiring the reproduction of particular responses. The product of Gagné's productive learning is of course a type of knowledge as here defined.
It is interesting to note that Gagné's definition of ‘knowledge’ is very similar to that used here: ‘...that inferred capability which makes possible the successful performance of a class of tasks that could not be performed before the learning was undertaken.’
Another use of ‘knowledge’ that is very close to that which is intended here is in the phrase ‘knowledge engineering’. Knowledge engineering is that rather commercial branch of artificial intelligence concerned with the design of expert systems, question answering systems, and the like for real applications. The ‘knowledge’ with which these engineers are concerned includes both programs and databases. In the context of the computer-brain metaphor this use of ‘knowledge’ is again very similar to that stated above. The difference is that knowledge engineers, at present, only design knowledge for electronic computers.
The use of ‘knowledge’ stated above is slightly different in emphasis from its usual variety of uses in cognitive psychology and in everyday speech, since these tend to de-emphasize what is often called ‘procedural knowledge’. It should also be noted that knowledge is not in the materials presented to the learner. In this dissertation the content of instructional materials will be referred to as ‘information’.
Earlier it was stated that there would be no speculative theorizing in this dissertation. Nevertheless, some ‘premises’ must be present. As far as possible, the assumed basis of this work is uncontroversial, in accordance with the principle stated above that applied psychology should consist of the systematic exploitation of relatively gross and reliable phenomena. The intention here is not to use real applications as the testing ground for new theories but instead to work in a conservative manner from what is relatively ‘safe ground’.
As far as the writer is aware, there are two important assumptions underlying this work. Firstly, that there are usually different ways of doing something and, secondly, that usually some of these ways are more effective than others. The first premise will now receive explanation and some expansion.
The word ‘usually’ in the above is included because in certain very simple and usually highly artificial situations (eg. in games) there may be only one course of action open. It may also happen, again in a rather artificially constrained case, that all methods are equally effective. The kinds of tasks considered in this work are not of this simple type.
The first assumption, that there are usually different ways of doing something, is held to apply as much to typical intellectual tasks (eg. mental arithmetic or algebra) as it does to, say, building a bridge or a motor car. Ultimately, intellectual performance may be based upon a relatively small set of simple processes but the complexity of intellectual performance in real life tasks makes variety the dominant characteristic.
This first assumption needs still further explanation, not because it is hard to grasp or implausible, but because it is so often ignored in psychology, both in basic research and in applied work. Psychologists often behave as if they thought there was only one way in which people remember, only one way in which people perceive, only one way in which people think. Broadbent (1973) pointed out: ‘...I think that the key difficulty here lies in the assumption that, although different structures can have the same function, yet the same structure cannot have different functions. This assumption is widespread, although quite false. It underlies the strategy of many experimental psychologists themselves, who devise experiments to find the way in which perceptions or learning occur...’
Not only are complex systems capable of operating in different ways, but also the same external behaviour can be the result of different underlying processes. Intellectual behaviour that seems superficially similar can be the result of a variety of mental procedures. The electronic computer is a familiar example of a system in which a variety of complex systems of cause and effect (i.e. programs and hardware) can give rise to observable behaviour (eg. the display on the VDU) that is identical. Similarly, in the performance of a complex intellectual task such as solving a differential equation, the observable human behaviour will often be the same even when the underlying mental procedures differ. Given these two points it seems clear that a multiplicity of methods might be used for performing the same complex intellectual tasks.
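The computer analogy can be made concrete with a trivial example of my own devising: two procedures whose internal workings are quite different but whose observable output is identical for every input.

```python
def sum_iterative(n):
    # Repeated addition: one internal 'method'.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Gauss's closed form n(n+1)/2: a quite different internal 'method'.
    return n * (n + 1) // 2

# Externally the two are indistinguishable, though the underlying
# processes (and their costs) are not the same.
assert all(sum_iterative(n) == sum_formula(n) for n in range(100))
```

An observer who sees only inputs and outputs cannot tell which procedure is at work; exactly the same opacity holds for an observer of human problem solving.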
In the course of this dissertation, both assumptions will be illustrated by several examples. In the meantime, the reader is asked to introspect and conduct a simple thought experiment. Suppose one learns a method for solving a problem in algebra – in one's head (i.e. without paper or electronic aids). (Notice that algebraic manipulations are unlikely to be ‘pre-wired’ in the brain as a result of natural selection in evolution.) Suppose, then, that a friend, who also happens to be a professor of mathematics, tells one that there is a better way to do the problem. In this situation would one say, ‘Sorry, I already have a method – I can't learn another’? Is it likely that a brain flexible and general purpose enough to have learned one method should be unable to learn another because that method happens to accomplish the same end?
The second assumption needs less explanation. In the course of this dissertation it should become clear that this assumption is justified.
The two assumptions stated above may be recast using the word ‘knowledge’ as defined in section 1.3. In these terms, it is assumed that different knowledge may underlie the completion of the same task and that, in general, some such arrangements of knowledge will be more ‘effective’ than others in the task (eg. faster, more accurate, easier to learn, less tiring).
For various practical reasons it is not possible to claim that this section contains a comprehensive review of all previous approaches to ‘spoonfeeding’ in instructional design. Many attempts to create instructional materials and procedures could be said to embody principles relevant to this discussion but do not explicitly describe the basis of the approach used. Rather than attempt a comprehensive review this section considers a number of previous approaches proposed by psychologists during this century. An attempt is made to show that these approaches have, over the years, become more and more consistent with the two principles stated in section 1.4. This is done by considering each approach in the context of the two fundamental assumptions stated in section 1.4.
Some interesting ideas have emerged from attempts to mechanize teaching using various teaching machines. According to Glaser (1960), psychology's pioneer of automated learning is Dr S.L.Pressey, who presented to the American Psychological Association in 1924 and 1925 a ‘simple apparatus which gives and scores tests – and teaches.’ The machine presented multiple choice questions and the learner had to respond correctly to avoid facing the item again. The advantages of the machine were seen as (1) saving the teacher from routine labour, (2) the provision of immediate right/wrong feedback, (3) concentration on items causing difficulties, and (4) the exploitation of the ‘law of recency’ and the ‘law of exercise’. Reading the kind of questions that these early machines posed, it seems that much of their ‘programming’ was directed at reducing errors through prompting with very heavy hints.
Further development of mechanized learning systems led to a proliferation of machines but a lack of good materials to use in them. This point was made influentially by Skinner (1958c). Skinner described the teaching machine as ‘a labour saving device because it can bring one programmer into contact with an indefinite number of students.’ However, he noted that Pressey's ‘industrial revolution’ in education had not come about. He emphasized the importance of programming the material and defining more closely just what the program is intended to teach. Glaser (1960) also urged that the design of material was crucial. He wrote, ‘The task indicated is a de-emphasis on the hardware and a concerted attack on the programming of materials and the development of specific principles of programming learning sequences.’
Part of the behaviourist push towards effective programming of machine learning materials was the use of ‘behavioural learning objectives’. The idea was that what the learner was to learn should be defined in detail in purely behavioural terms. It was not enough to say, for example, that students should ‘have an understanding of’ something. One had to define exactly the behaviour that learners should be able to exhibit after instruction.
This was certainly a step forward in that it demanded a more detailed consideration of the objectives of instruction. However, in the context of more recent ideas in psychology, and in particular the assumption that similar behaviour may be the result of quite different processes, the use of purely behavioural objectives can be seen to obscure many potentially important possibilities in education. In these circumstances the second assumption, that some knowledge is more effective than other knowledge, cannot even be applied, since purely behavioural objectives do not distinguish between the different processes that could produce the same behaviour.
Meanwhile, other ideas in mechanized instruction were developing. Galanter (1959) looked for a machine that would ‘be able to make plans for itself, and also able to diagnose the plans and ideas that the student has formed.’ The synthesis of mechanized learning technology and the new field of Artificial Intelligence was an attempt to create the kind of ‘intelligent tutor’ that Galanter was looking for.
Sleeman and his co-workers have been particularly influential in this area. Hartley and Sleeman (1973) argued that an intelligent teaching system requires access to the following information:
Their approach amounts to programming the computer to diagnose the student's problems with the material and to correct them. Sleeman et al have been using a system called the Leeds Modelling System to model the knowledge of the learner during instruction. This model takes the form of a production system. The teaching system is programmed to recognize the effects of common erroneous versions of the production rules and to correct the student's production rules when these effects are detected in the student's responses.
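The diagnostic idea can be sketched in miniature. What follows is an illustration of the general approach only, not of the Leeds Modelling System itself: the system holds a library of common erroneous rules alongside the correct one, and identifies which rule's prediction matches the student's answer. The rules and their names are invented for the example.

```python
# Correct rule and a common erroneous variant for two-digit subtraction.
# Both rules and their names are illustrative, not drawn from Sleeman.
def correct(a, b):
    return a - b

def smaller_from_larger(a, b):
    # Classic bug: in each column, subtract the smaller digit
    # from the larger regardless of which number it belongs to.
    result = 0
    for place in (10, 1):
        da, db = (a // place) % 10, (b // place) % 10
        result += abs(da - db) * place
    return result

RULES = {"correct": correct, "smaller-from-larger": smaller_from_larger}

def diagnose(a, b, student_answer):
    """Return the names of the rules whose prediction matches the answer."""
    return [name for name, rule in RULES.items() if rule(a, b) == student_answer]
```

Given 52 - 37, an answer of 15 matches the correct rule, while an answer of 25 matches the ‘smaller-from-larger’ bug, so the buggy rule can be inferred from the response alone.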
According to Sleeman (1983) a major problem with intelligent teaching systems has been that:
‘Most tutoring systems are capable of solving problems in only one or two prescribed ways. For instance, GUIDON has to use the backchaining control structure of MYCIN, rather than following other equally valid medical diagnosis procedures. As a result of this constraint, the system coerces a user's performance into its own conceptual framework.’
Sleeman does not consider the practical difficulties of constructing a system that would overcome this problem. His programs work by correcting errors in the student's knowledge. To do this the programs are given a list of common errors. In the case of a typical complex task there will be a large number of arrangements of knowledge that work satisfactorily, and a program attempting to anticipate the errors associated with all of them would become impossibly complex.
It is clear from this that Sleeman et al believed that for a given task there would be alternative, valid processes for accomplishing it. Sleeman's approach is consistent with the assumption that there are different ways of accomplishing the same task. Unfortunately, he does not pay very much attention to the relative merits of alternative approaches.
Sleeman's assumption appears to be that alternative methods will be equally useful. The basic problem is that Sleeman's teaching systems correct errors without being programmed with an explicit representation of what is the ‘correct’ knowledge. It may be argued that this representation is implicit in the means-ends guidance rules (see point 4 above) because otherwise the program would be providing correcting feedback without a goal – a very bizarre situation. It appears that this implicit statement of the ‘correct’ knowledge has obscured the importance of this knowledge statement. To reinterpret Sleeman's criticism of teaching systems, it seems that such systems coerce learners into ‘copying’ the ‘correct’ method of thinking represented implicitly in the means-ends guidance rules.
The apparent weakness of the Sleeman-type approach is that the ‘correct’ knowledge is defined almost accidentally in the design of the means-end guidance rules. No great attempt is made to consider the effectiveness of the ‘correct’ knowledge that the program leads students to emulate. In the light of the assumption that some knowledge will be more effective than other knowledge, this omission would seem to be a serious weakness.
There have been several attempts to analyze the objectives of instruction, not into behaviours, but into knowledge ‘structures’. Davies, for example, recognized five ‘learning structures’:
Gilbert (1962) and Mechner (1965) on the other hand recognized only chains, multiple discriminations, and concepts.
The most detailed and authoritative system, however, is that of Gagné (1965a). Gagné also developed a system of task analysis that used the notion of ‘learning hierarchies’. In his system of task analysis the capability to be acquired by the learner is broken down into component capabilities in a recursive analysis that ends with the specification of simple knowledge structures. The pattern of instruction follows the hierarchical structure of the analysis in such a way that the components of a capability are taught before the capability itself.
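The ordering that a learning hierarchy imposes, teaching each component capability before the capability that depends on it, amounts to a post-order traversal of the analysis tree. The sketch below uses an invented hierarchy for illustration; it is not one of Gagné's own analyses.

```python
# An invented learning hierarchy: each capability maps to its components.
HIERARCHY = {
    "solve linear equations": ["manipulate both sides", "simplify expressions"],
    "manipulate both sides": ["add/subtract terms"],
    "simplify expressions": ["collect like terms", "apply arithmetic"],
}

def teaching_order(capability, hierarchy, order=None):
    """Post-order traversal: components are taught before the capability."""
    if order is None:
        order = []
    for component in hierarchy.get(capability, []):
        teaching_order(component, hierarchy, order)
    if capability not in order:
        order.append(capability)
    return order
```

Running `teaching_order("solve linear equations", HIERARCHY)` yields a sequence in which every component precedes the capability built upon it, with the target capability last, which is precisely the pattern of instruction Gagné's analysis prescribes.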
There are some rather questionable aspects of this method. Firstly, although his method is regarded as a method of task analysis and the examples he gives appear to show a task broken down into successively smaller stages, Gagné repeatedly refers to ‘capabilities’ and their component capabilities. Presumably, he means that once a task has been broken down into its components the resulting tree structure is identical to the structure showing the dependencies of the learner's required capabilities and their components. Secondly, he writes as if the learning hierarchies his method produces are naturally occurring and necessary for learning. He appears to assume that there is only one valid learning hierarchy for a given task. He hopes that instructional effectiveness will serve as a partial validation of the analyses and describes attempts to find out whether learning hierarchies are correct through experiment. In addition, his method contains no reference to the possibility of alternative analyses. The idea seems to be that the correct analysis is simply deduced from close inspection of the task to be learned and that this is a completely deterministic process.
This last point is clearly not compatible with the assumption that there are usually different ways of performing a given task. Interestingly, Brien and Lagana (1977), whilst endorsing Gagné's methods and developing extensions of them, apparently do not take the same attitude. They repeatedly refer to the ‘design’ process involved and draw a direct analogy between teaching a person and programming a robot.
Greeno (1976) described an approach to instructional design that had much in common with the use of behavioural objectives favoured by many educational psychologists at the time. He agreed that the development of instructional objectives should begin with consideration of the kind of tests used to assess whether students have acquired the knowledge intended as the outcome of learning. However, he continued, ‘But rather than just specifying the behaviours needed to succeed on such tests, cognitive objectives are developed by analysing the psychological processes and structures that are sufficient to produce the needed behaviours.’ His assumption was that ‘...the goals of instruction, including aspects of conceptual understanding, can be inferred from the tasks that students are expected to perform during instruction and, following instruction, on tests.’ Later in the article he notes that in general, there are more ways than one to complete a task.
It is possible that the shift in emphasis from behaviour to thought that distinguishes cognitive psychology from its behaviourist predecessor, and the example of the electronic computer with its many alternative programs, created the intellectual climate in which it could be seen that there are different ways to think through the same task. This is simply speculation. The reader should note that in his article Greeno gives very little emphasis to the idea that there would be alternative cognitive objectives for the same task. It is possible that this is because he had not fully explored the implications of that idea at that time. His references to ‘inferring’ cognitive objectives from the tasks that students are expected to be able to perform suggest a deterministic process of deduction incompatible with the notion of alternative objectives. In his later work this inconsistency disappears.
Mayer (1977) developed Greeno's idea into a notion with which the reader should now be familiar. He wrote, ‘The main idea of the present article is that different underlying rules systems (different cognitive objectives) can be acquired even though the same levels of mastery performance on a given behavioural objective are achieved.’ Mayer was apparently concerned about the relative merits of these alternative rule systems. He cited Ehrenpreis and Scandura (1974), who taught a mathematical skill by providing either a set of discrete low-level rules, or a system of higher order rules which could be used to generate lower rules; the two instructional groups performed similarly on tests of specific transfer but the higher order group showed a clear superiority on transfer tasks. Mayer clearly regards the acquisition of higher order rules as of the greater importance. His own experiment showed that meaningful context during instruction in a simple counting task, as opposed to ‘rote’ learning, improved students' ability to generalize their learning to other tasks. Mayer not only points out that different knowledge may underlie the performance of a given task, but also is clearly concerned with the idea that some knowledge is more effective than other knowledge. His approach is thus completely consistent with the two principles stated in section 1.4.
Soon after this, Greeno (1978b) placed greater emphasis on evaluating the desirability of different knowledge structures. He suggested three criteria that could be applied in evaluating the degree of understanding reflected in a semantic system. These were, (1) internal coherence of representation, (2) degree of connectedness of the information to other things the person knows about, and (3) correspondence of the representation with the material that is to be understood.
Resnick and Ford (1981) pursued these ideas at length and concluded (p. 205) that the goal of instruction was ‘well structured knowledge’. Apparently, Resnick, Ford, Greeno, and Mayer are, at the present time, all in agreement with the basic principles set out in section 1.4.
This selective history of instructional design methods based in psychology and pursuing the ‘spoonfeeding’ strategy has attempted to show that there has been a gradual movement towards a method of design that takes into account both the principles upon which this dissertation is based. In the next section, the emerging approach is generalized to include different goals and different representations of knowledge.
Previous sections of this dissertation have described its goal and its area of concern, have laid down the conceptual basis of the work, and described the development of certain ideas in the field of instructional design. In this, the concluding section of part 1, a method of instructional design based on this foundation is described. This method is the result of applying the ideas outlined above to the general form of the ‘spoonfeeding’ approaches described in section 1.5. It represents an advance on the Skinnerian method and Gagné's form of task analysis and those methods based on these approaches, and also represents a translation of the idea of cognitive objectives into a general framework for real design work.
The spoonfeeding method as practised by behaviourists earlier in this century proceeded in the following way: a designer's brief would be the starting point and within this the designer would develop a number of behavioural objectives for instruction. These objectives were tasks that the subjects were to be conditioned to be able to perform. After this, a program of conditioning was developed that was aimed at achieving the behavioural objectives determined in the previous stage. Finally, the learning program would be implemented.
When programming for very lengthy and complex tasks the designer would break the task down into a number of sub-tasks and attempt to ‘shape up’ the learner's behaviour. In doing this, the designer would presumably have found that there were different ways of doing things. For example, if the overall task were to mend a radio set then the designer would probably have been given quite a lot of conflicting advice about the best way to mend radio sets. To this extent, even designers using behavioural objectives would have been operating according to the principle that there are usually alternative ways to carry out a task.
However, when tasks involved comparatively little overt behaviour but much mental activity, this method of specifying behavioural objectives would have been of limited use. If it is assumed that even superficially very similar performance may be the result of different knowledge giving different levels of effectiveness, then the design of the knowledge that learners should be led to acquire will be seen as a potentially rewarding activity. In the choice of knowledge for a particular brief, the instructional designer has the opportunity to improve the overall results of the instructional programme.
The method of instructional design described below is little more than the behaviourist method of spoonfeeding with a new stage, the ‘knowledge design’ stage, replacing the behavioural objectives stage of the original. The differences between this new approach and the approach of Gagné and of Greeno and Resnick are less obvious.
Whilst Gagné is careful to specify the knowledge to be produced by instruction, he implicitly assumes that there can be only one viable knowledge specification. For Gagné, determining the required knowledge is a matter of analysis, based on a close specification of the task to be mastered by the learner. This is quite different from knowledge design as described below.
Another way of explaining the difference is to say that the task is not the technique. Performing a task analysis can only reveal the task in more detail. A task analysis does not determine the technique.
In comparison with the developing approach of Greeno and Resnick, it is virtually identical, except that the criteria for evaluating knowledge designs are not specified in the method below, whereas they are by Greeno and Resnick. In Part 2 of this dissertation the knowledge design stage will be discussed at greater length and an attempt will be made to apply some general ideas about design to the problem of instruction. In particular, the role of evaluative criteria will be discussed together with a selection of criteria of potential use in knowledge design. It will become obvious that the criteria specified by Greeno are a premature development, of only limited use.
Here then is the proposed method:
THE BRIEF. Instructional design would begin with a ‘brief’ of some kind. We are not concerned here with the creation of the brief. This is a very difficult area in all methods of design. There are several approaches and the only restriction on the choice of method at this stage is that it should reduce the problem to a statement of the tasks that the learners are to be able to perform plus information about who the learners are, when they will have to perform the task, and what resources they have for learning (eg. how long they can afford to spend, what equipment they have, what they can afford to spend on computers, tape recorders, books and so on).
KNOWLEDGE DESIGN. The instructional designer would then work from this brief towards a detailed specification, at the level of cognitive processes, of the method that the learner is to be taught to use, and of the knowledge that would underlie this use. At this stage there will be a specification of the overt behaviours that result from the operation of the design, but this would not be the result of a separate stage of design preceding knowledge design proper. Rather, the overt action and the underlying knowledge would be designed simultaneously within the knowledge design stage. Precisely what form the knowledge specification would take is not to be part of the statement of this method. Various representations of knowledge and cognitive processes have been used by psychologists in their attempts to explain intellectual capabilities. Greeno used ‘semantic networks’ whilst Sleeman used production systems. Both these workers have borrowed descriptive schemes for their own prescriptive uses and it is likely that in any future work of this type the same will happen again.
IMPLEMENTATION PLAN. The next stage in this process of design would be to plan a programme of ‘implementation’ that would lead the learner to acquire the knowledge specified at the knowledge design stage. The work would be finished once the programme had been implemented.
Although this method is described as a number of stages, realistically one must expect that designers will often have to ‘backtrack’, for example if they find that they have no satisfactory way of implementing a particular knowledge design. The general direction of the design process should, however, be through the stages described, with each stage providing the goals of the next.
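The overall shape of the process, including the possibility of backtracking when a knowledge design proves unimplementable, might be sketched in modern program form as follows. All class names, fields, and the control loop are illustrative inventions, not part of the method as stated above:

```python
# A minimal sketch of the brief/knowledge-design/implementation cycle.
# Every name here (Brief, KnowledgeDesign, design_instruction) is an
# invented illustration, not terminology from the method itself.

from dataclasses import dataclass

@dataclass
class Brief:
    tasks: list       # tasks the learners are to be able to perform
    learners: str     # who the learners are
    resources: dict   # time, equipment, budget, etc.

@dataclass
class KnowledgeDesign:
    processes: list   # cognitive processes the learner is to use
    behaviours: list  # overt behaviours that result from their operation

def design_instruction(brief, propose_design, plan_implementation):
    """Work through the stages in order, backtracking when a knowledge
    design turns out to have no satisfactory implementation."""
    rejected = []
    while True:
        design = propose_design(brief, rejected)
        plan = plan_implementation(design, brief.resources)
        if plan is not None:
            return design, plan   # each stage supplied the goals of the next
        rejected.append(design)   # backtrack: try another knowledge design
```

The loop captures the point that the general direction is forward through the stages, with backtracking as an exception rather than the rule.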
Greeno's work illustrates this method in a rudimentary form. Working from a statement of the tasks that his pupils were to master he determined ‘cognitive objectives’. These cognitive objectives may be seen as his knowledge design. Had he been applying his later ideas, he would presumably have tried to set cognitive objectives that represented ‘well structured’ knowledge according to his three criteria. Finally, he implemented his design using relatively traditional classes in which information was given to students about how they should think their way through the tasks.
The method of instructional design outlined above is only a very high-level plan. It contains few specific prescriptions. In part 2 a number of points relevant to more detailed development of this method are discussed.
Having introduced the ‘brief/knowledge-design/implementation’ method of instructional design in part 1 there is a need to develop each stage in more detail. In particular, it would be nice to get down to a level at which the reader can easily see how one might actually go about instructional design within this overall strategy. For a number of reasons, only the knowledge design stage will be explored in greater detail in this dissertation. The two most important reasons for this are that knowledge design is the newest and least familiar idea and that the writer has considered this stage in far more detail than any other.
The ideas presented in this second part are mostly suggestions about how various parts of the knowledge design task might be carried out. These suggestions do not follow from the idea of knowledge design, though they could be used as part of the method. In the future, far better methods would probably be developed.
An attempt will be made to bring the technology of design itself to bear on the problem of instruction. This may seem strange to the reader since design itself is not, as yet, a well established discipline in its own right. The position taken here is almost exactly that expressed by Simon (1981) in chapter 5 of ‘The Science of Design’. A number of quite sophisticated methods of design have been developed in architecture, urban planning, management science and operations research, mechanical engineering and of course, most recently in computer programming. Some of these are surveyed by Simon and another good survey is Jones (1980).
Psychologists are familiar with the idea of knowledge representations of various kinds being used as models of, or explanations of, learning, memory, thinking, etc. In the case of knowledge design, the logical status of the knowledge representations used is quite different. The knowledge that is being represented does not exist at the time the designer begins work. (There can be no scientific modelling because there is nothing to model.) Moreover, the knowledge the designer designs is an invention in the sense that what is designed would not exist were it not for the act of invention. Instead, the learner might acquire different knowledge, either by his/her own metacognitive activity or through the activities of another knowledge designer, or even by some accidental process. In the ‘science’ of psychology (where ‘science’ is taken to be the study of natural things, how they are, and how they work) a model of knowledge must be changed if it does not fit the system that it was designed to model. The knowledge designer, on the contrary, is more likely to change the system so that it accords with his model. This may be summarized by saying that for the knowledge designer, knowledge representations are not models of human knowledge, they are models for human knowledge. This is not to say that the knowledge designer could expect people to realize whatever knowledge was represented; one would have to work within the limitations of the human system.
Modern psychology makes great use of cybernetic ideas such as information, feedback, and processing in its attempts to explain phenomena such as thinking and memory. These ideas were originally the property of engineers rather than scientists. Norbert Wiener's contribution to cybernetics (apart from choosing the name) was ‘to make a science out of what had until then been the art of engineering design.’ (McCorduck 1979). Shannon's work on information theory was directed towards creating a tool for use in the design of electronic communications systems, and his use of Boolean algebra to describe the behaviour of relay and switching circuits stands as a valuable contribution, not because it told us something new about switches, but because it gave engineers a tool that helped in designing complex switching circuits.
The idea that the origins of many modern psychological ideas were pragmatic rather than strictly scientific may be pursued further. Newell and Simon (1973) acknowledged the importance of formal logic in their theory of information processing systems. Logic itself has developed at least partly because of the efforts of people like Leibniz, who longed for a universal scientific language in which workers could exchange scientific ideas, and for a calculus to manipulate those ideas. Lewis and Langford (1956) wrote that such a calculus would ‘facilitate the process of logical analysis and synthesis by the substitution of compact and appropriate ideograms for the phonograms of ordinary language.’
The status of knowledge representations in knowledge design is pragmatic. It is similar to the status of Boolean algebra when it is used in the design of switching circuits, in that it is a designer's tool. This use is at least supported by historical precedent.
Evaluative constructs, both continuous and discontinuous, are used widely in design methods. They are often thought of as the ‘objectives’ of a design process. For example, Papanek (1972) describes what he calls the ‘function complex’ of Use, Need, Telesis, Association, Aesthetics, and Method. There is no need to go into exactly what each of these means; each is a desirable quality of an artefact and the designer is advised to seek all of these qualities during design. Other examples abound in Jones (1980) whilst the mathematics of optimization and decision making seem to be directed almost entirely at developing the use of evaluative constructs such as ‘money’ and ‘utility’. It is not always the case that the evaluative constructs are used during design itself. In some cases evaluative constructs have played a part in the development of design methods that, when used, automatically yield designs that rate highly on various constructs. An example of this kind of process is the method of computer programming advocated by Jackson (1975) who claimed that it gave programs which were, amongst other things, relatively free from bugs and easy to change/extend.
At the end of section 1.5, Greeno's three criteria for well structured knowledge were listed. These criteria were presumably intended for use as guides to the design of cognitive objectives and for the evaluation of cognitive objectives once they had been determined. Greeno's criteria would act as three evaluative constructs forming the input to an overall evaluative function of some kind that reflected the relative importance of each dimension within the particular brief. Each of the evaluative constructs was continuous rather than discontinuous with the possible exception of the last. For realistic design of human knowledge the evaluative framework will have to be far more extensive and flexible than this simple scheme.
Rather than attempt to develop a complete evaluative scheme (which would be very difficult) this section and the next will be given over to a discussion of the similarities and differences between the constraints on human knowledge design and those on computer program design. From this discussion a number of potentially useful evaluative constructs will emerge. The use of a large number of evaluative constructs is itself a complex problem. Different briefs would clearly lead to different priorities or ‘constraints’ amongst the various criteria. For example a brief that called for knowledge that had to be implemented very quickly would impose different constraints from a brief that insisted on high performance on the task whatever the cost.
A comparison will now be made between the constraints that have been used in computer programming and the kind of constraints that would probably operate in knowledge design. This comparison serves to illustrate the complex nature of the design constraints that would be used by serious human knowledge designers. A discussion of the problem of evaluating computer programs may be found in Weinberg (1971).
In the very early days of electronic computers the main constraints on programming were hardware constraints. Computer memories were small and CPUs relatively slow. At that time programs were far smaller than they are now and comparatively easy to understand and de-bug. The desirable quality of programs at that time was ‘efficiency’. This meant that programs should be very careful with memory space and be streamlined to reduce run time as far as possible. As programs became larger and programmers became more devious, the quest for efficiency led to programs that were devious and demanding of programmers. Program structures became more difficult. The reaction to this, ‘structured programming’, emphasized the importance of criteria like ‘comprehensibility’ and ‘simplicity’ of program structure often at the expense of simple efficiency. They recommended certain programming rules that were intended to favour these criteria (eg. Dahl, Dijkstra and Hoare 1972; Jackson 1975).
A similar shift has occurred in the design of databases. The old hierarchically organized data structures have been superseded by relational databases, which store information in the form of tables. Although relational databases can sometimes be slower than hierarchical ones, they can be adapted to new uses much more easily and can handle a wider variety of requests for information.
In any programming design exercise it is now necessary to assess the relative importance of a large number of factors including available memory space, run time, prevention of bugs (correctness), ease with which the program can be altered/expanded in the future (possibly by another programmer), compatibility with other machines and software, and of course user friendliness. The reader can imagine how these criteria might be translated into human terms and this analogy gives us our first set of possible evaluative constructs for use in human knowledge design. For example, ‘run time’ would be more important in the high speed decision making situation of a tennis rally than it would in the comparatively slow game of snooker. Simply translating computer criteria into human terms yields a number of potentially useful constructs. However, there are many differences between computer programming and human knowledge design, and these are the subject of the next section.
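The idea of weighing such criteria differently under different briefs might be sketched as a simple weighted evaluation. The construct names, weights, and scores below are all invented for illustration; a real evaluative framework would, as argued later, need to be far more extensive:

```python
# Illustrative sketch: scoring a candidate knowledge design against a
# set of evaluative constructs whose weights depend on the brief.
# All names and numbers here are invented for the example.

def evaluate(design_scores, weights):
    """Overall merit = weighted sum over the evaluative constructs."""
    return sum(weights[c] * design_scores[c] for c in weights)

# A 'tennis rally' brief weights run time heavily; a 'snooker' brief,
# with time to think between shots, weights it lightly.
tennis_weights  = {"run_time": 0.6, "learning_load": 0.2, "robustness": 0.2}
snooker_weights = {"run_time": 0.1, "learning_load": 0.5, "robustness": 0.4}

candidate = {"run_time": 0.9, "learning_load": 0.3, "robustness": 0.5}
```

The same candidate design thus receives quite different overall ratings under the two briefs, which is the sense in which a brief imposes ‘constraints’ on the design.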
In the previous section the complexity of designing software for electronic computers was used to suggest the likely complexity of the human knowledge design task. There was also a suggestion that some of the evaluative constructs used by computer programmers might be adapted to the parallel human endeavour. In this section the differences between the constraints on computer programs and the constraints on human knowledge designs will be discussed.
The significance of the differences is emphasized by Michie and Johnston (1984), who described the human brain as ‘puny’. They gave the brain's vital statistics as:
Michie and Johnston give these figures as estimates accurate to within about 50 percent and conclude that great human intellectual abilities must be built in another way to computer talents ‘ – a way which compensates for man's tiny working memory and lumbering processor.’ (p78).
Despite the kind of quantitative differences noted by Michie and Johnston, some have tried to treat human and computer knowledge as interchangeable. Expert systems are usually developed by ‘harvesting’ human knowledge. Knowledge engineers interview human experts in the relevant field and try to find out all they can about the expert's way of thinking about the problem area. A program is written that emulates the human expert's knowledge as far as this is possible. The program may be brought closer to the expert's thoughts as early versions are shown to the expert and modifications are made until the human expert has fewer complaints. In this stage of expert system development human knowledge is ‘copied’ on electronic computers.
Some have turned this relationship around and now use expert systems constructed in this way as an educational device. The student is trained to acquire knowledge similar to that embodied in the software. For example, GUIDON (Clancey 1982) provides problem solving and tutoring facilities in medical diagnosis. GUIDON uses the backward-chaining control structure of MYCIN, an expert system for medical diagnosis, and it is MYCIN that students are trained to emulate. To the extent that the computer's knowledge is a copy of expert human knowledge this would seem to be a fairly safe strategy. The danger is that in the original translation from human knowledge to computer program certain distortions might be introduced because they suit the computer better than the original human knowledge would have. Students would then suffer relative to learning directly from a human expert, at least in terms of the suitability of the knowledge they were attempting to acquire.
A rough description of the difference between electronic talents and human talents would be that it is relatively easy to get computers to do the things that humans find difficult, but it is difficult to get them to do things that humans find easy! Feigenbaum and McCorduck (1984) wrote (p57): ‘...mathematics and logic and the ability to splice genes or infer the underground geological facts from instruments, are what computers handle best, because the more highly structured the knowledge is, the easier it is for us to codify it for computer use. On the other hand, getting around the real world is not a highly structured task – the average house pet can manage it easily, but machines cannot. This is not to say that they won't ever be able to; it's a statement about affairs at the moment.’
The remainder of this section is devoted to a more detailed discussion of some of the most important limitations imposed on knowledge designs (as opposed to computer software).
The limitations of working memory and processing speed have already been mentioned. Perhaps the most important limitation of human minds is that they are rather slow. H. A. Simon (1983) noted that ‘The first obvious fact about human learning is that it's horribly slow.’ This limitation exerts its influence on the design in two ways.
Firstly, the amount of learning involved in acquiring some knowledge will be one of the main determinants of the time necessary to implement the design. Whilst there may be little concern about ‘filling up’ the human long term memory, one would still have to be careful to avoid placing an impractical learning load on the student. In the case of a computer, data can be input so quickly that there is rarely any great pressure to shorten the time it takes. A great deal of design effort might save only a few seconds. However, in the case of human learning even small percentage savings in learning load might save significant amounts of time – several hours or even days.
This is perhaps just one aspect of the special problems of implementing knowledge designs. As Simon (1983) notes, ‘The second distinctive feature of human learning is that there's no copy process. In contrast, once you get a debugged program in the computer you can have as many copies as you want (given equivalence of operating systems and hardware). You can have these copies free, or almost free. When one computer has learned it, they've all learned it – in principle. An algorithm only has to be invented once, not a billion times.’
This is an exaggeration. Humans can benefit in a fairly direct way from the learning of others. When an algorithm has been invented once by a human, others can follow with comparative ease. Nevertheless, there would seem to be a huge quantitative difference between the electronic ability for this kind of direct transfer of knowledge and the human one.
Secondly, memory is an important factor within human knowledge designs themselves, because the cognitive processes the student is supposed to learn cannot, when employed, overstretch the limited speed of human memorization. For example, a process that called for memorization of several items for a long period of time, but only allocated, say, 4 seconds to the memorization of each one, would fail. Estimates of speed of memorizing, such as Simon (1969), indicate that a more realistic rate might be one item memorized every 5 to 10 seconds.
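A constraint of this kind could be checked mechanically during knowledge design. The following sketch uses the optimistic end of the 5 to 10 second estimate; the function name and its simple linear form are inventions for illustration:

```python
# Sketch of a feasibility check on a knowledge design's memory demands,
# using the rough estimate (after Simon 1969) of 5-10 seconds per item
# committed to long-term store. Names and the linear model are invented.

SECONDS_PER_ITEM = 5  # optimistic end of the 5-10 second estimate

def memorization_feasible(items_to_store, seconds_allowed):
    """A process fails if it allots less time per item than people need."""
    return seconds_allowed >= items_to_store * SECONDS_PER_ITEM
```

On this check, the process in the example above, allotting 4 seconds per item, is rejected at the design stage rather than discovered to fail during implementation.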
The efficient deployment of long and short term stores would probably be one of the central problems of most knowledge designs.
The important role of memory use is suggested by results such as those obtained by Suppes, Jerman, and Brian (1968), who developed a method of scoring the difficulty of simple arithmetic problems. Their metric had three components: Transformation [the number of operations required to convert a problem into the form where the unknown stands alone on the right of the equal sign], Operation [the number of operations occurring in a calculation], and Memory [a measure of the digits or sums that have to be held in memory during the calculation]. They found that their measure was a good predictor of errors and response latencies for addition, subtraction, and multiplication but also found that the Memory component alone was very nearly as good a predictor.
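The structure of such a metric can be shown in a few lines. The weights below are placeholders; Suppes, Jerman, and Brian fitted theirs empirically, and those values are not reproduced here:

```python
# Sketch of the three-component difficulty score described in the text.
# The weights are illustrative placeholders, not the fitted values from
# Suppes, Jerman, and Brian (1968).

def difficulty(transformation, operation, memory,
               w_t=1.0, w_o=1.0, w_m=1.0):
    """Predicted difficulty of a simple arithmetic problem as a weighted
    combination of the Transformation, Operation, and Memory components."""
    return w_t * transformation + w_o * operation + w_m * memory

def difficulty_memory_only(memory, w_m=1.0):
    """Their striking finding: this single component predicted errors and
    latencies very nearly as well as the full three-component score."""
    return w_m * memory
```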
A number of techniques might be employed to increase the speed of memorization. For example, suppose the problem required temporary storage (say 2 – 6 seconds) of between 3 and 12 items and gave only 1 or 2 seconds in which to memorize each item. Ordinarily this would overload short term storage. However, if the items to be remembered often contained certain combinations of items then these combinations could be given a single symbolic identifier. When required to remember the combination, the person would only have to hold the single identifier in mind and then at recall could retrieve the designated combination from long term storage. Many psychologists, including Newell and Rosenbloom (1981), have hypothesized that this process occurs ‘naturally’ so the idea is already well known.
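The substitution of a single identifier for a recurring combination can be sketched directly. The ‘codebook’ contents here are invented; the point is only the reduction in the number of items that must be held in temporary storage:

```python
# Sketch of the chunking idea: combinations that occur often are given a
# single symbolic identifier, so fewer slots of short-term storage are
# needed. The codebook entries are invented for illustration.

codebook = {("7", "4", "9"): "A", ("2", "2"): "B"}     # combination -> identifier
decode   = {v: list(k) for k, v in codebook.items()}   # identifier -> combination

def compress(items):
    """Greedily replace known combinations by their single identifiers."""
    out, i = [], 0
    while i < len(items):
        for combo, name in codebook.items():
            if tuple(items[i:i + len(combo)]) == combo:
                out.append(name)       # hold one identifier instead of many items
                i += len(combo)
                break
        else:
            out.append(items[i])       # no known combination starts here
            i += 1
    return out

def expand(stored):
    """At recall, retrieve each designated combination from long-term store."""
    result = []
    for s in stored:
        result.extend(decode.get(s, [s]))
    return result
```

A six-item sequence containing both combinations collapses to three stored items, bringing it within the reach of short-term storage.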
The reader can imagine that it would be a fairly simple matter to develop computer programs that would automatically perform statistical analysis of the frequency of occurrence of particular combinations and thus determine the most efficient set of combinations for use in this kind of knowledge. This operation has already been done on computer databases in order to reduce the memory space used (Maggs, 1974).
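Such an analysis amounts to counting how often each combination of adjacent items occurs in the material to be memorized and selecting the commonest. A minimal sketch, with all names invented, might run:

```python
# Sketch of the frequency analysis envisaged in the text: count every
# combination of `size` adjacent items across a body of material, so the
# most frequent combinations can be assigned single identifiers.

from collections import Counter

def combination_frequencies(sequences, size):
    """Count each run of `size` adjacent items across all the sequences."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - size + 1):
            counts[tuple(seq[i:i + size])] += 1
    return counts

def most_efficient(sequences, size, top=3):
    """The combinations most worth chunking are simply the most frequent."""
    return [combo for combo, _ in
            combination_frequencies(sequences, size).most_common(top)]
```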
Techniques such as this, together with many others, can make a great deal of difference to the apparent performance of working memory. Consider, for example, the single subject, SF, reported by Chase and Ericsson (1981). This subject became an expert at the standard digit-span task. Over the course of two years, involving over 250 hours of laboratory practice, SF steadily increased his digit span from seven digits to about 80 digits. This exceeds that of normals by a factor of more than 10 and was four times higher than had ever been reported in the literature before. Chase and Ericsson maintained that SF was originally an ordinary subject and that his performance on other similar tasks and aspects of his developed behaviour showed him to be entirely normal in his mental abilities.
This kind of result is important here because, whilst 250 hours is a very long psychology experiment, it is but a small fraction of the duration of an average person's education.
Another serious problem for human minds is forgetting. Electronic machines using magnetic or optical media for mass storage of information have relatively few problems with information ‘decaying’ whilst in store. Perhaps as a result they are very badly affected when this happens. In contrast, forgetting is a fact of mental life in both the long and short term. Human knowledge needs to be robust – cushioned from forgetting by inbuilt redundancy. If the student forgets something it should be possible to deduce it, eventually, from the remaining knowledge. It is also essential to continue the implementation for as long as necessary, extending it to become a programme of periodic maintenance.
Whilst the maximum channel capacity of electronic computers is fixed (at least until new hardware is added) this does not seem to be the case with humans. Mowbray and Rhoades (1959) used a single, dedicated subject who practised a simple choice reaction task tens of thousands of times. At the beginning of this experiment the usual relation between simple, two-choice, and four-choice reaction times was observed. After three months of concentrated practice only the difference between simple and choice reaction time remained. This result was a blow to the information theorists of the time because the horizontal latency function apparently corresponded to an infinitely high rate of information processing (Legge and Barber 1976).
A more striking example of this phenomenon was provided by Neisser, Novick, and Lazar (1963) who found that in a visual search task there was an apparent change from serial to parallel searching. Their subjects scanned a matrix of letters for characters in either a 1-, 5-, or 10-character list. These lists were embedded, so that the 10-item list contained characters in the 5- and 1-character list. After about 20 days of practice on the same list, subjects were able to search a matrix for any one of 10 characters as fast as they searched for a single character.
The phenomenon of apparently expandable channel capacity has been the subject of some theorizing by psychologists. [Perhaps there might have been even more interest if the experiments involved were shorter and less demanding of subjects and experimenters.] Legge and Barber (1976), for example, suggest that there are three levels of information processing channel. At the top is a super-channel, at the bottom the wired-in genetically determined channels subserving vegetative functions, and in between a set of channels that are more or less dependent upon learning. Legge and Barber suggest that the special feature of the super-channel is its almost unlimited flexibility and adaptiveness, whilst the special-purpose channels below are limited to the situations that they can deal with, though extremely efficient in handling problems within their competence.
More recently, Shiffrin and Dumais (1981) have discussed the development of ‘automatism’ and Neves and Anderson (1981) have discussed ‘automatization’. Both terms refer, at least in part, to the phenomenon demonstrated by Mowbray and Rhoades (1959) and by Neisser et al (1963).
More possibilities are suggested by the attacks which have been levelled at the computer metaphor by Carello, Turvey, Kugler, and Shaw (1984). Carello et al discuss alternatives to the digital computer analogy. They describe a number of ‘potential machines’ in which a number of elements interact continuously (‘analogue’ as opposed to ‘digital’) and simultaneously with each other, rapidly finding their own equilibrium according to a ‘geometrodynamic logic that generically couples physics with geometry.’
According to Carello et al, when American computer effort chose to concentrate on digital computers, the U.S.S.R. invested in analogue hardware of the kind that might have been used to realize potential machines. The result of this was that the Russians lost the computer race because their attempts to develop general purpose analogue machines failed.
Carello et al state: ‘Whereas the digital machine is a general-purpose device that can be designed to instantiate an indefinitely large number of rules, a potential machine is a special-purpose device that is successful in specialized circumstances by virtue of a particular geometry linked to a particular subset of physical laws.’ Fowler and Turvey (1978) argued that the actor has the capacity to become a variety of special purpose devices. They were concerned with motor control but would probably be happy to extend the principle to a wider variety of situations.
These phenomena are difficult to think about. This may be because the distinction between software and hardware, so simple in the world of electronics, is very problematic when applied to the human brain/mind. For example, hormones released in the brain can affect behaviour. In this case it is not clear whether these hormones should be regarded as the operation of software, or as an element of the hardware environment. Moreover, there is a possibility that at least some learning involves alterations in the structure of neurons, through the growth or atrophy of dendritic spines and the like.
From research on ‘channel capacity’ or ‘automatism’ we can at least conclude that knowledge designers given a brief requiring high performance more or less regardless of the cost would probably be in a position to consider designs involving ‘parallel processing’ or ‘special purpose devices’ beyond those the student already manifests.
The traditional neuropsychologist's view of the brain was that it may be divided into a number of specialist areas, eg. visual cortex, motor cortex, and general purpose areas referred to as ‘association areas’. Fodor (1983) has developed this idea further. Within a model of this kind the knowledge designer would be primarily concerned with altering the functioning of the association areas but would attempt to exploit the various, powerful but relatively inflexible, specialist systems.
Clearly, one would not expect these specialist systems to bear any more than an occasional resemblance to those found in the ‘typical’ electronic system. Suppose, for example, that there was in fact a separate visual memory system, and that this system had special properties of its own, somewhat different from other memory systems in the brain [perhaps loosely analogous to the Video RAM in the computer used to produce this document]. If this were the case, then mnemonic systems of the kind reviewed by Rawles (1978) might be seen as methods exploiting the special properties of this visual memory system.
A further difference between computer programming and human knowledge design stems from the situation described by Feigenbaum and McCorduck (1984) and quoted at the beginning of this section. The human abilities of perception, particularly visual perception, movement control and natural language still represent the greatest challenge to architects of the Fifth Generation. These functions are carried out by every healthy person and would have to be considered as part of many knowledge designs. Even in intellectual abilities there is often a need to define some construct in terms of perceptual experience, rather than simply in terms of other constructs.
The reader may be able to remember having had a conversation with someone who was very enthusiastic about some particular theory or ideology and who would only define the constructs of his system in terms of other, equally baffling constructs. Eventually, one simply has to demand examples and try to get the gist of the idea from these. This illustrates the frequent need to link language to the real world and the importance of this kind of activity in the design and implementation of human knowledge.
In most electronic computer systems the computer has no senses of its own. Rather, its users provide it with the data that it is to process. The present technology of computer vision and hearing is not sufficiently developed to allow comparison between human knowledge involving use of human senses and computer programs using computer senses.
In the previous section, several differences between computer programming and human knowledge design were discussed. Perhaps the most important difference of all was, however, left unmentioned. In the case of computer programming, everything is known about the computer, right down to the transistors and the power supply. There is no part of the machine which has not been designed, no part that was not put there deliberately. In contrast, much of what goes on in the brain is a mystery and even without any obvious, deliberate attempts at control, mental functions of some kind will develop during the individual's lifetime. This difference is so important that this entire section is given over to a discussion of it and to the description of a simple way to overcome the apparent problems.
The previous paragraph overstated the real difference between computer programming and human knowledge design. If we consider a typical programmer we find that he/she has only a very general grasp of how the computer carries out the commands in a program and that he/she rarely has a comprehensive knowledge of what commands are available. Thus, not only is the programmer ignorant of the inner workings of the machine, he/she is also only aware of some of the things that the machine can be instructed to do. This situation is quite similar to the predicament of the human knowledge designer.
The solution to this problem suggested here is to build up, gradually, a statement of what most people can do, in terms of a large number of relatively simple information processes and the speed, accuracy, and so on with which they can be carried out. One would then devise complex information processes built up solely from these elementary processes. For example, suppose that most people have mastered simple addition and subtraction of numbers. The knowledge designer could then design knowledge that used these basic abilities, possibly in quite a lengthy and complicated sequence of operations, but would not have to worry about how the basic operations themselves would be carried out. This idea is well known in slightly different contexts; Gagné's learning hierarchies, for example, are virtually identical. There are a number of points, however, that need to be emphasized in the context of human knowledge design.
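The idea of composing a complex process solely from catalogued elementary ones can be sketched in code. This is only an illustration: the catalogue of elementary operations here (addition and subtraction) is a stand-in for the empirically established statement of what most people can do that the text describes.

```python
# A sketch of human knowledge design as composition: the designer may use
# only operations from an assumed catalogue of elementary processes, and
# need not worry about how each elementary operation is carried out.

ELEMENTARY = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
}

def column_sum(numbers):
    """A 'designed' complex process: total a column of numbers using a
    possibly lengthy sequence of the elementary 'add' operation only."""
    total = 0
    for n in numbers:
        total = ELEMENTARY["add"](total, n)
    return total

print(column_sum([3, 7, 12]))  # prints 22
```

New elementary operations, as they were discovered or developed, would simply be added to the catalogue; existing designs built on the old entries would be unaffected.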
Firstly, knowledge design can proceed even though not all of the capabilities of the human mind are known. However, the more capabilities the knowledge designer knows about the better the knowledge design is likely to be.
Secondly, there is no need to find the most elementary information processes of which the mind is capable. This is not to say that such knowledge would not be useful. It would be very useful but it is not necessary. As more elementary processes are discovered they can be added to the list of capabilities, just as new capabilities developed by the knowledge designer can be added to the list.
Discovering more basic processes might be interesting for psychologists attempting to explain how the mind works, but it would be interesting for knowledge designers for a different reason. If more elementary processes were discovered, they could be used to design knowledge that would carry out the functions of other, less elementary processes. If these designs proved to operate more effectively than the original process, then they could replace the original in all its uses and the result would be a gain in performance.
Thirdly, the method would be far from infallible. There would be difficulties in identifying all the conditions relevant to the performance of a particular information process. A student might fail to perform a process because some hitherto undetected condition on the input to the process had not been satisfied. In addition there may be complex sequence effects preventing particular information processes from being carried out following the execution of other processes. For example, it is not inconceivable that people may find it difficult to repeat some processes over and over again without rest. Also, people may have difficulty in some circumstances in both performing a process and remembering what they have to do next. This method of developing knowledge depends on a large degree of independence between the components of a process. They must be ‘nearly decomposable systems’, to use H. A. Simon's phrase.
Fourthly, although much use could be made of common knowledge about human capabilities and of research that has already been done by psychologists, there might still be a need for special applied research in order to find out more about certain human capabilities. For example, there would probably be a need for more studies of learning that involve very large amounts of practice, as in the Mowbray and Rhoades study, and for more research on ‘continuous’ thinking of the kind presumed to underlie performance in tasks such as Shepard's figure rotation task. The objective in all such studies would be to find out what people can usually do (i.e. prior to the knowledge designer's intervention) and, in particular, to find a large set of simple, varied processes that people can usually carry out and which could form the basis of detailed knowledge designs.
In conclusion, it should be possible to begin useful knowledge design without elaborate explanatory models of the mind. This situation conforms to the usual relationship between science and technology: technology usually precedes science in the sense that designs usually work some time before scientists have a satisfactory explanation of why they work (Sahal, 1981 Chapter 2).
In the previous section a goal for research supporting knowledge design was introduced. There was no suggestion as to how this research might be carried out, though some ways of doing so would be more effective than others. In theory there would seem to be few difficulties involved in measuring human performance – far fewer than are involved in trying to explain that performance. In this section, two other types of research that could be of use in knowledge design are discussed: (1) copying human experts, and (2) computer simulation.
One valuable source of design ideas for human knowledge design is the performance of people who are already ‘experts’ at the task. An experiment reported by Thorndyke and Stasz (1980) illustrates the kind of study that might be used. Thorndyke and Stasz compared the performances of experts and novices in a task requiring them to memorize a map. Comparing the protocols of the two groups, they identified six procedures that appeared to characterize the experts' performance as opposed to that of the novices. They then ran three groups of novice subjects on the same task: one group was taught the procedures found in experts, another was taught unrelated procedures, and the third was taught nothing. The subjects in the first group, who were taught the expert procedures, out-performed the other two groups, whose performance did not differ.
In deciding whether to use such a study, the knowledge designer should consider factors such as:
Whether or not human experts already exist. If they do not, for example if the task is to use a new product such as a new design of video camera or a new piece of business software, it would be necessary to wait for humans to become expert at the task. There might not be time for this.
How difficult it would be to find out what experts do. Psychologists are familiar with the practical and theoretical difficulties involved in trying to find out how people think. If the task is of the kind that can be usefully investigated through protocol analysis then copying may be a relatively cheap procedure. If, however, the task is not amenable to such investigation (as when the skill involved has become automatic for the subject) and would have to be investigated by careful analysis of the results of a battery of behavioural measures, then copying experts may be judged not cost-effective.
The sophistication of knowledge design technology. If knowledge design technology reached a state of such sophistication, within a particular class of tasks, that knowledge designs routinely out-performed the knowledge acquired by unaided students then there would be little point in studying the performance of people who have acquired knowledge for a task in that class without the knowledge designer's help.
Despite the limitations on the usefulness of this kind of research it might still be useful, particularly in the early stages of design projects and in the early development of knowledge design technology.
If a knowledge design would take a very long time to implement in a person, it may be more convenient to test a simulation of it rather than to test it in people. One would have to simulate the knowledge performing in a simulated problem environment. Testing the knowledge on a range of problems of the kind it is designed to deal with would yield information on the strengths and weaknesses of the method and might lead to revisions of it, or to a revision of the specification of the range of problems for which the knowledge is suitable.
The knowledge design might be implemented simply in terms of the basic operations that have been assumed in the design (see section 2.5 above) with the cost (probably some metric involving time and exertion) of carrying out each operation being added to the cost of the overall operation of the knowledge as the simulation continues. Alternatively, some of the limitations of the mind might be expressed in terms of a mental architecture such as that of Forgy and McDermott (1977). The capabilities of this mental architecture would have to fall within the capabilities of the human mind which it simulates. Still another possibility would be to use a computer for some parts of the knowledge design but to use a human subject to carry out some operations when these operations were particularly difficult to program. This arrangement is very similar to simulation gaming.
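The first of these possibilities, implementing the design in terms of its assumed basic operations and accumulating a cost for each as the simulation runs, can be sketched as follows. The operation names and cost figures here are purely illustrative assumptions, not empirical values.

```python
# A sketch of simulating a knowledge design over a set of problems, adding
# the assumed cost of each elementary operation (illustrative figures, in
# arbitrary time units) to the overall cost as the simulation continues.

COST = {"recall": 0.3, "compare": 0.5, "add": 1.0}  # hypothetical costs

def simulate(knowledge, problems):
    """Run a knowledge design over each problem; return the answers
    produced and the total accumulated cost of all operations used."""
    answers = []
    total_cost = 0.0
    for problem in problems:
        ops_used, answer = knowledge(problem)
        total_cost += sum(COST[op] for op in ops_used)
        answers.append(answer)
    return answers, total_cost

def find_largest(numbers):
    """A toy knowledge design: find the largest number by recalling the
    first and comparing each remaining number against the best so far."""
    ops = ["recall"]
    best = numbers[0]
    for n in numbers[1:]:
        ops.append("compare")
        if n > best:
            best = n
    return ops, best

answers, cost = simulate(find_largest, [[4, 9, 2], [1, 5]])
print(answers, cost)
```

Runs of this kind, over many problems, would expose the strengths and weaknesses of the design: problems it answers wrongly, and problems on which the accumulated cost is prohibitive.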
This kind of simulation follows the standard strategy of engineering simulations: a model composed of components having known properties and representing both the designed system and the system's environment is constructed and used to find out how the system as a whole behaves given various conditions of the environment.
The objective of computer simulation in knowledge design would be to discover the properties of a complex system composed of elements whose properties are largely known. This may be contrasted with the strategy of psychologists who use computer simulations to gain insights into the workings of the mind. In that case it is the properties of the system as a whole that are best known at the outset; the simulation uses components whose properties are hypothetical.
This is the last section of this dissertation. In the first section it was stated that the dissertation was intended to represent a small step in a large process of design, leading eventually to more effective education/training – a step in which a particular approach to instructional design would be developed beyond its present state. The central idea, human knowledge design, was proposed and then developed. There are, however, many more possible techniques to discuss than can be included here.
In this section a selection of areas suitable for further development is described very briefly. In this way continuity with the future development of Instructional Design procedures and human knowledge design is enhanced. Moreover, it is hoped that you, the reader, will be provoked into considering these possibilities for yourself, whether or not you are interested in the practical problems of education.
The approach to instructional design outlined in this dissertation relies as much on the implementation of knowledge as it does on the design of knowledge. The question of how to implement specified knowledge requires serious consideration.
The focus within this project has been on the knowledge of an individual. However, much of human behaviour is within a co-operating group. Very large tasks are often divided between several individuals whose varied abilities are brought together to form a combined system. In an arrangement of this kind, the compatibility of the knowledge used by the members of the group is of the highest priority.
Throughout this dissertation human and electronic cognitive systems have been contrasted to emphasize the distinction between the problems facing knowledge engineers and the problems that would face human knowledge designers. However, people have been using external aids in their thinking for thousands of years. The abacus, writing on paper, mechanical calculators, and now electronic computers have all been used by people to form a combined cognitive system with special limitations of its own. Knowledge engineers and human knowledge designers would have to come together to design knowledge for such combined systems.
Another possibility that has been left out is that people could become their own knowledge designers. [One might like to argue that they already are, but if we restrict our definition of knowledge design to deliberate and conscious knowledge design then this is largely prevented.] Education might include the implementation of knowledge whose function is to design knowledge. This is essentially the strategy of cognitive strategy research (Levin and Pressley 1983). The knowledge design framework might be useful in the development of learning strategies.
The history of science abounds with examples showing how the development of new tools and techniques can permit new scientific discoveries. A fully developed technology of instructional design, including knowledge design and implementation, would constitute a tool that would surely open up new possibilities for scientific psychology. Reitman (1965) outlined a strategy for psychological research that requires control of the subject's cognitive strategies. An effective technology of instruction containing knowledge design at the level of detail envisaged here would provide precisely this control.
If scientific theories are regarded as human knowledge, we might ask how knowledge design might be exploited within a strategy of scientific research. Fruitfulness and parsimony are sometimes given as reasons for favouring one hypothesis over another. These qualities are not, strictly speaking, related to the accuracy of the model. Rather they relate to the ease with which humans can manipulate the model and connect it with other models, which in turn is related to the usefulness of the theory within a programme of research. Moreover, if the goal of science is thought to be to provide models of systems that can be used by people to solve their various problems (i.e. pure science informs applied science) then the ease with which these models can be used will again be seen as a real factor in the construction of scientific models.
ANDERSON, J.R. (ed) (1981) ‘Cognitive Skills and their Acquisition’, Lawrence Erlbaum Associates.
APTER, M.J. and BARATT, G. (1973) ‘The Computer in Education and Training’, in Apter, M.J. and Westby, G. (eds) ‘The Computer in Psychology’, John Wiley and Sons.
BRIEN, R. and LAGANA, S. (1977) ‘Flowcharting: A procedure for the Development of Learning Hierarchies’, Programmed Learning and Educational Technology 14:305-.
BROADBENT, D.E. (1973) ‘In Defence of Empirical Psychology’, Methuen.
CARELLO, TURVEY, KUGLER, and SHAW (1984) ‘Inadequacies of the Computer Metaphor’, in Gazzaniga, M.S. (ed) ‘Handbook of Cognitive Neuroscience’.
CHASE, W.G. and ERICSSON, K.A. (1981) ‘Skilled Memory’, in Anderson, J.R.
DAHL, O-J., DIJKSTRA, E.W., and HOARE, C.A.R. (1972) ‘Structured Programming’, Academic Press.
DAVIES, I.K. (1973) ‘The Management of Learning’.
FEIGENBAUM, E.A. and MCCORDUCK, P. (1984) ‘The Fifth Generation’, Michael Joseph Ltd.
FODOR, J.A. (1983) ‘The Modularity of Mind’, MIT Press.
FORGY, C. and MCDERMOTT, J. (1977) ‘OPS, a domain-independent production system language’, Proceedings of the 5th International Joint Conference on Artificial Intelligence, 933-939.
FOWLER and TURVEY (1978) in Stelmach (ed) ‘Information Processing in Motor Control and Learning’.
GAGNE, R.M. (1962) ‘The Acquisition of Knowledge’, Psychological Review 69:355-365.
GAGNE, R.M. (1968) ‘Learning Hierarchies’, Educational Psychologist, November.
GAGNE, R.M. (1970) ‘The Conditions of Learning’, (2nd edition).
GALANTER, E. (1959) ‘The Ideal Teacher’, in Galanter, E. (ed) ‘Automatic Teaching: the state of the art’, John Wiley.
GILBERT, T.F. (1962) ‘Mathetics: The technology of education’, Journal of Mathetics 1:7-73.
GREENO, J.G. (1976) ‘Cognitive Objectives in Instruction: Theory of Knowledge for Solving Problems and Answering Questions’, in Klahr, D.
GREENO, J.G. (1978) ‘Understanding and Procedural Knowledge in Mathematics Education’, Educational Psychologist 12:262-283.
HARTLEY, J.R. and SLEEMAN, D.H. (1973) ‘Towards Intelligent Teaching Systems’, International Journal of Man-Machine Studies, 5:215-236.
JACKSON, M.A. (1975) ‘The Principles of Program Design’, Academic Press.
JONES, J.C. (1980) ‘Design Methods’, John Wiley & Sons.
KLAHR, D. (ed) (1976) ‘Cognition and Instruction’, Lawrence Erlbaum Associates.
LEGGE, D. and BARBER, P.J. (1976) ‘Information and Skill’, Methuen.
LEVIN, J.R. and PRESSLEY, M. (eds) (1983) ‘Cognitive Strategy Research: Educational Applications’, Springer-Verlag.
LUMSDAINE, A.A. and GLASER, R. (eds) (1960) ‘Teaching Machines and Programmed Learning’, National Education Association of the United States.
MAGGS, P.B. (1974) ‘Compression of legal texts for more economical computer storage’, Jurimetrics Journal 14:254-261.
MAYER, R.E. (1977) ‘Different Rules Systems for Counting Behaviour Acquired in Meaningful and Rote Contexts of Learning’, Journal of Educational Psychology 69:537-546.
McCORDUCK, P. (1979) ‘Machines Who Think’, W.H. Freeman Co.
MICHALSKI, R.S., CARBONELL, J.G., and MITCHELL, T.M. (1983) ‘Machine Learning: An Artificial Intelligence Approach’, Tioga Publishing Co.
MICHIE, D. and JOHNSTON, R. (1984) ‘The Creative Computer’, Viking.
MOWBRAY, G.H. and RHOADES, M.V. (1959) ‘On the reduction of choice reaction times with practice’, Quarterly Journal of Experimental Psychology 11:16-23.
NEVES, D.M. and ANDERSON, J.R. (1981) ‘Knowledge Compilation: Mechanisms for the Automatization of Cognitive Skills’, in Anderson, J.R.
NEWELL, A. and ROSENBLOOM, P.S. (1981) ‘Mechanisms of Skill Acquisition and the Law of Practice’, in Anderson, J.R.
NEWELL, A. and SIMON, H.A. (1972) ‘Human Problem Solving’, Prentice-Hall.
OLSON (1976) in Klahr, D.
PAPANEK, V. (1972) ‘Design for the Real World’, John Wiley & Sons.
RAWLES, R.E. (1978) ‘The Past and Present of Mnemotechny’, in Gruneberg, Morris and Sykes (eds) ‘Practical Aspects of Memory’, Academic Press.
REITMAN, W. (1970) ‘What does it take to remember?’, In Norman, D.A. (ed) ‘Models of human memory’, Academic Press.
RESNICK, L.B. and FORD, W.W. (1981) ‘The Psychology of Mathematics for Instruction’, Lawrence Erlbaum Associates.
SAHAL, D. (1981) ‘Patterns of Technological Innovation’, Addison-Wesley Publishing Company.
SHEPARD, R.N. and COOPER, L.A. (1982) ‘Mental Images and their Transformations’, MIT Press.
SHIFFRIN, R.M. and DUMAIS, S.T. (1981) ‘The Development of Automatism’, in Anderson, J.R.
SIMON, H.A. (1969, 1981) ‘The Sciences of the Artificial’, The MIT Press.
SIMON, H.A. (1983) in Michalski et al.
SLEEMAN, D.H. (1983) ‘Inferring Student Models for Intelligent Computer-Aided Instruction’, in Michalski et al.
SLEEMAN, D.H. and BROWN, J.S. (1982) ‘Intelligent Tutoring Systems’, Academic Press.
THORNDYKE, P.W. and STASZ, C. (1980) ‘Individual Differences in Procedures for Knowledge Acquisition from Maps’, Cognitive Psychology 12:137-175.
WEINBERG, G.M. (1971) ‘The Psychology of Computer Programming’, Van Nostrand Reinhold Company.
© 1986 Matthew Leitch