Unit 2: Behaviouristic Theories
According to behaviourism, learning is the
result of the relationship between environmental stimuli and behavioural
responses. This means that learning or knowledge acquisition is the formation
of habits or behaviours that occur as a result of the connection between a
stimulus and a response. The theory emphasizes the importance of environmental
factors in shaping behaviour, and stresses the role of conditioning in
learning. Behaviourists focus on observable behaviour rather than internal
mental processes to understand human learning and behaviour.
Classical conditioning, also known as
Pavlovian conditioning, is a theory of learning first described by the
Russian physiologist Ivan Pavlov (1849 - 1936) in the early 1900s. He was
awarded the Nobel Prize in 1904 for his
work on the physiology of digestion. While studying how dogs digest
food, Pavlov noticed that the dogs began to
salivate at the sound of a bell even before they saw the food. This discovery became an important principle of
learning: it explains how we can learn to associate one thing with another,
and it can be applied to many different types of behaviour. Pavlov named
this theory of learning the "conditioned reflex theory". Later, B.F. Skinner renamed it classical
conditioning.
What is classical conditioning?
Classical conditioning is a type of
learning in which a neutral stimulus (one that does not elicit a particular
response) is paired with an unconditioned stimulus (one that naturally elicits
a particular response) in such a way that the neutral stimulus eventually
elicits the same response as the unconditioned stimulus. In other words, it is
the pairing of a neutral stimulus with an unconditioned stimulus. (In Pavlov's
experiment, the neutral stimulus, a bell, acquires the capacity to elicit salivation.)
Classical conditioning is crucial in
learning: it creates specific behaviours and responses by associating stimuli. This
knowledge can be used to develop effective strategies for changing behaviour,
treating anxiety disorders and phobias, and training animals. It is
also useful in various other fields, such as advertising and marketing.
2.1.1 Experiment on a dog and basic process of conditioning
a. Experiment on a dog
Pavlov kept a dog hungry for 24 hours and then tied it in a
mechanically controlled laboratory, where he placed an automatic device to
deliver food to the dog easily. He operated on the dog's salivary gland and
arranged to collect the saliva in a glass tube. He observed the dog's reaction
when he sounded a bell and placed food near the dog. The dog salivated at the
sight of the food. Pavlov then
consistently rang the bell at feeding time. As a result, the
dog established a connection between the bell and the food and salivated at
the sound of the bell. Pavlov then stopped providing food with the bell
ringing. Despite not receiving food, the dog still became restless and
salivated: the bell ringing alone elicited the natural response
(salivation).
This experiment demonstrated that an
initially neutral stimulus (the sound of the bell) could come to elicit a
specific response (salivation) after being repeatedly paired with a naturally
occurring stimulus (food). This process of pairing the neutral
stimulus with the naturally occurring stimulus is known as classical conditioning.
b. Basic process of conditioning
The basic process of classical
conditioning involves pairing a neutral stimulus (such as the sound of a bell) with
a naturally occurring stimulus (such as food) so that the neutral stimulus
becomes associated with the naturally occurring stimulus and can eventually
produce a similar response. The process involves:
a. Presentation of unconditioned stimulus with neutral stimulus: This involves repeatedly presenting the bell (NS) together with the food (UCS) in close succession.
b. Time proximity: The two stimuli should be presented close together in time; if the interval between them is too long, the association will not be established, and an association already formed will weaken.
c. Repetition: Repetition is required to establish the connection between the stimuli and obtain the expected response.
d. Achievement of desired response: This is the phase in which conditioning, that is, habit formation, is achieved.
e. Extinction: If the conditioned stimulus (CS) is repeatedly presented without the unconditioned stimulus (UCS), salivation weakens and the conditioned response may eventually be eliminated.
The process or experimental paradigm of classical conditioning
(C.C.) learning can be illustrated by the following diagram:
First phase: before conditioning
UCS (meat) -------------------- UCR (saliva)
NS (bell) --------------------- no UCR (saliva)
Second phase: conditioning
NS (bell) + UCS (meat) -------- UCR (saliva)
Third phase: after conditioning
NS (now CS) (bell) ------------ CR (saliva)
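The three phases can also be expressed as a small, illustrative simulation (a sketch only; the names `respond` and `associations` are our own, not Pavlov's terminology):

```python
# Toy model of the three phases of classical conditioning.
# All names here are illustrative assumptions, not standard terms.

def respond(stimulus, associations):
    """Return the response a stimulus elicits, or None."""
    innate = {"meat": "salivation"}  # UCS -> UCR needs no learning
    return innate.get(stimulus) or associations.get(stimulus)

associations = {}

# First phase (before conditioning): UCS elicits UCR, NS elicits nothing.
print(respond("meat", associations))  # salivation
print(respond("bell", associations))  # None

# Second phase (conditioning): repeated NS + UCS pairings build an association.
for _ in range(10):
    associations["bell"] = "salivation"

# Third phase (after conditioning): the NS, now a CS, elicits the CR.
print(respond("bell", associations))  # salivation
```

The dictionary of learned associations stands in for the dog's conditioned reflexes: before conditioning it is empty, and afterwards the bell alone elicits salivation.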
Definitions of Terms Used
Stimulus (S): Any object or event that can
be detected by one of the five senses and that can potentially elicit a
response from an organism.
Unconditioned stimulus (UCS): A stimulus
that naturally and automatically triggers a particular response without any
prior learning.
Response (R): Any observable behavior or
action that is produced by an organism as a result of a stimulus. For example,
salivating when presented with food is a response.
Unconditioned response (UCR): The natural
and automatic response that is elicited by an unconditioned stimulus. For
example, salivating in response to the smell of food is an unconditioned
response.
Conditioned stimulus (CS): A previously
neutral stimulus that, after being repeatedly paired with an unconditioned
stimulus, comes to elicit a particular response on its own. For example, a bell
ringing is a neutral stimulus, but if it is paired with the smell of food
repeatedly, it can become a conditioned stimulus that elicits salivation.
Conditioned response (CR): The response
that is elicited by a conditioned stimulus after the two have been repeatedly
paired together. For example, salivating in response to the sound of a bell
that has been paired with the smell of food is a conditioned response.
2.1.2 Phenomena and characteristics of classical conditioning
The phenomena and characteristics of classical
conditioning refer to the various principles and processes that govern how conditioned
learning occurs. Some key phenomena and characteristics include:
a. Stimulus
generalization: Stimulus generalization refers to
the tendency for a conditioned response to be elicited by a stimulus that is
similar, but not identical, to the original conditioned stimulus. In other words, we
respond to similar stimuli as if they were the original stimulus. For
example, if a person is stung by a bee and develops a fear response, they may
also become fearful of wasps, hornets, or other insects that are similar in
appearance. This happens because the brain associates the similar stimuli with
the original stimulus, and we respond in a similar way. The degree of
similarity between a stimulus and the conditioned stimulus determines the
strength of the response.
b. Stimulus
Discrimination: Stimulus discrimination is the
opposite of stimulus generalization, referring to the ability to distinguish
between a specific conditioned stimulus and other irrelevant stimuli. Stimulus
discrimination can occur in a range of situations. For example, if someone has
a fear of heights, they may be able to discriminate between different heights
and only feel fear at certain heights, such as standing on top of a tall
building, rather than on a ladder or a step stool. It can also occur in social situations, such
as recognizing different accents or dialects within a language.
c. Inhibition:
In classical conditioning, inhibition
refers to the learning of a negative association between the conditioned
stimulus (CS) and the unconditioned stimulus (UCS), where the CS predicts the
absence of the UCS. It means learning that a signal (like a bell) predicts the
absence of something (like food) rather than its presence. For example, a dog
that hears a bell repeatedly without getting food may learn that the bell means
no food and stop drooling. Inhibition can also happen when one signal is better
at predicting something than another, causing the second signal to become
inhibitory.
d. Extinction:
Extinction refers to the gradual disappearance or weakening of a learned
response over time. In other words,
extinction is the process by which an association between a conditioned
stimulus and a conditioned response is gradually weakened or disconnected. In
Pavlov's classical conditioning, when the bell was presented repeatedly without
the food, the dogs eventually stopped salivating in response to the bell.
e. Spontaneous
recovery: Spontaneous recovery refers to the
reappearance of a previously extinguished conditioned response (CR) after a
period of time has passed. For example, if a dog has been conditioned to
salivate at the sound of a bell, and then the bell is repeatedly presented
without food, eventually the dog will stop salivating. However, if the bell is
presented again after a period of time, the dog may exhibit a weak, but
noticeable, salivary response. This phenomenon suggests that the original
learning has not been completely erased and that the CR can be reactivated
under certain circumstances.
2.1.3 Educational implications of classical conditioning
The classical conditioning theory has
several implications for education that are relevant to the teaching and
learning process. They can be listed as follows:
· Training: The classical conditioning theory can be applied in the training of animals and humans. For instance, pets are trained using this theory to behave in a certain way. Similarly, the theory can be used to teach human beings expected behaviours, such as ethical conduct, by conditioning them.
· Removing specific fears: The classical conditioning theory can be used to remove specific fears that people may have. For example, children or the elderly may feel scared of certain things, and the theory can help them overcome such fears by gradually exposing them to suitable stimuli.
· Developing balanced emotions: Learners can experience unnecessary fear, anxiety, stress, attachment, jealousy, etc., which can hinder learning. For example, a student who is afraid of the teacher may also fear the teacher's subject. Conversely, a teacher who teaches with love and care can make the subject easier to understand. If the stimuli that hinder learning are removed, teaching and learning can be made effective through such emotional improvements.
· Formation of good habits: A main objective of classical conditioning is the formation of good habits, such as going to school regularly, doing homework, respecting elders, and staying clean. Good habits can be developed by conditioning at the appropriate time and through the appropriate process. For instance, creating a pleasant, home-like environment in school, or giving children their favourite toys on the way to school, can help them develop the habit of going to school regularly.
· Elimination of bad habits: Classical conditioning is useful for eliminating negative habits in students, such as using foul language, stealing, running away, using addictive substances, gambling, and speaking rudely. Students can be conditioned to overcome such antisocial behaviour.
· Verbal learning: Classical conditioning can be used to associate a neutral stimulus with a meaningful one to aid verbal learning. For example, a child can be taught the association between the letter "B" and the word "ball" by repeatedly presenting the letter "B" alongside a picture or an actual ball while saying "B for ball". Eventually, the child learns to associate the letter "B" with the word "ball" through classical conditioning. This aids memorization and retention of new vocabulary.
· Sports teaching: Classical conditioning is very useful in subjects like physical education when teaching sports skills. Coaches can use it to shape athletes' behaviours and create positive associations with learning. For example, a coach may praise an athlete every time they perform a specific movement correctly, leading to improved performance and increased motivation to learn.
2.2 Operant Conditioning (Skinnerian
Conditioning)
2.2.1 Introduction to Operant Conditioning
The founder of operant conditioning theory
is the renowned American psychologist and behaviourist Burrhus Frederic Skinner (B.F.
Skinner); the theory is therefore also known as Skinnerian conditioning. Skinner
was born in Susquehanna, Pennsylvania, in 1904, and earned his Ph.D. in
psychology from Harvard University in 1931.
Skinner's theory of operant conditioning
is a modified version of Pavlov's classical conditioning. His influential works,
such as "The Behavior of Organisms," "Science and Human Behavior,"
and "Walden Two," focused on behaviour, rather than mental processes, as the
foundation of psychology.
What is operant conditioning?
Operant conditioning is a learning process
in which behaviour is modified through its consequences: reinforcement or
punishment.
Skinnerian conditioning is based on the
S-R (stimulus-response) chain. According to Skinner, behaviour operates on the
environment to generate consequences; that is, an organism's behaviour
is shaped by the consequences it produces in the environment. If the
consequences of a behaviour are positive or reinforcing, the organism is more
likely to repeat that behaviour in the future. On the other hand, if the
consequences are negative or punishing, the organism is less likely to repeat
it. The environment therefore plays a crucial role in
shaping and reinforcing an organism's behaviour.
Skinner's
theory of conditioning identifies two types of behaviour: respondent and
operant. Respondent behaviour is an automatic reaction to a specific
stimulus, also known as type S (or type I) behaviour, as explained under
classical conditioning. Operant behaviour, in contrast, is behaviour that
is modified by its consequences, either through reinforcement or punishment; it
is also known as type R (or type II) behaviour.
2.2.2 Basic
process of operant conditioning and experiment on rat
a. Experiment on rat
B.F. Skinner conducted a series of
experiments with animals to observe how they learn new things. He wanted to
understand how behaviour can be changed through reinforcement. He designed a
box called a "Skinner box" which was similar to Thorndike's
"puzzle box". The box had a bar or key that animals could press to
receive food or water. This box also recorded their responses.
In one classic experiment, he placed a rat in the
Skinner box and observed how it learned to press the lever to receive
food. As soon as the rat was put in the box, it started exploring by moving
around and touching things. Eventually, it discovered a lever that released food
when pressed. On repeating the experiment, he found that the rat learned to
press the lever faster and faster each time to get the food quickly. Skinner
termed this learning operant conditioning.
b. Basic process of operant
conditioning
Operant conditioning is a learning process
that involves modifying behaviour through consequences. The basic
process of operant conditioning involves:
· Acquisition of operant behaviour: This is the initial stage of learning, where an organism learns to associate a behaviour with a consequence. For example, a rat presses a lever to receive food.
· Behaviour shaping: This involves reinforcing successive approximations of the desired behaviour to mould it into the final behaviour. For example, a rat is shaped to press the lever with a gradual increase in the required effort.
· Generalization: This is when the learned behaviour is applied to new situations similar to the original learning context. For example, a rat that learned to press a lever for food in one box can press it in another box.
· Habit competition: This occurs when two behaviours compete for the same reinforcing consequence. For example, a rat may choose between pressing the lever and grooming. The organism ultimately chooses the behaviour most likely to deliver the reinforcing consequence it desires, while suppressing or extinguishing competing behaviours that do not offer the same level of reinforcement.
· Chaining: This involves linking together a series of behaviours to create a complex sequence, with each behaviour acting as a cue for the next. For example, a rat may learn to press a lever, run to a corner, and then jump through a hoop.
· Extinction: This is the gradual decrease and eventual disappearance of a behaviour due to lack of reinforcement. For example, if the rat stops receiving food after pressing the lever, it will eventually stop pressing it.
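As a rough sketch of the process just described, acquisition and extinction can be modelled as a probability of emitting a behaviour that rises with reinforcement and falls without it (the learning rate of 0.1 and the starting probability are arbitrary assumptions for illustration, not values from the theory):

```python
# Sketch: probability of emitting a behaviour (e.g. lever pressing),
# nudged toward 1 when reinforced and toward 0 when not.
# The rate and starting values are illustrative assumptions.

def update(prob, reinforced, rate=0.1):
    """Move the behaviour's probability toward 1 if reinforced, else toward 0."""
    target = 1.0 if reinforced else 0.0
    return prob + rate * (target - prob)

prob = 0.05                      # the rat rarely presses the lever at first

for _ in range(50):              # acquisition: every press yields food
    prob = update(prob, reinforced=True)
print(round(prob, 2))            # near 1.0: lever pressing is acquired

for _ in range(50):              # extinction: food is withheld
    prob = update(prob, reinforced=False)
print(round(prob, 2))            # near 0.0: the behaviour fades
```

The same update rule, run with reinforcement and then without it, reproduces the acquisition and extinction stages listed above.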
2.2.3 Positive
and negative reinforcement
Reinforcement is a stimulus or consequence that
strengthens the connection between a stimulus and a response and motivates
individuals to engage in a task repeatedly. It increases the likelihood that
the behaviour will occur again in the future. In operant conditioning,
there are two types of reinforcement: positive reinforcement and negative
reinforcement.
a. Positive reinforcement
Positive reinforcement involves adding something
desirable or rewarding after a behaviour, which increases the likelihood of
that behaviour happening again in the future. It is a way of encouraging and
strengthening certain actions or responses. The reward can be anything tangible
or intangible that the individual finds desirable, such as a cash prize, a trophy, a medal,
praise, a hug, or attention.
Positive
reinforcement is a powerful tool that can be used to teach new behaviours or to
strengthen existing ones. It is often used in education, parenting, and animal
training.
Here are some examples of positive reinforcement:
· A teacher gives a student a sticker for completing their homework.
· A parent gives their child a hug for being kind to their sibling.
· A dog trainer gives a dog a treat for sitting on command.
b. Negative reinforcement
Negative reinforcement involves the
removal or avoidance of something unpleasant or aversive after a
behaviour. It serves as a reward for the
behaviour which encourages the individual to repeat the behaviour in order to
escape from or avoid the unpleasant stimulus in the future.
Here are some examples of negative reinforcement in relation to education:
· A teacher gives a student a break from class if they raise their hand and answer a question correctly.
· A parent allows their child to watch TV after they finish their homework.
· A student starts doing their homework regularly to avoid the teacher's disapproving looks.
It
is important to note that negative reinforcement should not be confused with
punishment. Punishment involves presenting an undesirable consequence
after an undesirable behaviour is exhibited. In contrast, negative
reinforcement involves removing an undesirable consequence after a
desirable behaviour is exhibited. Punishment aims to decrease unwanted behaviour
by applying aversive consequences, whereas negative reinforcement focuses on
increasing desired behaviour by removing or avoiding aversive stimuli.
Negative reinforcement can be an effective
tool for promoting positive behaviour change in the classroom. However, it is
important to use it in a way that is fair and consistent. Additionally, it is
important to be aware of the potential negative side effects of negative
reinforcement, such as the development of anxiety or avoidance behaviours.
Schedules of reinforcement
As suggested by the operant conditioning theory of learning, the following schedules of
reinforcement can be used effectively:
a. Continuous
reinforcement schedule: When reinforcement is given for every correct behaviour
or response, this is called a continuous reinforcement schedule. Under this
schedule, learning occurs very rapidly, and it is most useful
for establishing or strengthening new behaviour. A continuous schedule builds a
strong expectation of reward. For example: giving chocolate to a child every
time he helps his parents, praising a student for every correct
answer, or giving a child a candy every time they use the toilet.
b. Partial
or intermittent reinforcement schedule: This is a non-continuous pattern of delivering
reinforcement. Here, reinforcement is given only occasionally, either at a
fixed ratio, at a fixed interval, or at random, so the reinforcement is
unpredictable. This type of reinforcement generates greater resistance to
extinction than continuous reinforcement. The partial reinforcement
schedule can be further classified into:
I. The
ratio schedules
II. The
interval schedules
I. The
ratio schedules
When reinforcement is provided in
accordance with the number of desired responses, this is called ratio schedule.
There are two types of ratio schedules. They are:
a. Fixed
ratio schedule: In this schedule, reinforcement is given after a fixed
number of responses; that is, the organism receives reinforcement only after
producing a fixed number of behaviours. For example, a student is rewarded for
every three or every five correct answers.
b. Variable
ratio schedule: Here, reinforcement is given after a varying number of responses; the
exact number of responses required to receive reinforcement is not specified.
The reinforcement is given after an unpredictable number of desired responses. For
example, a student is rewarded
sometimes for three and sometimes for five correct answers. This type of
reinforcement is very useful in producing high and steady response rates.
II. The
interval schedules: This is the second type of intermittent or partial schedule
of reinforcement, in which reinforcement is provided on the basis of elapsed time.
It is further divided into the following two types:
a. Fixed
interval schedule: In this schedule, reinforcement is given for a response made
only after a fixed interval of time, e.g., every 3 minutes, every 5 minutes,
every week, or every month. It does not consider the number of
correct responses made during that interval. For example, a child may be
rewarded once a week for keeping their room clean, or an employee receives a
weekly paycheck. This schedule produces a drop in responding
immediately after reinforcement is delivered and a gradual increase in responding
as the time for the next reinforcement approaches.
b. Variable
interval schedule: In a variable interval
schedule, reinforcement is provided after a variable amount of time.
The time interval changes after every reinforcement and is irregular
and unpredictable. Examples include a teacher spot-checking homework at
unpredictable times, waiting for a bite while fishing, and gambling.
This schedule is very useful for making behaviour steady and
sustained.
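The four partial schedules described above can be summarized in a short sketch (the ratio of 3 and the interval of 5 seconds are arbitrary example values, and the function names are our own):

```python
import random

# Illustrative decision rules for the four partial reinforcement
# schedules; the specific numbers are assumptions, not prescriptions.

def fixed_ratio(n_responses, ratio=3):
    """Reinforce every `ratio`-th response (e.g. every 3rd correct answer)."""
    return n_responses % ratio == 0

def variable_ratio(ratio=3):
    """Reinforce after an unpredictable number of responses, averaging `ratio`."""
    return random.random() < 1.0 / ratio

def fixed_interval(elapsed, interval=5):
    """Reinforce the first response after a fixed time window (in seconds)."""
    return elapsed >= interval

def variable_interval(elapsed, mean_interval=5):
    """Reinforce after a waiting time that varies unpredictably around a mean."""
    return elapsed >= random.uniform(0, 2 * mean_interval)

# On a fixed-ratio-3 schedule, a student is rewarded on answers 3, 6, and 9.
print([n for n in range(1, 10) if fixed_ratio(n)])  # [3, 6, 9]
```

The fixed schedules are deterministic, while the variable schedules use randomness, which is what makes reinforcement unpredictable and the behaviour more resistant to extinction.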
2.2.3 Principle of shaping
a. What is shaping?
The term "shaping" or
"shaping behaviour" comes from the theory of operant conditioning.
Shaping is a gradual learning process that occurs step by step. It can be
defined as a process of reinforcing successive approximations of behaviour
until the target behaviour is achieved.
It
is believed that new and complex skills cannot be learned all at once; they can
be learned only if they are taught gradually, one step after another. Skinner used a schedule
of reinforcement to train a mouse to carry marbles from one place and store
them in a specific location. Additionally, he trained two pigeons to play a form
of table tennis, batting the ball with their beaks. Behaviour shaping is used to train animals and
humans in acquiring complex behaviours.
During shaping, an organism receives reinforcement for each step that
brings it closer to the desired behaviour.
Shaping is a powerful tool that can be
used by clinicians, teachers, and parents when needed. To effectively shape behaviour,
four key steps should be followed (Martin and Pear, 1999):
I. Identifying the target behaviour: In this step, the specific behaviour that needs to be developed or changed is determined. Defining the behaviour clearly increases the likelihood of success in the shaping process.
II. Selecting the starting behaviour: The entry point, or starting behaviour, for the shaping process is decided in this step.
III. Establishing shaping steps: After determining the starting behaviour, the trainer creates a list of behaviours that progress step by step towards the target behaviour. Each successive approximation is reinforced.
IV. Adjusting the pace: If the individual is not making progress, the trainer should try simpler steps. On the other hand, if progress is fast, the criteria for reinforcement should be raised. Positive and negative reinforcement and punishment play significant roles in the shaping process.
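The steps above can be sketched as a simple loop in which the criterion for reinforcement is raised a little at a time (modelling behaviour as a single number, and the step size, are illustrative assumptions of this sketch, not part of Martin and Pear's procedure):

```python
# Sketch of shaping: reinforce successive approximations, raising the
# criterion gradually until the target behaviour is reached.

def shape(start, target, step=1.0):
    """Return the sequence of reinforced approximations from start to target."""
    behaviour = criterion = start
    history = []
    while behaviour < target:
        criterion = min(criterion + step, target)  # raise the bar gradually
        behaviour = criterion                      # the reinforced step sticks
        history.append(behaviour)
    return history

# Shaping a rat from touching the lever (0) to a full press (5),
# one small approximation at a time.
print(shape(0, 5))  # [1.0, 2.0, 3.0, 4.0, 5.0]
```

A smaller `step` corresponds to "adjusting the pace" in step IV: a struggling learner gets simpler intermediate criteria, so the list of reinforced approximations becomes longer but each one stays achievable.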
b. Principles of shaping
The modern principles of shaping as stated
by Karen Pryor are as follows:
I. Be prepared before you start: Be ready to click/treat immediately when the training session begins. When shaping a new behaviour, be ready to capture the very first tiny inclination the animal gives you toward your goal behaviour. This is especially true when working with a prop such as a target stick or a mat on the ground.
II. Ensure success at each step: Break behaviour down into small enough pieces that the learner always has a realistic chance to earn a reinforcer.
III. Train one criterion at a time: Shaping for two criteria or aspects of a behaviour simultaneously can be very confusing. One click should not mean two different criteria.
IV. Relax criteria when something changes: When introducing a new criterion or aspect of the skill, temporarily relax the old criteria for previously mastered skills.
V. If one door closes, find another: If a particular shaping procedure is not progressing, try another way.
VI. Keep training sessions continuous: The animal should be continuously engaged in the learning process throughout the session, working the entire time except for the moments when it is consuming or enjoying its reinforcer. This also means keeping a high rate of reinforcement.
VII. Go back to kindergarten, if necessary: If a behaviour deteriorates, quickly revisit the last successful approximation or two so that the animal can easily earn reinforcers.
VIII. Keep your attention on your learner: Gratuitously interrupting a training session, by taking a phone call, chatting, or doing something else that can wait, often causes learners to lose momentum and become frustrated by the lack of information. If you need to take a break, give the animal a "goodbye present," such as a small handful of treats.
IX. Stay ahead of your learner: Be prepared to "skip ahead" in your shaping plan if your learner makes a sudden leap.
X. Quit while you're ahead: End each session with something the learner finds reinforcing. If possible, end a session on a strong behavioural response, but, at any rate, try to end with your learner still eager to go on.
2.2.4 Educational implications of
operant conditioning
Operant conditioning, advocated by B.F.
Skinner, is highly useful for teaching, training, and behaviour control.
Skinner's research in 1954 highlighted its application in human education. The
applications below all rest on the principles of reinforcement and punishment,
and the technique has gained prominence in various fields, as follows:
a. Programmed
instruction
b. Teaching
machine
c. Self
management
d. Token
economy programs
e. Verbal
learning
f. Group
contingency
g. Behaviour
therapy
a. Programmed
instruction: Operant conditioning techniques are employed in programmed
instruction, which involves breaking down learning materials into small,
manageable steps and providing immediate feedback and reinforcement for correct
responses.
b. Teaching
machine: A teaching machine is an instructional device or system designed to
facilitate learning through programmed instruction. It typically presents
instructional materials in a sequential manner, providing immediate feedback
and reinforcement. Teaching machines can include various interactive elements,
such as quizzes, exercises, and assessments. They are aimed at promoting
self-paced learning, individualized instruction, and the mastery of specific
subject matter or skills.
c. Self-management:
Operant conditioning can be utilized for self-management, where individuals
learn to regulate and modify their own behaviors by setting goals, tracking
progress, and rewarding themselves for achieving desired outcomes. For example,
if an individual wants to lose weight, he might set a goal of losing 1 pound
per week. He would then track his progress and give himself a reward, such as a
new outfit or a night out with friends, when he reaches his goal.
d. Token
economy programs: In token economy programs, individuals receive tokens or
points as rewards for exhibiting desired behaviors. These tokens can be
exchanged for various privileges or incentives, promoting positive behavior
change. It has been shown to be effective in a variety of settings, including
schools, prisons, and mental health facilities.
e. Verbal
learning: It is a type of learning that involves the acquisition of new words
or phrases. Operant conditioning can be used to promote verbal learning by
providing positive reinforcement for correct responses and corrective feedback
for incorrect responses. For example, a child who is learning to read might be
given a sticker each time they correctly identify a word.
f. Group
contingency: Group contingency refers to applying operant conditioning
techniques within a group setting. It involves reinforcing the behavior of an
entire group based on the performance of individuals or a subset of the group,
fostering cooperative behavior and encouraging positive group dynamics. For
example, a class of students might be given a pizza party if they all turn in
their homework on time.
g. Behaviour
therapy: Operant conditioning plays a crucial role in behavior therapy, a
therapeutic approach that focuses on modifying maladaptive behaviors. In
behavior therapy, individuals are taught to identify the triggers for their
unwanted behaviours and to develop strategies for avoiding or managing those
triggers. Behavior therapy has been shown to be effective in treating a variety
of conditions, including anxiety, depression, and addiction.
Difference between classical conditioning and operant conditioning
1. CC: Discovered by the Russian physiologist Ivan Pavlov.
   OC: Discovered by the American psychologist B.F. Skinner.
2. CC: Pairs an involuntary (reflexive) response with a neutral stimulus.
   OC: Pairs a voluntary response with its consequence.
3. CC: The organism is passive or reactive.
   OC: The organism is active or proactive.
4. CC: Learning is more reflexive in nature.
   OC: Learning is more proactive in nature.
5. CC: The stimulus comes first.
   OC: The behaviour comes first.
6. CC: The response is under the control of the stimulus.
   OC: The response is under the control of the organism.
7. CC: Reinforcement follows the stimulus.
   OC: Reinforcement follows the response.
8. CC: Extinction occurs by withdrawing the UCS.
   OC: Extinction occurs by withdrawing reinforcement.
2.3 Connectionism (Thorndike's Theory
of Learning)
a. Introduction to Connectionism
Edward Lee Thorndike (1874-1949) was a
renowned American psychologist known for his work in educational psychology and
animal behaviour. In 1913, he presented his theory of connectionism, also
known as the trial and error theory of learning, which grew out of his doctoral thesis
titled "Animal Intelligence: An Experimental Study of the Associative
Processes in Animals." His theory is also known as bond psychology or the theory
of association, and was originally called the "selecting and connecting"
theory. Thorndike conducted experiments on animals such as cats, dogs, and monkeys,
often employing puzzle boxes. It was Thorndike who introduced the concept of reward
in learning.
What is connectionism or Trial and Error?
When confronted with a problem, organisms generate multiple responses,
including errors. With persistent effort and
practice, the errors gradually diminish until the desired
learning is achieved. In this way, learning is the formation of a bond, association, or
connection between stimulus and response through the process of trial and error.
In essence, Thorndike's theory proposes
that learning occurs through the establishment of connections or associations
between stimuli and responses through a process of trial and error. He observed
that animals learn by trying different responses and gradually refining their behaviour
based on the consequences they experience. Through his experiments, Thorndike
found that behaviours leading to favourable outcomes are reinforced and more
likely to be repeated, while behaviours resulting in unfavourable outcomes
diminish over time.
Thorndike's theory of connectionism has
had a significant impact on educational psychology. It highlights the
importance of the consequences or rewards associated with behaviours in shaping
learning and behaviour change. By understanding and applying this theory,
educators can design effective teaching methods that encourage positive
reinforcement and facilitate the formation of desired associations between
stimuli and responses.
2.3.1 Basic process of conditioning (process
of trial and error) and experiment on cat
a. Experiment on cat
Thorndike placed a hungry cat in a puzzle
box, where all of the cat's behaviours were recorded by an automatic mechanism.
Inside the box was a latch that the cat had to manipulate in order to
escape; outside the box, food (fish) was visible. The cat attempted
various actions such as scratching, jumping, meowing, and pawing to open the box and get
out. Initially, the cat was unfamiliar with the correct sequence of
actions, but with persistent effort and practice, it accidentally pressed the
latch and the box opened. After this process was repeated about five times, the
cat had gradually reduced its mistakes until, finally, without
any errors, it succeeded in opening the latch and came out to get its favourite food.
This process was referred to as trial and error learning. Later, Thorndike
conducted similar experiments with dogs and monkeys; the dogs made fewer errors
than the other animals. Based on these experiments, Thorndike formulated the laws of learning,
which he divided into primary and secondary laws.
b. Basic Process of Conditioning
(process of trial and error)
The process of trial and error involves
the following steps:
I.
Emergence of a new
situation, problem, or obstacle: The process begins when a new situation, problem, or
obstacle arises that requires attention or resolution. This could be a
challenge, barrier, or unfamiliar circumstance that demands a response. For
instance, in an experiment involving a cat, the initial hurdle was the cat
being confined in a box with a closed door. The closed door prevented the cat
from easily accessing the food, presenting an obstacle or challenge to
overcome. To address this, the organism needs to have a goal and drive.
II.
Multiple responses: Faced
with the new situation, the organism instinctively engages in various random
responses. These responses are spontaneous and driven by the organism's
existing knowledge, instincts, or past experiences. The organism tries
different actions or behaviors without a specific plan or strategy. In the cat
experiment, the cat exhibited random movements in an attempt to escape the box.
Without knowing the exact way to open the door, the cat experimented with
various actions and behaviors in a trial and error manner.
III.
Chance success: Within
the trial and error process, there is a possibility of chance success. Among
the multiple random responses, some may accidentally lead to a successful
outcome or solution. The organism may stumble upon the correct response through
luck or without consciously understanding the cause-and-effect relationship
between its actions and the desired result. In the experiment, the cat achieved
chance success through continuous striving and random movements. By trying
different movements, the cat accidentally succeeded in opening the door,
achieving its goal of accessing the food.
IV.
Repetition of successful
response and elimination of unsuccessful ones: If the organism experiences
chance success, it repeats the actions or responses that led to the favorable
outcome. By repeating the successful response, the organism increases the
likelihood of achieving the desired result again. Simultaneously, it eliminates
or reduces the frequency of incorrect or unsuccessful responses. Through
repetition and comparison, the organism learns which responses are effective
and which are not, gradually refining its behavior. In the experiment, the cat
gradually recognized the correct way to pull the latch or perform the necessary
actions to open the door. Through repeated attempts and observations, the cat
began selecting the proper movements that consistently led to the desired
outcome. It refined its behavior and focused on the specific movements that
proved effective.
V.
Fixation: Over time,
through repetition and elimination, the organism solidifies the correct
response. It recognizes the cause-and-effect relationship between its actions
and the desired outcome and focuses on the specific behavior that consistently
leads to success. Fixation occurs as the organism learns from its experiences
and reinforces the learned behavior. The correct response becomes ingrained and
serves as a reliable solution to the given situation or problem. For example,
through repetition and learning from past experiences, the cat eliminated all
incorrect responses or movements that did not result in success. It reinforced
only the correct responses, allowing it to consistently open the door without
errors. The cat acquired knowledge and learned the correct way of opening the
door through the process of fixation.
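The five steps above can be sketched as a toy simulation. This is not part of Thorndike's work; it is a minimal illustration in which each response carries a connection strength, only the successful response ("press_latch" here, standing in for the cat's latch press) is strengthened, and the unsuccessful responses are counted as errors. All parameter values are made up for illustration.

```python
import random

def trial_and_error(actions, correct, trials=200, seed=0):
    """Toy simulation of trial-and-error learning: each response has a
    connection strength; only the successful response is strengthened,
    so the errors per trial diminish with practice."""
    rng = random.Random(seed)
    strength = {a: 1.0 for a in actions}   # equal bond strengths to start
    errors_per_trial = []
    for _ in range(trials):
        errors = 0
        while True:
            # choose a response in proportion to its current bond strength
            choice = rng.choices(list(strength), weights=strength.values())[0]
            if choice == correct:
                strength[choice] += 1.0    # chance success strengthens the bond
                break
            errors += 1                    # unsuccessful responses are errors
        errors_per_trial.append(errors)
    return errors_per_trial

errors = trial_and_error(["scratch", "jump", "meow", "press_latch"], "press_latch")
early = sum(errors[:20]) / 20   # average errors over the first 20 trials
late = sum(errors[-20:]) / 20   # average errors over the last 20 trials
print(early, late)
```

Running it shows the pattern of the cat experiment: many wrong responses early on, and almost none once the successful bond dominates (fixation).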
2.3.2 Primary Laws of Learning: (Law
of readiness, law of exercise and law of effect)
Thorndike presented various laws of
learning after conducting different experiments and tests. He divided these
laws into two categories: primary and secondary. However, we shall discuss only
about primary laws here:
a. Law
of readiness:
The Law of Readiness, proposed by Edward
Thorndike, emphasizes the importance of psychological and motivational
preparedness in the learning process. It suggests that effective learning
occurs when an individual is prepared and motivated to learn, being mentally
and physically ready.
Thorndike highlighted the crucial role of
readiness in achieving successful learning outcomes. When individuals are
ready, they are more likely to actively engage in the learning process,
establish connections between new information and existing knowledge, and retain
what they have learned. Therefore, it is necessary to create a suitable
learning environment that aligns with the learner’s needs, interest, level and
abilities. Similarly, the instructional contents and methods should also
encourage active participation and engagement.
There are two subordinate laws that are
associated with the Law of Readiness:
·
Law of Satisfaction: This
subordinate law states that when learners are in a state of readiness and their
responses are followed by a satisfying or rewarding outcome, the connections
between the stimulus and response are strengthened. Positive reinforcement or
rewards enhance the likelihood of the learned behavior being repeated.
·
Law of Annoyance: On the
other hand, the Law of Annoyance states that when learners are in a state of
readiness, but their responses are followed by an annoying or unsatisfying outcome,
the connections between the stimulus and response are weakened. Negative
consequences or punishments reduce the likelihood of the undesired behaviour
being repeated.
b. The law of
exercise
The Law of Exercise is based on the familiar
saying "Practice makes perfect". It means that practice is
crucial for effective and lasting learning. When we practice a subject, whether
it's new or something we've learned before, we improve our ability to learn it
faster and more easily.
According to E.L. Thorndike's law of
exercise, the more we practice, the stronger and more stable the connection
between stimuli and responses (S-R) becomes which leads to more effective and
enduring learning. However, it was later recognized that blind repetition alone
is not enough to strengthen the S-R relationship and enhance learning. The
introduction of rewards alongside practice becomes necessary. It was observed
that about six attempts without a reward are equal to one attempt with a reward
in reinforcing the S-R connection. On this basis, the law of exercise can further
be divided into the following two parts:
·
Law of Use: When we
frequently use or practice what we've learned, whether it's new or old
knowledge, it becomes more effective and enduring. Through practice, the
relationship between stimuli and responses (S-R) strengthens and becomes more
stable. Learning becomes more effective and lasting when the S-R relationship
is adaptable. This principle aligns with the saying "Learning by
doing," emphasizing that practice is a fundamental aspect of the learning
process.
·
Law of Disuse: If we
don't use learned information for a long time or neglect to reinforce and
modify the S-R relationship, the knowledge gradually fades away and can be
forgotten. Experiences and lessons that are not regularly utilized lose their
significance over time.
In summary, the Law of Use highlights the
importance of practicing and utilizing knowledge to strengthen the S-R
relationship, while the Law of Disuse warns about the risk of forgetting when
learned information is not regularly reinforced.
c. Law of effect
The Law of Effect explains how our learning
is influenced by the experiences we have. When something we do leads to
positive outcomes or rewards, we are more likely to do it again because it
makes us feel good. On the other hand, when our actions result in negative
consequences or punishments, we tend to do them less often because they make us
feel unhappy and dissatisfied. This law applies to both humans and animals. We
learn better when we are rewarded for our actions and less when we are
punished.
After 1930, this law was further revised.
It was observed that the influence of rewards and punishments is not equal and
opposite. It means that rewards and punishments have different effects. Rewards
increase the chances of a specific action being repeated, while punishments may
not necessarily reduce the likelihood of an action being repeated. Punishments
are not as effective in discouraging actions as rewards are in encouraging
them. It was suggested that rewards strengthen the connection between what
prompts our actions and how we respond, while punishments do not weaken this
connection.
In summary, positive experiences and
rewards motivate us to continue certain actions, while negative experiences and
punishments make us less likely to repeat them.
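The asymmetry in the revised law of effect can be expressed as a toy update rule. None of these numbers come from Thorndike; reward_gain and punish_loss are invented parameters chosen purely to illustrate the point that rewards strengthen the S-R bond while punishment does not weaken it by a matching amount (modelled here, for simplicity, as no weakening at all).

```python
def update_strength(strength, outcome, reward_gain=0.5, punish_loss=0.0):
    """Revised law of effect as a toy update rule: a reward strengthens
    the S-R bond; a punishment weakens it far less (here, not at all)."""
    if outcome == "reward":
        return strength + reward_gain
    if outcome == "punishment":
        return max(0.0, strength - punish_loss)
    return strength

s = 1.0
for outcome in ["reward", "punishment", "reward", "punishment", "reward"]:
    s = update_strength(s, outcome)
print(s)  # 2.5: three rewards raised the bond; the punishments left it unchanged
```

The point of the sketch is only the asymmetry: after an equal number of rewards and punishments, the bond ends up stronger than it began.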
2.3.3 Educational implications of
Thorndike’s Theory
Thorndike's trial and error theory has
had a great influence on learning and behaviour
modification. Its significant applications in education are as
follows:
·
Prepare students to
learn: According to the trial and error theory, it is important to prepare
students mentally and emotionally for the learning process. Teachers can create
a conducive learning environment by establishing a positive classroom culture,
setting clear expectations, and helping students develop a growth mindset. This
prepares students to approach learning with a willingness to take risks, make
mistakes, and learn from their experiences.
·
Emphasize exercise to
strengthen learning: The trial and error theory also suggests that
students learn by practicing what they have learned. This means that teachers
should provide opportunities for students to practice new skills and knowledge.
They can do this through homework, classwork, and projects.
·
Create conducive learning
environment: According to trial and error theory, it is important to create a
positive and supportive learning environment where students feel safe to
explore, take risks, and learn from failures without fear of judgment or
embarrassment. A pleasant learning environment encourages students to
persevere, seek help when needed, and maintain a positive attitude towards
learning.
·
Use integrated approach
in teaching: The trial and error theory suggests that students learn best when
they are able to see how different concepts are related to each other. This
means that teachers should use an integrated approach to teaching, which
involves teaching multiple concepts at the same time. This can be done by using
thematic units or by teaching across the curriculum.
·
Provide novelty of
methods and materials in teaching: Incorporating novelty in teaching methods
and materials can stimulate students' interest and engagement. Teachers can
introduce new and innovative instructional approaches, technologies, and
materials to make the learning experience more exciting and captivating. By
providing novel experiences, educators can grab students' attention, spark curiosity,
and enhance their motivation to explore and learn.
·
Use feedback: Feedback
plays a crucial role in trial and error learning. Teachers should provide
timely and constructive feedback that highlights students' strengths,
identifies areas for improvement, and guides them towards achieving their
learning goals. Feedback helps students understand the consequences of their
actions, make adjustments, and refine their approaches.
·
Use reward and punishment
as necessary: Rewards and punishments are vital in shaping behavior and
learning outcomes. Educators can use rewards to reinforce positive behaviors
and outcomes, while employing punishments sparingly to discourage undesirable
behaviors and promote learning from mistakes. It is important to strike a
balance, avoiding excessive rewards that may create dependency and excessive
punishments that can lead to discouragement. Teachers should apply rewards and
punishments judiciously, ensuring fairness and consistency for effective
implementation. As far as possible, however, punishment should be avoided.
2.4 Applications of Integrated Approaches
to Learning
An integrated approach to learning is a
teaching method that connects different subjects or disciplines,
intentionally blending knowledge, skills, and concepts from different
fields to provide a holistic and interconnected learning experience for
students. By integrating various subjects, this approach aims to foster deeper
understanding, critical thinking, and problem-solving skills by encouraging
students to make connections and apply their learning across different
contexts.
Benefits of Integrated Approaches to Learning
Some of the advantages of the integrated approach
to learning are as follows:
·
Integrated learning pays
particular attention to increasing understanding, retention, and
application of general concepts.
·
It provides a better
understanding of the content.
·
Integrated learning
encourages active participation in relevant real-life experiences.
·
It serves as a connection
between various curricular disciplines.
·
It develops higher-level thinking skills.
·
It ensures active
participation by appealing to students' points of interest.
Applications of integrated approach
to learning
The integrated approach to learning has a
wide range of applications across different educational contexts. Here are a
few examples:
a. Project-Based
Learning: Integrated learning can be implemented through project-based learning,
where students work on a comprehensive project that integrates concepts from
multiple subjects. For instance, a project on sustainable cities could involve
elements of science (environmental impact), mathematics (data analysis), social
studies (urban planning), and language arts (communication and presentation
skills).
b. STEM
Education: Integrated learning is highly relevant in STEM (Science, Technology,
Engineering, and Mathematics) education. Instead of teaching these subjects in
isolation, educators can create interdisciplinary projects that encompass
multiple STEM disciplines. For example, designing and building a renewable
energy system involves principles from physics, engineering, and environmental
science.
c. Environmental
Education: Integrated learning can be employed to address environmental issues
and promote sustainability. Students can explore the interconnections between
ecological systems, climate change, social dynamics, and economic factors. This
approach allows them to understand the complex nature of environmental
challenges and develop holistic solutions.
d. Global
Education: An integrated approach is valuable in global education, where
students learn about different cultures, languages, and global issues. By
integrating social studies, geography, history, language arts, and current
events, students gain a deeper understanding of global interconnectedness,
cultural diversity, and global challenges.
e. Career
and Technical Education (CTE): Integrated learning is applicable in CTE
programs that prepare students for specific careers. For example, a program
focused on robotics might integrate concepts from electronics, programming,
engineering, and entrepreneurship. Students gain a comprehensive skill set that
prepares them for various aspects of the robotics industry.
f. Arts
Integration: Integrating arts into other subject areas enhances creativity and
critical thinking. For instance, incorporating visual arts, music, or drama
into a literature unit can deepen students' understanding and interpretation of
a literary work.
2.5 Addressing learning difficulties through
different learning approaches
Learning difficulties, or learning
disabilities, are challenges individuals face in acquiring and processing
information. They can affect areas like reading, writing, math, and
comprehension. Causes include neurological, cognitive, and genetic factors. Common
learning difficulties include dyslexia, dyscalculia, and attention deficit
hyperactivity disorder (ADHD). Support and accommodations are crucial for
individuals with learning difficulties to succeed academically and socially.
Learning difficulties can be addressed
using different approaches such as:
1. Differentiated
Instruction: Teachers adapt their teaching methods, materials, and assessments
to meet the diverse learning needs of students.
2. Multi-Sensory
Learning: Engaging multiple senses (sight, hearing, touch) helps individuals
understand and remember information better.
3. Personalized
Learning: Instruction is tailored to individuals' strengths, weaknesses,
interests, and learning styles, often using technology-based tools.
4. Collaborative
Learning: Working in groups or pairs allows students to learn from each other,
develop social skills, and gain confidence.
5. Assistive
Technology: Tools like text-to-speech software, graphic organizers, and
specialized apps aid reading, writing, and organization.
6. Visual
Aids and Mnemonics: Charts, diagrams, and memory aids help individuals process
and recall information more effectively.
7. Chunking
and Simplification: Breaking down complex tasks or concepts into smaller parts
makes learning more manageable.
8. Regular
Review and Reinforcement: Consistent practice and repetition of learned
concepts enhance understanding and retention.
9. Emotional
Support and Positive Reinforcement: Providing encouragement and support helps
individuals overcome emotional challenges and maintain a positive attitude
towards learning.
10. Individualized
Education Plans (IEPs): Customized plans with specific goals, accommodations,
and strategies are created for students with significant learning difficulties.
It's important to understand each person's
unique challenges and strengths to implement the most suitable learning
strategies.