Operant Learning for Dogs Continued

A series of articles for professional dog trainers and for those who want to become professional or certified dog trainers.

In the last issue we discussed the operant quadrant and extinction, two very important concepts in operant learning. In this issue, we’ll discuss some more concepts of operant learning.

Primary & Secondary Reinforcers

A primary reinforcer is any reinforcer that is not dependent on another reinforcer for its reinforcing properties. (Chance, Learning & Behavior, 5th ed., pg 453.) There are actually very few primary reinforcers: food, water, sexual stimulation, and shelter from the elements. (Chance, Learning & Behavior, 5th ed., pg 149.) Chance also states that weak electrical stimulation of certain brain tissue and certain drugs can be primary reinforcers, and it’s probable that movement is a primary reinforcer as well.

A secondary reinforcer, also called a conditioned reinforcer, is any reinforcer that has acquired its reinforcing properties through its association with other reinforcers. (Chance, Learning & Behavior, 5th ed., pg 454.) Creating a conditioned reinforcer is a respondent operation: we use respondent learning to condition the secondary reinforcer, but we then use that respondently conditioned reinforcer in operant procedures.

Many who have become professional dog trainers have been taught that a primary reinforcer is the reinforcer that follows a secondary reinforcer. For instance, you click the clicker then throw a ball – the click being the secondary reinforcer and the ball being the primary reinforcer. The ball is actually a secondary reinforcer. A ball has no intrinsic value to a dog; before it has value it must be associated with something else. For most dogs the association that makes a ball reinforcing is chasing. We can absolutely use conditioned reinforcers to maintain behavior, once they have acquired a strong reinforcement history.

So, how do we change the way we think about reinforcers? If you are in the habit of using the terms “primary” and “secondary” when talking about reinforcers, you can just train yourself to drop the word “primary” entirely, and change the word “secondary” to “conditioned.” So, instead of saying “the click is the secondary reinforcer and the ball is the primary reinforcer,” you can say “the click is the conditioned reinforcer which precedes the ball,” or “the ball is reinforcing to the dog, so we condition the click to the ball . . .”

Another thing many of us learned on the way to becoming professional dog trainers is that a secondary reinforcer must always be followed by a primary reinforcer. Well, as the ball example illustrates, this is not exactly true, either! We can condition a reinforcer so strongly that it becomes reinforcing in and of itself. Now, I need to qualify this somewhat. We cannot condition just anything, and the reality is that the things we can condition to this level are usually surrounded by other stimuli which help that conditioned reinforcer maintain its effectiveness.

Let’s take that ball as an example. When we throw the ball, some of the other things going on around the dog that make the ball reinforcing are chasing and happy interactions with the thrower. These are the things that keep that ball reinforcing.

We can take this concept to the next step and help our clients condition a word that will be reinforcing to their dogs. When teaching obedience classes, I don’t use a clicker in class, but I use clicker principles. I have my clients use a marker word and I explain the above concept. Over time, that word gains reinforcing qualities and remains a potent reinforcer with just occasional backup from food (or perhaps no food backup at all!). When the dog hears that word, the surrounding environment – Mom’s happy, and all’s well with the world – helps that reinforcer maintain its reinforcing properties.

Finally, I don’t want to end this discussion of reinforcers without making sure that everyone understands that it is reinforcers that maintain behavior. We’re very used to thinking about reinforcers when we want to train a new behavior or increase a behavior, but we must also use reinforcers to maintain a behavior.

Shaping

First, I want to clarify some terminology. Because the term “free shaping” is so common in the animal training world, I distinguish free shaping from shaping. When I refer to shaping, I am simply talking about training through successive approximations and that may include luring, capturing and molding. When I refer to “free shaping,” I am talking about shaping using no luring or molding – only capturing.

This is, again, a concept that we are very familiar with. Almost all training uses shaping; it’s very rare for a behavior to be perfect the first time. As trainers, we can’t help manipulating a behavior into a form that is more pleasing to us! We must credit B.F. Skinner for our understanding of shaping principles. Although it seems obvious now, it wasn’t always so. Here is a quote from Thorndike which describes his attempts to train a behavior:

I would pound with a stick and say, “Go over to the corner.” After an interval (10 seconds for 35 trials, 5 seconds for 60 trials) I would go over to the corner (12 feet off) and drop a piece of meat there. He, of course, followed and secured it. On the 6th, 7th, 16th, 17th, 18th and 19th trials he did perform the act before the 10 seconds were up, then for several times went during the two-minute intervals without regarding the signal, and finally abandoned the habit altogether. (Chance, Learning & Behavior, 5th ed., pg 153.)

This quote shows us how much we’ve benefited from those who went before us. It also shows the benefit of the systematic study of behavior – taking measurements, compiling data, and better understanding how behavior works. Thorndike was not able to train the dog to go to the corner on cue. Undoubtedly, the dog went to that corner more frequently than he had before this exercise because he had been reinforced in that corner, but it never became a “trained” behavior. It took Skinner’s systematic study to turn that exercise into something that today any of us can do in just a few minutes.

When shaping, we take the behavior closest to what we want and shape it into the behavior we ultimately want the animal to do. So, let’s say you want that dog to go to the corner on cue. Probably the first, closest behavior the dog will display is looking or taking a step in that direction. So, we reinforce that look or that step. Then, once we have the dog looking or stepping in the direction of the corner reliably, we raise the criteria.

Everyone has their own measure of reliability, but there is research to fall back on. The Brelands and Bailey came up with the 80% rule: when an animal is correctly performing the behavior 80% of the time to your current criteria, you can safely raise the criteria without risking the behavior falling apart.
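For trainers who like to keep records, the 80% rule is easy to track as a running tally of recent trials. The sketch below is only an illustration of the arithmetic; the function name and the 10-trial window are assumptions, not part of the Brelands’ or Bailey’s protocol.

```python
# Illustrative sketch of the 80% rule: log each trial as 1 (correct to
# criteria) or 0 (miss), and check the success rate over a recent window.
# The 10-trial window and function name are assumptions for illustration.

def ready_to_raise_criteria(trials, window=10, threshold=0.80):
    """Return True when the success rate over the last `window` trials
    meets or exceeds `threshold` (the 80% rule)."""
    recent = trials[-window:]
    if len(recent) < window:
        return False  # not enough trials yet to judge reliability
    return sum(recent) / len(recent) >= threshold

# Example session: 1 = correct response, 0 = miss
session = [1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
print(ready_to_raise_criteria(session))  # last 10 trials: 9/10 correct -> True
```

A paper tally works just as well, of course; the point is simply to count rather than guess, so the decision to raise criteria is based on data.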

The key here is “to your criteria.” You must define your criteria to train efficiently. If you don’t know what the criteria are, how can you expect the animal to know? Most people have a vague idea of what they want. “I want the dog to sit on cue.” But they haven’t defined what that really means – how long after you give the cue; does it make any difference how he sits (i.e., over on a hip, straight, etc.); does he have to be in any particular position in relation to the trainer; does he have to be any specific distance from the trainer; and so on.

This is where shaping comes in. We start with fairly loose criteria. The dog needs to sit within “x” seconds of the cue, in any position, facing any direction, and at any distance. Once we get the sit, we can then refine it through shaping. Once the initial criteria are met (80% compliance), we can require something new. Not too much, but something!

Shaping is the foundation of training, and something every trainer should know how to do. Anyone who has read Karen Pryor’s book “Don’t Shoot the Dog!” is familiar with the game “101 Things to Do With a Box,” which is an introduction to shaping. Personally, I have a hard time with that game, because I like to have a specific goal, but it’s a good place to start. The first time I tried that game, I changed it a bit and did set a specific goal – the dog had to sit inside the box. In my beginning trainers’ course, everyone is required to shape a behavior. It doesn’t have to be a complex behavior, but it is a requirement. I urge anyone who has not tried shaping to give it a go – it will add a very valuable tool to your training toolbox.

In the next issue, we’ll discuss punishment – what it is and why we need to understand it.

Raising Canine has a school for dog trainers which focuses on operant training for dogs, dog behavior, working with clients and addressing client compliance, and the science behind behavior modification.