Jenn wrote: "I also agree with the above statement. Entirely. What I don't see is the benefits of 'losing a bit of the charge' at all. When you can keep the charge at 100% by continuing to use a reinforcer. I would rather keep a method working as well as I could, as consistently as I can, and as clearly for my dog as I can."
Well, if that last portion of your post is actually the way you feel, I submit for your consideration a few comments about variable ratios and secondary reinforcers (also called conditioned reinforcers).
Pay special attention to #4.
1. From Mowrer's "Learning Theory and Behavior," pg. 106, in a discussion of secondary reinforcement:
…reported by B.F. Skinner in 1938. After alluding to Pavlovian higher-order, or "secondary," conditioning (which Skinner termed "secondary conditioning of Type S," to stress the stimulus or sensory elements in the situation), this writer went on to say:
“There is, however, a process that might be called secondary conditioning Type R (for response). It does not involve a conflict with the process of discrimination because it is a response rather than a stimulus that is reinforced. The process is that of adding an initial member to a chain of reflexes without ultimately reinforcing the chain. In the present example, the sound of the food magazine acquires reinforcing value (becomes a secondary reinforcer) through its correlation with ultimate (primary) reinforcement. It can function as a reinforcing agent even when this ultimate reinforcement is lacking. Its reinforcing power will be weakened through the resulting extinction, but considerable conditioning can be effected before a state of more or less complete extinction is reached.” (Skinner, 1938)
2. Subsequent experimenting with secondary reinforcement took the use of intermittent primary reinforcement much, much further. Mowrer, pg. 112: "…although Hull knew in 1943 that a habit can be made more resistant to extinction if established by means of intermittent reinforcement, he thought that the most effective secondary reinforcement was established by associating the secondary reinforcement stimulus 'repeatedly and consistently' with primary reinforcement. However, Hull, writing in 1951, showed a very good appreciation of the other view. He said, 'The strength of a secondary reinforcing stimulus developing in essentially the same manner that the strength of a habit develops might be a possible explanation of the operation of secondary reinforcement'; and the same writer observes that 'intermittent primary reinforcement should increase the strength of a secondary reinforcing stimulus more than consecutive primary reinforcement'; and cites Saltzman in support of this surmise."
3. Another quote from Mowrer, pg. 114: "As already noted, Saltzman has shown that a formerly neutral stimulus acquires greater secondary-reinforcing capacity if associated with primary reinforcement intermittently rather than unfailingly."
4. In two journal articles, in Psychological Review and the Journal of Comparative and Physiological Psychology, Zimmerman presents further evidence that intermittent pairings of the SR (secondary reinforcer) and the PR (primary reinforcer) produce stronger associations. Zimmerman was the first not only to use intermittent reinforcement of the operant behavior, but ALSO to use intermittent pairing of the SR and the PR. This double-intermittent reinforcement procedure obtained secondary reinforcement effects roughly 40 times as great as those previously reported. In plain English, he put his subjects on a variable ratio schedule of reinforcement for bar pressing, so not all bar presses were reinforced. And then he used a variable ratio schedule for pairing the PR with the SR, meaning that only sometimes was the SR followed by a PR. Thus he was using double-intermittent reinforcement. One bar press MIGHT be followed by a click of the food magazine, or it might not. And the click of the food magazine MIGHT be followed by a pellet, or it might not. Using this procedure, animals can be made to emit as many as 2000 bar-pressing responses with secondary reinforcement ONLY, and even then the SR had not lost all of its potency.
So, in simple words: you can actually make both the click and the treat intermittent AND INCREASE the number of repetitions (or the time) it takes for the behavior to become extinct from lack of primary reinforcement.
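If it helps to see the two-link structure spelled out, here is a minimal sketch of the double-intermittent procedure as I described it above: each bar press only SOMETIMES produces the click (the SR), and each click only SOMETIMES produces the pellet (the PR). The probabilities 0.5 are purely hypothetical numbers for illustration; Zimmerman's actual schedules were different, and this is not a model of extinction, just of the two chained variable-ratio links.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def simulate_session(n_presses, p_click=0.5, p_pellet=0.5):
    """Double-intermittent schedule: bar press -> click (SR) on a
    variable basis, and click -> pellet (PR) on a variable basis.
    p_click and p_pellet are hypothetical illustration values."""
    clicks = pellets = 0
    for _ in range(n_presses):
        if random.random() < p_click:       # only some presses earn the click
            clicks += 1
            if random.random() < p_pellet:  # only some clicks earn the pellet
                pellets += 1
    return clicks, pellets

clicks, pellets = simulate_session(2000)
print(f"presses=2000  clicks={clicks}  pellets={pellets}")
```

The point of the sketch is just the chain: the animal emits far more presses than it ever hears clicks, and hears far more clicks than it ever gets pellets, which is exactly the sense in which reinforcement is intermittent at BOTH links.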
This is a step beyond what I was saying, but it supports what I'm talking about and the further comments posted, so it is relevant.
Mr. Frost brought this up in a PM to me, but I thought it would really throw this whole discussion out of whack. . . but what the hell. I have my own secret research assistant who pulled these quotes for me, so here they are.