Discrete-trial vs. continuous free-operant procedures in assessing whether reinforcement context affects reinforcement value

Document Type

Article

Publication Date

3-2-2008

Publisher

Elsevier

Abstract

Pigeons were trained on two alternating concurrent reinforcement schedules. The reinforcement probabilities were .05 and .10 in one component, and .10 and .20 in the other. In one condition, the pigeons were trained on a discrete-trial procedure in which the keylights remained illuminated for 5 s or until a response occurred. In another condition, the reinforcement contingencies were the same as in the discrete-trial procedure, but the stimuli were not turned off after 5 s or after a response. Following training in each condition, probe tests were presented. In both conditions, the .20 alternative was, overall, preferred to the .05 alternative during probe tests. Following discrete-trial training, there was no reliable preference between the two .10 alternatives. However, when the stimuli remained illuminated during intertrial intervals in training, probe test results showed a preference for the .10 alternative that had been presented in the leaner context during training. This pattern of results is consistent with the notion that probe preference can be influenced both by the absolute reinforcement schedules associated with each alternative and by changeover behavior developed during training.
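The discrete-trial contingencies described above can be sketched as a small simulation: on each trial, a peck on one of two keys is reinforced with that key's programmed probability, and the keylights time out after 5 s if no peck occurs. This is a minimal illustrative sketch, not the authors' actual experimental software; all function names and the fixed-bias response policy are assumptions.

```python
import random

def run_trial(probs, choose, rng):
    """Simulate one discrete trial of a two-key concurrent schedule.

    probs  -- reinforcement probability for each key, e.g. (0.10, 0.20)
    choose -- policy returning the index of the pecked key, or None
              if no peck occurs before the 5-s keylight timeout
    rng    -- random.Random instance, for reproducibility
    """
    key = choose()
    if key is None:  # keylights turned off after 5 s with no response
        return None, False
    reinforced = rng.random() < probs[key]
    return key, reinforced

def simulate_component(probs, n_trials, bias, seed=0):
    """Run n_trials with a fixed-bias policy (probability `bias` of
    pecking key 0); return the obtained reinforcement rate per key."""
    rng = random.Random(seed)
    pecks = [0, 0]
    earned = [0, 0]
    for _ in range(n_trials):
        key, reinforced = run_trial(
            probs, lambda: 0 if rng.random() < bias else 1, rng
        )
        pecks[key] += 1
        if reinforced:
            earned[key] += 1
    return [e / max(p, 1) for e, p in zip(earned, pecks)]
```

Over many trials, the obtained reinforcement rate on each key converges to its programmed probability (e.g. .10 and .20 in the richer component), which is the sense in which the two .10 alternatives are absolutely identical despite appearing in different contexts.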
