With this type of operant conditioning reinforcement schedule, an organism must wait a specific amount of time and then make the operant response in order to receive reinforcement; during the interval it can either withhold the response entirely or make responses that produce nothing. For example, if you are conducting a study in which you place a rat on a fixed-interval 30-second schedule (FI-30s), and the operant response is pressing a lever, then the rat must wait 30 seconds, and the first lever press after that interval elapses will produce reinforcement. This type of schedule is called fixed because the amount of time the organism must wait remains constant. In addition, the investigator can decide what NOT waiting will do. A press made before the interval has elapsed can either restart the interval (so if the rat waits 15 seconds and then presses the lever, the 30 seconds start all over again) or do nothing, in which case the rat can press the lever constantly throughout the 30 seconds and the first press after the interval ends will produce reinforcement.
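The two versions of the schedule described above can be sketched in code. This is a toy model using logical (not real) time; the class name, parameter names, and the `reset_on_early` flag are all invented for illustration, with the flag standing in for the investigator's choice about what a premature press does.

```python
class FixedIntervalSchedule:
    """Toy model of a fixed-interval (FI) schedule of reinforcement.

    Time is passed in as logical seconds. `reset_on_early` models the
    investigator's choice: if True, a premature lever press restarts
    the interval; if False, premature presses simply do nothing.
    """

    def __init__(self, interval_s=30, reset_on_early=False):
        self.interval_s = interval_s
        self.reset_on_early = reset_on_early
        self.interval_start = 0.0  # time the current interval began

    def press_lever(self, t):
        """Record a lever press at time t; return True if it is reinforced."""
        if t - self.interval_start >= self.interval_s:
            self.interval_start = t  # reinforcement begins the next interval
            return True
        if self.reset_on_early:
            self.interval_start = t  # early press restarts the 30-second clock
        return False


# FI-30s where early presses do nothing: pressing at 10 s and 20 s is
# unreinforced, and the press at 30 s is reinforced.
rat_a = FixedIntervalSchedule(interval_s=30, reset_on_early=False)
print(rat_a.press_lever(10), rat_a.press_lever(20), rat_a.press_lever(30))

# FI-30s where an early press restarts the interval: pressing at 15 s
# resets the clock, so the press at 30 s (only 15 s later) is still
# unreinforced; the press at 60 s finally earns reinforcement.
rat_b = FixedIntervalSchedule(interval_s=30, reset_on_early=True)
print(rat_b.press_lever(15), rat_b.press_lever(30), rat_b.press_lever(60))
```

Note that in this sketch a reinforced press also starts the next interval, so the model can run through many intervals in sequence, mirroring how the schedule would operate across an experimental session.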