Psychology Class Notes > Research Methods
I. Why Are Research Methods Important?
Science, at a basic level, attempts to answer questions (such as "Why are we aggressive?") through careful observation and collection of data. These answers can then (at a more complex or higher level) be used to further our knowledge of ourselves and our world, as well as to help us predict subsequent events and behavior.
But this requires a systematic, universal way of collecting and understanding data -- otherwise there is chaos.
At a practical level, methodology helps US understand and evaluate the merit of all the information we're confronted with every day. For example, would you believe the following studies?
1) A study indicated that the life span of left-handed people is significantly shorter than that of people who are right-hand dominant.
2) A study demonstrated a link between smoking and poor grades.
Many aspects of these studies must be examined before one can evaluate the validity of the results. However, most people do not bother to find out the details (which are the keys to understanding the studies) but only pay attention to the findings, even when the findings are completely erroneous.
Research methods are also practical in the workplace:
1) Mental Health Profession - relies on research to develop new therapies and to learn which therapies are appropriate and effective for different types of problems and people.
2) Business World - marketing strategies, hiring, employee productivity, etc.
II. Different Types of Research Methods
1) Basic Research
Answers fundamental questions about the nature of behavior. It is not done for application, but rather to gain knowledge for the sake of knowledge.
For Example, look at the titles of these publications:
a) Short and long-term memory retrieval: A comparison of effects of information overload and relatedness.
b) Electrophysiological activity in the central nucleus of the amygdala: Emotionality and stress ulcers in rats.
Some people erroneously believe that basic research is useless. In reality, basic research is the foundation upon which others can develop applications and solutions. So while basic research may not appear to be helpful in the real world, it can direct us toward practical applications such as, but definitely not limited to:
a) Skinner - trained animals to work for reinforcement - this led to schedules of reinforcement and to applications in I/O psychology, therapy, and education.
b) All those therapeutic techniques that clinical psychologists and other therapists use to help people must be studied to determine which are most effective for which situations, people, and problems.
2) Applied Research
concerned with finding solutions to practical problems and putting these solutions to work in order to help others.
Some examples of publication titles:
a) Effects of exercise, relaxation, and management skills training on physiological stress indicators.
b) Promoting automobile safety belt use by young children.
Today, there is a push toward more applied research. This is due in no small part to the perspective in the United States that we want solutions and we want them now! BUT, we still need to keep our perspective on the need for basic research.
3) Program Evaluation
Looks at existing programs in such areas as government, education, criminal justice, etc., and determines the effectiveness of these programs. DOES THE PROGRAM WORK?
For example - Does capital punishment work? Think of all the issues surrounding this program and how hard it is to examine its effectiveness. The most immediate issue: how do you define the purpose and "effectiveness" of capital punishment? If the purpose is to prevent convicted criminals from ever committing that same crime or any other crime, then capital punishment is an absolute - 100% effective. However, if the point of capital punishment is to deter would-be criminals from committing crimes, then it is a completely different story.
III. How Do Non-Scientists Gather Information?
We all observe our world and make conclusions. How do we do this?
1) seek an authority figure - a teacher tells you facts...you believe them. Is this such a good idea?
For example, if your teacher tells you that there is a strong body of evidence suggesting that larger brains = greater intelligence, should you simply accept it?
2) intuition - discussed in previous chapter.
Are women more romantic than men?
Is cramming for an exam the best way to study?
Whatever your opinion, do you have data to support your OPINIONS about these questions???
Luckily, there is a much better path toward the TRUTH...the Scientific Method.
IV. THE SCIENTIFIC METHOD
How do we find scientific truth? The scientific method is NOT perfect, but it is the best method available today.
To use the scientific method, all topics of study must have the following criteria:
1) must be testable (e.g., can you test the existence of god?)
2) must be falsifiable - it is often easy to find evidence that seems to prove something true (depending on the situation), but systematically demonstrating a subject matter to be false is quite difficult (e.g., can you prove that god does not exist?)
A. Goals of the Scientific Method
Describe, Predict, Select Method & Design, Control, Collect Data, Analyze & Interpret, Report/Communicate
1) Description - the citing of the observable characteristics of an event, object, or individual. Helps us to be systematic and consistent.
This step sets the stage for the more formal stages - here we acquire our topic of study and begin to transform it from a general concept or idea into a specific, testable construct.
a) Operational Definitions - the definition of behaviors or qualities in terms of how they are to be measured. Some books define it as the description of ...the actions or operations that will be made to measure or control a variable.
How can you define "life change"? One possibility is the score on the Social Readjustment Rating Scale.
How do you define obesity, abnormality, etc. in a way that is testable and falsifiable?
2) Prediction - here we formulate testable predictions or HYPOTHESES about behavior (specifically, about our variables). Thus, we may define a hypothesis as a tentative statement about the relationship between two or more variables. For example, one may hypothesize that as alcohol consumption increases, driving ability decreases.
Hypotheses are usually based on THEORIES - statements which summarize and explain research findings.
3) Select Methodology & Design - choose the most appropriate research strategy for empirically addressing your hypotheses.
4) Control - a method of eliminating all unwanted factors that may affect what we are attempting to study (we will address this in more detail later).
5) Collect Data - although the book is a little redundant and does not differentiate well between this stage and selecting the design and method, data collection is simply the execution and implementation of your research design.
6) Analyze & Interpret the Data - use of statistical procedures to determine the mathematical and scientific significance (not the "actual" importance or meaningfulness) of the data. Were the differences between the groups/conditions large enough to be meaningful (i.e., not due to chance)?
Then, you must indicate what those differences actually mean...discovery of the causes of behavior, cognition, and physiological processes.
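To make "not due to chance" concrete, here is a minimal sketch of one common statistical procedure, a two-sample t statistic. The data, group sizes, and choice of the t statistic are illustrative assumptions, not something specified in these notes:

```python
import statistics

# Hypothetical data: maze-completion times (seconds) for two groups of rats
experimental = [30, 28, 25, 27, 26]   # trained condition (made-up numbers)
control      = [40, 42, 38, 41, 39]   # untrained condition (made-up numbers)

mean_exp = statistics.mean(experimental)   # 27.2
mean_ctl = statistics.mean(control)        # 40.0

# A two-sample t statistic: the mean difference scaled by the variability
# within the groups. A large |t| suggests the difference between groups
# is unlikely to be due to chance alone.
n1, n2 = len(experimental), len(control)
var1 = statistics.variance(experimental)
var2 = statistics.variance(control)
t = (mean_exp - mean_ctl) / ((var1 / n1 + var2 / n2) ** 0.5)
print(round(t, 2))   # a large negative t (about -11.5): a very big difference
```

In practice, researchers convert a t value to a probability (a p-value) to decide whether the difference is statistically significant; this sketch only shows the core idea of comparing a mean difference against within-group variability.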
7) Report/Communicate the Findings - Psychology is a science that is based on sharing - finding answers to questions is meaningless (to everyone except the scientist) unless that information can be shared with others. We do this through publications in scientific journals, books, presentations, lectures, etc.
B. Ways of Conducting Scientific Research
1) Naturalistic Observation - allow behavior to occur without interference or intervention by the researcher.
we all do this (people watch)
weaknesses: often not easy to observe without being intrusive.
strengths: study behavior in real setting - not lab.
2) Case Study - an in-depth investigation of an individual's life, used to reconstruct major aspects of a person's life. Attempts to see what events led up to the current situation.
Usually involves: interview, observation, examine records, & psych. testing.
weaknesses: very subjective. Like piecing together a puzzle, often there are gaps - relies on memory of the individual, medical records, etc.
strengths: good for assessing psychological disorders - can see history and development.
3) Survey - either a written questionnaire, verbal interview, or combination of the two, used to gather information about specific aspects of behavior.
Example: a questionnaire asking students how many hours per week they study and how much stress they experience.
weaknesses: self-report data (honesty is questionable)
strengths: gather a lot of information in a short time.
gather information on issues that are not easily observable.
4) Psychological Testing - administer a test and then score the answers to draw conclusions.
Examples: I.Q. tests, personality inventories, S.A.T., G.R.E., etc.
weaknesses: validity is always a question; honesty of answers.
strengths: can be very predictive and useful if valid.
5) Experimental Research (only way to approach Cause & Effect) - method of controlling all variables except the variable of interest which is manipulated by the investigator to determine if it affects another variable.
V. KEY TERMS (you will need to get very familiar with these terms to succeed in Psychology. You can also look in the glossary of terms we have provided for these and other important terms):
1) variable - any measurable condition, event, characteristic, or behavior that can be controlled or observed in a study.
Independent Variable (IV)- the variable that is manipulated by the researcher to see how it affects the dependent variable.
Dependent Variable (DV)- the behavior or response outcome that the researcher measures, which is hoped to have been affected by the IV.
2) control - any method for dealing with extraneous variables that may affect your study.
Extraneous variable - any variable other than the IV that may influence the DV in a specific way.
Example - how quickly can rats learn a maze (2 groups). What to control?
3) Groups (of subjects/participants) in an Experiment - experimental vs control
experimental group - group exposed to the IV in an experiment.
control group - group not exposed to IV. This does not mean that this group is not exposed to anything, though. For example, in a drug study, it is wise to have an experimental group (gets the drug), a placebo control group (receives a drug exactly like the experimental drug, but without any active ingredients), and a no-placebo control group (they get no drug...nothing)
both groups must be treated EXACTLY the same except for the IV.
4) Confound - occurs when any variable other than the IV (an extraneous variable) affects the DV in a systematic way. In this case, what is causing the effect on the DV? We cannot be sure.
Example - Vitamin X vs. Vitamin Y. Group 1 runs in the morning, group 2 in the afternoon. Do you see a problem with this? (I hope so.) Time of day is confounded with vitamin type, so any difference could be due to either one.
Many things may lead to confounds (here are just two examples):
5) Experimenter Bias - if the researcher (or anyone on the research team) acts differently towards those in one group it may influence participants' behaviors and thus alter the findings. This is usually not done on purpose, but just knowing what group a participant is in may be enough to change the way we behave toward our participants.
6) Participant Bias (Demand Characteristics) - participants may act in ways they believe correspond to what the researcher is looking for. Thus, the participant may not act in a natural way.
7) Types of Experimental Designs: true experiment, quasi-experiment, & correlation.
a) The True Experiment: Attempts to establish cause & effect
To be a True Experiment, you must have BOTH - manipulation of the IV & Random Assignment (RA) of subjects/participants to groups.
1) manipulation of the IV - manipulation of the IV occurs when the researcher has control over the variable itself and can make adjustments to that variable.
For example, if I examine the effects of Advil on headaches, I can manipulate the doses given, the strength of each pill, the time given, etc. But if I want to determine the effect of Advil on headaches in males vs. females, can I manipulate gender? Is gender a true IV?
2) Random Assignment - randomly placing participants into groups/conditions so that all participants have an equal chance of being assigned to any condition.
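Random assignment is easy to sketch in code. A minimal example (the participant IDs, group sizes, and fixed random seed are made up for illustration):

```python
import random

# Hypothetical pool of 20 participant IDs
participants = list(range(1, 21))

random.seed(42)               # fixed seed so the example is reproducible
random.shuffle(participants)  # every ordering is equally likely...

# ...so splitting the shuffled list in half gives each participant an
# equal chance of ending up in either condition.
experimental_group = participants[:10]   # will be exposed to the IV
control_group = participants[10:]        # will not be exposed to the IV

print(sorted(experimental_group))
print(sorted(control_group))
```

The key point is that group membership is decided by chance alone, so pre-existing differences among participants tend to spread evenly across the two groups.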
b) Quasi-Experimental Designs: same as the true experiment, but now there is no random assignment of subjects to groups. Still have one group which gets the IV and one that does not, but subjects are not randomly assigned to groups.
There are many types of quasi designs (actually, too many to go into detail here). What is vital to know is that in all of them, there's a lack of RA.
c) Correlation: attempts to determine how much of a relationship exists between variables. It cannot establish cause & effect.
1) to show strength of a relationship we use the Correlation Coefficient (r).
The coefficient ranges from -1.0 to +1.0:
-1.0 = perfect negative/inverse correlation
+1.0 = perfect positive correlation
0.0 = no relationship
positive correlation - as one variable increases or decreases, so does the other. Example: studying & test scores.
negative correlation - as one variable increases or decreases, the other moves in the opposite direction. Example: as food intake decreases, hunger increases.
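The correlation coefficient can be computed directly. Here is a minimal sketch using the standard Pearson formula for r; the two data sets are invented to illustrate a strong positive and a strong negative correlation:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient r between two equal-length lists."""
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    ss_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    ss_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (ss_x * ss_y)

# Hypothetical data: hours studied vs. test score (positive correlation)
hours  = [1, 2, 3, 4, 5]
scores = [55, 60, 70, 75, 85]
print(round(pearson_r(hours, scores), 2))   # prints 0.99

# Hypothetical data: food intake vs. hunger rating (negative correlation)
intake = [1, 2, 3, 4, 5]
hunger = [9, 7, 5, 4, 1]
print(round(pearson_r(intake, hunger), 2))  # prints -0.99
```

Note that r near +1 or -1 only measures the strength and direction of the relationship; it says nothing about which variable (if either) causes the other.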
THE BETWEEN vs WITHIN SUBJECTS DESIGN
1) Between-subjects design: in this type of design, each participant participates in one and only one group. The results from each group are then compared to each other to examine differences, and thus, the effectiveness of the IV. For example, in a study examining the effect of Bayer aspirin vs. Tylenol on headaches, we can have 2 groups (those getting Bayer and those getting Tylenol). Participants get either Bayer OR Tylenol, but they do NOT get both.
2) Within-subjects design: in this design, participants get all of the treatments/conditions. For example, in the study presented above (Bayer vs Tylenol), each participant would get the Bayer, the effectiveness measured, and then each would get Tylenol, then the effectiveness measured. See the differences?
VALIDITY vs RELIABILITY
Validity - does the test measure what we want it to measure? If yes, then it is valid.
For Example - does a stress inventory/test actually measure the amount of stress in a person's life and not something else?
Reliability - is the test consistent? If we get the same results over and over, then it is reliable.
For Example - an IQ test - probably won't change if you take it several times. Thus, if it produces the same (or very, very similar) results each time it is taken, then it is reliable.
However, a test can be reliable without being valid, so we must be careful.
For Example - the heavier your head, the smarter you are. If I weighed your head at the same time each day, once a day, for a week, it would be virtually the same weight each day. This means that the test is reliable. But do you think this test is valid (that it indeed measures your level of "smartness")? Probably NOT, and therefore, it is not valid.