Is it Ethical to Use Animals as Subjects When Conducting Psychological Research?

This week I am going to look at the use of animals in psychological studies conducted over the last 100 years or so: how they are used in these experiments, and why. I will draw on several different (and controversial) studies, along with a handful of statistics, to support my argument that animals have suffered in the search for scientific knowledge. Note, however, that I am not making a case for action against animal testing on behalf of humans; I am simply providing a point of view from which to perceive this subject.

I will start with one of the more famous studies involving animals: Harlow’s monkeys. Harlow (1960) took baby rhesus monkeys (which had already formed bonds with their mothers) and placed them in metallic cylinders to study the effects of social isolation and attachment preference. Some of these monkeys were kept in isolation for over a year. In these confines the monkeys had two “wire mothers”: one that provided food and one that was covered in cloth (a comfort mother). The study found that the baby monkeys preferred the comfort mother to the food mother, and (through the isolation trials, in which bonds were severed) that babies who had led healthy, normal infancies had no better defence against depression than the isolated babies. The science writer Deborah Blum has described these as “common sense findings”, basically making the point that the tests were not worth the results. The baby monkeys came out of isolation psychotic and antisocial, and many never recovered. The isolation trials went on for so long that Harlow’s PhD students began to question not only his sensibility but also his sanity.

Another horrific study conducted on monkeys was a drug trial carried out in 1969 by researchers who remain anonymous to this day, known infamously only as the “monkey drug trials”. A group of monkeys and rats were taught to self-inject various drugs (heroin, cocaine, morphine, alcohol and so on) and were then left to their own devices in a glass box with a large supply of the drug they had been taking. As expected, the animals went berserk: many broke their arms in attempts to escape, others mutilated themselves (believed to be due to hallucinations), and others simply died of overdose. All of this research was to determine the effects of addiction on these poor animals, and although it can be argued that life-saving drugs were later created by pharmaceutical companies as a direct result of this study, that does not negate the horrific treatment of the animals. This verges on the very controversial topic of animal testing in general, which I will try not to enter into too much in this blog. (More information about this research can be found at http://www.councilors.co.uk/monkey-abuse/)

Other studies supporting the viewpoint I am presenting are littered throughout the history of animal testing, such as Sheridan and King’s adjusted obedience study (1972). As the title suggests, this project was an offshoot of Stanley Milgram’s infamous obedience study. It followed the same principles: participants were asked to deliver an electric shock, but this time not to what they believed was a person, but to a puppy placed in front of them. Sheridan and King suspected that participants in Milgram’s study may have believed the experiment to be faked and simply “played along” (demand characteristics), so they forced their participants to face the fact that they were shocking a live animal. 20 out of 26 participants went to the full shock level (interestingly, all 6 participants who refused were male; all 13 female participants fully obeyed, although many were visibly shaken and/or openly weeping).

There are plenty of examples of other studies in which animals were horrifically treated, such as Seligman and Maier (1965) and their use of dogs in electrified cages to study learned helplessness, or Landis’s facial expressions experiment (1924), in which participants’ reactions were recorded as they were asked to behead a live rat.

As I stated before, this is not a blog trying to convince you that testing on animals should be stopped; there are studies out there that do not use animals in harmful capacities, for instance Skinner’s various studies of operant conditioning involving pigeons. I am simply providing a fairly blunt view of how badly animals have been treated in psychological studies in the past.

References:

Griffin, G. A., &amp; Harlow, H. F. (1960). Effects of three months of total social deprivation on social adjustment and learning in the rhesus monkey. Child Development, 37(3), 533-547.

Sheridan, C. L., &amp; King, K. G. (1972). Obedience to authority with an authentic victim. Proceedings of the 80th Annual Convention of the American Psychological Association, 7, 165-166.

For the “monkey drug trials”: http://www.councilors.co.uk/monkey-abuse/

Should psychology be written for the layman or should science be exclusively for scientists?

This is a very broad question to write about, as it could mean several things. The term “psychology” here seems to refer to completed publications of research, but it could just as easily refer to the delivery of data analysis, or even the explanation of a thesis. For this topic, I am going to look at finalised publications and whether they should be made more accessible to the public.

An important question has to be considered if research is going to be written in a format the public can understand: how relevant is this research to the everyday layman? This matters because there is no point changing the scientific terms of a paper if it exists solely to further research in a specific area by scientists and scholars. But, of course, there are papers that are very relevant to the public. For instance, research conducted at Bangor University (Woods, 2012) looks at the effect of cognitive stimulation in dementia patients, using board games, quizzes, baking and so on.

Published as a paper, this research will use many detailed terms referring to the brain’s anatomy and to the type of stimulation the patients are receiving. To scientists this is very useful information, informing the future treatment of their patients with dementia. But what about the carers, the family members of dementia patients, or even the patients themselves? How will this paper inform them if they cannot understand what it is actually referring to? This is a case in which, I personally believe, the paper should be written for the layman, if it can improve understanding of dementia as a condition and of a possible treatment that could postpone its onset.

This translation is usually carried out by the media, who act as an intermediary. They take the researcher’s finished paper and break it down into understandable terms (and for the terms that cannot be replaced, the media tend to explain them as thoroughly as possible); they strip out the confusing statistical results of F scores and t values and replace them with an all-encompassing overview that reports the outcome of the research as a whole.

This translation of the data is incredibly important if the layman is going to understand it and act on it (in this case, how to help patients with dementia). However, there is the ever-present danger of the translation going wrong: the data being misread, or (as mentioned in last week’s blog) causation being deduced from a correlation. This can lead to the media misinterpreting data and giving false information to the public. It is especially common in reports of new carcinogens. For instance, a study published in the British Medical Journal stated that alcohol has certain carcinogenic properties. The media unanimously jumped on this and stated that the results show drinking “more than 2 pints a day increases the risk of cancer”. The Daily Mail went with the direct approach of “Alcohol causes cancer and giving up won’t help”. This kind of misinterpretation of a study’s data can cause a massive reaction in the media and so cause the public to radically alter how it behaves. In these cases, it is extremely important that the scientists themselves provide an adequate translation of the data from their experiment.

So, overall, should published psychology papers be written for the layman, or just for scientists? I believe they should indeed be written for the layman when the research has produced results that could affect how the public behaves, but not all papers should follow this pattern: if a paper is being produced purely to provide further evidence for a bigger thesis, then it should be left untouched and scientific. This is not to say that papers published with the layman in mind should not also be available with their scientific terms intact and their results sections fully on show, so that other scientists can see for themselves and deduce the significance of the data.

 

Resources:

http://www.bbc.co.uk/news/uk-wales-17031223

http://thesciencebit.net/2011/04/12/alcohol-causes-cancer-if-you-assume-so/

 

The truth about correlation studies!

Greetings fellow bloggers!

This week I am going to look at the subject of correlational studies and ask the question: can a correlation ever prove causality in a relationship?

So let’s start with the basics: a correlational study is a type of study used to determine whether there is a relationship between the variables measured. A correlation is described by the APA as the “interdependence of variable quantities”.

So what forms of correlation can you get? A positive correlation occurs when the two variables move in the same direction: as one increases, so does the other (not to be mistaken for causality, I might add; the effect could run from either variable). A negative correlation, on the other hand, occurs when one variable increases while the other decreases. And of course there are “no correlation” results, where, as the name suggests, the data show no discernible pattern. Correlational data can be obtained through techniques such as observation, archival study and questionnaires/interviews.
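To make the distinction between the two directions concrete, here is a minimal sketch in Python (my own illustration, not from any study discussed here; the revision-hours, score and stress figures are invented purely for the example) computing Pearson’s r for a positive and a negative relationship:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours_revised = [1, 2, 3, 4, 5, 6, 7, 8]
test_score    = [52, 55, 61, 64, 70, 71, 78, 80]  # rises with revision: positive
stress_level  = [9, 8, 8, 6, 5, 5, 3, 2]          # falls as revision rises: negative

print(pearson_r(hours_revised, test_score))   # close to +1
print(pearson_r(hours_revised, stress_level)) # close to -1
```

Note that the sign of r only tells you the direction of the relationship; as discussed above, it says nothing about which variable (if either) is causing the change.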

Correlation graphs sound great! They take all of your data, compare them to look for traits and relationships, and then explain them all to you! From this you can see what happened, how it happened, and why it happened! But wait… can a correlation really tell you how and why an effect happened? Let’s create a scenario to envision this idea. Imagine a study conducted on school children discovers that the students who pick the healthy options at dinner time tend to score above average on tests in the afternoon, compared to the students who do not. This statement would appear to say that eating healthily at dinner time increases how well a student performs afterwards, but that is not the case! Here we only have a correlation, not a causality. There may be many factors interfering with these statistics: perhaps certain types of classes, or attention span, and so on. This is the main disadvantage of a correlational study: you can never assume causality from a correlation, whether positive or negative; you can only determine that there is a relationship between one variable and another.
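The school-dinner scenario can even be simulated. In this hypothetical Python sketch (all variables and numbers are invented for illustration), a hidden third variable, attentiveness, drives both the meal choice and the test score; the meal choice still correlates with the score even though, by construction, it has no causal effect on it at all:

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hidden third variable: each pupil's general attentiveness (0 to 1).
attentiveness = [random.random() for _ in range(500)]

# Attentive pupils are more likely to pick the healthy option AND to score
# higher; the meal itself has no direct effect on the score in this model.
healthy_meal = [1 if a + random.gauss(0, 0.2) > 0.5 else 0 for a in attentiveness]
score = [50 + 40 * a + random.gauss(0, 5) for a in attentiveness]

print(pearson_r(healthy_meal, score))  # clearly positive, despite no causal link
```

The correlation comes out clearly positive, yet we built the simulation so that changing a pupil’s meal would change nothing; exactly the trap described above.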

Let’s try a real-world example instead. A study by Buss (1984) looked at the correlation between the choice of a spouse in marriage and that person’s qualities, and also at whether the length of a relationship/marriage affected the cohesion of a couple’s interactions. The study found that there was indeed a correlation between the qualities a spouse had and their being chosen by their partner (particularly in domains such as quarrelsomeness, dominance and extraversion). It also found that the cohesion between a couple increases with (and subsequently correlates with) the increasing length of a relationship/marriage. These findings seem pretty conclusive, don’t they? Couples form a tighter, more on-track relationship the longer it lasts, and partners pick people who rate highly on particular traits. But once again, with these correlations we can only assume that there is a relationship; we cannot determine which direction the effect is going in.

So to conclude: correlational studies, although very useful for identifying relationships, must be handled with care, and we must resist the urge to apply causality to our data, as a correlation cannot be used to determine it.

This pretty much sums it up.

Has Psychology Reached its Limits?!

This is an interesting topic to consider…

I have had many arguments with friends who take other courses at the university about how “useful” or “scientific” psychology is as a field of study. I’ve always argued that psychology is a valid study and can be used in many different areas of life.

 

So, how applicable is the study of psychology to life’s mysteries and/or problems?

A friend on a psychology course at another university once told me that “psychology is involved in everything in life”. I believe this statement is true, but only to a certain extent. Psychology is definitely involved in the cognitive processes that fuel almost all of humanity’s actions, and it can offer explanations for much of what human beings (and other animals, for that matter) do, but stating that psychology is the underlying cause of those actions is, in my eyes, incorrect. Psychology offers many possible theories and explanations, but it is not what drives these processes. A good analogy would be to say that psychology is involved in everything we do in the same way chemistry is, with the transition and exchange of elements and compounds underlying everything we do in our daily lives.

Psychology is a massive subject when looked at as a whole, branching out into neurological, social, cognitive, clinical, forensic and even psychodynamic areas. Through each of these areas, psychology can be extended to explain almost all of life’s faculties and decisions, and it can be applied to fields such as business management, law, product design, mental health and education. With this breadth in mind, it is not hard to see why the question “Is there anything psychology can’t measure?” gets asked. There have been many examples of psychology applied to other fields: Allport, Eysenck and others applied it to personality; Cascio (1987) looked into applied psychology and its use in the management of personnel; Duck (1988), Lott and Lott (1974) and others applied psychology to relationships; and so on.

 

As you can see, psychology has branched out into almost every aspect of human life, but I do not believe it has yet reached the limits of what it can cover. Sadly, I could not find relevant information directly analysing how far and how extensively psychology has branched out as a subject, but I will keep looking and update!