Constructivist on-line learning environment survey (COLLES)

Introduction

See also Behaviourism | Dale's cone of learning | Social learning theory | Teaching library users

Taylor and Maor (2000) developed the constructivist on-line learning environment survey (COLLES) to study on-line learning environments. COLLES comprises an economical 24 statements grouped into six scales, each of which helps address a key question about the quality of the on-line learning environment. The survey allows researchers to "... monitor the extent to which we ... exploit the interactive capacity of the web for engaging students in dynamic learning practices." The key qualities or dimensions measured by COLLES are:

  • Relevance: How relevant is on-line learning to students' professional practices?
  • Reflection: Does on-line learning stimulate students' critical reflective thinking?
  • Interactivity: To what extent do students engage on-line in rich educative dialogue?
  • Tutor Support: How well do tutors enable students to participate in on-line learning?
  • Peer Support: Is sensitive and encouraging support provided on-line by fellow students?
  • Interpretation: Do students and tutors make good sense of each other's on-line communications?

Each of these dimensions is then measured with a few survey questions (items), e.g.:

Each statement is rated on a five-point scale: Almost Never, Seldom, Sometimes, Often, Almost Always.

Items concerning relevance

  • my learning focuses on issues that interest me.
  • what I learn is important for my professional practice as a trainer.
  • I learn how to improve my professional practice as a trainer.
  • what I learn connects well with my professional practice as a trainer.

Items concerning reflection

  • ... I think critically about how I learn.
  • ... I think critically about my own ideas.
  • ... I think critically about other students' ideas.
  • ... I think critically about ideas in the readings.
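
To make scoring concrete, here is a minimal sketch (in Python) of how COLLES-style responses could be aggregated. It assumes one simple convention: responses are coded 1 (Almost Never) to 5 (Almost Always) and each scale score is the mean of its four items. The item codes and the answers dictionary are hypothetical and not part of the published instrument.

    # Minimal sketch of scoring COLLES-style responses (not the official scoring
    # procedure): each of the six scales has four items answered on a 5-point
    # scale (1 = Almost Never ... 5 = Almost Always); a scale score is the mean
    # of its items.

    SCALE_ITEMS = {
        "relevance":      ["r1", "r2", "r3", "r4"],
        "reflection":     ["f1", "f2", "f3", "f4"],
        "interactivity":  ["i1", "i2", "i3", "i4"],
        "tutor_support":  ["t1", "t2", "t3", "t4"],
        "peer_support":   ["p1", "p2", "p3", "p4"],
        "interpretation": ["n1", "n2", "n3", "n4"],
    }

    def scale_scores(responses: dict[str, int]) -> dict[str, float]:
        """Return the mean response (1-5) for each COLLES scale."""
        return {
            scale: sum(responses[item] for item in items) / len(items)
            for scale, items in SCALE_ITEMS.items()
        }

    # One respondent's (hypothetical) answers, keyed by item code.
    answers = {code: 4 for items in SCALE_ITEMS.values() for code in items}
    answers["r1"] = 2  # "my learning focuses on issues that interest me" -> Seldom

    print(scale_scores(answers))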

Challenges with COLLES

To summarize the issue of concept operationalization: there are a few issues you should think critically about.

  • The gap between data and theory
  • Example: measuring communication within a community of practice (e.g. an e-learning group) by the quantity of forum messages. Remember that some students use other channels to communicate.
  • Example: measuring classroom usage of technology by looking only at the technology the teacher uses, e.g. PowerPoint or demonstrations with simulation software. You then fail to take into account technology-enhanced student activities.
  • Concept overloading
  • Example: including "education" in the definition of development. It can be done, but at the same time you lose an important explanatory variable for development; consider India's strategy of "over-investing" in education with the goal of influencing development.
  • Therefore: never collapse explanatory variables into one concept.
  • Bad measures, i.e. the kind of data you are looking at does not really measure the concept
  • We will come back to this "construct validity" issue later.

Measures

With somewhat operationalized research questions, which may include operational hypotheses, you have to think carefully about what kinds of data you want and which cases (populations) you will observe. Means of measurement:

  • observe properties, attributes, behaviors, etc.
  • select the cases you study (sampling)
  • sampling refers to the process of selecting "cases", e.g. people, activities, situations, etc. Cases should be representative of the whole. For example, in survey research the 500 people who answer the questionnaire should represent the whole population, e.g. all primary teachers in a country, all students of a university, all voters of a state. As a general rule, make sure that "operative" variables have good variance, otherwise you cannot make any statements about causality or difference. We define operative variables as dependent (to explain) plus independent (explaining) variables.
  • sampling in quantitative research is relatively simple: select cases within a given population (the one your theory is about). The best sampling strategy is to randomly select a pool from the population; the challenge is to identify members of the mother population and to get them to participate (see the sketch after this list). Sampling can be more complex in qualitative research. Here is a short overview of sampling strategies:
  • maximal variation: gives better scope to your results, but requires more complex models (you have to control more intervening variables, etc.)
  • homogeneous: provides better focus and conclusions; "safer", since it is easier to identify explaining variables and to test relations
  • critical: exemplifies a theory with a "natural" example
  • according to theory (i.e. your research questions): gives better guarantees that you will be able to answer your questions
  • extremes and deviant cases: tests the boundaries of your explanations, seeks new adventures
  • intense: completes a quantitative study with an in-depth study
  • sampling strategies depend a lot on your research design
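
As mentioned above, the classical strategy in quantitative work is simple random sampling from an enumerated population. The following minimal sketch uses hypothetical teacher IDs and an assumed sampling frame of 12,000 teachers; it illustrates only the mechanics of the draw, not the practical problems of building the frame and obtaining participation.

    import random

    # Minimal sketch: simple random sampling from a known sampling frame.
    # Assumes you can enumerate the "mother population" (here, hypothetical
    # teacher IDs); the hard part in practice is building this list and
    # getting the selected people to actually respond.

    population = [f"teacher_{i:04d}" for i in range(12000)]  # sampling frame

    random.seed(42)                            # make the draw reproducible
    sample = random.sample(population, k=500)  # 500 cases, drawn without replacement

    print(len(sample), sample[:5])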

Measurement techniques

Below are the principal forms of data collection (also called data acquisition), organized by situation (informal; formal and unstructured; formal and structured) and by the articulation of the data (non-verbal and verbal; verbal-oral; verbal-written):

  • Informal situations: participatory observation (non-verbal and verbal); informal interview (oral); text analysis, log file analysis, etc. (written)
  • Formal and unstructured situations: systematic observation (non-verbal and verbal); open interviews, semi-structured interviews, thinking-aloud protocols, etc. (oral); open questionnaires, journals, vignettes (written)
  • Formal and structured situations: experiment, simulation (non-verbal and verbal); standardized interview (oral); standardized questionnaire, log files of structured user interactions (written)

Reliability of measure

Let's introduce the reliability principle. Reliability is the degree of measurement consistency for the same object:

  • by different observers
  • by the same observer at different moments
  • by the same observer with (moderately) different tools

Example: measuring the temperature of boiling water

  • One thermometer always shows 92 °C: it is reliable (but not valid).
  • Another shows values between 99 and 101 °C: it is not too reliable (but valid).

Sub-types of reliability (Kirk & Miller):

  • circumstantial reliability: even if you always get the same result, it does not mean that the answers are reliable (e.g. people may lie)
  • diachronic reliability: the same kinds of measures still work over time
  • synchronic reliability: similar results are obtained using different techniques, e.g. survey questions, item matching, and in-depth interviews
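
Although the text above does not prescribe a statistic, the internal-consistency reliability of a multi-item scale is often quantified with Cronbach's alpha. Below is a minimal sketch with hypothetical answers of five respondents to the four items of one scale; the function and the data are illustrative only.

    # Minimal sketch (not from the text above): Cronbach's alpha is one common
    # way to quantify the internal-consistency reliability of a multi-item scale,
    # e.g. the four "relevance" items of a COLLES-style questionnaire.

    from statistics import variance  # sample variance (n - 1 denominator)

    def cronbach_alpha(item_scores: list[list[int]]) -> float:
        """item_scores[i][j] = answer of respondent j on item i (1-5)."""
        k = len(item_scores)
        item_vars = sum(variance(item) for item in item_scores)
        totals = [sum(col) for col in zip(*item_scores)]  # per-respondent totals
        return k / (k - 1) * (1 - item_vars / variance(totals))

    # Hypothetical answers of five respondents on four items of one scale.
    relevance_items = [
        [4, 5, 3, 4, 2],
        [4, 4, 3, 5, 2],
        [5, 5, 2, 4, 3],
        [4, 5, 3, 4, 2],
    ]
    print(round(cronbach_alpha(relevance_items), 2))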

“3 Cs” of an indicator

Reliability can be understood in a wider sense. Empirical measures are used as, or combined into, indicators for variables; "indicator" is just a fancy word for a simple or combined measure. That said, measures (indicators) can be problematic in various ways and you should look out for the "3 Cs":

1. Is your data complete?

  • Sometimes you simply lack data.
  • Try to find other indicators (a quick completeness check is sketched after this list).

2. Is your data correct?

  • The reliability of indicators can be bad.
  • Example: software ratings may not mean the same thing across cultures (sub-cultures, organizations, countries), since people are more or less outspoken.

3. Is your data comparable?

  • The meaning of certain data is not comparable.
  • Examples:
  • (a) School budgets don't mean the same thing in different countries (different living costs)
  • (b) The percentage of student activities in the classroom does not measure the "socio-constructive" sensitivity of a teacher (since there are huge cultural differences between school systems)
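
For the completeness question, a quick missing-data check is often a useful first step. The sketch below uses pandas; the column names and values are invented for illustration.

    # Minimal sketch (hypothetical column names): a quick completeness check
    # before analysis -- how much data is missing per indicator, and for which
    # cases?

    import pandas as pd

    df = pd.DataFrame({
        "teacher_id":    [1, 2, 3, 4, 5],
        "forum_posts":   [12, None, 3, 8, None],   # indicator 1
        "self_reported": [4, 3, None, 5, 2],       # indicator 2
    })

    print(df.isna().mean())           # share of missing values per indicator
    print(df[df.isna().any(axis=1)])  # cases with at least one missing indicator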

Interpretation: validity (truth) & causality

Having good and reliable measures does not at all guarantee that your research is well done, in the same way that correctly written sentences do not guarantee that a novel is good reading. The fundamental questions you have to ask are:

  • Can you really trust your conclusions ?
  • Did you misinterpret statistical evidence for causality ?

These issues are really tricky.

  • Validity (as well as reliability) determines the formal quality of your research. More specifically, the validity of your work (e.g. your theory or model) is determined by the validity of its analysis components.
  • In other words:
  • Can you justify your interpretations?
  • Are you sure that you are not a victim of your natural confirmation bias? (People always want their hypotheses to be confirmed, at whatever cost.)
  • Can you really justify causality in a relationship, or should you be more careful and use wordings like "X and Y are related"?

Validity is not the only criterion of empirical research, but it is the most important one. The table below shows which elements of a piece of research can be judged and how they are likely to be judged.

Elements of research         Judgements
Theories                     usefulness (understanding, explanation, prediction)
Models ("frameworks")        usefulness and construction (relation between theory and data, plus coherence)
Hypotheses and models        validity and logic of construction (models)
Methodology ("approach")     usefulness (to theory and to the conduct of empirical research)
Methods                      good relation with theory, hypotheses, methodology, etc.
Data                         good relation with hypotheses and models, plus reliability

A good piece of work first of all satisfies an objective, but it also must be valid. The same message told differently:

  • The most important usefulness criterion is: "does it increase our knowledge?"
  • The most important formal criteria are validity (giving good evidence for causality claims) and reliability (showing that measurement, i.e. data gathering, is serious).
  • Somewhere in between: "Is your work coherent and well constructed?"

Some reflections on causality

Let's now look a little more closely at causality, which depends very much on so-called "internal validity". Correlations between data don't prove much by themselves:

  • A correlation between 2 variables (measures) does not prove causality
  • Co-occurrence between 2 events does not prove that one leads to the other

The best protection against such errors is theoretical and practical reasoning. A conclusion that could be drawn from a superficial data analysis is the following statement: "We introduced ICT in our school and student satisfaction is much higher." However, if you think hard you might want to test the alternative hypothesis that it is perhaps not ICT, but simply a reorganization effect that had an impact on various other variables such as the teacher-student relationship, teacher investment, etc.

If you observe correlations in your data and you are not sure, talk about association and not cause. Even if you can provide sound theoretical evidence for your conclusion, you have the duty to look at rival explanations. Note: there are methods to test rival explanations (see modules on data analysis).
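
A small simulation can make this point tangible. In the hypothetical sketch below, a hidden "reorganization" variable drives both ICT use and student satisfaction, so the two correlate even though neither causes the other; all variable names and effect sizes are invented for illustration.

    import random

    # Minimal sketch: a hidden confounder produces a correlation between two
    # variables that do not cause each other. Here "reorg" (hypothetical extent
    # of a school reorganization) drives both ICT use and student satisfaction.

    random.seed(1)

    def noise():
        return random.gauss(0, 1)

    reorg        = [random.gauss(0, 1) for _ in range(1000)]
    ict_use      = [r + noise() for r in reorg]  # caused by reorg, not by satisfaction
    satisfaction = [r + noise() for r in reorg]  # caused by reorg, not by ICT use

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # ICT use and satisfaction correlate strongly (around 0.5) although the
    # only causal link runs through the confounder.
    print(round(pearson(ict_use, satisfaction), 2))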

Conclusion

This introduction to empirical research principles ends with a short list of advice:

  1. At every stage of research you have to think and refer to theory
  2. Good analytical frameworks (e.g. instructional design theory or activity theory) will provide structure to your investigation and bring focus to what is essential
  3. Make a list of all concepts that occur in your research questions and operationalize them
  4. You can’t answer your research question without a serious operationalization effort
  5. Identify major dimensions of concepts involved, use good analysis grids
  6. Watch out for validity problems
  7. You can't prove a hypothesis (you can only test, reinforce, corroborate, etc.); therefore, also look at anti-hypotheses
  8. Good informal knowledge of a domain will help; don’t hesitate to talk about your conclusions with a domain expert
  9. Purely inductive reasoning approaches are difficult and dangerous (unless you master an adapted (costly) methodology, e.g. "grounded theory")
  10. Watch out for your confirmation bias: humans tend to look for facts that confirm their reasoning and to ignore contradictory information; test rival hypotheses (or at least think about them)
  11. Attempt some (but not too much) generalization. Show others what they can learn from your work and compare your work to others'; use triangulation of methods, i.e. several ways of looking at the same thing. Different viewpoints (and measures) can consolidate and refine results. For example, imagine you (a) led a quantitative study about teachers' motivation to use ICT in school or (b) administered an evaluation survey to measure user satisfaction with a piece of software. Then run a cluster analysis on your data and identify major user types, e.g. 6 types of teachers or 4 types of users (a minimal sketch follows this list). Then do in-depth interviews with 2 representatives of each type and "dig" into their attitudes, subjective models, abilities, behaviours, etc.
  12. Theory creation vs theory testing: there are different research types and each has certain advantages, e.g. qualitative methods are better suited to creating new theories (exploration / comprehension), while quantitative methods are better suited to testing and refining theories (explication / prediction). But validity, causality and reliability issues ought to be addressed in any research. It is possible to use several methodological approaches in one piece of work.
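
The mixed-method idea in point 11 can be sketched in code. The example below assumes scikit-learn and NumPy are available and uses simulated ratings: it clusters respondents into four user types and then draws two representatives per type for in-depth interviews. All data and parameters are hypothetical.

    # Minimal sketch of the mixed-method idea in point 11 (assumes scikit-learn
    # and hypothetical survey data): cluster respondents on their satisfaction
    # ratings, then pick a couple of representatives per cluster to interview.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 6, size=(200, 10))   # 200 users x 10 rating items (1-5)

    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(ratings)

    for cluster in range(4):
        members = np.where(kmeans.labels_ == cluster)[0]
        interviewees = rng.choice(members, size=2, replace=False)
        print(f"user type {cluster}: {len(members)} users, interview e.g. {interviewees.tolist()}")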

Bibliography

  • Taylor P, Maor D. Assessing the efficacy of online teaching with the Constructivist On-Line Learning Environment Survey. In: Flexible Futures in Tertiary Teaching. Annual Teaching Learning Forum. Perth: Curtin University of Technology, 2000.
  • Taylor P, Maor D. Constructivist On-Line Learning Environment Survey (COLLES) questionnaire. http://surveylearning.moodle.com/colles/
  • Thiétart RA. Méthodes de recherche en management. Dunod, Paris.