Interaction in Online Courses: More is NOT Always Better


 

Christian J. Grandzol, Ph.D.
Bloomsburg University of Pennsylvania
cgrandzo@bloomu.edu

John R. Grandzol, Ph.D.
Bloomsburg University of Pennsylvania
jgrandzo@bloomu.edu

 

Abstract

Cognitive theory suggests more interaction in learning environments leads to improved learning outcomes and increased student satisfaction, two indicators of success useful to program administrators. Using a sample of 359 lower-level online, undergraduate business courses, we investigated course enrollments, student and faculty time spent in interaction, and course completion rates, all drivers of resource consumption. Our key findings indicate that increased levels of interaction, as measured by time spent, actually decrease course completion rates. This result is counter to prevailing curriculum design theory and suggests increased interaction may actually diminish desired program reputation and growth.

Introduction

Administrators interested in developing effective online instruction must recognize there are "both technical and instructional aspects that are not necessarily intuitive or analogous to the traditional classroom" (Fredericksen, Pickett, Pelz, Swan, & Shea, 2000, p. 10). Researchers have assessed cognitive learning outcomes (Brown & Liedholm, 2002; Dellana, Collins, & West, 2000), drivers of student performance (Syler, Cegielski, Oswald, & Rainer, 2006; Marks, Sibley, & Arbaugh, 2005), student retention (McLaren, 2004; Hiltz, Coppola, Rotter, Turoff, & Benbunan-Fich, 2000), typologies of online design (Rungtusanatham, Ellram, Siferd, & Salik, 2004), the online community of inquiry (Garrison, 2003), online best practices (Swan, 2003), and numerous other aspects of the online environment. Conflicting findings and unanswered questions fuel research efforts to distinguish pedagogical practices that enable online student success so that administrators can establish policies for course design and resource allocation consistent with these practices.

We examined whether enrollment size, learner-learner interaction, and learner-faculty interaction contribute significantly to online success. We hypothesized that smaller enrollments, greater frequency of interaction among learners, and greater frequency of interaction between faculty and learners are factors for successful student completion of online courses. We examined aggregate data from business-related courses at community colleges to determine the effects of these factors on online course completion.

Literature Review

Online Learning

Institutions rapidly expanded their online offerings to serve the nearly 4 million U.S. students (80% undergraduate) who took at least one online course in Fall 2007; one in five institutions offered such courses for the first time (Allen & Seaman, 2008). Sixty percent of Chief Academic Officers considered online learning critical to strategic positioning, and over half agreed that their faculty viewed online courses as legitimate learning experiences (Allen & Seaman, 2008).

The growth in online courses and the value, both academic and financial, they represent to institutions created a need for evidence of the equivalence of the traditional and online mediums. Russell's (2009) "no significant difference" website lists hundreds of studies that investigated this equivalence. As an example, McLaren (2004) found that significantly fewer students persisted in online courses, but those who did earned course grades comparable to their traditional counterparts. These studies established the validity of the online medium, but they offered limited insights into pedagogically sound techniques for administering online programs or designing and executing online courses.

Other studies empirically validated best practices. Rungtusanatham and colleagues (2004) designed a typology to help administrators match education goals with design and delivery methods appropriate for the intended course level. For example, higher-level courses require greater levels of interaction than introductory, overview courses. Our study examined whether quantitative measures such as time spent in interaction are useful for course design and delivery, and what the implications are for course and program administration.

Interaction

Chickering and Gamson (1987) illustrated the importance of interaction in learning. Five of their seven principles directly relate to interaction among (1) the participants in the learning process and (2) the participants and the subject matter.

Accrediting bodies also recognize the necessity for interaction in higher education. AACSB International (2003), an accreditor of business schools, insists that interactions among participants define quality, that passive learning is not the preferred mode of higher education, and that learning communities require opportunities for students to learn from one another. These statements unequivocally support interaction in the learning process.

The research questions are how to define interaction, measure it, and utilize it for pedagogical improvements. Moore (1989) provided the dominant definitional framework with his three modes of interaction: learner-content, learner-instructor, and learner-learner. Learner-content interaction involves the student interacting with the subject of study. Learner-instructor interaction includes the instructor making presentations, demonstrating skills, modeling values, organizing and evaluating student learning, and providing support. Students derive learning from interaction with their peers via debate, collaboration, idea manipulation, and incidental learning. The three modes offer unique opportunities to stimulate learning, but Anderson, Rourke, Garrison, and Archer (2001) suggested that meaningful learning is achieved only through synthesis of these modes. Hannafin (1989) added the ways that interaction supports learning functions: (1) pacing, balancing student control with collaborative learning activities; (2) elaboration, linking new content to existing schema; (3) confirmation, reinforcing new skills; (4) navigation, guiding learners to acquire lifelong learning skills; and (5) inquiry, enabling students to explore individual interests.

Because interaction is important to the learning process, it needs to be measurable. Researchers have investigated interaction using both qualitative and quantitative analyses. Each approach has its own advantages and disadvantages, but the findings consistently indicated that interaction is vital to learning.

Measurement Approaches

One group of researchers used content analysis of online conference transcripts to evaluate the different modes of interaction. They measured levels of cognitive presence (a form of learner-content interaction) (Garrison, Anderson, & Archer, 2004), social presence (a form of learner-learner interaction) (Rourke, Anderson, Garrison, & Archer, 2001), and teaching presence (a form of learner-instructor interaction) (Anderson, Rourke, Garrison, & Archer, 2001). These researchers asserted that the amount of social, teaching, and cognitive presence can be measured and used to determine an optimal level for an online course. High levels of these forms may support deep learning, but too much may actually be detrimental to learning (Rourke et al., 2001).

Coppola, Hiltz, and Rotter (2002) used semi-structured interviews with online instructors to assess learner-instructor interaction. Faculty believed online learning was more of a two-way process, responses to questions were more reflective and deliberate, and that more students were engaging in mental rehearsal. Faculty were more formal online, but believed their relationship with students was more intimate.

Others examined self-reported perceptions derived from survey data. Arbaugh and Rau (2007) found significant correlations between students' perceived learning and learner-instructor, learner-learner, and learner-interface interaction in online MBA courses. Learner-instructor interaction had the strongest correlation with perceived learning; learner-learner interaction actually had a negative correlation with delivery medium satisfaction. The more participants a learner had to pay attention to, the less satisfied that learner was with the learning environment. The authors questioned whether online faculty at the graduate level should emphasize high levels of learner-learner interaction.

Swan (2002) examined perceived learning and interaction via a self-reported survey. She found that students who reported higher levels of interaction with content, with their instructor, and with other students also reported higher levels of satisfaction and learning. A greater percentage of the course grade based on discussion and more frequent instructor feedback were associated with higher levels of both measures.

The preceding studies indicated that interaction among participants is vital to the learning process in online courses. They also suggested there may be differences in the level of collaboration necessary depending on the course level (Rungtusanatham et al., 2004). It is unclear which form of interaction is the most important for online courses or whether all are equally important (Arbaugh & Rau, 2007).

Others applied more objective measures to evaluate how frequently students engaged in various course activities. Wang and Newlin (2000) counted total hits on the course home page to assess whether students were interacting with the course. They found that home page hits in the first week were positively correlated with student grades, suggesting that monitoring activity in the first week can serve as a reliable indicator of performance.

Baugher, Varanelli, and Weisbord (2003) chose total hits on the web platform in a web-augmented course. They found total hits did not significantly predict students’ course averages. Hit consistency, a measure of how routinely a student accessed the course, did predict course grade, suggesting a steady approach to interaction is most appropriate. Biktimirov and Klassen (2008) used a more precise measure of total hits by counting accesses to specific support files in various content areas. The only course activity that significantly correlated with course grade was homework solutions accessed. These authors similarly studied hit consistency and found that it was positively associated with course grade. Syler and colleagues (2006) distinguished between hits within content areas and hits within information areas. Their findings were not surprising: greater student usage of tools in content areas positively affected students’ final course grades.
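To make these hit-based measures concrete, the sketch below computes a total-hit predictor and a hit-consistency predictor from hypothetical weekly log data and correlates each with course grades. The cited studies do not publish a single formula for hit consistency, so the operationalization here (mean weekly hits divided by their standard deviation) and all of the data are illustrative assumptions only.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(seed=42)
    n_students, n_weeks = 40, 15

    # Hypothetical LMS log data: weekly hit counts and final course averages.
    weekly_hits = rng.poisson(lam=8, size=(n_students, n_weeks))
    grades = rng.normal(loc=80, scale=8, size=n_students)

    total_hits = weekly_hits.sum(axis=1)

    # Illustrative "hit consistency": mean weekly hits divided by their standard
    # deviation, so steadier week-to-week access yields a higher score. This
    # formula is an assumption for demonstration, not the cited studies' measure.
    consistency = weekly_hits.mean(axis=1) / (weekly_hits.std(axis=1) + 1e-9)

    for label, predictor in (("Total hits", total_hits), ("Hit consistency", consistency)):
        r, p = pearsonr(predictor, grades)
        print(f"{label}: r = {r:.3f}, p = {p:.3f}")

With real log data, the same two correlations would reproduce the comparison these studies report: whether raw volume of access or regularity of access better predicts course performance.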

Section Enrollment Size

The number of students influences student perceptions in traditional classes (Swan, 2002). Logically, the more students enrolled in an online course, the less individual attention a professor can offer and the more frustrating it may be for students to read numerous discussion board responses. Findings on the impact of section size in the online environment have been inconclusive.

Arbaugh and Duray (2002) found that large online class sizes were negatively associated with learning for MBA students. Arbaugh and Rau (2007) did not find a correlation between online class size and perceived learning, but did find a negative correlation with delivery medium satisfaction for MBA students. They noted that perhaps students were frustrated with too much to read and respond to in the larger classes. Contrary to these results, Swan (2002) and Arbaugh (2008) reported that class size did not correlate significantly with student perceptions of learning or satisfaction.

It is questionable whether there is an association between online course section size and student outcome measures. Average section sizes in these studies were typically between 20 and 30 students with relatively little variation, perhaps too little to identify significant differences. For example, outcomes from classes varying in size by only three or four students are less likely to differ significantly than outcomes from classes differing in size by 20 or 30 students. The non-significant findings should therefore be interpreted with caution.

Student Success

Student success has been measured in various ways with no consensus on which measure to use. Swan (2002) utilized perceived learning and perceived satisfaction via a self-reported instrument. Hiltz and colleagues (2000) used performance in course projects; others defined overall course average as the measure of success (Syler et al., 2006; Baugher et al., 2003). Klassen and Biktimirov (2007) used a student’s individual portion of their course average.

Although each of these methods has advantages, the validity of each is questionable. Course grade is influenced by many factors and is subject to such wide dispersion by faculty prerogative that it may not directly measure learning. We chose a method of measuring student success that may be more useful for administrators concerned with student retention: simply, did the student complete the course or not? Considering McLaren's (2004) finding that students in online courses had lower persistence rates, we believe course completion is an important and valid indicator of student success.
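Aggregated to the course level, the unit of analysis in our sample, this binary outcome becomes a completion rate; the expression below is simply how we operationalize it for a given course section:

    completion rate for course j = (number of students completing course j) / (number of students enrolled in course j)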

Predictor Variables

We identified other variables that may influence online student success. Arbaugh (2008) included instructor online teaching and subject matter experience, student age, gender, prior student experience with online courses, number of student credit hours, and whether the course was required or elective. Swan (2002) captured design features, including the percentage of the course grade dedicated to discussions, and structural features such as course level. Klassen and Biktimirov (2007) included GPA, while Davis and Wong (2007) captured student perception of content usefulness. Eom, Wen, and Ashill (2006) studied student motivation and learning. Syler and colleagues (2006) found that self-efficacy drives course performance. The effects of these, and of other variables such as interface ease, technological competence, and instructor training, should not be studied in isolation.

Literature Summary

There is wide variety in the techniques used to study online courses, the predictor variables to use, the ability to capture multiple aspects simultaneously, and even the way to define student success. These are no small issues as administrators seek to document student learning and evaluate their programs (AACSB, 2003). The issues specific to interaction deserve increased attention considering their documented importance. In 1995, Kearsley asked if frequency of interaction in a course is a meaningful measure, if interaction is more important for certain groups of learners than others, if interaction affects learning outcomes such as retention, if interaction increases student comprehension or satisfaction, if a certain form of interaction is more critical than others, and if the pattern of interaction needs to change during a course or program. Despite the apparent criticality for administration of online programs, these questions remain largely unanswered.

Research Methodology

We examined data from a course management system that captured time spent in specific interaction activities, an approach that, to our knowledge, has not been attempted previously. It may illuminate whether time is an appropriate measure of interaction for future research and whether this measure can provide insights into Kearsley's (1995) questions. The limitations of the online course management system used to collect the data prohibited evaluation of a robust model including all of the predictor variables identified in the preceding review. We present a graphical representation of such a model in our conclusion.

Students successfully completing a course and earning credit for it implies some degree of learning, albeit an undifferentiated one. Hence, we chose course completion as an objective measure of successful outcome. As for predictor variables, we chose one variable discussed in the literature review, Enrollment (class size), and, given our desire to investigate interaction, variables named Faculty Participation and Student Participation. Figure 1 organizes these variables into a path diagram, a standard graphical depiction associated with our analysis technique, structural equation modeling (SEM). The predictor variables, also known as exogenous constructs, are depicted on the left along with their respective measurable indicators. Each of the two participation constructs has four indicators, Home Page, Gradebook, Email, and Discussions, each measured as time spent. The path diagram also includes symbols representing coefficients (e.g., λ), correlations (e.g., Φ), and measurement errors (e.g., δ) that are estimated through SEM.

Figure 1. Theorized model based on available measures grouped into predictor variables Enrollment, Faculty Participation, and Student Participation

In a typical online course, the Home Page may contain announcements, special instructions, links to other sites, and so on. The Gradebook contains actual grades and perhaps feedback on assignments, varying in detail based on the assignment type and difficulty as well as the course instructor. Email refers to the electronic messages authored and sent through the learning management system. Discussions refer to forums that require student responses and comments as well as instructor participation.
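In standard SEM notation, the path diagram in Figure 1 corresponds to a measurement model and a structural model of the following general form (a notational sketch only; the estimated values appear in the Results section):

    x = Λx ξ + δ    (measurement model: the time-based indicators as functions of the latent predictors)
    η = Γ ξ + ζ     (structural model: Course Completion as a function of the predictors)

Here ξ contains the exogenous constructs Enrollment, Faculty Participation, and Student Participation; η is Course Completion; Λx contains the loadings λ; Γ contains the structural coefficients γ; Φ is the correlation matrix of the exogenous constructs; and δ and ζ are the measurement and structural error terms.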

Our population consisted of all online courses, wherein all or most of the content was delivered online either asynchronously or synchronously, at six community colleges in a U.S. Midwestern state education system over a two-year period. From these, we studied all business courses, resulting in a sample of 349 courses.

Hypotheses

We hypothesized the following relationships (relevant parameters from the path diagram are shown in parentheses):

  1. Enrollment (class size) and Faculty Participation are negatively correlated (φ21 < 0).
  2. Enrollment and Student Participation are positively correlated (φ31 > 0).
  3. Faculty Participation and Student Participation are positively correlated (φ32 > 0).
  4. Enrollment has a negative effect on Course Completion (γ11 < 0).
  5. Faculty Participation has a positive effect on Course Completion (γ12 > 0).
  6. Student Participation has a positive effect on Course Completion (γ13 > 0).

Method of Analysis

Structural equation modeling (SEM) investigates concurrent dependence relationships; i.e., it considers several multiple regression models simultaneously. The technique examines dependence relationships among the original dependent variables, alone or in combination with the original independent variables (Jöreskog & Sörbom, 1979). It supports the transition from exploratory to confirmatory analysis (Hayduk, 1987). Based on theory, prior experience, and research objectives, a researcher first theorizes which independent variables predict each dependent variable, and then translates these relationships into a structural equation for each dependent variable.

Structural equation modeling proceeds stepwise through a multi-stage procedure of model specification, estimation, and evaluation of fit (Hair, Black, Babin, & Anderson, 2010).

SEM has been applied in the online learning and interaction domain to study peer interaction and learning outcomes (Anderson et al., 2001; LaPointe & Gunawardena, 2004), effects of interactions on perceived learning and satisfaction (Marks, Sibley & Arbaugh, 2005), determinants of learning and satisfaction (Eom, Wen & Ashill, 2006), drivers of course performance (Syler et al., 2006), and optimal learning experiences (Davis & Wong, 2007).
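For readers who want to reproduce this type of analysis with open-source tools, the sketch below shows how a model like that in Figure 1 could be specified and fitted with the Python package semopy, which accepts lavaan-style model syntax. The package choice, the hypothetical data file, and the treatment of Enrollment (ENROLL) and Completion (COMP) as single observed variables are illustrative assumptions; the indicator names follow Tables 1 and 2, and none of this describes the software actually used in this study.

    import pandas as pd
    import semopy

    # Lavaan-style model description using the indicator names from Tables 1-2.
    # ENROLL (class size) and COMP (completion rate) are treated here as single
    # observed variables; this and the package choice are illustrative assumptions.
    model_desc = """
    FACPART =~ PROFHP + PROFGPER + PROFE + PROFT
    STUPART =~ STUDHPPE + STUDGPER + STUDEPER + STUDTPER
    COMP ~ ENROLL + FACPART + STUPART
    ENROLL ~~ FACPART
    ENROLL ~~ STUPART
    FACPART ~~ STUPART
    """

    # Hypothetical course-level data set with one row per course section.
    courses = pd.read_csv("course_level_data.csv")

    model = semopy.Model(model_desc)
    model.fit(courses)

    print(model.inspect())           # factor loadings, path estimates, p-values
    print(semopy.calc_stats(model))  # fit indices such as RMSEA, CFI, and GFI

The first two lines of the model description correspond to the measurement model, the COMP regression to the structural model, and the three covariance terms to hypotheses 1 through 3.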

Results

Conducting exploratory factor analysis on the theorized model depicted in Figure 1 indicated that neither Professor Time in Threads (PROFT) nor Time in Threads per Student (STUDTPER) made significant (p < .05) contributions to measuring their respective constructs, Faculty Participation (FACPART) and Student Participation (STUPART). Additionally, RMSEA, a key goodness-of-fit indicator, exceeded the acceptable level.

Only Student Participation (STUPART) had a significant (p < .05) influence on Course Completion (COMP); higher interaction led to lower course completion rates. Correlations among the three constructs confirmed expectations: the higher the enrollment, the lower the faculty participation and the higher the student participation.

Table 1 contains results of the factor analyses for both the original and revised models; Table 2 shows correlations among the two sets (Faculty Participation, Student Participation) of measurement variables (indicators) for our revised model.

Table 1. Factor loadings for Faculty Participation (FACPART) and Student Participation (STUPART) in theorized and revised models.

Construct    Measurement Variable    Factor Loading (Theoretical Model)    Factor Loading (Revised Model)

FACPART      PROFHP                  0.726                                 0.650
             PROFGPER                0.596                                 0.655
             PROFE                   0.491                                 0.520
             PROFT                   0.227                                 --

STUPART      STUDHPPE                0.638                                 0.682
             STUDGPER                0.742                                 0.680
             STUDEPER                0.560                                 0.587
             STUDTPER                0.204                                 --

Table 2. Pearson’s correlations for Faculty Participation (FACPART) and Student Participation (STUPART) measurement variables in revised model.

Correlations for Faculty Participation (FACPART) Measurement Variables

             PROFHP      PROFGPER    PROFE       PROFT
PROFHP       1.000
PROFGPER     0.426**     1.000
PROFE        0.338**     0.340**     1.000
PROFT        0.242**     0.050       0.049       1.000

Correlations for Student Participation (STUPART) Measurement Variables

             STUDHPPE    STUDGPER    STUDEPER    STUDTPER
STUDHPPE     1.000
STUDGPER     0.464**     1.000
STUDEPER     0.400**     0.400**     1.000
STUDTPER     0.037       0.260**     0.018       1.000

** Significant (p < .01)

These results suggested revising our original model to that depicted in Figure 2, which includes the parameter values derived from our analysis; the goodness-of-fit measures for this revised model appear in Table 3.

Figure 2. Revised model retaining the theorized predictor constructs with a reduced number of indicators.

Table 3. Goodness-of-fit measures for revised model.

Explanation & Standard for Acceptance / Measure / Strength of Measure

χ² (chi-square). Measures the difference between the observed and estimated covariance matrices but is sensitive to sample size and the number of estimated variables; the minimum acceptable ratio of χ²/df is 3.0.
    χ² = 59.49 (p = 0.00); df = 16; ratio = 3.718. Acceptable.

RMSEA (Root Mean Square Error of Approximation). Represents how well a model fits a population and corrects for both model complexity and sample size; for this sample size, 0.07 is desirable.
    RMSEA = 0.086; 90 percent confidence interval = (0.063, 0.11). Acceptable.

NFI (Normed Fit Index). The ratio of the difference between the χ² values of the fitted model and a null model to the χ² value of the null model; the closer the value is to 1, the better the fit.
    NFI = 0.87. Acceptable.

CFI (Comparative Fit Index). An incremental fit index relatively insensitive to model complexity; values ≥ 0.90 suggest good fit.
    CFI = 0.90. Acceptable.

SRMR (Standardized Root Mean Square Residual). An assessment of prediction errors not impacted by the scale of the parameters; lower values represent better fit, and values ≥ 0.10 are unacceptable.
    SRMR = 0.053. Acceptable.

GFI (Goodness-of-Fit Index). Sensitive to sample size; higher values suggest better fit, with a minimum acceptable value of 0.90.
    GFI = 0.96. Acceptable.

AGFI (Adjusted Goodness-of-Fit Index). Attempts to account for the degree of model complexity; minimum acceptable value of 0.90.
    AGFI = 0.91. Acceptable.
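As a quick check on the first row of Table 3, the reported ratio follows directly from the chi-square statistic and its degrees of freedom, and the NFI described in the table can be written compactly (the null-model chi-square itself was not reported):

    χ²/df = 59.49 / 16 ≈ 3.72
    NFI = (χ²null - χ²model) / χ²null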

Collectively, the goodness-of-fit measures presented in Table 3 support the validity of the revised model; that is, the observed covariance matrix is consistent with the estimated covariance matrix, and the revised model is a legitimate representation of the sample data.

Findings and Implications

This study evaluated the usefulness of time as a measure of interaction in online, business-related, associate degree-level courses. We found that learner-learner interaction was significantly, but negatively, associated with course completion rates. Learner-faculty interaction and enrollment size were not significantly related to course completion. We offer the following explanations for these findings and describe their potential implications for those administering online courses.

When confirming the measurement models, neither student nor faculty time spent in threaded discussions made a significant contribution to its respective construct. This is contrary to what we expected, as discussions are often viewed as one of the most effective practices for online courses (Swan, 2002). The time students and faculty spent in discussions was not statistically associated with their time spent in the general home page, gradebook, or email areas; hence, it could not be used as a measure of participation for either students or faculty. This finding does not indicate that discussions are unimportant to the learning process. As Arbaugh (2008) concluded, student interaction in areas such as discussions is a necessary, but probably not sufficient, condition for student learning in the online environment.

Arbaugh and Rau (2007) and Arbaugh (2008) did not find a correlation between enrollment size and student perceptions of online learning. Similarly, we did not find a significant correlation between enrollment size and online course completion rates. This finding indicates that calls for enrollment caps may be more arbitrary than fact-based. Very large sections were removed from the analysis after being identified as outliers, with the result that a majority of classes in this study had between 14 and 30 students. Perhaps significant results would emerge if larger classes were included.

Significant relationships were found between enrollment size and student participation and faculty participation. Enrollment size was positively associated with student participation, suggesting that the larger the section size, the more time students had to invest in the course. Enrollment had a negative association with faculty participation, suggesting that as section size increased, faculty actually spent less time accessing course activities. This is counterintuitive, but may indicate that the time intensiveness of managing courses with larger class sizes (Easton, 2003) leads faculty to seek efficiencies through standardized content presentations.

No significant relationship was found between faculty participation and course completion rates. This finding contradicts studies that found the role of the instructor in course interactions to be among the most critical factors for success in online courses (Arbaugh, 2008; Eom et al., 2006; Marks et al., 2005). For example, the amount of time professors spend in the gradebook feature would seemingly contribute to the development of individualized feedback for students, but this variable did not significantly add to completion rates. Efforts to include extensive faculty feedback and interaction in online courses (Bocchi, Eastman, & Swift, 2004) may actually be counterproductive.

Learner-learner interaction was previously found to be positively correlated with students' perceived learning (Arbaugh & Rau, 2007; Swan, 2002). Our study found a significant, negative relationship between student participation and course completion. The relationship was weak and surprising. How could more student participation be associated with lower course completion rates? We offer three possible explanations.

First, Arbaugh and Rau (2007) found that increased learner-learner interaction had a negative correlation with delivery medium satisfaction. The more discussions students had to pay attention to, the less satisfied they were with the learning environment. Perhaps we witnessed a similar effect in our study. Students who invested a lot of time in certain course website areas may have been frustrated with the medium, or perhaps those courses were more difficult. Either way, courses where students had to spend more time were associated with lower completion rates.

Second, Rungtusanatham and colleagues (2004) proposed that higher-level courses (e.g., MBA-level) require higher levels of interaction, while introductory courses need little interaction. Our sample consisted of community college courses. Do such courses require high levels of interaction when the content may not need interpretation or further analysis? Arbaugh and Rau (2007) posited that even graduate course faculty should not necessarily push high levels of learner-learner interaction.

Third, the factors that loaded on student participation may have contributed to this finding. The amount of time a student spends on a course home page may have little to do with course completion; we cannot be certain whether a student is actively engaged or simply has the page open. The gradebook and email interpretations are more interesting. Perhaps the students who spent the most time in the gradebook happened to be in the most rigorous courses with many graded assignments; the rigor of those courses, not the time spent reading a gradebook, may have contributed to the lower completion rates. Similarly, courses where students spent much time interacting via email may have had lower completion rates because email is a time-intensive way to communicate and may have led to less rewarding class experiences.

These results indicate several implications for administrators.

Limitations and Future Research

This study utilized an objective measure, time, as the basis for its predictor variables. However, one cannot confirm the true intent or activity of either faculty or student participants, and time spent is a problematic measure: what takes one student ten minutes to complete may take another twenty. Our findings are based on business courses at community colleges and may not be generalizable to other course levels or disciplines. Several goodness-of-fit statistics were borderline acceptable; although this did not undermine the analysis, it suggests an incomplete model. We recognize that we did not capture all relevant predictors; we were limited to the data that the course management system we had access to could capture.

Future researchers should focus on adjusting the expectations of both students and faculty based on the level and rigor of a course. We arguably demonstrated that more interaction is not always better. Researchers could study the effects of interaction broken down by gradebook, email, threaded discussions, and so on. Researchers could also merge data from multiple sources, something we were unable to do; this would enable matching instructor experience, student GPAs, and the like with the data we were able to capture. We proposed using course completion as the measure of student success. This will not document learning for accrediting bodies, but it will help organize conclusions that have been based on too many different measures.

Finally, researchers must examine the application of empirical evidence. We evaluated the use of time just as others tracked website hits. We cannot suggest, as a result of this study, that time by itself is a meaningful measure, but the analysis did potentially expose some problem areas (e.g., student email). As designers expand learning management system capabilities, researchers should guide them toward useful indicators. Figure 3 depicts what may be the "ideal" model incorporating all relevant predictor variables.

Figure 3. Ideal model with relevant measures grouped into theorized predictor variables

Conclusion

Although none of Kearsley’s (1995) questions were answered directly, our analysis informs administrators and faculty alike about whether interaction affects retention, whether frequency and intensity of interaction in a course is a meaningful measure, what forms of interaction are the most critical, and whether the pattern of interaction should change over the course of a program. This study reveals that time as a measure of interaction may have some utility. We caution that simply measuring interaction via an “objective” measure does not adequately capture what we need to know about online learning. 

References


AACSB International. (2003). Eligibility procedures and standards for business accreditation. Retrieved February 13, 2009 from: http://www.aacsb.edu/accreditation/standards.asp.

Allen, E., & Seaman, J. (2008). Staying the course: Online education in the United States, 2008. Needham, MA: Sloan-C.  Retrieved February 13, 2009 from http://www.sloan-c.org/publications/survey/pdf/staying_the_course.pdf.

Anderson, T., Rourke, L., Garrison, D. R., & Archer, W. (2001). Assessing teaching presence in a computer conferencing context. Journal of Asynchronous Learning Networks, 5(2), 1-17.

Arbaugh, J. B. (2008). Does the community of inquiry framework predict outcomes in online MBA courses? International Review of Research in Open and Distance Learning, 9(2).

Arbaugh, J. B., & Duray, R. (2002). Technological and structural characteristics, student learning and satisfaction with web-based courses: An exploratory study of two on-line MBA programs. Management Learning, 33(3), 331-347.

Arbaugh, J. B., & Rau, B. L. (2007). A study of disciplinary, structural, and behavioral effects on course outcomes in online MBA courses. Decision Sciences Journal of Innovative Education, 5(1), 65‐95.

Baugher, D., Varanelli, A., & Weisbord, E. (2003). Student hits in an internet‐supported course: How can instructors use them and what do they mean? Decision Sciences Journal of Innovative Education, 1(2), 159‐179.

Biktimirov, E. N., & Klassen, K. J. (2008). Relationship between use of online support materials and student performance in an introductory finance course. Journal of Education for Business, January/February, 83(3), 40‐48.

Bocchi, J., Eastman, J. K., & Swift, C. O. (2004). Retaining the online learner: Profile of students in an online MBA program and implications for teaching them. Journal of Education for Business, 79(4), 245-253.

Brown, B. W., & Liedholm, C.E. (2002). Can web courses replace the classroom in principles of microeconomics? The American Economics Review, 92(2), 444-448.

Chickering, A. W., & Gamson, Z. (1987). Seven principles for good practice in undergraduate education. Racine, WI: The Johnson Foundation.

Coppola, N. W., Hiltz, S. R., & Rotter, N. G. (2002). Becoming a virtual professor: Pedagogical roles and asynchronous learning networks. Journal of Management Information Systems, 18(4), 169-189.

Davis, R., & Wong, D. (2007). Conceptualizing and measuring the optimal experience of the elearning environment. Decision Sciences Journal of Innovative Education, 5(1), 97‐126.

Dellana, S., Collins, W., & West, D. (2000). Online education in a management science course– effectiveness and performance factors. Journal of Education for Business, 76(1), 43-47.

Easton, S. S. (2003). Clarifying the instructor’s role in online distance learning. Communication Education, 52, 87-105.

Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students' perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215-235.

Fredericksen, E., Pickett, A., Pelz, W., Swan, K., & Shea, P. (2000). Student satisfaction and perceived learning with online courses: Principles and examples from the SUNY learning network. In J. Bourne (Ed.), Online education: Learning effectiveness and faculty satisfaction, Volume 1 (pp. 7-36). Needham, MA: Sloan-C.

Garrison, D. R. (2003). Cognitive presence for effective asynchronous online learning: The role of reflective inquiry, self-direction, and metacognition. In J. Bourne and J. C. Moore (Eds.), Elements of quality online education: Practice and direction (pp. 47-58). Needham, MA: Sloan-C.

Garrison, D. R., Anderson, T., & Archer, W. (2004). Critical thinking, cognitive presence, and computer conferencing in distance education. Retrieved May 28, 2010 from: http://www.communityofinquiry.com/files/CogPres_Final.pdf

Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Pearson Prentice-Hall.

Hannafin, M. (1989). Interaction strategies and emerging instructional technologies: Psychological perspectives. Canadian Journal of Educational Communication, 18, 167-179.

Hayduk, L. A. (1987). Structural equation modeling with LISREL. Baltimore: Johns Hopkins University Press.

Hiltz, S. R., Coppola, N., Rotter, N., Turoff, M., & Benbunan-Fich, R. (2000). Measuring the importance of collaborative learning for the effectiveness of ALN: A multi-measure, multi-method approach. In J. Bourne (Ed.), Online education: Learning effectiveness and faculty satisfaction, Volume 1 (pp. 101-119). Needham, MA: Sloan-C.

Jöreskog, K. G. & Sörbom, D. (1979). Advances in factor analysis and structural equation models. New York: University Press of America.

Kearsley, G. (1995). The nature and values of interaction in distance education. In Third Distance Education Research Symposium. College Park: American Center for the Study of Distance Education.

Klassen, K. J., & Biktimirov, E. N. (2007). Relationship between student performance and specific online support materials in an operations course. Journal of the Academy of Business Education, Summer, 40-48.

LaPointe, D. K., & Gunawardena, C. N. (2004). Developing, testing and refining of a model to understand the relationship between peer interaction and learning outcomes in computer‐mediated conferencing. Distance Education, 25(1), 83‐106.

Marks, R. B., Sibley, S. D., & Arbaugh, J. B. (2005). A structural equation model of predictors for effective online learning. Journal of Management Education, 29(4), 531‐563.

McLaren, C. H. (2004). A comparison of student persistence and performance in online and classroom business statistics experiences. Decision Sciences Journal of Innovative Education, 2(1), 1-10.

Moore, M. G. (1989). Three types of interaction. American Journal of Distance Education, 3(2), 1-6.

Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Assessing social presence in asynchronous text-based computer conferencing. Journal of Distance Education, 14(2).

Rungtusanatham, M., Ellram, L. M., Siferd, S. P., & Salik, S. (2004). Toward a typology of business education in the Internet Age. Decision Sciences Journal of Innovative Education, 2(2), 101-120.

Russell, T. L. (2009). No significant difference phenomenon. Retrieved February 13, 2009 from: http://www.nosignificantdifference.org/.

Swan, K. (2002). Building learning communities in online courses: The importance of interaction. Education, Communication & Information, 2(1), 23-49.

Swan, K. (2003). Learning effectiveness: What the research tells us. In J. Bourne and J. C. Moore (Eds.), Elements of quality online education: Practice and direction (pp. 13-45). Needham, MA: Sloan-C.

Syler, R. A., Cegielski, C. G., Oswald, S. L., & Rainer, R. K. (2006). Examining drivers of course performance: An exploratory examination of an introductory CIS applications course. Decision Sciences Journal of Innovative Education, 4(1), 51-65.

Wang, A. Y. & Newlin, M. H. (2000). Characteristics of students who enroll and succeed in psychology web-based courses. Journal of Educational Psychology, 92(1), 137-143.


Online Journal of Distance Learning Administration, Volume XIII, Number II, Summer 2010
University of West Georgia, Distance Education Center