International Journal of ePortfolio 2016, Volume 6, Number 1, 45-57
http://www.theijep.com ISSN 2157-622X
Using ePortfolios to Assess Applied and Collaborative Learning and Academic
Identity in a Summer Research Program for Community College Students
Karen Singer-Freeman, Linda Bastone, and Joseph Skrivanek
Purchase College, SUNY
We evaluate the extent to which ePortfolios can be used to assess applied and collaborative learning
and academic identity among community college students from underrepresented minority groups
who participated in a summer research program. Thirty-eight students were evaluated by their
research sponsor and two or three naïve faculty evaluators. Faculty sponsors evaluated students
based on personal interactions and the students’ ePortfolios. Naïve faculty evaluated students using
only the ePortfolios. We found: (1) the rubrics designed to assess applied and collaborative learning
and academic identity had good internal consistency, (2) naïve evaluators found some evidence of all
learning outcomes, (3) faculty sponsors found evidence of more learning outcomes than naïve
evaluators, and (4) individual ePortfolios varied in the extent to which they documented learning
outcomes. We conclude that ePortfolios can be used as a reliable means of documenting applied and
collaborative learning and academic identity.
Research experiences help students develop an academic identity that increases underrepresented minority (URM) student persistence in STEM disciplines (Jackson, Starobin, & Laanan, 2013). Participation in research has also been found to improve the persistence of URM women who begin their studies at community colleges (Jackson et al., 2013) and is associated with higher levels of perceived support and academic persistence among all students from underrepresented groups (Barlow & Villarejo, 2004; Maton & Hrabowski, 2004; Yelamarthi & Mawasha, 2008). The ability to document learning that occurs in the context of research experiences would be useful for both research purposes and efforts to credit prior learning. In the current paper we describe the evaluation of our efforts to use ePortfolios created during a research program to assess applied learning and academic identity in URM community college students. By comparing assessments made by faculty who worked directly with students to those made by faculty who were unfamiliar with the students, we are able to evaluate the extent to which ePortfolios document actual learning.

As more learning is taking place outside traditional higher education settings, learners need means of documenting their knowledge and skills (Travers, 2012). Portfolio review (both electronic and traditional) is commonly used by academic institutions to evaluate prior learning (Conrad & McGreal, 2012). Prior learning assessments are relatively high stakes and should afford students the opportunity to present their learning fully (Stenlund, 2013). ePortfolios allow students to include a variety of documents along with reflection, providing a rich account of the learning that has taken place (Travers, 2012). The opportunity for students to provide context for their work is especially important because many institutions review only the work products, without having any contact with the student.

We focus on assessment because we believe that this constitutes a unique contribution to the existing literature on ePortfolios. Despite reasonable theoretical justification for using ePortfolios, there is little empirical evidence for their effectiveness. In their review of the empirical literature, Bryant and Chittum (2013) found that only 15% of peer-reviewed articles addressed student outcomes, and only half of those articles assessed academic learning outcomes specifically. Bryant and Chittum (2013) argued that researchers should assess the extent to which ePortfolios are linked to student learning outcomes, especially in STEM disciplines. Rhodes, Chen, Watson, and Garrison (2014) also pointed to the need for more rigorous research on the benefits of ePortfolios.

Our examination of ePortfolio use was conducted in the context of a residential summer research program. Since 2000, Purchase College of the State University of New York has offered the Baccalaureate and Beyond program to support URM students as they transition from community colleges to four-year institutions. Each year, the program serves approximately 25 students from six community colleges. Students work full-time in small groups conducting original research in biology, chemistry, computer science, environmental science, neuroscience, or psychology.

In the current work, we elected to use the applied and collaborative learning proficiency from the Degree Qualifications Profile (DQP) as the basis for our assessment. The DQP was developed to provide a complete description of the proficiencies that students should obtain in higher education. The DQP purpose has been described as "what students should know and be able to do once they earn their degrees—at any level, in any field of study" (Adelman, Ewell, Gaston, & Geary Schneider, 2014, p. 7). As such, the DQP is well suited to assessing students who are transitioning between community college and a four-year school.
Applied and collaborative learning, one of the five areas of proficiency outlined in the DQP, includes research and creative activities that involve "innovation and fluency in addressing unscripted problems in scholarly inquiry, at work and in other settings outside the classroom" (Adelman et al., 2014, p. 6). At the bachelor's level, applied and collaborative learning includes four areas that are present in summer research programs: (1) presents a project, (2) negotiates a strategy for group research, (3) writes a design, and (4) completes a substantial project (Adelman et al., 2014). Because applied and collaborative learning is the proficiency that is most likely to occur in non-traditional settings (Adelman et al., 2014), its assessment can present a challenge. ePortfolios have the potential to serve as a useful tool in the reliable assessment of applied and collaborative learning.

ePortfolios are useful as a means of documenting learning from non-traditional activities such as research experiences (Wang, 2009) and are hypothesized to support reflection, engagement, and active learning (Yancey, 2009). Accordingly, ePortfolio use in higher education has become increasingly prevalent (Rhodes et al., 2014). There is evidence that ePortfolios help students and faculty evaluate growth and reflect on students' academic achievements (Buzzetto-More, 2010). Eynon, Gambino, and Török (2014) found evidence that ePortfolio use correlates positively with student success indicators (course pass rates, GPA, and retention rates) and can help advance and support deep thinking, integration, and personal growth. The creation of ePortfolios has also been found to help students develop a sense of academic identity, future orientation, and belonging to a community of scholars (Nguyen, 2013; Singer-Freeman, Bastone, & Skrivanek, 2014).

Unfortunately, to date, much of the research examining ePortfolio use as an assessment tool has focused on student and faculty attitudes about ePortfolios rather than on the usefulness of ePortfolios as a means of reliable assessment (Rhodes et al., 2014). For example, Curtis and Wu (2012) reported that healthcare educators have become more accepting of ePortfolio use as an assessment tool. Similarly, Garrett, MacPhee, and Jackson (2013) found that nursing faculty considered ePortfolio evaluation an effective means of assessment. When ePortfolios were used as the primary means of assessment in a Child Development class, 84% of students felt they encouraged reflection, 77% felt they provided a permanent record of learning, and 87% felt their use in the class should be continued (Singer-Freeman & Bastone, 2015). Bryant and Chittum (2013) cautioned that attitudes are not necessarily a good indicator of usefulness as a learning tool.

Additional evidence for the usefulness of ePortfolios is provided by Buyarski and Landis (2014), who examined the efficacy of using ePortfolios to assess learning in a first-year seminar. Using rubrics, along with examination of students' narratives, trained faculty unfamiliar with the students whose work they assessed were able to provide reliable evidence of learning. We adapt Buyarski and Landis's (2014) methodology to examine the efficacy of rubrics as a means for assessing applied and collaborative learning and academic identity in ePortfolios.

In a review of research examining the value and educational consequences of rubric use, Jonsson and Svingby (2007) found that analytic, topic-specific rubrics were most useful in assessment of student performance. In general, rubric use makes assessment by multiple evaluators more reliable. Jonsson and Svingby (2007) reported that inter-rater reliability rates ranged from 4-100%, with the majority falling in the range of 55-75%. In general, reliability rates were lower in instances in which tasks or products were not uniform. More extensive rater training was generally associated with higher levels of reliability. However, high reliability was not necessarily associated with high validity. The validity of rubrics varies widely and depends on the care with which rubrics are developed to align with a construct of interest. Rubrics also enhance learning and instruction by making expectations clear to both students and faculty. Accordingly, a good rubric that is in alignment with a construct might guide instruction to align more fully with the construct.

In the current work, we assessed whether evaluators could reliably use an ePortfolio to assess applied and collaborative learning and academic identity. We utilized two sets of evaluators, with different levels of direct knowledge of the students, in order to gain insight into the extent to which ePortfolios capture authentic learning. Faculty who directly supervised students in the research lab (faculty sponsors) evaluated students using their complete knowledge of the student from personal interactions as well as from the student's ePortfolio. Faculty unfamiliar with the student are referred to as naïve evaluators because they assessed learning using only the ePortfolio and were naïve with regard to performance in the research lab. We hypothesized that naïve evaluators would see less evidence of learning than faculty sponsors because they had less information. However, should naïve evaluators report stronger evidence of learning than faculty sponsors, this would raise the possibility that students might be able to inflate their proficiency artificially in an ePortfolio. We hypothesized that ePortfolios would document applied and collaborative learning and academic identity, but that direct faculty knowledge of students would provide more robust evidence than ePortfolios alone. We were also interested in assessing the extent to which the created rubrics were valid and reliable measures of applied and collaborative learning and academic identity.
Method

Participants

Students. The summer research program included 22 students in 2013 and 22 students in 2014. Four students who participated in the research program in 2013 were excluded due to the death of their research sponsor, and two students were excluded due to failure to share their ePortfolios. This resulted in a sample of 38 students (29 females and 9 males). Our sample included 24 students who identified as African American, 10 who identified as Latino, two individuals who reported mixed African American and Latino ancestry, one who identified as Asian, and one who identified as Native American. Fifteen students completed research in psychology or neuroscience, 13 in biology, seven in biochemistry, and three in environmental science.

Faculty sponsors. Eight faculty served as research sponsors: Four of these were sponsors in both 2013 and 2014, two sponsored students only in 2013, and two sponsored students only in 2014. All sponsors had PhDs in a STEM discipline and were full-time faculty members at the college. There were five assistant professors, two associate professors, and one full professor. There were five males and three females. All faculty identified as White.

Naïve evaluators. We decided to have faculty sponsors evaluate both the students they sponsored and students with whom they were unfamiliar to ensure that similar standards would be used for both sets of evaluations. When rating unfamiliar students, faculty are referred to as naïve evaluators. Three additional STEM faculty who did not sponsor students served as naïve evaluators. They all had PhDs and were full-time faculty members (one lecturer, one associate professor, and one full professor). There were two females and one male, and all identified as White.

Materials and Procedure

Learning outcome and rubric development. To identify expected proficiencies, six faculty research sponsors individually created lists of learning outcomes that were then discussed in a focus group with the authors and two outside experts in rubric construction. The group reached consensus on the desired learning outcomes and the products that could be used to evaluate mastery of these outcomes. The outcomes included items associated with both applied and collaborative learning and academic identity (see Table 1). The applied and collaborative learning outcomes were brought into alignment with the applied and collaborative learning proficiency from the DQP (Adelman et al., 2014). Rubrics were developed to clarify expectations for each of the learning outcomes. The six faculty sponsors piloted the rubrics, using them to assess 20 ePortfolios. Results of this pilot work and a focus group with the participating faculty revealed that the ePortfolios did not include sufficient information for assessment of the quality of revisions or the development of collaborative strategies. Accordingly, these learning outcomes were removed (see Figures 1 and 2).

ePortfolio creation. We instructed students to create ePortfolios that would document their summer experiences. Students were provided with the expected learning outcomes and products and shown how to create and share pages. Work on the ePortfolios occurred after the students' work day was complete. Students created their ePortfolios independently, without direct supervision from the research sponsor. Program staff held weekly workshops in which the students were required to contribute a minimum of one journal entry, one image that documented learning in some area, and one piece of writing that documented learning. Students were required to provide a caption for each image that explained how it documented learning. Additionally, students were required to respond each week to a specific written prompt (see Appendix). Students were not provided with a template. Instead, staff worked individually with students to develop content and design the ePortfolio. The ePortfolios were not graded. However, program faculty, staff, and students provided comments on the ePortfolios that individual students shared with the group.

Evaluator training. Faculty sponsors were instructed on the use of the rubrics during a faculty meeting that took place one week before the faculty were to begin evaluating. Two naïve evaluators who were not faculty sponsors attended the group instruction session. The remaining naïve evaluator who was not a faculty sponsor received individual instruction. During the training, evaluators reviewed the rubrics and discussed possible products that could be used to document proficiency. Evaluators were instructed to read reflective writing carefully as a source of information about academic identity proficiencies. The evaluators completed the rubrics by selecting from five possible ratings: exceeds expectations, meets expectations, approaches expectations, does not meet expectations, and cannot evaluate. All 38 students were assessed by their respective research sponsors. Twenty-one students were assessed by three naïve evaluators and 17 students were assessed by two naïve evaluators. We elected not to have faculty reach consistency on sample ePortfolios because we were interested in the rubrics' utility in a minimal training environment. Although research has established higher inter-rater reliability with practice sessions, this sort of training is not associated with improved validity (Jonsson & Svingby, 2007). Additionally, in real world applications, rubrics are frequently used by individuals who have not received reliability training.
Table 1
Initial Learning Outcomes Generated by Faculty Research Sponsors

Academic identity
  Learning outcomes: Identifies hopes and goals for experience; demonstrates confidence sharing ideas; engages with the research; identifies learning from experience; constructs plans for academic future; refines ideas about possible careers
  Evidence: Journal entries; interim and final report

Applied and collaborative learning
  Literature
    Learning outcomes: Summarizes research literature; articulates contribution to existing knowledge
    Evidence: Annotations; interim and final report
  Research design
    Learning outcomes: States project goals; articulates research hypothesis; describes research design
    Evidence: Interim and final report; journal entries
  Data collection
    Learning outcomes: Successfully implements methodology; documents data collection
    Evidence: Experimental results; interim and final report; journal entries
  Data analysis
    Learning outcomes: Organizes data; performs calculations correctly; draws appropriate conclusions; communicates results
    Evidence: Experimental results; interim and final report
  Collaboration
    Learning outcomes: Demonstrates collaboration skills; works with team to draft research abstract; works with team to draft final presentation
    Evidence: Interim and final report; journal entries
  Revisions
    Learning outcomes: Uses faculty feedback to revise work
    Evidence: Abstract; interim and final report; conference submissions
  Oral presentation
    Learning outcomes: Presents work orally with confidence and clarity
    Evidence: Final report
Results

Coding

Because our primary goal was to determine whether ePortfolios could provide reliable information that would enable ePortfolio-based assessment of prior learning and academic identity, we focused on whether evaluators reported that a learning outcome had been met (grouping meets expectations and exceeds expectations) or not met (grouping approaches expectations, does not meet expectations, and cannot evaluate). We treated cannot evaluate as an indication that a learning outcome had not been met because this response was given in instances in which material related to an outcome was not present in the ePortfolio.
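For readers who wish to reproduce this coding step, the following sketch illustrates the dichotomization in Python. The rating labels come from the rubric; the data and function names are hypothetical illustrations, not the authors' analysis code.

# Illustrative sketch of the coding step described above (hypothetical
# data, not the authors' code). Ratings are collapsed to 1 (met) or
# 0 (not met); "cannot evaluate" is treated as not met because it
# indicates that material related to the outcome was absent.

MET = {"exceeds expectations", "meets expectations"}
NOT_MET = {"approaches expectations", "does not meet expectations",
           "cannot evaluate"}

def dichotomize(rating: str) -> int:
    """Collapse a five-level rubric rating into met (1) or not met (0)."""
    rating = rating.strip().lower()
    if rating in MET:
        return 1
    if rating in NOT_MET:
        return 0
    raise ValueError(f"unknown rating: {rating!r}")

# One evaluator's ratings of a single student (hypothetical):
ratings = ["meets expectations", "cannot evaluate", "exceeds expectations"]
print([dichotomize(r) for r in ratings])  # -> [1, 0, 1]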
Figure 1
Academic Identity Rubrics

Hopes and goals
  Exceeds expectations: Provides a fully developed discussion of goals for experience.
  Meets expectations: Provides a good discussion of goals for experience.
  Approaches expectations: Identifies some goals for experience.
  Does not meet expectations: Does not identify goals for experience.

Confidence sharing ideas
  Exceeds expectations: Demonstrates confidence in sharing intellectual ideas.
  Meets expectations: Demonstrates some confidence in sharing intellectual ideas.
  Approaches expectations: Demonstrates limited confidence in sharing intellectual ideas.
  Does not meet expectations: Does not demonstrate confidence in sharing intellectual ideas.

Engagement with research
  Exceeds expectations: Notes indicate full engagement with the research process.
  Meets expectations: Notes indicate good engagement with the research process.
  Approaches expectations: Notes indicate some engagement with the research process.
  Does not meet expectations: Notes indicate limited or no engagement with the research process.

Learning from experience
  Exceeds expectations: Offers fully developed insights into learning gained from experience.
  Meets expectations: Offers good insights into learning gained from experience.
  Approaches expectations: Offers some insights into learning gained from experience.
  Does not meet expectations: Does not identify learning gained from experience.

Careers
  Exceeds expectations: Shares fully developed ideas about possible careers.
  Meets expectations: Shares somewhat developed ideas about possible careers.
  Approaches expectations: Shares poorly developed ideas about possible careers.
  Does not meet expectations: Does not share ideas about possible careers.

Academic future
  Exceeds expectations: Shares fully developed plans for academic future.
  Meets expectations: Shares somewhat developed plans for academic future.
  Approaches expectations: Shares poorly developed plans for academic future.
  Does not meet expectations: Does not share plans for academic future.
Inter-Item Reliability

To determine inter-item reliability we calculated Cronbach's alphas for ePortfolio-based assessment of each construct. We only included ePortfolio-based ratings in these calculations because we were interested in the use of the rubrics as measures of proficiencies demonstrated by the ePortfolio and not as measures of proficiencies demonstrated by direct knowledge of the student. We observed alphas of .78 for applied and collaborative learning and .69 for academic identity.
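As a point of reference, Cronbach's alpha for a students-by-items matrix of dichotomized ratings can be computed as in the minimal Python sketch below. The data are hypothetical and the sketch shows only the standard formula, not the authors' actual scripts.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (students x items) matrix of ratings."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical dichotomized ratings: 6 students x 4 items
scores = np.array([[1, 1, 0, 1],
                   [0, 0, 0, 1],
                   [1, 1, 1, 1],
                   [0, 1, 0, 0],
                   [1, 0, 1, 1],
                   [0, 0, 0, 0]])
print(round(cronbach_alpha(scores), 2))  # -> 0.70 for this toy matrix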
Inter-Rater Reliability

To determine the level of agreement between faculty sponsors and naïve evaluators, we calculated the reliability between faculty sponsors and naïve evaluators responding to the same student. We found that the average sponsor-evaluator reliability was 0.45 (SD = 0.19). This low reliability score is consistent with our presupposition that sponsors would have available to them substantially more information than naïve evaluators when evaluating students.

To determine whether naïve evaluators were assessing the ePortfolios similarly, we calculated the reliability between naïve evaluators responding to the same ePortfolio. We excluded the scores provided by the single naïve evaluator with a reliability score of 0.42 because this is outside the typical reliability range reported by Jonsson and Svingby (2007). With this naïve evaluator excluded, the remaining naïve evaluators had reliability scores of between 0.62 and 0.72. In instances in which there were only two naïve evaluators, reliability was 0.71 (SD = 0.14), with reliability scores ranging from 0.50 to 0.93. In instances in which three evaluators assessed a single ePortfolio we calculated the reliability between the two evaluators who had the highest level of agreement. We found that the average reliability was 0.86 (SD = 0.11), with scores ranging from 0.64 to 1.00.
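The article reports agreement on a 0-1 scale but does not name the specific statistic; the sketch below assumes simple proportion agreement between dichotomized ratings, including the best-pair rule used when three evaluators assessed the same ePortfolio. All data are hypothetical.

from itertools import combinations
import numpy as np

def proportion_agreement(a, b) -> float:
    """Share of outcomes on which two raters give the same 0/1 code."""
    return float(np.mean(np.asarray(a) == np.asarray(b)))

def best_pair_agreement(raters) -> float:
    """Agreement of the two most concordant raters (used when three
    evaluators assessed the same ePortfolio)."""
    return max(proportion_agreement(a, b)
               for a, b in combinations(raters, 2))

# Hypothetical dichotomized ratings of one ePortfolio on 8 outcomes
sponsor = [1, 1, 0, 1, 1, 0, 1, 1]
naive_1 = [1, 0, 0, 1, 0, 0, 1, 0]
naive_2 = [1, 0, 0, 1, 1, 0, 1, 0]
naive_3 = [0, 0, 1, 1, 0, 0, 1, 0]
print(proportion_agreement(sponsor, naive_1))            # -> 0.625
print(best_pair_agreement([naive_1, naive_2, naive_3]))  # -> 0.875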
Individual Learning Outcomes

Tables 2 and 3 report the number and percentage of instances in which sponsors and naïve evaluators credited each learning outcome. Because two or three naïve evaluators assessed each ePortfolio, there were a total of 90 naïve evaluator assessments. As can be seen in Tables 2 and 3, faculty sponsors were aware of learning that was not evident to naïve evaluators, explaining the low levels of sponsor-evaluator reliability reported above. Faculty sponsors credited between 47% and 87%, and naïve evaluators credited between 7% and 66%, of individual outcomes.
Figure 2
Applied and Collaborative Learning Rubrics

Understands literature
  Exceeds expectations: Demonstrates an excellent understanding of the literature.
  Meets expectations: Demonstrates a good understanding of the literature.
  Approaches expectations: Demonstrates some understanding of the literature.
  Does not meet expectations: Does not demonstrate understanding of the literature.

Analyzes literature
  Exceeds expectations: Very effectively analyzes literature.
  Meets expectations: Effectively analyzes literature.
  Approaches expectations: Offers limited analysis of literature.
  Does not meet expectations: Does not analyze literature.

Project goals
  Exceeds expectations: Demonstrates excellent understanding of the project significance.
  Meets expectations: Demonstrates a good understanding of the project significance.
  Approaches expectations: Demonstrates some understanding of the project significance.
  Does not meet expectations: Does not demonstrate understanding of the project significance.

Project hypothesis
  Exceeds expectations: Demonstrates an excellent understanding of the research hypothesis.
  Meets expectations: Demonstrates a good understanding of the research hypothesis.
  Approaches expectations: Demonstrates some understanding of the research hypothesis.
  Does not meet expectations: Does not demonstrate understanding of the research hypothesis.

Research design
  Exceeds expectations: Demonstrates an excellent understanding of the project research design.
  Meets expectations: Demonstrates a good understanding of the project research design.
  Approaches expectations: Demonstrates some understanding of the project research design.
  Does not meet expectations: Does not demonstrate understanding of the project research design.

Data collection
  Exceeds expectations: Collects and records data with no errors.
  Meets expectations: Collects and records data with very few errors.
  Approaches expectations: Collects and records data with some errors.
  Does not meet expectations: Does not collect and record data.

Data analysis
  Exceeds expectations: Fully understands the analyses.
  Meets expectations: Generally understands the analyses.
  Approaches expectations: Understands some of the analyses.
  Does not meet expectations: Does not understand the analyses.

Draws conclusions
  Exceeds expectations: Fully understands the relation between results and hypothesis.
  Meets expectations: Generally understands the relation between results and hypothesis.
  Approaches expectations: Understands some of the relation between results and hypothesis.
  Does not meet expectations: Does not understand the relation between results and hypothesis.
Table 2
Evaluations Crediting Academic Identity Outcomes

Learning outcome             Faculty sponsor n (%)    Naïve evaluator n (%)
Hopes and goals              18 (47%)                 27 (30%)
Confidence sharing ideas     31 (82%)                 59 (66%)
Academic future              31 (82%)                 43 (48%)
Careers                      33 (87%)                 50 (56%)
Learning from experience     27 (71%)                 43 (56%)
Engagement with research     32 (84%)                 50 (56%)

Note. Faculty sponsor n = 38. Naïve evaluator n = 90.
In fact, for all outcomes assessed, faculty sponsors reported higher rates of acquisition than naïve evaluators. Differences between faculty sponsor and naïve evaluator assessments of individual learning outcomes ranged from 15% (learning from experience) to 69% (data collection).

The naïve evaluators reported that ePortfolios included evidence of academic identity outcomes between 27% and 59% of the time and included evidence of applied and collaborative learning outcomes between 7% and 41% of the time. Although the evidence for individual applied and collaborative learning outcomes was low, because there were eight unique outcomes, students could show evidence of applied and collaborative learning without having provided evidence of every learning outcome.
Table 3
Evaluations Crediting Applied and Collaborative Learning Outcomes

Learning outcome          Faculty sponsor n (%)    Naïve evaluator n (%)
Understands literature    25 (66%)                 20 (22%)
Analyzes literature       21 (55%)                 15 (17%)
Project goals             32 (84%)                 37 (41%)
Project hypothesis        33 (87%)                 32 (36%)
Research design           29 (76%)                 29 (32%)
Data collection           29 (76%)                  6 (7%)
Data analysis             22 (58%)                 22 (24%)
Draws conclusions         20 (53%)                 20 (22%)

Note. Faculty sponsor n = 38. Naïve evaluator n = 90.
Naïve evaluators agreed that five (13%) of the 38 ePortfolios failed to show evidence of any applied and collaborative learning. However, the faculty sponsors of these students reported evidence of between four and eight of the applied and collaborative learning outcomes. We conclude that these five students demonstrated applied and collaborative learning but failed to document mastery in their ePortfolios.
ePortfolio Capture Rates

To determine the extent to which naïve evaluators were aware of students' mastery of learning outcomes, we limited our sample for each learning outcome to students who had been credited by their faculty sponsor. These numbers are reported in Tables 4 and 5. We then calculated the number of times at least one naïve evaluator credited each of these students in order to determine the percentage of times naïve evaluators credited learning that had been credited by sponsors. We believe that these percentages are the best measure of the extent to which naïve evaluators were able to see evidence of actual learning. We will refer to this measure as ePortfolio capture.

As can be seen in Table 4, the ePortfolio capture rates for academic identity outcomes were excellent, with rates ranging from 67%-87%. However, as can be seen in Table 5, ePortfolio capture rates for applied and collaborative learning outcomes were lower, ranging from 14%-67%. Capture rates were over 50% for project goals, project hypothesis, and research design. Capture rates ranging from 30%-40% were observed for understands literature, analyzes literature, data analysis, and draws conclusions. The lowest capture rate of 14% was observed for data collection.
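Under this definition, the capture rate for a single outcome can be computed as in the following sketch. A student counts as captured if at least one naïve evaluator credited an outcome that the sponsor credited; the identifiers and data are hypothetical.

def capture_rate(sponsor_credited, naive_codes):
    """ePortfolio capture: of the students credited with an outcome by
    their sponsor, the share credited by at least one naive evaluator.

    sponsor_credited: set of ids of students credited by their sponsor
    naive_codes: dict of student id -> list of 0/1 naive-evaluator codes
    """
    captured = sum(1 for s in sponsor_credited if any(naive_codes.get(s, [])))
    return captured / len(sponsor_credited)

# Hypothetical data for a single learning outcome
sponsor_credited = {"s01", "s02", "s03", "s04"}
naive_codes = {"s01": [1, 0], "s02": [0, 0], "s03": [1, 1], "s04": [0, 1]}
print(capture_rate(sponsor_credited, naive_codes))  # -> 0.75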
Differences in Sponsor and Naïve Evaluator Ratings

Another way to determine whether naïve evaluators can reliably assess student learning is to compare how often faculty sponsors credited students with learning that the majority of naïve evaluators did not with how often the majority of naïve evaluators credited a student with learning that the faculty sponsors did not. Because the faculty sponsors had the benefit of both ePortfolio review and personal knowledge, we expected there would be more instances in which sponsors credited learning than the reverse. We found that there were far more instances in which sponsors credited students with learning when the naïve evaluators did not (18%-58%) than there were instances in which naïve evaluators credited students with learning when the sponsors did not (0%-11%). The most common applied and collaborative learning outcomes credited by naïve evaluators but not sponsors were draws conclusions (11%), project goals (8%), and research design (8%). The most common academic identity outcomes that were credited by naïve evaluators but not faculty sponsors were confidence sharing ideas (11%) and plans for academic future (8%).

Differences between Students

Although all of the students who participate in our program are enrolled in community colleges when they apply, some students enter our program ready to attend a 4-year school while others plan to return to community college. We divided our sample into students who would be attending a 4-year institution after completion of the summer program (n = 16) and students who would be returning to their two-year institution (n = 22). We hypothesized that more advanced students might have been better able than less advanced students to master (or document) applied and collaborative learning at the bachelor's level. Table 6 reports the average number of learning outcomes associated with applied and collaborative learning and academic identity as a function of academic status and evaluator.
Table 4
Academic Identity Outcomes: Frequency of Naïve Evaluator Credit for Outcomes Credited by Faculty Sponsor

Learning outcome             Credited by sponsor    Credited by naïve evaluator n (%)
Hopes and goals              18                     12 (67%)
Confidence sharing ideas     31                     27 (87%)
Academic future              31                     21 (68%)
Careers                      33                     23 (70%)
Learning from experience     27                     20 (74%)
Engagement with research     32                     27 (84%)
Table 5
Applied and Collaborative Learning: Frequency of Naïve Evaluator Credit for Outcomes Credited by Faculty Sponsor

Learning outcome          Credited by sponsor    Credited by naïve evaluator n (%)
Understands literature    25                     10 (40%)
Analyzes literature       21                      8 (38%)
Project goals             32                     20 (63%)
Project hypothesis        33                     22 (67%)
Research design           29                     15 (52%)
Data collection           29                      4 (14%)
Data analysis             22                      8 (36%)
Draws conclusions         20                      6 (30%)
Table 6
Average Number of Learning Outcomes Associated with Applied and Collaborative Learning and Academic Identity, Reported as a Function of Academic Status and Evaluator

                                               Community college    4-year college
Learning outcome             Evaluator         students M (SD)      students M (SD)    All students M (SD)
Applied and collaborative    Faculty sponsor   5.41 (2.40)          5.75 (2.05)        5.58 (0.35)
learning (8 items)           Naïve evaluators  1.61 (1.82)          2.77 (2.57)        2.19 (0.24)
Academic identity (6 items)  Faculty sponsor   4.23 (0.37)          4.94 (0.44)        4.02 (0.27)
                             Naïve evaluators  2.98 (0.23)          3.01 (0.32)        3.61 (0.22)
To investigate the effects of academic status and evaluator on applied and collaborative learning we calculated a 2 (academic status: 2-year school, 4-year school) x 2 (evaluator: faculty sponsor, naïve evaluator) ANOVA on the total number of learning outcomes credited. We observed a significant and strong effect of evaluator, F(1, 127) = 63.11, p < .001, ηp² = .33, such that faculty sponsors reported more evidence of learning (M = 5.58) than naïve evaluators (M = 2.19). We also observed a marginally significant effect of academic status, F(1, 127) = 3.11, p = .08, ηp² = .03, such that more advanced students were credited with higher levels of applied and collaborative learning (M = 4.26) than less advanced students (M = 3.51). We failed to observe an interaction.

To investigate the effects of academic status and evaluator on academic identity we calculated a 2 (academic status: 2-year school, 4-year school) x 2 (evaluator: faculty sponsor, naïve evaluator) ANOVA on the total number of academic identity outcomes credited. We observed a significant effect of evaluator, F(1, 127) = 19.74, p < .001, ηp² = .14, such that faculty sponsors reported more evidence of identity (M = 4.58) than naïve evaluators (M = 3.04). We failed to observe an effect of academic status or an interaction between academic status and evaluator.
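For readers replicating this design, a 2 x 2 between-subjects ANOVA of this form can be run with standard statistical software. The sketch below uses Python's statsmodels with hypothetical data; the factor and column names mirror the design but none of this is the authors' code.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per evaluation, with the total
# number of outcomes credited as the dependent variable.
df = pd.DataFrame({
    "status":    ["2yr", "2yr", "4yr", "4yr"] * 8,
    "evaluator": ["sponsor"] * 16 + ["naive"] * 16,
    "credited":  [5, 6, 6, 7, 4, 5, 6, 8, 6, 5, 7, 6, 5, 4, 6, 7,
                  2, 1, 3, 4, 1, 2, 3, 2, 2, 3, 4, 3, 1, 2, 2, 4],
})

# 2 (status) x 2 (evaluator) between-subjects ANOVA
model = ols("credited ~ C(status) * C(evaluator)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))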
Discussion

In the current work, we describe our use of rubrics to assess applied and collaborative learning and academic identity in ePortfolios that were created during a summer research program for community college students from underserved groups. Faculty sponsors evaluated individual students using their knowledge from personal interactions as well as the student's ePortfolio. Naïve evaluators evaluated students using only the ePortfolio.
As hypothesized, we found evidence that ePortfolios document applied and collaborative learning and academic identity. We also found that the rubrics we designed to assess these constructs appear to be reliable measures of the constructs under investigation.

The rubrics were developed following practices that support their validity. Experienced mentors who were familiar with the research program and its goals worked with assessment experts to develop and refine the rubrics over a year during which they used them to assess students. Construct reliability is supported by the adequate levels of inter-item similarity obtained for each measure. We observed variations in the quality of individual naïve evaluators' work. The range of quality was demonstrated by the improvement in inter-evaluator reliability when three evaluators assessed a single student and the two with the highest levels of agreement were included in the calculation. Nonetheless, the observed levels of inter-rater reliability are well within the range of reliability generally seen with rubric use (Jonsson & Svingby, 2007). We conclude that these rubrics are valid and reliable tools that might be used in the assessment of student ePortfolios.

As expected, faculty sponsors credited students with more learning outcomes than naïve evaluators. We believe that this level of disagreement should be interpreted as evidence of the rubrics' sensitivity. An effective rubric should differentiate between the learning evidenced in an ePortfolio alone and the learning evidenced in many hours of shared work in addition to an ePortfolio. This interpretation is supported by the finding that, for every measured learning outcome, faculty sponsors gave credit more frequently than naïve evaluators. Similarly, faculty sponsors gave credit when naïve evaluators did not more frequently than naïve evaluators gave credit when faculty sponsors did not.

The few instances in which naïve evaluators gave credit when faculty sponsors did not may provide insight into the standards employed by the two types of evaluators. It is possible that faculty sponsors have higher standards of proof than naïve evaluators. We believe this is likely to have affected the evaluation of the applied and collaborative learning outcomes. The applied and collaborative learning outcomes that were most often credited by naïve evaluators but not faculty sponsors were draws conclusions, project goals, and research design. For each of these learning outcomes, two or three students received credit from naïve evaluators but not faculty sponsors. These three outcomes are the most conceptually difficult applied and collaborative learning outcomes and the most specific to the individual research group. Accordingly, faculty sponsors may have been somewhat less lenient than naïve evaluators in crediting these outcomes.

We were primarily interested in determining the extent to which actual learning could be assessed via ePortfolios. Naïve evaluators credited individual students with 2.19 outcomes on average (out of eight possible). On the basis of this finding we conclude that, in general, naïve evaluators require a high standard of proof before credit is awarded. Additionally, when we limited our sample for each learning outcome to students who had been credited with the outcome by their faculty sponsor, we observed relatively high ePortfolio capture rates (30%-67%) for all individual learning outcomes except data collection (14%). We believe that high ePortfolio capture rates indicate that naïve evaluators were able to credit actual learning.

Individual ePortfolios varied in their quality and completeness. We found that there was a trend in which students who were ready to transfer to four-year schools demonstrated evidence of more applied and collaborative learning outcomes than students who were planning on returning to community colleges. Given that we assessed applied and collaborative learning at the bachelor's level, it is not surprising that students who were prepared to transfer demonstrated more proficiency than those who were not yet ready to transfer. This finding can be interpreted in several ways. It is possible that more advanced students learned more than less advanced students. However, it is also possible that students who were about to transfer to a 4-year school were more able or more motivated to document their learning than students returning to community college.

We found that, regardless of academic level, students were similarly able to document academic identity in their ePortfolios. We believe that this is an important finding. In previous work we have argued that the development of academic identity in summer research programs may be a central element that leads to increased academic persistence (Singer-Freeman et al., 2014). Students with an enhanced sense of academic identity who return to community college may be more likely to transfer to a four-year school in the future, and those who go on to four-year schools may be more likely to complete their bachelor's degrees.

Limitations and Implications

This research occurred in an applied setting. Consequently, it was subject to lower levels of control than a more structured experiment would be. We elected to provide relatively little training to our evaluators. By examining the use of the rubrics without extensive training we were able to determine the extent to which the rubrics themselves enabled reliable evaluations. We found that levels of inter-rater reliability were within ranges reported elsewhere.
Nonetheless, if these rubrics were to be used with the purpose of crediting prior learning, training on sample ePortfolios would be advisable. If such training is not conducted, using three naïve evaluators would be preferable to using two.

Because this research was conducted in the context of a summer research program, we compared the ratings of faculty sponsors to naïve evaluators. In addition to having more knowledge of the student's abilities, faculty sponsors also had a personal relationship with the student. A more controlled evaluation would compare the ratings of two sets of naïve evaluators: one that reviewed only the ePortfolio and a second that reviewed the student's work over the entire program. We hypothesized and found that for most outcomes, faculty sponsors credited students with greater learning than naïve evaluators. We interpreted this to reflect the fact that faculty sponsors had more genuine knowledge of students' abilities. However, it is also possible that faculty sponsors credited more learning because of the influence of having a personal relationship with the student. We believe that this interpretation is unlikely for several reasons. First, faculty sponsors were not sharing their evaluations with the student. This should reduce faculty sponsors' focus on their personal relationship with their students. Second, faculty sponsors also served as naïve evaluators. Serving as a naïve evaluator should help the faculty sponsor take a neutral perspective when considering the work of his/her own students. Third, faculty sponsors knew that their students were also being assessed by naïve evaluators. This knowledge should encourage impartiality. Finally, the few instances in which faculty sponsors did not credit learning that was credited by naïve evaluators involved outcomes that were the most conceptually difficult. This suggests that in these instances faculty sponsors used their more extensive knowledge of the student to determine that the evidence in the ePortfolio was not sufficient to document mastery. It seems unlikely that if faculty were biased to credit students because of a personal relationship they would exhibit this bias only for the less difficult learning outcomes.

To give students a sense of autonomy in the creation of their ePortfolios, we allowed students to create their ePortfolios without using a template. We believe that this may have increased the extent to which students documented and developed academic identity by encouraging them to fully engage with the ePortfolio as a creative project. However, students' different organizational choices likely made it difficult for some evaluators to locate evidence of individual learning outcomes. We hypothesize that inter-rater reliability was impeded by the unstructured nature of the students' ePortfolios. Were students creating ePortfolios for the purpose of receiving credit for prior learning, it would be advisable to develop a template that was organized by learning outcome and specified the products to be included to demonstrate mastery. In contrast, for those using ePortfolios as a means of developing and documenting academic identity, it may be important to allow students more independence in the creation of their ePortfolios.

Miller and Morgaine (2009) found that the reflective practices embedded in ePortfolio creation helped students to develop academic identity as they engaged in complex projects. The use of writing prompts (see Appendix) appeared to encourage expressions of academic identity in student ePortfolios. Our current evidence of academic identity in student ePortfolios replicates Singer-Freeman et al. (2014). The rubrics tested in this work appear to be a reliable means of assessing academic identity. The construct of academic identity is similar to those of academic self-efficacy and academic goals, which have been found to be moderately related to academic persistence (Robbins et al., 2004). We believe that academic identity should be a central element of student ePortfolios and that the evaluation of academic identity in student ePortfolios is an important area for future research.

We found evidence to support our hypothesis that ePortfolios would be useful in the evaluation of academic identity and applied and collaborative learning at the bachelor's level. As expected, direct faculty knowledge of students provided more robust evidence of learning and identity than ePortfolios alone. Nonetheless, ePortfolio assessment did provide evidence of prior learning and identity in the current study. Proficiencies similar to those evidenced by our students are likely to be present in other high-impact activities (e.g., internships, global learning, learning communities). The use of rubric-based assessment of ePortfolios by trained evaluators in these contexts could have similar value.

References

Adelman, C., Ewell, P., Gaston, P., & Geary Schneider, C. (2014). The degree qualifications profile 2.0, Draft. Indianapolis, IN: Lumina Foundation for Education. Retrieved from https://www.luminafoundation.org/files/resources/dqp.pdf

Barlow, A. E. L., & Villarejo, M. (2004). Making a difference for minorities: Evaluation of an educational enrichment program. Journal of Research in Science Teaching, 41, 861-881. doi:10.1002/tea.20029

Bryant, L. H., & Chittum, J. R. (2013). ePortfolio effectiveness: A(n ill-fated) search for empirical support. International Journal of ePortfolio, 3(2), 189-198. Retrieved from http://www.theijep.com/pdf/ijep108.pdf