These surveys were first pilot-tested at six engineering programs across the United States. Tests of validity and reliability were conducted on both instruments, which were then refined and shortened based on the psychometric properties of the items in the original versions. The revised instruments were tested at five engineering programs and analyzed using factor analysis and test-retest reliability correlations.
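The article does not report what software was used for the factor analysis. Purely as an illustrative sketch of the technique, the following uses scikit-learn's `FactorAnalysis` on simulated Likert-type survey data; the item counts, factor structure, and noise level here are invented for the example, not taken from the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 respondents answering 10 survey items driven by 2 latent factors
# (hypothetical numbers chosen only for this illustration).
loadings = rng.normal(size=(10, 2))            # true item loadings
factor_scores = rng.normal(size=(200, 2))      # respondents' latent scores
responses = factor_scores @ loadings.T + 0.5 * rng.normal(size=(200, 10))

# Exploratory factor analysis: estimate how each item loads on each factor.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)

# fa.components_ has shape (n_factors, n_items); inspecting which items load
# heavily on the same factor is how scales like those in the E-FSSE are grouped.
print(fa.components_.shape)
```

In practice, items that load strongly on a common factor are grouped into a named scale, which is the process that produced the outcome scales discussed below.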
The Faculty version of the survey (E-FSSE) asked faculty members to “Think about graduating seniors in your program. Please rate their ability, on average, to do the following:” and listed 50 learning outcomes derived from ABET, Inc., the EC2000 study, and the original versions of the NSSE and FSSE. Another section asked the faculty to respond “based on one particular upper-level undergraduate engineering course section you are teaching or have taught in the past five years.” This section listed instructional practices as well as student behaviors. The final section asked faculty to rate various instructional practices in terms of both perceived importance and actual completion.
The Student version of the survey (E-NSSE) asked students to rate themselves on the same learning outcomes and behaviors as in the FSSE. Students were also asked to rate the frequency of various instructional practices their engineering faculty demonstrated during the courses they took in their engineering major. Both surveys collected demographic information about the respondents. The current round of testing required respondents to complete the surveys twice; 19 faculty members and 261 students provided complete data at both administrations.

Overall, the test-retest reliability of the E-FSSE and E-NSSE was satisfactory. The student survey items were all correlated from Time 1 to Time 2, as were a majority of the faculty survey items. Interestingly, while the faculty responses yielded several distinct factors describing student outcomes in engineering education, the student responses yielded one large scale, labeled “General Engineering Skills” because it encompassed a majority of the learning outcomes. Future research should examine the reasons behind this difference.
The internal consistency of the individual scales was also satisfactory, with most of the factors yielding Cronbach’s alpha values above the generally accepted .70 threshold. However, future testing is needed to determine whether the weaker factors should remain in the surveys as-is or should be modified to yield stronger scales. In addition, confirmatory factor analyses should be conducted with large groups of respondents. The small sample size of faculty respondents precluded confirmatory analysis in the present study, although the exploratory analysis identified 25 distinct factors.
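Cronbach's alpha for a scale of k items follows directly from the item variances and the variance of the summed scale score. A hedged sketch of the standard formula (the function and toy matrix below are illustrative, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(scale total))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return float((k / (k - 1)) * (1 - item_vars.sum() / total_var))

# Toy example: two items that agree perfectly give the maximum alpha of 1.0.
print(cronbach_alpha(np.array([[1, 1], [2, 2], [3, 3]])))
```

Scales whose alpha falls below about .70 (such as the "Strong Work Ethic" scale noted below) are the candidates for revision or removal in future rounds of testing.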
These results indicate that the E-NSSE and E-FSSE may be used to assess elements of student engagement in engineering departments. In particular, the Student Outcomes scales in the E-FSSE had acceptable reliability (with the exception of the “Strong Work Ethic” scale), as did the highly inclusive “General Engineering Skills” scale in the E-NSSE. The items of the E-NSSE also had significant test-retest reliability, indicating that the survey items give consistent and dependable results across administrations. On the other hand, the Instructional Practices scales on the E-FSSE were less reliable, and several of the individual items did not have significant test-retest reliability, indicating that further testing of the E-FSSE may be necessary.
Project made possible with support from the National Science Foundation (via grants DUE-0404802 and DUE-0618125).