The Democracy Empowerment Rubric (In Early Childhood Education): A Summary and Analysis of Early Educators’ Feedback

Introduction

The Democracy Empowerment Rubric (DER) (see the DER on page 20 of Mardell & Hanna, 2016) was developed by Ben Mardell and Michael Hanna at Lesley University as a tool for examining whole group conversations in early childhood education settings and measuring whether a classroom is building knowledge collaboratively and functioning as a democratic learning group. Standards and measurements have become common in early childhood education and often influence practice. The DER is designed to fill a gap its creators believe exists, as there is currently no measure of the democratic nature of classroom discussions (Mardell & Hanna, 2016). The DER has not been put into widespread use at this time. The purpose of this study is to get feedback about the DER from early educators currently in the field, primarily to find out whether there is interest in using the DER, whether they understand it, and how it might be improved. While Mardell and Hanna noted there is no current measure of democracy in group discussions, many widely used tools exist which evaluate early education classrooms; these tools often include measures of teacher-child and child-to-child interactions. This paper will begin with a review that summarizes the current state of measuring group interactions in early childhood education settings. The review will summarize what currently used tools measure and analyze where they might overlap with the DER and where the DER measures aspects of discussions that these tools do not measure at all.

Review of Current Assessments and Standards

CLASS (Classroom Assessment Scoring System)

            The CLASS is a widely used standardized instrument which can be employed by researchers, policy makers, and practitioners to evaluate the quality of classroom environments from preschool to third grade (Pianta, La Paro, & Hamre, 2008). CLASS looks at a broad range of aspects of the classroom environment, including materials and space. This review will only look at the aspects of CLASS that may be related to group discussions, which are the focus of the DER. Under the section "Instructional Learning Formats," CLASS includes measures of students' interest and the clarity of the learning objectives during lessons. Specifically, CLASS looks to see if students are actively participating, listening, and focusing their attention during activities, and then rates the level of student interest on a scale from one to seven. Similarly, CLASS rates how effectively the teacher orients children toward the learning objectives and the focus of the lesson (Pianta et al., 2008). Here we have an example of where the DER differs from CLASS. In CLASS the focal point is on making sure students are focused on predetermined learning objectives, while the DER focuses on measuring collective meaning making that centers on children's questions, ideas, theories, and stories (Mardell & Hanna, 2016). Moreover, no aspect of the DER attempts to measure how a teacher orients students toward a specific learning objective.

            CLASS has another section that measures concept development, specifically how teachers use instructional discussions to promote higher order thinking skills, and it rates whether teachers rarely, sometimes, or often allow students to generate their own ideas and products (Pianta et al., 2008). On the surface, this appears to overlap with section 3d of the DER, which asks "Does the conversation yield generative ideas?" (Mardell & Hanna, 2016). However, when we look more closely at what each tool is measuring, there are key differences. CLASS includes examples of questions teachers can ask to help children generate their own ideas. In a separate measure, CLASS scores the feedback the teacher gives during these conversations and whether the teacher asks students to explain their thinking (Pianta et al., 2008). The DER focuses on measuring whether students are collaborating in conversations which allow them to create ideas and make meaning collectively (Mardell & Hanna, 2016). Here is where we find the key difference in what is being measured. In the CLASS rating scale, the teacher is still at the center of guiding the creation of ideas, and there is no mention of evaluating how collaborative the idea generation is. In the DER we find a measure of whether collaboration between the students is occurring and whether the teacher is making an effort to create a culture of collaborative learning. This is a different focus in what is being measured, and what is being measured often influences practice.

Caregiver Interaction Scale

            The Arnett Caregiver Interaction Scale is a frequently used tool for measuring caregiver-child interactions in early childhood settings. The Massachusetts Department of Early Education and Care recommends its use and makes it available for free download on its website. [1] Unlike the DER, the Arnett does not focus on group times but on general classroom interactions between caregivers and students. Though the two tools have different focuses, it is the opinion of the researcher that comparisons between the DER and the Arnett should be made given the Arnett's popularity as a measure of interactions between children and caregivers.

            The Arnett is a Likert-type scale consisting of twenty-six statements that are rated from one to four, with one being "not at all true" and four being "very much true." The statements relate to caregiver-child interactions with a focus on attentiveness, harshness, and permissiveness. For this literature review, the researcher will focus on the statements of the scale which seem most relevant to analyzing group time discussions. These statements are as follows:

“4. Places high value on obedience

  8. Encourages children to try new experiences

  9. Doesn’t try to exercise much control over the children

 11. Seems enthusiastic about the children’s activities and efforts

 14. Pays positive attention to children as individuals

 21. Doesn’t seem interested in the children’s activities

 22. Seems to prohibit many of the things that children want to do” (Arnett, 1989).

            Unlike the DER, the Arnett has no focus on the distribution of talk among children or on encouraging children to collaborate. Additionally, the Arnett does not measure whether any ideas are being generated through the interactions between the caregiver and the children, nor does it attempt to analyze the type of language being used.

            There are some aspects of the Arnett which can be seen as having mild overlap with the DER. Statements 9, 11, 14, and 22 could all be thought to relate to DER section two, which is titled "Who is in Charge" (Mardell & Hanna, 2016). If the teacher is trying to exercise control over group times and is prohibiting the things that children want to do, as stated in the Arnett, then that could correlate with statements in section two of the DER such as "teacher controls all aspects of the meetings, calling on children, and deciding the agenda."

            Despite the possibility of aspects of the Arnett being used to evaluate group time discussions in a manner similar to the DER, overall there appears to be little overlap between how the DER and the Arnett evaluate interactions in the classroom. Thus, an argument could be made that both instruments could be used concurrently, with each measuring a different aspect of classroom interaction.

 

Massachusetts Guidelines For Preschool Learning Experiences

            The Massachusetts Department of Education created guidelines for preschool learning experiences. The researcher reviewed these guidelines to determine which were relevant to group discussions. Those guidelines are listed below, followed by the researcher's analysis of how they compare to the DER.

“Language Standard 1: Observe and use appropriate ways of interacting in a group (taking turns in talking; listening to peers; waiting until someone is finished; asking questions and waiting for an answer; gaining the floor in appropriate ways).

Language Standard 2: Participate actively in discussions, listen to the ideas of others, and ask and answer relevant questions.

History and Social Science Standard 3: Identify and describe cause and effect as they relate to personal experiences and age-appropriate stories.

History and Social Science Standard 6: Discuss examples of rules, fairness, personal responsibilities, and authority in their own experiences and in stories read to them.

History and Social Science Standard 7: Talk about the qualities we value in a person’s character such as honesty, courage, courtesy, willingness to work hard, kindness, fairness, trustworthiness, self-discipline, loyalty, and personal responsibility.

History and Social Science Standard 8: Discuss classroom responsibilities in daily activities.

History and Social Science Standard 9: Discuss roles and responsibilities of family or community members who promote the welfare and safety of children and adults.

Social and Emotional Health Standard 17: Talk about ways to solve or prevent problems and discuss situations that illustrate that actions have consequences.

Social and Emotional Health Standard 18: Talk about how people can be helpful/hurtful to one another” (Council, 2003).

            In the Massachusetts guidelines, language standards one and two appear to be most relevant to group discussions. The MA guidelines make no mention of the distribution of talk, collective meaning making, connectedness of the conversation, or other factors that the DER attempts to measure. Instead, the MA guidelines seem focused on helping children learn socially acceptable rules for entering and participating in group conversations, without a more in-depth analysis of those conversations.

            In the MA guidelines, the sections under History and Social Science and Social and Emotional Health contain standards related to class discussions. The standards are written as guides to teachers on what to have children discuss, such as character, problem solving, and how people can be helpful. These guidelines could possibly result in collective meaning making and the generation of ideas, both of which are measured by the DER. In this case, it can be argued that there is some overlap. On the other hand, the predetermined goals for the direction of the instruction, and the fact that guidelines are meant to shape teacher practice, suggest that the guidelines encourage teacher control of the conversation and teacher steering of the meaning making. Thus, it could be postulated that the end goals of the MA guidelines and a highly rated group interaction as rated by the DER are not congruent.

Massachusetts Standards For Preschool and Kindergarten: Social and Emotional Learning, and Approaches to Play and Learning

            In 2015, the state of Massachusetts expanded on previous guidelines by creating the Massachusetts Standards For Preschool and Kindergarten: Social and Emotional Learning, and Approaches to Play and Learning. The researcher looked over these standards to see which were most relevant to group discussions. Those standards are listed below.

“Standard SEL7: The child will demonstrate the ability to communicate with others in a variety of ways.

Standard SEL8: The child will engage socially and build relationships with other children and adults.

Standard APL4: The child will demonstrate creativity in thinking and use of materials.

Standard APL5: The child will cooperate with others in play and learning” (Care, 2015).

            Each of the standards above is given more clarity with examples listed underneath it. While arguments could be made that these standards apply to group discussion, the details given under each standard contain little or no reference to group discussions, nor do the standards focus on aspects similar to those of the DER. Here we can say with certainty that the DER measures aspects of group discussion which are not highlighted in the Massachusetts Standards For Preschool and Kindergarten: Social and Emotional Learning, and Approaches to Play and Learning.


National Association For The Education of Young Children (NAEYC) Standards

            Receiving accreditation from NAEYC has long been considered the gold standard for early education and care programs that want to demonstrate quality. As part of the accreditation process, programs are observed and create a portfolio in which they must demonstrate that they meet a variety of standards. The standards cover all aspects of a program, but the researcher will highlight below the standards which may relate to group discussions and then compare them to what is measured by the DER.

“Standard 3.G.09: Teachers engage in collaborative inquiry with individual children and small groups of children.

Standard 2.L.06: Children have varied opportunities to engage in discussions about fairness, friendship, responsibility, authority, and differences” (Children, 2014).

            Standard 3.G.09 is of note in that it discusses collaborative inquiry with individual children and small groups. If it included large groups, then one might be able to find some overlap with aspects of the DER. Standard 2.L.06 deals with topics which might be discussed in group settings but does not address the nature of those discussions. This sets it apart from the DER, which focuses on the distribution of the discussion and the collective meaning making that can happen during group discussions.

            Having compared the DER to some common standards and assessment tools related to classroom interactions, Mardell and Hanna's claim that the DER is a unique instrument appears to be valid. The DER has only minimal overlap with the other instruments and standards, and the DER's primary focuses are absent from these common tools. Having shown that the DER is a distinctive instrument, the researcher will now seek feedback from early educators about the usefulness of the DER, whether early educators would want to use it, how it could be improved, and what training educators would want before using it.

 

Recruitment

The research goal was to get feedback about the DER from a variety of professionals in the field of early childhood education in order to assess the potential usefulness of the rubric from a practitioner's perspective and to receive constructive criticism on how the rubric could be improved.

The interview subjects were chosen based on a convenience sample. The researcher, through working as a preschool teacher, an early education professor, and an early education college student at four different universities in the Boston area, had built up a large network of connections in the early education field. Potential interviewees were recruited by sending personalized emails to current and former early education colleagues and classmates. All emails contained a PDF copy of the Democracy Empowerment Rubric and a link to the journal article by Ben Mardell and Michael Hanna, "The Democracy Empowerment Rubric: Assessing Whole Group Conversations in Early Childhood Classrooms," published in the fall 2016 issue of Lesley University's Journal of Pedagogy, Pluralism and Practice. Recipients were also given a summary of the rubric and asked to respond to the email if they were interested in being interviewed, either face to face or via an online platform such as Google, Facebook, or Skype, after having time to review the rubric.

In all, twenty-five emails were sent out and fifteen people expressed initial interest in being interviewed. Of those fifteen people, eight interviews were actually conducted. The seven people who initially expressed interest but were not interviewed dropped out of the process for a variety of reasons, including no longer having the time, no longer having interest, or, in some cases, ceasing to respond to emails without explanation. All of the participants who were interviewed were from the Boston area and female. That all participants were female was expected given the demographics of the field of early education, which is dominated by women. The participants had a broad range of backgrounds in the field of early education. The following is a short summary of each participant's experience.

Participant 1: They have six years of experience as a pre-k teacher at an early education program based out of a Jewish community center and a bachelor's degree in psychology.

Participant 2: They have fifteen years of experience as a preschool teacher. In addition, they have fifteen years of experience as an early education professor at an associate degree early childhood education program in the Boston area and a master's degree in early childhood education.

Participant 3: They have seven years of experience as a preschool teacher at a nonprofit early education center in Boston and a bachelor's degree in early childhood education.

Participant 4: They have two years of experience as an assistant preschool teacher. Prior to that, they were a volunteer parent in their children's early childhood program for three years. They have an associate's degree in early childhood education.

Participant 5: They have twenty years of experience as a preschool teacher at a nonprofit early education center in Boston and a bachelor's degree in early childhood education.

Participant 6: They have thirteen years of experience as a preschool teacher, five years of experience as an early intervention specialist, and a master's degree and post-master's degree in early childhood education.

Participant 7: They have seven years of experience as a preschool teacher and two years of experience as an assistant director of an early education center. They have a bachelor's degree in psychology and have taken early education courses at another institution as well.

Participant 8: They have four years of experience as a preschool teacher at a nonprofit early education center and a bachelor's degree with a double major in psychology and sociology.

Interview Methods

Five of the interviews were conducted in a one-on-one format. Three participants who worked at the same nonprofit early education center were interviewed together in a focus group format. It should be noted that the education level of the interview subjects was, on average, on the higher end for the early childhood education field outside of the public school system. None of the participants had any knowledge of the Democracy Empowerment Rubric before being recruited to be interviewed. All of the interview subjects had most of their experience working at early education programs in the Boston area outside of the public school system. Participants were not compensated for agreeing to be interviewed aside from an offer to be bought a coffee, tea, or smoothie.

Interview subjects signed a consent form (see appendix) before being interviewed and understood that they could opt out of the process at any time and that neither their names nor the names of their workplaces would be used in the write-up of the research findings.

Participants were all asked the following questions:

1. What positions have you held related to the field of early childhood education?

2. Having read over the Democracy Empowerment Rubric, did the rubric cause you to think about discussion during group times in new ways? If yes, how?

3. Would the Democracy Empowerment Rubric be something you would be interested in using to assess interactions in a classroom? Why or why not? 

4. Are there any ways you think the rubric (or any section of the rubric) could be improved? 

5. Are there any aspects of the rubric which you find confusing or would want clarification on? 

6. What type of training (if any) would you want related to the rubric before using it as a tool to evaluate group conversations in your program?

Additional specific and follow up questions were asked based on interviewees’ experiences and specific responses. 

Data and Analysis

            After conducting the interviews, the researcher coded responses into themes when more than one participant expressed similar viewpoints. In addition, the researcher highlighted individual responses that gave unique feedback about the rubric. The following paragraphs will first discuss the broad themes found in the responses and then focus on individual participants' responses that could be helpful in analyzing the central research questions of the study.

            All of the participants responded that reading over the rubric caused them to think about group time discussions in new ways. A frequent refrain from the respondents related to the balance of power in who was doing the speaking during group discussions. Participant Six noted, "It causes me to reflect on the balance of power in who is speaking. I generally do not think the children are in a democratic society in the classroom. I am now wondering more about how group discussions can be more democratic and the usual circle time of being just thought as instructional." The three participants who worked at the same nonprofit all noted that in their practice they will often ask a single question, such as "what did you do this weekend" or "what can you do with snow," and then go around the classroom and have every child respond to the same prompt. When asked if they were explicitly taught to conduct group discussions this way in college, they said they were not. Participant Three remarked that her practice during group discussions was influenced more by observing and modeling the teacher she worked with during her student teaching experience than by what she was taught in college; she joked that a lot of what one learns in college leaves your brain soon after and is replaced by your experiences in the classroom, and that having something like the DER handy to look at could remind you of some of the things you may have forgotten.

            Reading the DER caused many of the participants to reflect on their practice and remember things they had learned or wanted to do before. Participant One noted how the rubric made her remember things she had read and learned about classroom discussions but did not always put into practice, saying, "the rubric reminds me topics I was already aware to be useful during group discussions but was helpful in reiterating the importance in some of those ideas and making me think more carefully about things I know I want to do but I don't always notice whether or not I am doing those things or encouraging them in my classroom." Participant Four also touched on the idea of reflection, stating, "it made me go back and reflect on our classroom discussions, specifically how conversations get started, like are the kids starting the conversations and how much influence children have in starting or continuing these discussions or are the teachers controlling everything. It made me really think about the role the teacher plays in the beginning, middle, and end of group discussion." The power dynamics and control of the group discussion were a common theme, with Participant Five noting that in her twenty years of experience in the field, things have become more "cookie cutter" and children seem to have less control than when she first got started. Participant Two, who has spent years observing a variety of programs in her role as a professor and advisor, pointed out that the places she observes with emergent curriculum would score higher on the rubric, and that she is not sure whether centers that do not believe in emergent curriculum and go more by standards and guidelines would consider using something like the DER, because it may conflict with their program's philosophy and goals. In particular, she mentioned a medium-sized for-profit early education chain where teachers must follow a more scripted curriculum as opposed to focusing on children's interests. She said reading over the rubric made her stop and think about how different the goals and methods are at the different programs she has observed.

            Six of the eight participants expressed interest in possibly using the rubric in their practice. Among those who were interested, a common theme was that having the DER as a guidepost would help them improve their teaching practice. Participant One noted that she sometimes records and transcribes classroom conversations and then analyzes them on her own. She believes the DER "could be helpful tool during that time of re-assessing and examining documentation in order to drive a co-constructed curriculum in the classroom that best reflects students ideas and interests." Participant Seven, who is currently the assistant director of a program, stated that she printed out a couple of copies of the rubric and gave them to her two interns to observe each other's practice in the preschool and pre-k classrooms where they were doing their internships. She reported that the interns found using the DER to be an interesting experience and that she may try to use it again in the future. [2]

            Both of the respondents who were not interested in using the DER mentioned a feeling of being overloaded with different assessment tools and standards. Participant Six suggested that the concepts in the rubric are interesting and could be incorporated into existing performance reviews and/or social and emotional learning standards instead of being made into a separate evaluation tool.

            When discussing both improving the rubric and the need for clarifications, the participants offered several common suggestions. The most common had to do with terminology in the rubric that was unfamiliar to the participants. Some specific issues were brought up regarding terminology. Three participants mentioned the phrase "language of thinking" in section 3b of the DER as being especially confusing because they had not heard that term before or were unclear what it meant. Participant Eight mentioned that when she first read 3b she was confused about what it meant, and that reading level four under 3b gave her some clarification about what 3b was looking for, but that she was still a little confused. Participant Three mentioned a similar issue with section 1b of the rubric. She noted that the description of 1b and the first three levels of 1b make no mention of nonverbal participation in the conversation, yet at level four nonverbal participation is mentioned. She wanted clarification on whether nonverbal communication would be part of each level or only considered when a classroom approached the higher levels during group discussion.

            This leads to another aspect of the rubric which caused confusion among many respondents. Five of the respondents mentioned having worked with nonverbal students in their classrooms and wondered how the rubric would relate to those students. Participants Six and Eight both suggested that more explicit examples of what nonverbal communication and participation might look like in a group discussion could be added to the rubric. In addition, Participant Eight raised the topics of English Language Learners (ELLs), race, and gender, and asked whether there were ways for the DER to specifically look at the participation levels of ELLs and of children of different races and genders. She shared an anecdote of how her current group times are often dominated by a couple of boys who are on the louder and more talkative side. She wondered whether things like natural personality differences and societal factors that affect the willingness and eagerness of children of different genders and ethnicities to participate in group discussions should somehow be incorporated into the design of the DER.

            All of the participants who expressed interest in using the rubric, either as a tool to evaluate their own practice or to evaluate the practice of others, said they would like both training and coaching on how to use it. Participant Three specifically mentioned that she would not want "one of those hour long online training when you just watch a video; I would want to be able to talk to someone who has used the rubric before so I could ask them questions and even get advice and coaching after I started using it." On the other hand, Participant Seven specifically mentioned that she would love to watch videos of what each level looks like under each section. She brought up section 3a as an example, asking what the different levels of connectedness in the conversation would look like, because she often works with children who go off topic, which completely changes the direction of the subsequent responses. She wanted to know whether one would be ranked higher for trying to focus the conversation back onto the original topic or whether it would be better to let each child's aside lead the conversation in new directions. Three other participants also suggested they would like to see some sort of video explaining the rubric and how to use it. Participant One suggested that she would like someone more experienced to watch videos or listen to recordings of her group discussions and then evaluate them using the rubric so she could have a better idea of how it worked when using it to evaluate her own practice.

            Another aspect of the rubric that three participants wanted more information on was when the rubric could be used. Participant Eight mentioned that she likes to do a lot of small group activities and discussions with four to six children, and she wondered if the DER could be used in those situations or if it was only relevant to large group discussions. Participant Four articulated something similar, wondering if the rubric could be used as a measure throughout the day, even for evaluating settings like lunch time, where she feels a few students dominate the conversation.

 

Limitations

            This study included only a small number of early educators. The early educators were not representative of the field as a whole, as none worked in public schools and all of their primary experience was in preschool or pre-k settings. All of the participants except one had attained a bachelor's degree in early education or a related subject. This level of education is not representative of early educators at center-based programs as a whole. Moreover, since all of the participants were previously familiar with the researcher as colleagues or classmates, there is a chance that those prior relationships added some level of bias to the responses.

Conclusions

            The early educators surveyed felt that the DER helped them reflect on their practice and analyze their group discussions in ways that they do not currently. The majority of respondents would be interested in using the DER in the future. Those who were interested would like to receive ongoing coaching and training on how to use the rubric in order to feel confident enough to use it effectively. Further research is needed to confirm and attempt to generalize these findings. A wider range of early educators could be interviewed and/or surveyed to get more feedback about the rubric. For those interested in implementing the rubric, a follow-up study could be conducted after they attempted to implement it in order to get feedback about their experiences using the DER.

References

Arnett, J. (1989). Arnett Caregiver Interaction Scale. FPG Child Development Institute, UNC-Chapel Hill. Retrieved from http://fpg.unc.edu/sites/fpg.unc.edu/files/resources/assessments-and-instruments/SmartStart_Tool6_CIS.pdf

Care, T. M. D. o. E. E. a. (2015). Massachusetts Standards For Preschool and Kindergarten: Social and Emotional Learning, and Approaches To Play and Learning. Retrieved from http://www.doe.mass.edu/kindergarten/SEL-APL-Standards.pdf

Children, N. A. F. T. E. o. Y. (2014). NAEYC Early Childhood Program Standards and Accreditation Criteria & Guidance for Assessment. Retrieved from http://www.naeyc.org/files/academy/file/AllCriteriaDocument.pdf

Council, E. C. A. (2003). Guidelines for preschool learning experiences. Malden, MA: Massachusetts Department of Education. Retrieved from http://www.eec.state.ma.us/docs1/curriculum/20030401_preschool_early_learning_guidelines.pdf

Mardell, B., & Hanna, M. (2016). The Democracy Empowerment Rubric: Assessing Whole Group Conversations in Early Childhood Classrooms. The Journal of Pedagogy, Pluralism and Practice, VIII(1).

Pianta, R., La Paro, K., & Hamre, B. (2008). Classroom Assessment Scoring System Manual, Pre-K (3rd printing). Baltimore, MD: Paul H. Brookes Publishing.

[1] http://www.eec.state.ma.us/docs1/qris/20110121_arnett_scale.pdf

[2] This information was received in a follow-up email after the initial interview was conducted. An effort was made to interview the two interns, but logistics prevented this from happening, as they were focused on finishing their internships and finals in their college classes.
