Technology always seems to be a divisive topic. My thoughts on what causes that conflict in 249 words below:
I am not exactly sure how to curate my many forays into technological conflict. I have had the requisite Google Docs battles. I have heard "but how do I print and mark a Google Doc?", argued the many variations of "but what happens when...", and ignored "my binder isn't offline". I have pushed for change with online bookings and digital cameras, even let students use the fearsome personal cell phone, and tried countless other "new ideas".

I will admit I push for change, often to the response "it is easy for you, you are a geek" or the ever-present "but you are good at that". I sometimes clarify that I choose to be good at that. I learned how to answer my own questions and troubleshoot. I saw the value and work to improve it. I read an article that, when summarized, said we need to stop saying "I'm not good with computers". It isn't acceptable for a teacher to say "I'm not good with books"; it should be the same thing.

Is the source of the conflict fear, or resistance, or the unknown? Are the excuses and barriers just a basic difference in belief, between those who believe they can understand and troubleshoot technology and those who believe they can't? As I read this week I realized the reasons for resistance are multifaceted and differ between individuals, but they must all share some part in a belief that the inputs do not create enough benefit in the results to warrant change.
Saturday 22 March 2014
Collecting Data - Survey Assignment 5
DATA vs...
Ok, so not that Data.
(I tried to take some screenshots of the "before", but it spans multiple pages and was really messy; I will blame my interpreter droid malfunctioning. I will describe what changed and post the final survey at the bottom, and hopefully that works.)
The biggest question to come up on my survey was anonymity. The results included the student's name. My test students did not have any problems directly, but they identified that this could be a concern if a student had a complaint and was unsure whether the instructor would take offence. It might be important to improve the intro to the survey to clarify how the data will be used and whether responses are anonymous. Making the survey anonymous was suggested, but with the rank information and skill progression that might be difficult; it would not be hard to figure out who the students likely were. Also, with such wide demographics in the club in age and fitness, it would be very difficult to analyze the data without specific details, and important trends might surface in different groups. This could be solved with a third party collecting the data and assigning numbers to names. The sample size of four or five was also difficult: the group is so diverse that the results varied significantly.
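To make the third-party idea concrete, here is a minimal Python sketch of how a neutral party might swap names for anonymous IDs before the instructor ever sees the responses. The student names, column names, and key-file name are all hypothetical; this is an illustration, not part of the actual survey.

```python
import csv
import uuid

def pseudonymize(rows, key_file="id_key.csv"):
    """Replace student names with anonymous IDs.

    The name-to-ID key is written to a separate file that only the
    third party keeps; the instructor sees the IDs only.
    """
    key = {}
    anonymized = []
    for row in rows:
        name = row.pop("name")
        # Reuse the same ID if a student appears more than once.
        if name not in key:
            key[name] = f"S-{uuid.uuid4().hex[:8]}"
        anonymized.append({"student_id": key[name], **row})
    with open(key_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "student_id"])
        writer.writerows(key.items())
    return anonymized

# Hypothetical survey rows exported from the form.
rows = [{"name": "Alex", "rank": "yellow", "motivation": 5}]
print(pseudonymize(rows))
```

This keeps rank and skill-progression details available for spotting trends in different groups while separating them from names.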
Some small-scale changes were suggested to deal with specifics, like adding an explanation of stripes and rewording some questions. The question asking for your "motivation" for joining karate was changed to ask for your "reason" for joining, to clarify the intent of the question.
The temptation to ride the middle of the scale was mentioned by one participant, so I decided to change the scale from seven points to six. With no neutral midpoint, respondents have to lean one way or the other, which might make the data easier to interpret in the future.
Further, one participant and I discussed whether I need more information on in-class communication versus communication outside of class. How might we ask about communication while teaching is happening? This might need to be a separate section in a future survey.
Some participants might be under age, so a parental consent section needed to be added or modified. Consent requires a signature, so it would need to be collected on paper beforehand or with alternate technology.
There were also the requisite technology problems, such as having the grid rows and columns backwards and some text-entry issues. However, the overall response to the digital format was positive. Google Forms has a refined experience that makes answering the questions easy without becoming monotonous, as some online sites can. The page breaks gave respondents a variety of question types on each topic, which broke up the repetitive data-type questions. The progress gauge showing how close a respondent was to finishing was useful for some, and the pages allowed them to focus on each part of the questions clearly. In the future I think some response validation could be used to make sure answers fit within the norm.
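Google Forms has its own built-in response validation, but the same norm check can also be run on responses after export. A minimal sketch, assuming the responses are exported as a CSV; the column names and acceptable ranges here are hypothetical:

```python
import csv

# Expected range for each question; names and bounds are made up.
EXPECTED = {
    "communication_rating": (1, 6),  # six-point scale
    "classes_per_week": (0, 7),
    "age": (5, 90),
}

def flag_outliers(path):
    """Yield (row number, column, value) for answers outside the norm."""
    with open(path, newline="") as f:
        # Row numbering starts at 2 to account for the header row.
        for i, row in enumerate(csv.DictReader(f), start=2):
            for col, (low, high) in EXPECTED.items():
                try:
                    value = float(row[col])
                except (KeyError, ValueError):
                    continue  # blank or non-numeric answers are skipped
                if not low <= value <= high:
                    yield i, col, value

for problem in flag_outliers("responses.csv"):
    print("check:", problem)
```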
Overall, the students who tried this survey felt it was clear and thought the questions related well to the subject matter. The scope of the survey might be too large, and in a final draft each area might need to be expanded. Take a look below.
THE SURVEY LINK
Labels: 809, assignment, ecurr, etad, karate, program evaluation, survey
Saturday 8 March 2014
Program Evaluation Plan - Assignment Three and Four
Labels: 809, assignment, ecurr, etad, karate, program evaluation
Monday 10 February 2014
Assignment 2
Evaluating a program effectively is difficult. I will endeavour to keep this response concise and coherent.

Approach
I think if I needed to evaluate this program I would use primarily the Provus model, with some consideration of Bennett's hierarchy of evidence for extension. Bennett's model is not an evaluation theory but a means of simplifying a program to look at its logic and how the inputs are planned to lead to results. The Provus model would allow for a thoughtful review of the design and implementation using the DIPP process. This would highlight changes to the procedure, as the needs and target demographic have been well documented and researched.

Rationale
This program is attempting to deal with a complex, multifaceted medical issue, and while doing so it is also addressing a difficult social issue. This makes for a challenging evaluation. The Provus model focuses on three main areas: (a) defining program standards, (b) determining discrepancy, and (c) using that information to change or modify the program. As I read, I am not sure the program has specific outcomes. The goal and need are made clear; however, the information does not address any way of knowing whether the goal is being met. I know this may be intentional, as assessment is specifically what I am attempting to discern, but I feel the program outline has failed to set any expectations of what success would look like. This first stage is where I found the Provus model difficult to apply. The program has taken many steps to improve the different areas of its work but seems to lack a plan for implementing change.
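To make the discrepancy idea concrete, here is a minimal sketch of the comparison at the heart of the Provus model: each standard is paired with what is actually observed, and the gap is what drives change. The measures and numbers below are entirely hypothetical, since the program has not stated its own standards.

```python
# Hypothetical program standards paired with observed performance.
standards = {
    "participants enrolled per term": 40,
    "sessions delivered per term": 12,
    "participants reporting behaviour change (%)": 60,
}
observed = {
    "participants enrolled per term": 28,
    "sessions delivered per term": 12,
    "participants reporting behaviour change (%)": 35,
}

# Determine discrepancy: where performance falls short of the standard.
for measure, target in standards.items():
    gap = target - observed[measure]
    status = "meets standard" if gap <= 0 else f"discrepancy of {gap}"
    print(f"{measure}: {status}")
```

The point of the sketch is the first step: until the program defines the left-hand column, there is nothing to measure a discrepancy against.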
As a bit of an aside, but hopefully a relevant one: my wife studied program design recently while working toward her own master's, and in her health-care focus almost all programs are effectiveness-based. They need to, on a fundamental level, direct goals toward the improvement of the patient. To frame the rest of my thoughts, I would evaluate with the end person in mind. The Provus model does not put pressure on specific people; it looks at what differs from what was intended. This model might help in a situation where many people are working together on a sensitive issue.
Looking at the seven levels of the hierarchy, one could identify some more specific areas to help create clearer program standards and places to look for discrepancies.
First, I think the resources (yellow) are a great area of focus. The program uses many people and programs to attract participants. A standard for what success in resources looks like should be made clear.
Participation and activities (green) have been carefully considered, and the structure seems to be working, so this area may not need much work.
For the greatest change, I think the area of reactions and learning needs to be addressed. This is an area where data could be collected to help shape the program in the future. What learning is being done, and how does it compare to the program's goals?
This leads to the focus question: does this program change behaviour? The whole design is meant to improve health by changing behaviour, so I think this would need to be addressed in the evaluation. A lot of work needs to go into how exactly this could be measured, but I think that is where the assessment focus needs to end up.
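To keep the seven levels straight, here is a small sketch laying out Bennett's hierarchy, bottom to top, as it is usually given, alongside the focus I am suggesting at each level; the focus notes are my own summary of the paragraphs above, not anything stated by the program.

```python
# Bennett's seven levels (bottom to top) mapped to my suggested focus.
HIERARCHY = [
    ("1. Inputs / resources",  "define what success in resources looks like"),
    ("2. Activities",          "structure seems to be working already"),
    ("3. Participation",       "carefully considered; little work needed"),
    ("4. Reactions",           "collect data here to shape the program"),
    ("5. KASA change",         "compare learning against program goals"),
    ("6. Practice change",     "the key question: does behaviour change?"),
    ("7. End results",         "improved health, the ultimate outcome"),
]

for level, focus in HIERARCHY:
    print(f"{level:22} -> {focus}")
```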
Overall, the Provus model would allow a look at accountability and help find areas where what is happening does not match what the program wants to happen. I think it would provide a solid response that would not offend or ostracize people, and it would help define success in a complex area.
Sunday 2 February 2014
809 Assignment 1
After reading a few different reports, I found this very interesting one. I have always, as many teachers have, been involved with youth, including underprivileged youth. My time living on the Elizabeth Metis Settlement in northern Alberta gave me an eye-opening view into some of the issues in this review. I found the topic highly relevant and interesting, but after my second read-through I was not entirely sure it is a program evaluation. A panel review is not what comes to mind when I think of program evaluation, but according to some reading, "program evaluation is carefully collecting information about a program or some aspect of a program in order to make necessary decisions about the program." The review seems to follow the work of Michael Scriven. It has the formative evaluation piece that looks not to decide whether to stop a program but to improve one already in place. It looks at goals and roles, but the focus is on roles and working for change. It definitely shows the "collaborative or participatory or empowerment evaluation" (Scriven, 27). It does not, however, seem to use the checklist system often associated with Scriven. In theory it could be considered goal-free, as the review panel is not part of the system directly.
This review seems to have been very well thought out, with a diverse panel and excellent consultation with stakeholders. The report is divided into three sections. Section 1 is the Mandate Review and Process. The initial report goes to some length about the reasons behind the review and the challenges faced. This section clearly states the intention and premise. The strength here is the diversity of the panel and people dedicated to making changes.
Section 2 is Child Welfare: Background and Context. This review seems well rooted in the current practice and issues of child care. There are some strong statistics to fuel the need for change: I was surprised by the 9% annual growth in caseload and that 56% of interventions are for neglect. It paints a strong picture of a need for change, which makes this section very successful.
Section 3 is What We Heard: Recommendations and Rationale. This section shows a strong connection to Rippey's transactional format. Each recommendation has a focus area and a discussion of supporting actions. This section changed my view of the review process. Its strength is its organization and feedback.
Overall, this review is very comprehensive; it takes political and social factors into consideration while maintaining an understanding of historical context and challenges. The biggest weakness may not be the fault of the review itself but a side effect of its scope: it is such a large program, and it requires some very definitive and ideological changes. The review clearly states the goals, but I feel that achieving that change requires an enormous amount of work around each of the 12 recommendations outlined. I left this report with a better understanding but felt the task of implementation was massive and complex.