Friday, July 14, 2017 + Monday, July 17, 2017

I am thankful to say that data entry is over, and all responses have been recorded on the Excel sheet! Now, the real work begins!

Friday consisted mostly of completing data entry, cleaning the data, and investigating the best methods for analysis. That same day, I conducted an informational interview with a freelance Visitor Studies consultant, who forwarded me some very helpful sources to aid me in approaching the analysis and structuring the eventual report. One such source is Informal Science, a website introduced to us at the beginning of the summer with our informational/introductory materials; it provides educational, cultural, and science institutions free, online resources in support of program evaluations: http://www.informalscience.org/.

This website has been a great help to me this morning and afternoon as I continue working to understand the data we gathered. In a meeting with Jess Bicknell today, she summarized some general standards for coding conversational data and gave me some great starting points for organizing themes effectively. I realized that the coding I undertook in my thesis was not nearly as detailed, and I am so grateful to learn now what a standardized form of coding entails.

In constructing a preliminary rubric for coding, I have begun to confront the problem of deciding how many responses constitute their own code category. Some themes seem very clear-cut; for instance, the responses to the art vs. artifact question can be easily delineated into four distinct codes:
  • Geography
  • Old vs. Timeless
  • Found vs. Made
  • Function vs. Decoration
However, these categories, too, can be refashioned depending on the objectives of the person coding and analyzing the data. Jess explained that a good way to establish whether a rubric is sufficient to reflect trends within the data is to have two individuals construct separate rubrics and compare their results. This way, rubrics that are too general can be adjusted before being put to use. I hope to have time to adjust for any bias in the eventual rubric in order to preserve the integrity of our ultimate results.
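As a minimal sketch of what comparing two coders' results could look like numerically, the snippet below computes simple percent agreement and Cohen's kappa (a standard inter-rater reliability measure that corrects for chance agreement). The response labels and data here are entirely hypothetical, just to illustrate the calculation:

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Fraction of responses both coders assigned the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    counts_a = Counter(coder_a)
    counts_b = Counter(coder_b)
    # Expected chance agreement: sum over codes of the product of
    # each coder's marginal proportions for that code.
    p_expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(coder_a) | set(coder_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codings of ten responses, using the four codes above
a = ["geography", "old_timeless", "found_made", "function_decoration",
     "geography", "old_timeless", "found_made", "geography",
     "old_timeless", "found_made"]
b = ["geography", "old_timeless", "found_made", "function_decoration",
     "geography", "found_made", "found_made", "geography",
     "old_timeless", "old_timeless"]

print(percent_agreement(a, b))  # 0.8
print(round(cohens_kappa(a, b), 3))  # 0.722
```

A kappa near 1 suggests the two rubrics capture the same distinctions, while a low value flags codes that are too general or too ambiguous and should be adjusted before full coding begins.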

I also took the time to research some ways of reporting this data so that it satisfies all of the intended stakeholders. In reading an article on Informal Science, I learned that stakeholders can be primary, secondary, or tertiary to the project, and that their proximity to the project can help determine the most helpful ways to report findings. 

I have determined that the primary stakeholders include Jess Bicknell and Monique, as they will be using the data and findings directly in the future for their own research as well as content production for the Africa Galleries. As a result, I plan to present a 1-2 page report with my findings and a helpful visual, along with appendices containing the evaluation materials.

I have determined that the secondary stakeholders include the general curatorial team, who may use the findings tangentially. I have been thinking that a clear 5-10 slide PowerPoint describing methods, results, and possible significance to the project will be most useful for their purposes as they move forward with the project.

Finally, the tertiary stakeholder group could include the public and all other interested parties (e.g., other evaluators). I am not sure yet how I want to communicate my results to them, but I am considering a short summary paragraph or abstract, open to all, on the Penn website. At the moment I am very open to suggestions!
