We are pleased to announce that we are launching the History Conference Report Card Project.
This is a collaborative, crowd-sourced effort to get a handle on the state of history conferences as events, and to evaluate them both individually and collectively. This project is directly inspired by Secret Frequency’s Canadian Festival Report Card project. To quote their website: “Our aim is to point out to artistic directors and the community at large the dearth of women-identifying and non-binary artists on stages across the country and encourage everyone to do better and demand better. There’s a wealth of incredible women and non-binary artists making music in Canada, so booking high-quality talent is easy – if festivals have the will to do it, they can.” The goal of this project is to apply that idea to history conferences: to collect and provide data so that conference organizers can be encouraged to do more, and to do better. We will be producing a ‘Report Card’ of sorts that will break down the information and provide a basis for discussions going forward.
It is clear, even from anecdotal evidence alone, that history conferences have similar issues. Representation, diversity and inclusion, affordability, and accessibility are key concerns. Further, conferences can be unwelcoming, and even unsafe, places for women, queer people, and people of colour. The goal of this project is to collect data on conferences, workshops, seminars, and other events and then, in the new year, to publish that information so we can get a data-driven sense of the state of history conferences as a whole and, hopefully, engage with colleagues about strategies for tackling these issues at conferences. We then intend to collect the same information in 2020, 2021, and beyond in order to see what changes, if any, have occurred.
Because of the sheer number of events, this will have to be a crowd-sourced project. This applies to the rubric and criteria for evaluation as well as to the data about the conferences (please feel free to comment on this post, or email our social media editor Samuel McLean). The more data we collect, the better the analysis can be. While the feedback we gather will be qualitative and possibly anecdotal in nature, we believe it will still provide strong evidence about the experience of attending academic conferences. However, it is also critical to create a good rubric for evaluating conferences. Accordingly, this project will have four phases in 2019.
Phase 1: Rubric Creation
Completion Target: 1 July 2019
At present, the plan is to divide the conferences and events into the following categories. The first category will be annual conferences and events; grouping these together, whether they are large or small, will provide a sense of how organizations are making changes year over year, or not. The second category will be large, one-off conferences; although they don’t occur annually, they represent an important snapshot as they are high-prestige events. The third category will include smaller conferences that run for fewer days and have fewer parallel streams of panels. The final category will include single-day workshops, seminar series conducted over multiple weeks, and everything else that does not fit into the categories listed above. We interpret ‘history’ rather broadly here, given the importance of multidisciplinary and interdisciplinary studies, and so would encourage the input of data about more events rather than fewer.
Also at present, conferences will be evaluated on the following criteria:
A) Diversity: This category will look at the diversity amongst the presenters. For example, what percentage of presenters identify as women, people of colour, First Nations, LGBTQ, etc.? What percentage of a conference’s panels were ‘manels’ (men-only panels)? Did the event’s CFP mention diversity as a goal?
B) Accessibility: This category will look at what efforts the conference organizers have made to make their events easy to attend and engage with. For example, do they provide accessible rooms? Are there accessible washrooms? Do they provide microphones and speakers? Are any of the rooms or spaces used inaccessible? Do they provide transcriptions of papers? Is the conference live-streamed? Is childcare available? Do they inquire about presenters’ food allergies and make an effort to accommodate them?
C) Affordability: This category will look at the costs of attending the conference, including the registration fee, the costs of events such as conference dinners or dances, the costs of associated hotels (if any), and the availability of student and other discounts and bursaries.
D) Environmental Impact: This category will evaluate efforts to mitigate or reduce the environmental impact of a conference. For example, is the conference in a location that requires most attendees to fly, or is it readily accessible by other means? Is the conference live-streamed or broadcast in such a way that individuals in other locations can participate?
E) Other Factors: This category will provide an opportunity to mention anything particularly good, bad, or otherwise noteworthy about a conference that does not fit into the other categories.
We would very much like to improve upon these criteria for evaluation, and all input is welcome.
Phase 2: Creating the Database and Interface
Completion Target: 15 July 2019
An online form will be created based on the final rubric so that data can be collected and stored in a database hosted on this website.
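For readers curious about the technical side, here is a minimal sketch of what a database record for an event might look like, assuming the draft categories and criteria above survive into the final rubric. Every table and field name here is a hypothetical placeholder; the real schema will follow whatever rubric emerges from Phase 1.

```python
import sqlite3

# Hypothetical schema sketch: field names mirror the draft criteria
# above and will change once the final rubric is settled.
conn = sqlite3.connect("report_card.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS events (
    event_id               INTEGER PRIMARY KEY,
    name                   TEXT NOT NULL,
    year                   INTEGER NOT NULL,  -- 2019 for the first round
    category               TEXT NOT NULL,     -- annual / large one-off / small / other
    -- A) Diversity
    pct_women_presenters   REAL,              -- self-identified, where known
    manel_count            INTEGER,           -- men-only panels
    cfp_mentions_diversity INTEGER,           -- 0 = no, 1 = yes
    -- B) Accessibility
    accessible_rooms       INTEGER,           -- 0 = no, 1 = yes
    childcare_available    INTEGER,           -- 0 = no, 1 = yes
    live_streamed          INTEGER,           -- 0 = no, 1 = yes
    -- C) Affordability
    registration_fee       REAL,
    student_discount       INTEGER,           -- 0 = no, 1 = yes
    -- D) Environmental Impact
    requires_air_travel    INTEGER,           -- 0/1, for most attendees
    -- E) Other Factors
    other_notes            TEXT
);
""")
conn.commit()
```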
Phase 3: Collection of Data
Completion Target: 31 January 2020
This process will remain open until the end of January 2020 in order to collect data about as many 2019 events as possible. The online form will be anonymous (it will not require a username or any other identification), so it can be filled out by an organizer or attendee without their being identified. Also, once the basic information for an event has been entered, there will be a second form so that other attendees can comment on their experiences anonymously.
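As one illustration of how the follow-up comments could stay anonymous, a continuation of the hypothetical schema sketched under Phase 2 might simply link each comment to an event and store no identifying columns at all. Again, all names here are placeholders, not a final design.

```python
import sqlite3

# Hypothetical follow-up comments table: each comment references an
# event, but deliberately stores no username, email, or IP address,
# so submissions cannot be traced back to an individual.
conn = sqlite3.connect("report_card.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS comments (
    comment_id   INTEGER PRIMARY KEY,
    event_id     INTEGER NOT NULL REFERENCES events(event_id),
    submitted_on TEXT,           -- date only, to limit timing-based identification
    body         TEXT NOT NULL   -- free-text account of the attendee's experience
);
""")
conn.commit()
```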
Phase 4: Report Card Release
Completion Target: Mid-February 2020
The goal is to release the full data for 2019 events in mid-February 2020. This may or may not involve ‘grades’ as such, but it will be designed to provide a picture of the data as a whole, and to situate individual events in the context of both their category and of all the events for which we have data.
We hope that the History Conference Report Card Project can provide context for the ongoing discussions about the state of history conferences and events, and provide some direction for improving them.
Please feel free to use the comment section below to make suggestions for improving either the categories of events or the criteria used to evaluate them. If you’d like to become involved in this project, or if you’d prefer to make your comments privately, please email Social Media Editor Sam McLean.