Nothing exciting but to save me from explaining this again and again….
Yesterday the Agile on the Beach conference committee finalised the speaker lineup. I’ve just this morning sent the acceptance and the “Sorry” e-mails. Since there were about 150 submissions and only 41 speaking slots we could only accept less than a third of the submissions – even fewer actually, because some sessions are doubles.
Here is how we came to our decisions.
We have five committee members – you can see who on the website. Each of these was given electronic copies of all the submissions, including the long and short synopses, speaker bio, travel origin (which implies how expensive it might be for us to bring someone in) and other details.
Each committee member independently scored each submission on a scale from -2 (I don’t think this should be at the conference) to +2 (I really want to see this myself). By making zero the default, any reviewer who didn’t review a session, or felt they did not have the knowledge to pass judgement, didn’t bias the results. Sometimes reviewers added a comment explaining their score, but not always.
I took these scores and added them together. In the first review meeting (two weeks ago) we reviewed the total scores, debated a few sessions, and shortlisted the top-scoring sessions.
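For the curious, here is roughly what that first stage looks like in code – a toy sketch with made-up talk names and scores, not the spreadsheet we actually used:

```python
from collections import defaultdict

# Each reviewer's scores: submission -> score in the range -2..+2.
# A missing entry counts as 0, so abstaining doesn't bias the total.
reviewer_scores = [
    {"talk-a": 2, "talk-b": -1},
    {"talk-a": 1, "talk-c": 2},
    {"talk-b": 1},
]

totals = defaultdict(int)
for scores in reviewer_scores:
    for submission, score in scores.items():
        totals[submission] += score

# Highest total first - the top of this list became the shortlist.
shortlist = sorted(totals, key=totals.get, reverse=True)
print(shortlist)  # ['talk-a', 'talk-c', 'talk-b']
```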
In the product track 17 sessions were shortlisted, 13 in business, and 27 made the team track shortlist. The software track was slightly different because we again decided to make extra space to keep our technical side – more on that to follow. And since the product track expanded from one day to two days this year, the conference has grown again.
Each reviewer then independently reviewed the shortlist, but this time, instead of scoring the sessions, reviewers ranked them. Each track has nine sessions (if all are singles), and each reviewer ranked the shortlist knowing this.
For the second meeting I took these rankings, averaged them for each session, and then ordered each track by average rank. At this point we had some clear accepts. Around the nine mark things became fuzzy: in one track we had a double in position 9, in another track one person held three slots in the top nine, and so on. So some manual adjustments were made. We also made a call on some things we thought should be in the conference, or developed mini-themes.
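Again, a small illustrative sketch of the averaging step (made-up names and ranks, just to show the arithmetic):

```python
# Each reviewer ranks the shortlisted sessions, 1 = best.
reviewer_rankings = [
    {"talk-a": 1, "talk-b": 2, "talk-c": 3},
    {"talk-a": 2, "talk-b": 1, "talk-c": 3},
    {"talk-a": 1, "talk-b": 3, "talk-c": 2},
]

sessions = reviewer_rankings[0].keys()
average_rank = {
    s: sum(r[s] for r in reviewer_rankings) / len(reviewer_rankings)
    for s in sessions
}

# Lowest average rank first; roughly the top nine per track were accepted,
# subject to the manual adjustments described above.
ordered = sorted(average_rank, key=average_rank.get)
print(ordered)  # ['talk-a', 'talk-b', 'talk-c']
```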
In the end a lot of sessions didn’t make the cut simply because they were out-competed. Everything that made the shortlist was strong, and a lot that didn’t make the shortlist was strong too. We just don’t have space for everything we would like, and we can’t expand the conference every time – sorry.
Anyway, I hope that explanation helps.