Most of you have probably already seen this at Larry’s site, but it’s for a good cause…
I did a couple last time, and just finished doing two more this time.
It seems our occasional cabal of writers has taken a hiatus from rabble-rousing. So I would like to take this opportunity to thank everyone who promoted my lab survey last year. We received over 1100 individual responses to the survey, which kicked the results well into “statistically significant” territory. [1100 responses across 20 different surveys meant 50-60 responses per survey image – the statistical goal was >30 per image.]
Over the last year we’ve been working with that data – and it is *great* – but we discovered that a database of just 500 pictures is not enough. We need 1000. So I’ve put together a new survey of another 500 pictures – again, there are only 25 pictures per survey, but if enough people take the survey we will get statistically significant results.
I’d love to have your help again this year to promote the survey. Larry’s MHI fans alone probably accounted for around half of the responses.
The info to announce the survey is below – or you can link/share my recent post on Facebook.
-Speaker / s2la, Speaker to Lab Animals
Dear Friends and Colleagues:
Last year, the Hampson Laboratory webpage (http://hampsonlab.org) conducted a survey of 500 pictures. We received over 1100 individual responses, for which we are extremely grateful to all who participated and to the people who rallied their own social media groups on our behalf.
This year we have another 500 pictures to classify. We are asking for volunteers to go to the hampsonlab.org page and take a survey consisting of 25 images from our set.
Our laboratory is identifying “categories” and/or “features” of pictures that we use to examine how the brain encodes information according to a number of different characteristics – is it a cartoon? Photograph? Silhouette or drawing? Is it in color or black & white – if color, which colors? Are there specific items visible in the picture?
We know that different people categorize pictures in different ways. Thus, we need to conduct a survey of as many people as possible to find the most likely common classification from a fixed set of categories.
There are instructions and hints on the web page (http://hampsonlab.org). Clicking on “Take the Survey” will bring up a random selection of 25 pictures. Enter your responses by clicking next to the features that you think fit the image. The listed features will not perfectly suit all pictures. We know this. The features were chosen for reasons based on the psychology and physiology of human memory. Therefore, we ask that you choose the closest match(es) from the list of options. At the end of the page, click “Submit,” and your responses will be written to our server.
If you have time, click on “Take Another Survey” and the webpage will return to the beginning. Each time you click “Take the Survey,” you should see a new page of 25 pictures. [You can choose your survey page by bypassing the default screen and editing the URL to read: http://hampsonlab.org/v3Survey1.html , …v3Survey2.html, …v3Survey3.html, etc. through …v3Survey20.html. Again, all results are logged on our server automatically once you select “Submit.”]
Disclaimer: The survey is completely anonymous – we record only the picture name and 1’s or 0’s representing your choices (you can briefly see the data in the box on the Submit page). All pictures in the survey are either purchased or used under fair use, for non-commercial research purposes only. Your response data contains no personal information. We conduct no diagnosis or analysis of the participants or individual responses. The data is used solely to develop anonymous population classifications.
Thank you for participating in the survey. We appreciate your help.
This is REAL research, and it helps their foray into neuroscience. For real ‘statistical significance’, you want a minimum of 100 examples per dataset (e.g. picture set). It takes about 10 minutes to do a set, and if all my readers kicked in one set, that would be 75 samples per dataset…
Just asking… 🙂