Art-A-Hack hackathon: building an Accessible Brain
Art-A-Hack is a hackathon that brings together artists and technologists to build something with a positive impact under the umbrella of a specific theme. The theme of this hackathon was emotionally intelligent artificial intelligence and brain-computer interfaces.
I led a team consisting of an engineer, visual designer, and audio artist. My role was leading the team’s research efforts. This was my process:
While brainstorming potential uses of brainwave data, my team kept returning to the idea of visuals controlled by brain frequencies in a sonic environment. We decided that our overarching goal was to make a data visualizer, usable by people with disabilities, that responded to brainwave activity as well as head and neck movements.
1) Concept Analysis
To test the viability of our concept, I researched the market space to understand what was already out there. It was important to me that we built upon existing research, so I produced an analysis of our concept's viability. While I normally would have dug into many aspects of the concept, given the short timeframe of the hackathon I managed my time by focusing on a SWOT analysis of its strengths, weaknesses, opportunities, and threats. This analysis provided the groundwork I needed to conduct user testing.
2) Informal Interviews
I decided to conduct informal interviews with our initial prototype to gauge users' reactions to the concept. I had led formal user research interviews during my internship at Facebook the previous summer, so I applied the frameworks and tools I learned there to this setting. The goal of these interviews was to gather feedback on the early version of the product, specifically how users with disabilities reacted to its audio and light elements.
To conduct these interviews, I recruited people with disabilities whom I knew to come to one of the hackathon days and test our early prototype. Each user put on the Muse headband and I turned on the immersive experience (the visuals are shown below). The feedback I received from these interviews was incredibly valuable: it indicated that the prototype's light elements were too rapid and were causing distress among users.
Based on these research findings, the team rebuilt the data visualizer to monitor two "mind states," meditation and attention, which were mapped to custom controls on the Muse. For example, leaning left slowed the visuals and their accompanying sound, while leaning right accelerated both. The research I conducted was therefore critical to the final product, because the feedback we received shifted our approach toward tailoring each interaction with the Muse to each user's needs.
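The lean-left/lean-right control described above can be sketched as a simple interpolation from head-tilt angle to a speed multiplier. The function below is an illustrative sketch, not the team's actual code; the parameter names and the specific angle and speed ranges are assumptions chosen for clarity.

```python
def tilt_to_speed(roll_degrees, max_tilt=30.0, min_speed=0.25, max_speed=4.0):
    """Map a head-roll angle to a playback-speed multiplier.

    Negative roll (leaning left) slows the visuals and sound toward
    min_speed; positive roll (leaning right) accelerates them toward
    max_speed; an upright head (roll = 0) keeps the neutral speed 1.0.
    """
    # Clamp the tilt so extreme sensor readings can't produce extreme
    # speeds, then normalize to the range [-1, 1].
    t = max(-max_tilt, min(max_tilt, roll_degrees)) / max_tilt
    if t < 0:
        # Interpolate between min_speed (full left lean) and 1.0 (upright).
        return 1.0 + t * (1.0 - min_speed)
    # Interpolate between 1.0 (upright) and max_speed (full right lean).
    return 1.0 + t * (max_speed - 1.0)
```

Because the mapping is clamped and interpolated rather than binary, the speed of the experience changes gradually with the lean, and the thresholds can be adjusted per user, which matches the goal of tailoring each interaction to individual needs.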