The 2019 DIAGRAM Hackathon Recap
On May 15-16, DIAGRAM held its 4th annual code sprint, co-hosted with the Partnership on Employment and Accessible Technology (PEAT). This year’s sprint took place at the LightHouse for the Blind and Visually Impaired in San Francisco, which generously donated space in its building. Sponsored by Microsoft, without whom the hackathon couldn’t have happened, the event took place during the 16th International Web4All Conference and attracted a professionally and geographically diverse crowd.
Participants came from as close as San Francisco, Palo Alto, Sunnyvale, and Redwood City, and as far as the UK, India, Switzerland, and Portugal, with backgrounds in academia, industry, government, and nonprofits. Regardless of where they were from and what their backgrounds were, everyone came together to learn and explore the potential of new technologies to personalize the web and provide an accessible user experience for all. It was a fun, informative, educational, and productive two days. One participant, longtime DIAGRAM community member Neil Soiffer, summed it up well, stating:
“The hackathon was a great opportunity to learn and do some good work. I knew something about accessibility, but nothing about Jupyter notebooks [one of the projects]. My hackathon partner Paul knew something about Jupyter notebooks, but nothing about accessibility. By working together, we both learned something and were able to make lots of small but significant accessibility improvements to Jupyter that we expect will land in main Jupyter code.”
The work done on Jupyter is important not just because of the exchange of new information, but because Jupyter is a new endeavor for DIAGRAM with huge potential for impact. As long-time and often-quoted community member Sina Bahram pointed out:
“Jupyter is used by millions of people around the world. It is a foundational technology in data science, finance and many more fields. By enhancing the accessibility of this environment, we are able to begin moving the needle slowly but surely towards a day where students and professionals with disabilities are not excluded from the most foundational and essential tool required to participate in school and work.”
Evan Yamanishi, DIAGRAM community member representing publisher W.W. Norton, went on to say,
“I wish I could have worked on every project at the hackathon this year and really would’ve liked to learn more about JupyterLab. My group focused on defining a production-ready version of the enhanced visual descriptions from 2018, hacking on previously unsolved UX issues such as keyboard dragging and both improved and reduced motion. Clayton Lewis was especially great at documenting everything and asking challenging questions about affordances and customization, things that are required to gain wider adoption of enhanced descriptions.”
Day One kicked off at 8:00 am on a very rainy Wednesday. Despite the early hour and shockingly bad weather for May, the excitement in the room was palpable. Not only were people pumped to hack for good, but in a twist on our usual code sprint format, this year there would be judging and prizes (computer bags and Amazon Fire HD tablets donated by Educational Testing Service (ETS)) for the winning teams. The room was abuzz with lively conversations, brainstorming, strategizing, and the occasional break for cheeky antics and a ton of food.
One of the highlights of the day was a tour of the LightHouse facility. The beautiful space boasts huge windows and panoramic views of downtown San Francisco and City Hall. It was designed by architect Mark Cavagnero in collaboration with Chris Downey, LightHouse board president and an architect who is blind. The space is completely accessible, using different floor textures to designate the type of room a person is in. Its walls are lined with tactile art and tactile maps, and it includes conference rooms, a lab, a craft room, an exercise space, a teaching kitchen, and more. Everyone was extremely impressed with the space, both the layout and the history behind it, as well as the mission of LightHouse for the Blind and Visually Impaired. You can learn more about their work on the LightHouse website.
Day One ended around 9:00 pm with hackers completing their initial code and one participant, Kesavan, generously offering to help event organizers bring the leftover food to a local food bank. We couldn’t have asked for a better start to the hackathon.
On Day Two the room was much quieter, with the hackers intent on writing and cleaning up code in preparation for the presentations and judging. The atmosphere was tense, though teams still somehow found time to goof off. They were also treated to demos by Stanford graduate students Alexa Fay Siu and Tayo Falase. Alexa, who is working toward her PhD in mechanical engineering and human-computer interaction, showed off her project, shapeShift, which takes a 3D STL file and renders the object as a tactile array of small cylinders that can be raised, rotated, and resized in real time. It lets students who are blind, have low vision, or learn best through touch fully explore graphs and other mathematical concepts in a way they can better understand.
Tayo, who is pursuing a master’s degree in engineering with a focus on human-centered design, demonstrated her project, Tactile Code Skimmer, which represents the indentation of computer code on a physical device with a row of sliders, one for each of eight lines of code. The goal is to track your place in the code and feel the level of indentation of each line by where the sliders sit. As you step through your code, the sliders move to match the indentation of the corresponding lines. Indentation is meaningful when programming: the instructions inside a loop, for example, should all be at the same indentation level, and if one is not, it could indicate a potential bug in the software.
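The mapping the device performs can be sketched in a few lines of Python. The sketch below (the `indent_levels` helper is hypothetical; the actual device firmware was not shown) computes the position each slider would take as a line's leading-whitespace width, so a line that breaks an otherwise uniform loop body stands out:

```python
def indent_levels(code, tab_width=4):
    """Return the indentation width of each line of code, i.e. the
    position each of the device's sliders would move to.
    Blank lines are reported as None (no slider movement)."""
    levels = []
    for line in code.splitlines():
        expanded = line.expandtabs(tab_width)
        stripped = expanded.lstrip(" ")
        levels.append(None if not stripped else len(expanded) - len(stripped))
    return levels

# The loop body should sit at one level; the stray dedent on the
# last line would be easy to feel as an out-of-place slider.
buggy = "for item in items:\n    total += item\n  count += 1"
print(indent_levels(buggy))  # [0, 4, 2] -- the 2 flags a possible bug
```

Reading the list [0, 4, 2] with your fingertips, the middle slider sits at the loop-body level while the last one has slipped partway back, exactly the kind of anomaly the Tactile Code Skimmer is meant to surface.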
The afternoon wrapped with the teams putting the finishing touches on their code, and then, finally, the presentations and judging took place. The presentations were recorded and can be accessed on DIAGRAM’s YouTube channel. Participants waited with bated breath (aka eating street tacos) while the judges deliberated. As it turns out, all teams did extremely well, making significant progress over the two-day coding marathon, with many producing usable code. The judges had quite the challenge on their hands, and tensions ran high (low) as the teams anxiously (calmly) waited to learn how they placed.
The teams and their projects were as follows:
- JupyterLab – an interactive development environment that enables users to create and share documents combining live code with narrative text, mathematical equations, visualizations, interactive controls, and other rich output. Over half of the participants worked on one of three JupyterLab projects: making menus and dialogs accessible (the low-hanging fruit), making the high-level architectural changes required between the main project and its libraries, and improving the overall notebook experience. Four teams worked on adding features to JupyterLab that allow screen readers to recognize and verbalize menus, tabs, buttons, modal dialogs, and fields.
- Accessible Code Repository (Accessible Interactives) featuring Charles LaPierre (Benetech), Markku Hakkinen (ETS), and Candida Haynes (Independent) – The DIAGRAM Center is assembling a repository of open source code for common interactions such as synchronized text-to-speech highlighting, carousels, page settings, date pickers, drag and drop, and so on. The team worked on improving the layout and functionality of the repository and plans to add more best-in-class code that can be used to make applications more accessible.
- Accessible Extended Image Descriptions featuring Evan Yamanishi (W.W. Norton), Matt Nupen (Benetech), Clayton Lewis (Boulder, Colorado), and Tammy Speed (Founders and Coders) – They say a picture is worth a thousand words, but when you can’t see that picture or understand it due to a visual or cognitive disability, it’s as if the image is not there. Authors and publishers need a way to add simple image descriptions as well as more detailed descriptions that appear in an unobtrusive yet easily accessible way. The team worked on an interface built on the Accessible Rich Internet Applications (ARIA) web standards that allows a user to create extended descriptions that can be toggled on/off and repositioned.
- Accessibility Conformance Testing featuring Carlos Duarte (University of Lisbon), Pawan Kumar Patel (IIT Kanpur), Marie Trudelle (Empowerment Through Integration), Damien Engels (Google), and Ramit Garg (Intuit) – The team proposed new rules for testing the accessibility of links, link text, headings, widgets, and other elements of HTML to ensure adherence to ARIA standards that allow screen readers to verbalize and interact with web pages. Automating this function makes it easier for web developers to make sites accessible from the very beginning.
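To make the conformance-testing idea concrete, here is a minimal sketch of one such automated rule, written in Python with the standard library: it flags links that expose no accessible name to a screen reader. The class and function names are hypothetical, and real conformance rules are more thorough (for instance, an image with `alt` text inside the link also supplies a name), but the shape of an automated check looks like this:

```python
from html.parser import HTMLParser

class LinkTextChecker(HTMLParser):
    """Flag <a> elements that expose neither link text nor an aria-label.
    (Simplified: a real rule would also accept alt text on a nested image.)"""
    def __init__(self):
        super().__init__()
        self.failures = []      # (line, column) of each failing link
        self._in_link = False
        self._text = ""
        self._has_label = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._in_link = True
            self._text = ""
            self._has_label = any(
                k == "aria-label" and v and v.strip() for k, v in attrs
            )

    def handle_data(self, data):
        if self._in_link:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a":
            if not self._text.strip() and not self._has_label:
                self.failures.append(self.getpos())
            self._in_link = False

def check_links(html):
    """Return the positions of links with no accessible name."""
    checker = LinkTextChecker()
    checker.feed(html)
    return checker.failures

# An icon-only link with no text and no aria-label fails the rule;
# adding an aria-label fixes it.
print(check_links('<a href="/home"><img src="home.png"></a>'))
print(check_links('<a href="/home" aria-label="Home"><img src="home.png"></a>'))
```

Running a battery of such rules over every page of a site is what lets developers catch accessibility problems from the very beginning, rather than after users hit them.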
In the end, the judges could only pick three winners. In third place was the Accessibility Conformance Testing project. In second place was the JupyterLab team that made menus and dialogs within JupyterLab accessible, and in first place was the Accessible Extended Image Descriptions team, which showcased a number of options for providing extended image descriptions in an inclusive way, along with documentation to support the work. Of course, the real winners are the people who will benefit from the code produced over the two days. We would like to again extend a huge thanks to Microsoft for sponsoring the hackathon and providing swag for all of the teams, to ETS for providing prizes, to the LightHouse for the Blind and Visually Impaired SF for providing the space, and to all the participants who volunteered two days of their time to come up with solutions to accessibility problems. Their dedication and passion are truly inspiring.