Here’s the link to the video documentary of last year’s Ars Electronica event “How to become a high-tech anti-discrimination activist collective”, which was co-organized with the Johannes Kepler University in Linz (A). My colleague Doris Allhutter and I each gave a workshop as part of this bigger event. My workshop was concerned with AI technology and open source alternatives: “How to create your own AI device with SUSI.AI – An Open Source Platform for Conversational Web”. It was a great opportunity to catch up with my co-workshop leader Hong Phuc Dang from SUSI.AI (one of the case studies of my current search engine project; see research). Many thanks to the organizers!! It was fun!
This year I have the pleasure of giving a workshop together with Hong Phuc Dang as part of the ARS Electronica Festival 2020. The title is “How to create your own AI device with SUSI.AI – An Open Source Platform for Conversational Web” and it’s part of an overall event Waltraud Ernst and colleagues from the University of Linz have organized. The whole event deals with bias and discrimination in algorithmic systems: “How to become a high-tech anti-discrimination activist collective”, with awesome keynotes by Lisa Nakamura and Safiya Noble; more info can be found on the ARS/ Uni Linz website. There you can also register if you’re interested in participating! I’m already looking forward to this event!!!
Thanks for your interest and the great response to the FOSSASIA 2019 workshop I advertised in my previous blog post! Are you a SUSI.AI developer/ contributor? Are you up for an experiment? Would you be willing to write a short piece of text on how the social appears in the technical development of SUSI.AI/ your daily work practices? The text should only be half a page to a page long, and you shouldn’t think about it too hard; rather, just find a nice spot (like I did last spring in Berlin, where the picture above was taken) and quickly write down what comes to your mind when you hear the following question:
When and how did you encounter SUSI (standing for the social in terms of social biases, user imaginations, gender relations, your own desires and expectations, or something else that comes to your mind…) when developing/ contributing to SUSI.AI, and how did you handle SUSI back then?
Please send your memories to me (astrid.mager(at)oeaw.ac.at) so that we can discuss/ work with them during the workshop. Based on these texts, we’ll be able to draw out how to (better) handle SUSI in the future, but also how SUSI can be made productive in terms of creating more “open”, “transparent” or “fair” (AI) technology more generally.
If you don’t find the time to write such a memory, don’t worry! I’d still be happy to see you at the workshop and learn about your ideas on the way SUSI figures in your work and how you usually deal with it!
Remember: The workshop titled “Where is SUSI in the AI?” will take place on Saturday, 16th March, 18:00-18:55, at Event Hall 2-1. I’m already looking forward to seeing you there!!! Please use this link to sign up for the workshop! Thank you!
If you’re interested in learning more about working with memories in software design, I’d be happy to give you further insights into the method “mind scripting” I’ve been toying around with recently. It’s a method developed by my colleague Doris Allhutter, who created it specifically to investigate (and potentially also intervene in) software practices.
If you’re a SUSI.AI developer, I’d love to get in touch with you to learn about your work practices, your ideas about SUSI.AI and open source more generally, and to discuss what role the social – in terms of social biases, user imaginations, gender relations, your own desires, or something else that is important to you as a coder – plays in the technical development of SUSI.AI/ your own work. I’ve organized a workshop to provide a space for mutual learning and to initiate a dialogue between informatics and the social sciences; an interface I find tremendously important in times of growing social biases, discrimination and surveillance triggered by corporate tech.

Please let me know if you’d like to participate in the workshop and what you’re interested in, so that I can better prepare it in advance! Also, please spread the word and motivate other SUSI.AI developers to show up! The more participants, the better! 😉 If you don’t have time to participate in the workshop – I’m sure you guys will be busy over there – I’d still be happy to hear from you and find some other opportunity to chat at the summit. It’s going to be my first Asian tech summit, so I’m really looking forward to being there and learning more about your great work!! Thanks also to Michael Christen and Mario Behling for supporting my work so far! I’m of course looking forward to meeting you guys in Singapore too!!! YAY!
This ethnographic study on SUSI.AI is part of my ongoing research project “Algorithmic Imaginaries. Visions and values in the shaping of search engines”, funded by the Austrian Science Fund (FWF). A short – and somewhat outdated – description of my project can be found on the ITA website. I’m happy to explain it further once we meet, of course!
Here’s the abstract for the workshop titled “Where is SUSI in the AI?” (Saturday, 16th March, 18:00-18:55, Event Hall 2-1). Please use this link to sign up for the workshop.
There is a long research tradition in the field of science and technology studies (STS) showing the importance of the social in technical design processes. The notion of sociotechnical design practices, for example, stands for the tight entanglement and co-shaping of technical and social elements. Following this basic assumption, critical algorithm studies, infrastructure studies, and software studies have started to investigate how social biases in big data, preferences of designers and coders, or imaginations of future users shape digital tools, software, or artificial intelligence. Moreover, innovative methods have been developed not only to analyze, but also to problematize and intervene in software practices. “De-biasing” has become an issue of concern, bringing together computer scientists and social scientists to learn from each other in the attempt to bring fairness, accountability and transparency to the table of software design.
Following this research tradition, the proposed workshop aims to bring together developers, coders, researchers and other contributors working on SUSI.AI to address the following question: “Where is SUSI in the AI?” During the workshop, participants are invited to show and share how SUSI (standing for the social in terms of social biases, user imaginations, gender relations, developers’ own desires, or something else that is important for the SUSI.AI team) actually figures in the design process and how they deal with SUSI, or hope to deal with SUSI, in the future. While the workshop mainly invites contributors working on SUSI.AI, it is open to developers working on similar AI projects as well.
If you’re up for experimenting with a method using memory work before and during the workshop, please check out my next blog post! To be continued.. 😉