Here’s the link to the video documentary of last year’s Ars Electronica event “How to become a high-tech anti-discrimination activist collective”, which was co-organized with the Johannes Kepler University in Linz (A). My colleague Doris Allhutter and I each organized a workshop as part of this bigger event. My workshop was concerned with AI technology and open source alternatives: “How to create your own AI device with SUSI.AI – An Open Source Platform for Conversational Web”. It was a great opportunity to catch up with my co-workshop leader Hong Phuc Dang from SUSI.AI (one of the case studies of my current search engine project; see research). Many thanks to the organizers!! It was fun!
This year I have the pleasure of giving a workshop together with Hong Phuc Dang as part of the Ars Electronica Festival 2020. The title is How to create your own AI device with SUSI.AI – An Open Source Platform for Conversational Web and it’s part of an overall event Waltraud Ernst and colleagues from the University of Linz have organized. The whole event deals with bias and discrimination in algorithmic systems: “How to become a high-tech anti-discrimination activist collective”, with awesome keynotes by Lisa Nakamura and Safiya Noble; more info can be found on the ARS/ Uni Linz website. There you can also register if you’re interested in participating! I’m already looking forward to this event!!!
Thanks for your interest and great response to the FOSSASIA 2019 workshop I advertised in my previous blog post! Are you a SUSI.AI developer/ contributor? Are you up for an experiment? Would you be willing to write a short piece of text on how the social appears in the technical development of SUSI.AI/ your daily work practices? This text should only be half a page to a page long and you shouldn’t think about it too hard; rather, just find a nice spot (like I did last spring in Berlin, where the picture above was taken) and quickly write down what comes to your mind when you hear the following question:
When and how did you encounter SUSI (standing for the social in terms of social biases, user imaginations, gender relations, your own desires and expectations, or something else that comes to your mind…) when developing/ contributing to SUSI.AI, and how did you handle SUSI back then?
Please send your memories to me (astrid.mager(at)oeaw.ac.at) so that we can discuss and work with them during the workshop. Based on these texts we’ll be able to draw out how to (better) handle SUSI in the future, but also how SUSI can be made productive in terms of creating more “open”, “transparent” or “fair” (AI) technology more generally.
If you don’t find the time to write such a memory, don’t worry! I’d still be happy to see you at the workshop and learn about your ideas on the way SUSI figures in your work and how you usually deal with it!
Remember: The workshop titled “Where is SUSI in the AI?” will take place on Saturday, 16th March, 18-18.55, at the Event Hall 2-1. I’m already looking forward to seeing you there!!! Please use this link to sign up for the workshop! Thank you!
If you’re interested in learning more about working with memories in software design, I’d be happy to give you further insights into the method of “mind scripting” I’ve recently been toying around with. It’s a method developed by my colleague Doris Allhutter, who created it specifically to investigate (and potentially also to intervene in) software practices.
If you’re a SUSI.AI developer, I’d love to get in touch with you to learn about your work practices and your ideas about SUSI.AI and open source more generally, and to discuss what role the social – in terms of social biases, user imaginations, gender relations, your own desires, or something else that is important for you as a coder – plays in the technical development of SUSI.AI/ your own work. I’ve organized a workshop to provide a space for mutual learning experiences and to initiate a dialogue between informatics and the social sciences; an interface I find tremendously important in times of growing social biases, discrimination and surveillance triggered by corporate tech. Please let me know if you’d like to participate in the workshop and what you’re interested in, so I can better prepare it in advance! Also, please spread the word and motivate other SUSI.AI developers to show up! The more participants, the better! 😉 If you don’t have time to participate in the workshop – I’m sure you guys will be busy over there – I’d still be happy to hear from you and find some other opportunity to chat at the summit. It’s going to be my first Asian tech summit, so I’m really looking forward to being there and learning more about your great work!! Thanks also to Michael Christen and Mario Behling for supporting my work so far! I’m of course looking forward to meeting you guys in Singapore too!!! YAY!
This ethnographic study on SUSI.AI is part of my ongoing research project “Algorithmic Imaginaries. Visions and values in the shaping of search engines”, funded by the Austrian Science Fund (FWF). A short – slightly outdated – description of my project can be found at the ITA website. I’m happy to explain it further once we meet, of course!
Here’s the abstract for the workshop titled “Where is SUSI in the AI?” (Saturday 16th March, 18-18.55, Event Hall 2-1). Please use this link to sign up for the workshop.
There is a long research tradition in the field of science and technology studies (STS) showing the importance of the social in technical design processes. The notion of sociotechnical design practices, for example, stands for tight entanglements and co-shaping processes of technical and social elements. Following this basic assumption, critical algorithm studies, infrastructure studies, and software studies have started to investigate how social biases in big data, preferences of designers and coders, or imaginations of future users shape digital tools, software, or artificial intelligence. Moreover, innovative methods have been developed not only to analyze, but also to problematize and intervene in software practices. “De-biasing” has become an issue of concern, bringing together computer scientists and social scientists to learn from each other in the attempt to bring fairness, accountability and transparency to the table of software design.
Following this research tradition, the proposed workshop tries to bring together developers, coders, researchers and other contributors working on SUSI.AI to address the following question: “Where is SUSI in the AI?” During the workshop the participants are invited to show and share how SUSI (standing for the social in terms of social biases, user imaginations, gender relations, developers’ own desires, or something else that is important for the SUSI.AI team) actually figures in the design process, and how they deal with SUSI – or hope to deal with SUSI – in the future. While the workshop mainly invites contributors working on SUSI.AI, it is open to developers working on similar AI projects as well.
If you’re up for experimenting with a method using memory work before and during the workshop, please check out my next blog post! To be continued.. 😉
In January I was kindly invited to give a lecture on my habilitation project “Algorithmic Imaginaries“. This talk was part of the lecture series “Aspects of the Digital Transformation” at the Centre for Informatics and Society (CIS) of the Faculty of Informatics. Thanks a lot to Florian Cech and Hilda Tellioglu for the warm welcome, including fine wine and bread! Thanks also to the audience, who triggered really interesting discussions! You can find the video on the CIS website if you want to watch it (in English):
Suchmaschinen in Europa – europäische Suchmaschinen? (Search engines in Europe – European search engines?)
Search engines are subject to socio-political developments. But what role does Europe play in this?
Enjoy reading the full text here (in German)!
If you want to learn more about all the great members of the Young Academy, check out the summer series of portraits of new YA members. Mine is titled “Kleine Davids gegen Google Goliath“. It’s a fine compilation of the interdisciplinary research my young colleagues are doing.
I’m so (so so so) happy that my project “Algorithmic imaginaries. Visions and values in the shaping of search engines” will finally come true! After a really long application process the Austrian Science Fund (FWF) decided to fund this awesome habilitation project! You’ll find the abstract below; more information will follow once the project has started (November 2016, since I’m still on maternity leave). For all of you who have projects under review (or rejected already): don’t give up! It’s a nerve-wracking process, but if you finally manage to succeed, it’s all worth it!!! (Of course, in times like these peer review has become some sort of strange academic lottery, which does not make the practice any better..)
Visions and values in the shaping of search engines
Search engines like Google are developed in the US-American context, but are used around the globe. Their business models are based on user-targeted advertising. They collect user data, turn it into user profiles, and sell them to advertising clients. Since the NSA affair, practices of user profiling have been critically discussed; especially in European contexts with diverse data protection laws, historically shaped notions of privacy, and very different tax systems. The ongoing reform of the EU data protection legislation is an important arena where tensions between global search engines and European policy visions and values can be observed. Besides, European search engines are emerging that aim to provide users with alternative styles of search. Some are explicitly developed as a European competitor to US-based search engines (Quaero or Independent Web Index). Others are developed in Europe, but draw on other value systems to distinguish themselves from big search engines, such as respecting users’ privacy (e.g. Ixquick), protecting the environment (e.g. Ecosia), or creating a non-commercial search engine owned by the public (e.g. YaCy).
This poses important questions: What motivations, value systems, and visions guide the development of European search engines? How are these imaginations translated into sociotechnical design practices? What power struggles, negotiations, and compromises may be observed? How do place and cultural context matter in the design process? Researchers in Science and Technology Studies (STS) have investigated the politics of search engines, the relevance of algorithms, and internet governance. What is missing is an in-depth analysis of the shaping of search engines in specific cultural contexts and the role shared value systems and visions play in it. Rooted in the discipline of STS, the proposed habilitation project will fill this gap by investigating design practices of European search projects using a case-study approach (qualitative interviews, workshops, ethnographic observations).
Results from this analysis will be compared to and cross-analyzed with results from my past research on capitalist ideologies driving global search engines like Google and my present research on visions and values guiding European search engine governance. This overall analysis will result in a typology of algorithmic imaginaries, which describes visions and values in the development and governance of search engines in global, European, and local contexts. It will show how search technologies and society co-emerge in specific economic, political, and cultural settings. The primary focus on European contexts is a particular strength of the project since tensions between global search engines and European governance structures and search projects are growing, but have not been systematically studied yet, both in the field of STS and internet research.
This week I spent two sunny days in Graz to attend the STS conference “Critical Issues in Science and Technology Studies”. Doris Allhutter and I organized a panel on the “politics of ICTs”, which turned out to be really interesting! Great presentations, great topics, great participants. Also, we discovered quite a number of overlapping issues and shared interests, which is not always the case with regard to conference panels. I particularly liked the presentations on the material/ technological dimension of ideology and gender relations, sociotechnical/ digital work practices and cultural specificities, and questions on power relations in design practices of ICTs. Anne Dippel struggling with computer problems while talking about bugs in the CERN software and how they affect physicists’ work practices was just one highlight of our panel 😉 I still hope Doris and I will manage to put together a special issue on the fascinating co-emergence of social and digital cultures.
The second highlight of the week was the arrival of the Society of the Query Reader (eds René König & Miriam Rasch; Institute of Network Cultures (INC) reader #9). It’s great to see my contribution on big search and its alternatives in such a nicely designed book. Didn’t the conference designers even get an award for the beautiful flyers, badges and stuff? Anyway, the reader is a wonderful compilation of essays on corporate search engines and alternative styles of search. If interested, you can order or download the book for free (!); more information here..
A preprint of my Society of the Query #2 article has been published in the ITA manu:scripts series. The article is related to the talk I gave at the SOTQ conference in Amsterdam in November 2013. It’s concerned with the ideology of Google and alternative search engines. A final version of the paper will be published in the Society of the Query Reader edited by René König and Miriam Rasch (Geert Lovink as editor of the Institute of Network Cultures (INC) Reader series; spring 2014). I’d like to thank the conference participants, Georg Aichholzer as editor of the ITA manu:scripts series, and both the reviewers of the INC reader and the ITA manu:scripts for their helpful comments and feedback. Here’s the abstract:
Google has been blamed for its de facto monopolistic position on the search engine market, its exploitation of user data, its privacy violations, and, most recently, for possible collaborations with the US-American National Security Agency (NSA). However, blaming Google is not enough, as I suggest in this article. Rather than being ready-made, Google and its ‘algorithmic ideology’ are constantly negotiated in society. Drawing on my previous work I show how the ‘new spirit of capitalism’ gets inscribed in Google’s technical Gestalt by way of social practices. Furthermore, I look at alternative search engines through the lens of ideology. Focusing on search projects like DuckDuckGo, Ecosia, YaCy and Wolfram|Alpha I exemplify that there are multiple ideologies at work. There are search engines that carry democratic values, the green ideology, the belief in the commons, and those that subject themselves to the scientific paradigm. In daily practice, however, the capitalist ideology appears to be hegemonic since 1) most users employ Google rather than alternative search engines, 2) a number of small search projects enter strategic alliances with big, commercial players, and 3) choosing a true alternative would require not only awareness and a certain amount of technical know-how, but also effort and patience on the part of users, as I finally discuss.
Here’s the link to the full article. I would love to hear what you think about it!