I’m very happy that our article “Algorithmic Profiling of Job Seekers in Austria: How Austerity Politics Are Made Effective” by Doris Allhutter, Florian Cech, Fabian Fischer, Gabriel Grill and me (from the ITA, TU Wien, University of Michigan) has now been published!! It’s part of the special issue “Critical Data and Algorithm Studies” in the open access journal Frontiers in Big Data, edited by Katja Mayer and Jürgen Pfeffer! Thanks Katja for a speedy review process!! Surprisingly, the article triggered quite some resonance in the academic, but also in the public sphere. Lots of journalists etc. got interested in this “first scientific study” on “the AMS Algorithm”. Since we’re currently working on an additional study comprising a deeper analysis of our own materials (including our own data inquiry to the AMS), we’re not able to talk much about this paper in public at this specific moment. But new insights will follow by the end of May or mid-June at the latest, so stay posted!!! Here’s the project description of the current study funded by the Arbeiterkammer OÖ.
Last year, my work was covered by various media outlets and events. First, the Austrian Academy of Sciences (ÖAW) did a portrait/interview with me on the way visions and values shape search engines as part of their series “Forschen für Europa”. This piece included a fancy photo shoot, as you can see here. Second, I was invited to take part in the panel discussion of the ORF Public Value event “Occupy Internet. Der gute Algorithmus” (together with Tom Lohninger from epicenter.works, Matthias Kettemann from the Hans-Bredow-Institut and Franz Manola from the ORF Plattformmanagement). The live discussion took place at the “Radiokulturhaus” and was aired on ORF 3 thereafter. Here you can find the abstract, the press release and the video in case you want to watch the whole discussion. Finally, I was invited as a studio guest to the radio broadcast “Punkt 1” at Ö1, “Das eingefärbte Fenster zur Welt“, where I spoke about alternative search engines and people could phone in and ask questions by email. Talk radio it is! 😉 – all in German.
Today, the Internet Governance Forum started in Berlin. As part of this huge event the edited volume “Busted! The Truth About the 50 Most Common Internet Myths“ will be launched. This wonderful volume – edited by Matthias Kettemann & Stephan Dreyer – is a compilation of common Internet myths and their deconstructions. Here is the link to the whole book: https://internetmythen.de (English and German; including summaries in all five UN languages). Enjoy!!
I’ve contributed Myth #19: Search engines provide objective results:
This is the abstract for my introductory course into Science and Technology Studies using digital technology as an exemplary case (data, algorithms & prognosis more specifically). I’m already looking forward to heated discussions on social media, AI, self-driving cars, recommender systems and their sociopolitical dimensions and governance implications! (@ the Department of Science and Technology Studies, University of Vienna; in German).
Technology in Everyday Life: The Example of Data, Algorithms and Predictions
Search engines, social networks and a multitude of smartphone apps have become indispensable in our everyday lives. They have nestled into our daily practices, but at the same time they also shape which information we find, how we communicate across distance, and how we perceive our bodies, if we think of health apps, for example. They also raise a number of sociopolitical questions: What do we get to see in search engine results, newsfeeds and online recommendations, and what not? What new forms of bias and discrimination emerge in the process? How can predictions about the future be generated on the basis of collected data, and what consequences come with that? What does the increasing quantification of different spheres of life mean for individuals and society? How can we regulate globally operating technology companies and their business models (keyword: ‘data trading’), and what forms of societal participation are possible in the process?
We want to address these questions in our course on the basis of classic introductory texts from science and technology studies (STS) as well as current texts from the critical new media studies. In each session, the course instructor will first present a classic STS concept – the social construction of technology, the politics of technology, actor-network theory, technology development and gender, participation, etc. – and prepare it for discussion (required reading). Building on this, we will discuss a text from the fields of data, algorithms and predictions that applies the respective concept (presentation text). This text will be prepared, presented and moderated for discussion by students in groups. In addition, two written assignments will be given, which we will discuss in the seminar. Requirements for obtaining a grade are attendance, participation, an oral presentation (text discussion or a position in the citizens’ conference), the written assignments, and passing the written final exam. Since the course is largely based on English-language texts, basic English skills are required. The language of instruction is German.
More information can be found at the University of Vienna website.
Together with Katja Mayer I wrote an article about quantified self, big data and social justice in the health context. The title is “Body data-data body: Tracing ambiguous trajectories of data bodies between empowerment and social control in the context of health” and it has just recently been published by the wonderful open access journal Momentum Quarterly!! Here is the link to the full text (completely free of charge!)! Don’t be put off by the German title and abstract; the article is in English, no worries! 😉
Thanks for your interest and great response to the FOSSASIA 2019 workshop I advertised in my previous blog post! Are you a SUSI.AI developer/contributor? Are you up for an experiment? Would you be willing to write a short piece of text on how the social appears in the technical development of SUSI.AI/your daily work practices? This text should be only half a page to a page long, and you shouldn’t think about it too hard; rather, just find a nice spot (like I did last spring in Berlin, where the picture above was taken) and quickly write down what comes to your mind when you hear the following question:
When and how did you encounter SUSI (standing for the social in terms of social biases, user imaginations, gender relations, your own desires and expectations, or something else that comes to your mind..) when developing/ contributing to SUSI.AI and how did you handle SUSI back then?
Please send your memories to me (astrid.mager(at)oeaw.ac.at) so that we can discuss/work with them during the workshop. Based on these texts we’ll be able to draw out how to (better) handle SUSI in the future, but also how SUSI can be made productive in terms of creating more “open”, “transparent” or “fair” (AI) technology more generally.
If you don’t find the time to write such a memory, don’t worry! I’d still be happy to see you at the workshop and learn about your ideas on the way SUSI figures in your work and how you usually deal with it!
Remember: The workshop titled “Where is SUSI in the AI?” will take place on Saturday, 16th March, 18-18.55, at the Event Hall 2-1. I’m already looking forward to seeing you there!!! Please use this link to sign up for the workshop! Thank you!
If you’re interested in learning more about working with memories in software design, I’d be happy to give you further insights into the method “mind scripting” I’ve been toying around with just recently. It’s a method developed by my colleague Doris Allhutter, who created it specifically to investigate (and potentially also to intervene in) software practices.
If you’re a SUSI.AI developer I’d love to get in touch with you to learn about your work practices, your ideas about SUSI.AI and open source more generally, and to discuss what role the social – in terms of social biases, user imaginations, gender relations, your own desires, or something else that is important for you as a coder – plays in the technical development of SUSI.AI/your own work. I’ve organized a workshop to provide a space for mutual learning experiences and to initiate a dialogue between informatics and the social sciences; an interface I find tremendously important in times of growing social biases, discrimination and surveillance triggered by corporate tech. Please let me know if you’d like to participate in the workshop and what you’re interested in, so I can better prepare it in advance! Also, please spread the word and motivate other SUSI.AI developers to show up! The more participants, the better! 😉 If you don’t have time to participate in the workshop – I’m sure you guys will be busy over there – I’d still be happy to hear from you and find some other opportunity to chat at the summit. It’s going to be my first Asian tech summit, so I’m really looking forward to being there and learning more about your great work!! Thanks also to Michael Christen and Mario Behling for supporting my work so far! I’m of course looking forward to meeting you guys in Singapore too!!! YAY!
This ethnographic study on SUSI.AI is part of my ongoing research project “Algorithmic Imaginaries. Visions and values in the shaping of search engines”, funded by the Austrian Science Fund (FWF). A short – slightly outdated – description of my project can be found at the ITA website. I’m happy to explain it further once we meet, of course!
Here’s the abstract for the workshop titled “Where is SUSI in the AI?” (Saturday, 16th March, 18-18.55, Event Hall 2-1). Please use this link to sign up for the workshop.
There is a long research tradition in the field of science and technology studies (STS) showing the importance of the social in technical design processes. The notion of “sociotechnical design practices”, for example, stands for the tight entanglements and co-shaping processes of technical and social elements. Following this basic assumption, critical algorithm studies, infrastructure studies, and software studies have started to investigate how social biases in big data, preferences of designers and coders, or imaginations of future users shape digital tools, software, or artificial intelligence. Moreover, innovative methods have been developed not only to analyze, but also to problematize and intervene in software practices. “De-biasing” has become an issue of concern, bringing together computer scientists and social scientists to learn from each other in the attempt to bring fairness, accountability and transparency to the table of software design.
Following this research tradition, the proposed workshop tries to bring together developers, coders, researchers and other contributors working on SUSI.AI to address the following question: “Where is SUSI in the AI?” During the workshop, participants are invited to show and share how SUSI (standing for the social in terms of social biases, user imaginations, gender relations, developers’ own desires, or something else that is important for the SUSI.AI team) actually figures in the design process and how they deal with SUSI/or hope to deal with SUSI in the future. While the workshop mainly invites contributors working on SUSI.AI, it is open to developers working on similar AI projects as well.
If you’re up for experimenting with a method using memory work before and during the workshop, please check out my next blog post! To be continued.. 😉
In January I was kindly invited to give a lecture on my habilitation project “Algorithmic Imaginaries“. This talk was part of the lecture series “Aspects of the Digital Transformation” at the Centre for Informatics and Society (CIS) of the Faculty of Informatics. Thanks a lot to Florian Cech and Hilda Tellioglu for the warm welcome, including fine wine and bread! Thanks also to the audience, who sparked really interesting discussions! You can find the video on the CIS website if you want to watch it (in English):