algorithmic profiling

I’m very happy that our article “Algorithmic Profiling of Job Seekers in Austria: How Austerity Politics Are Made Effective” by Doris Allhutter, Florian Cech, Fabian Fischer, Gabriel Grill and me (from the ITA, TU Wien and the University of Michigan) has now been published!! :) It’s part of the special issue “Critical Data and Algorithm Studies” in the open access journal Frontiers in Big Data, edited by Katja Mayer and Jürgen Pfeffer! Thanks Katja for a speedy review process!! Surprisingly, the article triggered quite some resonance in the academic, but also in the public sphere. Lots of journalists and others got interested in this “first scientific study” of “the AMS algorithm”. Since we’re currently working on an additional study comprising a deeper analysis of our own materials (including our own data inquiry to the AMS), we’re not able to talk much about this paper in public at this specific moment. But new insights will follow by the end of May or mid-June at the latest, so stay posted!!! Here’s the project description of the current study funded by the Arbeiterkammer OÖ.

media coverage

Last year, my work was covered by various media outlets and events. First, the Austrian Academy of Sciences (ÖAW) did a portrait/ interview with me on the way visions and values shape search engines, as part of their series “Forschen für Europa”. This piece included a fancy photo shoot, as you can see here. Second, I was invited to take part in the panel discussion of the ORF Public Value event “Occupy Internet. Der gute Algorithmus” (together with Tom Lohninger from epicenter.works, Matthias Kettemann from the Hans-Bredow-Institut and Franz Manola from the ORF Plattformmanagement). The live discussion took place at the “Radiokulturhaus” and was aired on ORF 3 thereafter. Here you can find the abstract, the press release and the video in case you want to watch the whole discussion. Finally, I was invited as a studio guest to the radio broadcast “Punkt 1” at Ö1, “Das eingefärbte Fenster zur Welt“, where I spoke about alternative search engines and people could phone in or ask questions by email. Talk radio it is! 😉 – all in German.

50 internet myths

Today, the Internet Governance Forum started in Berlin. As part of this huge event, the edited volume Busted! The Truth About the 50 Most Common Internet Myths will be launched. This wonderful volume – edited by Matthias Kettemann & Stephan Dreyer – is a compilation of common Internet myths and their deconstructions. Here is the link to the whole book: https://internetmythen.de (English and German; including summaries in all five UN languages). Enjoy!!

I’ve contributed Myth #19: Search engines provide objective results:

Myth: Search engines deliver objective search results. This is the founding myth of the leading search engine in the Western world: Google. Twenty years later, this founding myth still exists in Google’s company philosophy. More importantly, however, it resonates in people’s minds. Without knowing how the search engine actually works, many users say that the best websites appear on top.

Busted: In 1998, the founding year of Google, Sergey Brin and Larry Page described their search engine’s central aim as follows: “The primary goal is to provide high quality search results over a rapidly growing World Wide Web” (Brin and Page 1998: 115). Accordingly, the notions “quality” and “search quality” feature over 30 times in their research paper. The authors depict the PageRank algorithm – originally using the number and quality of hyperlinks a website gets, along with anchor text and proximity, to determine the quality of a website and rank it accordingly – as their main competitive advantage. They describe the algorithm as an “objective measure” corresponding well to “people’s subjective idea of importance” (Brin and Page 1998: 109). Interestingly, this seems to be the case indeed. When I asked people in the context of my PhD project why they use Google to find online health information, most answered that Google delivered the best search results, implicitly framing the search engine as a tool for quality assurance. Without knowing – or even thinking about – how the search engine actually works, Google’s founding myth was reproduced in people’s stories.
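For the technically curious: the core idea behind the original PageRank – a page is important if important pages link to it – can be sketched in a few lines of Python. This is a toy illustration only, not Google’s actual implementation; it uses the link structure alone (ignoring anchor text and proximity, which the original paper also drew on), and the graph is entirely made up:

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to.

    Classic power iteration: a page's rank is repeatedly redistributed
    along its outgoing links, damped by a 'random surfer' factor.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank everywhere
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}  # random-jump baseline
        for page, outgoing in links.items():
            if not outgoing:
                # dangling page: spread its rank evenly over all pages
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new[target] += share
        rank = new
    return rank

# Hypothetical toy web: "C" is linked to by three pages, "D" by none.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(graph)
```

Running this, “C” ends up with the highest rank: it receives the most inlinks, including links from already well-ranked pages – which is exactly the “objective measure” of importance the myth is built on.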

But it is a myth. Search engines are not neutral, objective technologies; rather, they are tightly intertwined with societal norms, values and ideologies, most importantly the capitalist ideology. Over the past decades, Google’s “techno-fundamentalist” ideology of neutral ranking was aligned with and overshadowed by non-objective considerations. New media scholars started to deconstruct the myth of objectivity soon after the search engine’s successful market entry. At first, they challenged the PageRank algorithm by arguing that it would threaten the democratic ideal of the web (#28) by systematically preferring big, well-connected, often commercial websites at the expense of smaller ones. Later, they turned to questioning search engines’ business models based on user-targeted advertising and the commercialization of search engine results, and the privacy issues these trigger. A major criticism in this body of work concerns the ‘consumer profiling’ conducted by Google – and others like Bing – which enables search engines to adjust advertisements to users’ individual interests. (#21; #22)

Due to the growing amount of user data these companies acquired, the search algorithm and the “organic” search results changed too. Besides hyperlinks, other factors were incorporated into the measurement of a website’s quality, most notably user profiles and click behaviour, but also the structure of a website, timeliness, and the amount of keywords and content. Accordingly, new media researchers, but increasingly also journalists, criticized the intensified personalization of search engine results, search engine biases and discrimination. This illustrates that search algorithms are tightly intertwined with the business models their companies rely on. The capitalist ideology is embedded in search engines and “acts through algorithmic logics and computational systems“ (Mager 2014: 32).

Truth: It is important to keep in mind that search engines and their algorithms are not neutral technologies, but rather incorporate societal values and ideologies, most importantly the capitalist ideology. Only then can we come up with forward-looking governance models respecting local regulations and resonating with human rights (especially in Europe, where data protection is enshrined as a fundamental right).

 


Sources: Sergey Brin and Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, Computer Networks and ISDN Systems 30: 107-117 (1998); Astrid Mager, Defining Algorithmic Ideology: Using Ideology Critique to Scrutinize Corporate Search Engines, tripleC 12(1): 28-39 (2014).

intro course STS & digital tech

This is the abstract for my introductory course on Science and Technology Studies, using digital technology as an exemplary case (data, algorithms & prognosis more specifically). I’m already looking forward to heated discussions on social media, AI, self-driving cars, recommender systems and their sociopolitical dimensions and governance implications! (@ the Department of Science and Technology Studies, University of Vienna; in German).

Technology in everyday life: the example of data, algorithms and predictions

Search engines, social networks and a multitude of apps on our phones have become an integral part of our everyday lives. They have nested themselves into our daily practices, but at the same time they shape which information we find, how we communicate across distance, and how we perceive our bodies, if we think of health apps for example. They also raise a number of sociopolitical questions: What do we get to see in search engine results, newsfeeds and online recommendations, and what not? What new forms of bias and discrimination emerge in the process? How can predictions about the future be made on the basis of collected data, and what consequences come with them? What does the growing quantification of different spheres of life mean for individuals and society? How can we regulate globally operating technology companies and their business models (keyword ‘data trade’), and what forms of public participation are possible in doing so?

We want to address these questions in our course on the basis of classic introductory texts from science and technology studies (STS) as well as current texts from the critical new media studies. In each session, the course instructor will first present a classic STS concept – social construction of technology, politics of technology, actor-network theory, technology development and gender, participation etc – and prepare it for discussion (required reading). Building on this, we will discuss a text from the fields of data, algorithms and predictions that applies the respective concept (presentation reading). This text will be prepared and presented for discussion/ moderated by students in groups. In addition, two written assignments will be given, which we will discuss in the seminar. Requirements for obtaining a certificate are attendance, participation, an oral presentation (text discussion or a position in the citizens’ conference), the written assignments, and passing the written final exam. Since the course is largely based on English-language texts, basic English skills are required. The language of instruction is German.

More information can be found at the University of Vienna website.

body data – data body

Together with Katja Mayer I wrote an article about the quantified self, big data and social justice in the health context. The title is “Body data – data body: Tracing ambiguous trajectories of data bodies between empowerment and social control in the context of health” and it has just recently been published by the wonderful open access journal Momentum Quarterly!! Here is the link to the full text (completely free of charge!)! Don’t be confused by the German title and abstract – the article is in English, no worries! 😉


Thanks to Leonhard Dobusch and Dennis Tamesberger!! I’m happy to be part of this great Momentum Quarterly editorial team!

Where is SUSI in the AI?

Thanks for your interest and great response to the FOSSASIA 2019 workshop I advertised in my previous blog post! :) Are you a SUSI.AI developer/ contributor? Are you up for an experiment? Would you be willing to write a short piece of text on how the social appears in the technical development of SUSI.AI/ your daily work practices? This text should only be half a page to a page long, and you shouldn’t think about it too hard; rather, find a nice spot (like I did last spring in Berlin, where the picture above was taken) and quickly write down what comes to your mind when you hear the following question:

When and how did you encounter SUSI (standing for the social in terms of social biases, user imaginations, gender relations, your own desires and expectations, or something else that comes to your mind..) when developing/ contributing to SUSI.AI and how did you handle SUSI back then?

Please send your memories to me (astrid.mager(at)oeaw.ac.at) so that we can discuss/ work with them during the workshop. Based on these texts we’ll be able to draw out how to (better) handle SUSI in the future, but also how SUSI can be made productive in terms of creating more “open”, “transparent” or “fairer” (AI) technology more generally.

If you don’t find the time to write such a memory, don’t worry! I’d still be happy to see you at the workshop and learn about your ideas on the way SUSI figures in your work and how you usually deal with it!

Remember: The workshop titled “Where is SUSI in the AI?” will take place on Saturday, 16th March, 18-18.55, at the Event Hall 2-1. I’m already looking forward to seeing you there!!! :) Please use this link to sign up for the workshop! Thank you!

If you’re interested in learning more about working with memories in software design, I’d be happy to give you further insights into the method “mind scripting”, which I’ve been toying around with recently. It’s a method developed by my colleague Doris Allhutter, who created it specifically to investigate (and potentially also to intervene in) software practices.

 

Call for workshop participants @ open tech summit singapore

If you’re a SUSI.AI developer I’d love to get in touch with you to learn about your work practices, your ideas about SUSI.AI and open source more generally, and to discuss what role the social – in terms of social biases, user imaginations, gender relations, your own desires, or something else that is important for you as a coder – plays in the technical development of SUSI.AI/ your own work. I’ve organized a workshop to provide a space for mutual learning experiences and to initiate a dialogue between informatics and the social sciences; an interface I find tremendously important in times of the growing social biases, discrimination and surveillance triggered by corporate tech. Please let me know if you’d like to participate in the workshop and what you’re interested in, so I can better prepare it in advance! Also, please spread the word and motivate other SUSI.AI developers to show up! The more participants, the better! 😉 If you don’t have time to participate in the workshop – I’m sure you guys will be busy over there – I’d still be happy to hear from you and find some other opportunity to chat at the summit. It’s going to be my first Asian tech summit, so I’m really looking forward to being there and learning more about your great work!! Thanks also to Michael Christen and Mario Behling for supporting my work so far! I’m of course looking forward to meeting you guys in Singapore too!!! YAY! :)

This ethnographic study on SUSI.AI is part of my ongoing research project “Algorithmic Imaginaries. Visions and values in the shaping of search engines”, funded by the Austrian Science Fund (FWF). A short – slightly outdated – description of my project can be found at the ITA website. I’m happy to explain it further once we meet, of course!

Here’s the abstract for the workshop titled “Where is SUSI in the AI?” (Saturday 16th March, 18-18.55, Event Hall 2-1). Please use this link to sign up for the workshop.

There is a long research tradition in the field of science and technology studies (STS) showing the importance of the social in technical design processes. The notion of sociotechnical design practices, for example, stands for the tight entanglement and co-shaping of technical and social elements. Following this basic assumption, critical algorithm studies, infrastructure studies, and software studies have started to investigate how social biases in big data, the preferences of designers and coders, or imaginations of future users shape digital tools, software, or artificial intelligence. Moreover, innovative methods have been developed not only to analyze, but also to problematize and intervene in software practices. “De-biasing” has become an issue of concern, bringing together computer scientists and social scientists to learn from each other in the attempt to bring fairness, accountability and transparency to the table of software design.

Following this research tradition, the proposed workshop tries to bring together developers, coders, researchers and other contributors working on SUSI.AI to address the following question: “Where is SUSI in the AI?” During the workshop the participants are invited to show and share how SUSI (standing for the social in terms of social biases, user imaginations, gender relations, developers’ own desires, or something else that is important for the SUSI.AI team) actually figures in the design process, and how they deal with SUSI or hope to deal with SUSI in the future. While the workshop mainly invites contributors working on SUSI.AI, it is open to developers working on similar AI projects as well.

If you’re up for experimenting with a method using memory work before and during the workshop, please check out my next blog post! To be continued.. 😉

lecture @ technical university of vienna

In January I was kindly invited to give a lecture on my habilitation project “Algorithmic Imaginaries“. This talk was part of the lecture series “Aspects of the Digital Transformation” at the Centre for Informatics and Society (CIS) of the Faculty of Informatics. Thanks a lot to Florian Cech and Hilda Tellioglu for the warm welcome including fine wine and bread! Thanks also to the audience, who triggered really interesting discussions! You can find the video on the CIS website if you want to watch it (in English):
