50 internet myths

Today the Internet Governance Forum started in Berlin. As part of this huge event, the edited volume Busted! The Truth About the 50 Most Common Internet Myths will be launched. This wonderful volume – edited by Matthias Kettemann & Stephan Dreyer – is a compilation of common Internet myths and their deconstructions. Here is the link to the whole book: https://internetmythen.de (English and German; including summaries in all five UN languages). Enjoy!!

I’ve contributed Myth #19: Search engines provide objective results:

Myth: Search engines deliver objective search results. This is the founding myth of the leading search engine in the Western world: Google. Twenty years later, this founding myth still features in Google’s company philosophy. More importantly, however, it resonates in people’s minds. Without knowing how the search engine actually works, many users assume that the best websites appear at the top of the results.

Busted: In 1998, the founding year of Google, Sergey Brin and Larry Page described their search engine’s central aim as follows: “The primary goal is to provide high quality search results over a rapidly growing World Wide Web” (Brin and Page 1998: 115). Accordingly, the notions “quality” and “search quality” feature over 30 times in their research paper. The authors depict the PageRank algorithm – which originally used the number and quality of hyperlinks a website receives, anchor text and proximity to determine the quality of a website and rank it accordingly – as their main competitive advantage. They describe the algorithm as an “objective measure” corresponding well to “people’s subjective idea of importance” (Brin and Page 1998: 109). Interestingly, this indeed seems to be the case. When I asked people in the context of my PhD project why they use Google to find online health information, most answered that Google delivered the best search results, implicitly framing the search engine as a tool for quality assurance. Without knowing – or even thinking about – how the search engine actually works, people reproduced Google’s founding myth in their stories.
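The core idea of the 1998 PageRank recursion can be sketched in a few lines: a page’s importance is the importance passed on to it by the pages linking to it. This is a minimal illustrative sketch, not Google’s production code; the toy link graph is invented, and the damping factor of 0.85 follows the value discussed in the original paper:

```python
# Minimal PageRank power iteration in the spirit of Brin & Page (1998).
# A page distributes its score evenly across its outgoing links; the
# damping factor d = 0.85 models a "random surfer" who occasionally
# jumps to an arbitrary page. The toy graph below is purely illustrative.

def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with a uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - d) / n for p in pages}  # teleportation share
        for page, outgoing in links.items():
            if not outgoing:                    # dangling page: spread rank evenly
                for p in pages:
                    new_rank[p] += d * rank[page] / n
            else:
                for target in outgoing:
                    new_rank[target] += d * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Toy web: both B and C link to A, so A ends up with the highest rank.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["A"]})
```

The sketch makes the “objective measure” tangible: the ranking follows mechanically from the link structure, yet which links exist in the first place is, of course, a social question.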

But it is a myth. Search engines are not neutral, objective technologies; rather, they are tightly intertwined with societal norms, values and ideologies, most importantly the capitalist ideology. Over the past decades, Google’s “techno-fundamentalist” ideology of neutral ranking has been aligned with and overshadowed by non-objective considerations. New media scholars started to deconstruct the myth of objectivity soon after the search engine’s successful market entry. At first, they challenged the PageRank algorithm by arguing that it threatens the democratic ideal of the web (#28) by systematically preferring big, well-connected, often commercial websites at the expense of smaller ones. Later, they turned to questioning search engines’ business models based on user-targeted advertising, the commercialization of search results, and the privacy issues these trigger. A major criticism in this body of work concerns the ‘consumer profiling’ conducted by Google – and others like Bing – which enables search engines to adjust advertisements to users’ individual interests (#21; #22).

Due to the growing amount of user data these companies acquired, the search algorithm and the “organic” search results changed too. Besides hyperlinks, other factors were incorporated into the measurement of a website’s quality, most notably user profiles and click behaviour, but also the structure of a website, its timeliness, and the number of keywords and the amount of content. Accordingly, new media researchers, and increasingly journalists too, criticized the intensified personalization of search engine results, search engine biases and discrimination. This illustrates that search algorithms are tightly intertwined with the business models their companies rely on. The capitalist ideology is embedded in search engines and “acts through algorithmic logics and computational systems“ (Mager 2014: 32).
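To see why such multi-factor ranking is no longer “objective” in the original PageRank sense, it helps to sketch how several signals, including personalized ones, might be combined into a single score. This is a purely hypothetical illustration: Google’s actual scoring function is proprietary, and all signal names and weights below are invented:

```python
# Hypothetical sketch of multi-signal ranking. Signal names and weights
# are invented for illustration only; the real scoring is proprietary.
# The point: once per-user signals ("clicks") enter the weighted sum,
# two users can receive different rankings for the same website.

WEIGHTS = {"links": 0.4, "clicks": 0.3, "freshness": 0.2, "keywords": 0.1}

def score(page, user, weights=WEIGHTS):
    """Combine link-based, personalized and content signals (each in [0, 1])."""
    signals = {
        "links": page["link_score"],                            # PageRank-style link signal
        "clicks": user["click_affinity"].get(page["id"], 0.0),  # personalization signal
        "freshness": page["freshness"],                         # timeliness of the site
        "keywords": page["keyword_match"],                      # keyword/content match
    }
    return sum(weights[name] * signals[name] for name in weights)
```

In this sketch the choice of weights is exactly where business considerations can enter: nudge the “clicks” weight up and the ranking shifts toward personalized, engagement-driven results.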

Truth: It is important to keep in mind that search engines and their algorithms are not neutral technologies; rather, they incorporate societal values and ideologies, most importantly the capitalist ideology. Only then can we come up with forward-looking governance models that respect local regulations and resonate with human rights (especially in Europe, where data protection is enshrined as a fundamental right).

 


Source: Sergey Brin and Lawrence Page, The anatomy of a large-scale hypertextual Web search engine, Computer Networks and ISDN Systems 30: 107-117 (1998); Astrid Mager, Defining Algorithmic Ideology: Using Ideology Critique to Scrutinize Corporate Search Engines, tripleC 12(1): 28-39 (2014).

intro course STS & digital tech

This is the abstract for my introductory course on Science and Technology Studies, using digital technology as an exemplary case (data, algorithms & prognosis more specifically). I’m already looking forward to heated discussions on social media, AI, self-driving cars, recommender systems and their sociopolitical dimensions and governance implications! (@ the Department of Science and Technology Studies, University of Vienna; in German).

Technik im Alltag am Beispiel von Daten, Algorithmen und Prognosen (Technology in everyday life, using the example of data, algorithms and prognoses)

Search engines, social networks and a multitude of smartphone apps have become indispensable in our everyday lives. They have embedded themselves in our daily practices, while at the same time shaping which information we find, how we communicate across distance, and how we perceive our bodies (think of health apps, for example). But they also raise a series of sociopolitical questions: What do we get to see in search engine results, newsfeeds and online recommendations, and what not? What new forms of bias and discrimination emerge in the process? How can predictions about the future be made on the basis of collected data, and what consequences does this entail? What does the increasing quantification of different spheres of life mean for individuals and society? How can we regulate globally operating technology companies and their business models (keyword: ‘data trading’), and what forms of societal participation are possible in doing so?

In this course we will address these questions on the basis of classic introductory texts from science and technology studies (STS) as well as recent texts from critical new media studies. In each session the course instructor will first present a classic STS concept (social construction of technology, politics of technology, actor-network theory, technology development and gender, participation, etc.) and prepare it for discussion (required reading). Building on this, we will discuss a text from the fields of data, algorithms and prognoses that applies the respective concept (presentation text). This text will be prepared by a group of students, who will then moderate its discussion. In addition, two written assignments will be set, which we will discuss in the seminar. Requirements for obtaining a certificate are attendance, participation, an oral presentation (text discussion or a position in the citizens’ conference), the written assignments, and passing the written final exam. Since the course is largely based on English-language texts, basic English skills are required. The language of instruction is German.

More information can be found at the University of Vienna website.

body data – data body

Together with Katja Mayer I wrote an article about the quantified self, big data and social justice in the health context. The title is “Body data – data body: Tracing ambiguous trajectories of data bodies between empowerment and social control in the context of health” and it has just been published by the wonderful open access journal Momentum Quarterly!! Here is the link to the full text (completely free of charge!). Don’t be put off by the German title and abstract; the article itself is in English, no worries! 😉


Thanks to Leonhard Dobusch and Dennis Tamesberger!! I’m happy to be part of this great Momentum Quarterly editorial team!

Where is SUSI in the AI?

Thanks for your interest and the great response to the FOSSASIA 2019 workshop I advertised in my previous blog post! :) Are you a SUSI.AI developer/contributor? Are you up for an experiment? Would you be willing to write a short piece of text on how the social appears in the technical development of SUSI.AI/your daily work practices? This text should only be half a page to a page long, and you shouldn’t think about it too hard; rather, just find a nice spot (like I did last spring in Berlin, where the picture above was taken) and quickly write down what comes to your mind when you hear the following question:

When and how did you encounter SUSI (standing for the social in terms of social biases, user imaginations, gender relations, your own desires and expectations, or something else that comes to your mind) when developing/contributing to SUSI.AI, and how did you handle SUSI back then?

Please send your memories to me (astrid.mager(at)oeaw.ac.at) so that we can discuss/ work with them during the workshop. Based on these texts we’ll be able to draw out how to (better) handle SUSI in the future, but also how SUSI can be made productive in terms of creating more “open”, “transparent” or “fairer” (AI) technology more generally.

If you don’t find the time to write such a memory, don’t worry! I’d still be happy to see you at the workshop and learn about your ideas on the way SUSI figures in your work and how you usually deal with it!

Remember: The workshop titled “Where is SUSI in the AI?” will take place on Saturday, 16th March, 18-18.55, at the Event Hall 2-1. I’m already looking forward to seeing you there!!! :) Please use this link to sign up for the workshop! Thank you!

If you’re interested in learning more about working with memories in software design, I’d be happy to give you further insights into the method “mind scripting”, which I’ve been toying around with just recently. It’s a method developed by my colleague Doris Allhutter, who created it specifically to investigate (and potentially also to intervene in) software practices.

 

Call for workshop participants @ open tech summit singapore

If you’re a SUSI.AI developer, I’d love to get in touch with you to learn about your work practices and your ideas about SUSI.AI and open source more generally, and to discuss what role the social – in terms of social biases, user imaginations, gender relations, your own desires, or something else that is important to you as a coder – plays in the technical development of SUSI.AI/your own work. I’ve organized a workshop to provide a space for mutual learning and to initiate a dialogue between informatics and the social sciences; an interface I find tremendously important in times of growing social biases, discrimination and surveillance triggered by corporate tech. Please let me know if you’d like to participate in the workshop and what you’re interested in, so I can better prepare it in advance! Also, please spread the word and motivate other SUSI.AI developers to show up! The more participants, the better! 😉 If you don’t have time to participate in the workshop – I’m sure you guys will be busy over there – I’d still be happy to hear from you and find some other opportunity to chat at the summit. It’s going to be my first Asian tech summit, so I’m really looking forward to being there and learning more about your great work!! Thanks also to Michael Christen and Mario Behling for supporting my work so far! I’m of course looking forward to meeting you guys in Singapore too!!! YAY! :)

This ethnographic study on SUSI.AI is part of my ongoing research project “Algorithmic Imaginaries. Visions and values in the shaping of search engines”, funded by the Austrian Science Fund (FWF). A short – somewhat outdated – description of my project can be found at the ITA website. I’m happy to explain it further once we meet, of course!

Here’s the abstract for the workshop titled “Where is SUSI in the AI?” (Saturday, 16th March, 18-18.55, Event Hall 2-1). Please use this link to sign up for the workshop.

There is a long research tradition in the field of science and technology studies (STS) showing the importance of the social in technical design processes. The notion of sociotechnical design practices, for example, stands for the tight entanglement and co-shaping of technical and social elements. Following this basic assumption, critical algorithm studies, infrastructure studies, and software studies have started to investigate how social biases in big data, the preferences of designers and coders, or imaginations of future users shape digital tools, software, or artificial intelligence. Moreover, innovative methods have been developed not only to analyze, but also to problematize and intervene in software practices. “De-biasing” has become an issue of concern, bringing together computer scientists and social scientists to learn from each other in the attempt to bring fairness, accountability and transparency to the table of software design.

Following this research tradition, the proposed workshop tries to bring together developers, coders, researchers and other contributors working on SUSI.AI to address the following question: “Where is SUSI in the AI?” During the workshop, participants are invited to show and share how SUSI (standing for the social in terms of social biases, user imaginations, gender relations, developers’ own desires, or something else that is important to the SUSI.AI team) actually figures in the design process, and how they deal with SUSI or hope to deal with SUSI in the future. While the workshop mainly invites contributors working on SUSI.AI, it is open to developers working on similar AI projects as well.

If you’re up for experimenting with a method using memory work before and during the workshop, please check out my next blog post! To be continued.. 😉

lecture @ technical university of vienna

In January I was kindly invited to give a lecture on my habilitation project “Algorithmic Imaginaries“. The talk was part of the lecture series “Aspects of the Digital Transformation” at the Centre for Informatics and Society (C!S) of the Faculty of Informatics. Thanks a lot to Florian Cech and Hilda Tellioglu for the warm welcome, including fine wine and bread! Thanks also to the audience, who triggered really interesting discussions! You can find the video on the C!S website if you want to watch it (in English):


Suchmaschinen in Europa – europäische Suchmaschinen?

I was invited to write a blog post about my research for the Young Academy blog at the daily newspaper “Der Standard“. Here’s the teaser:

Suchmaschinen in Europa – europäische Suchmaschinen?
Search engines are subject to sociopolitical developments. But what role does Europe play in this?

Enjoy reading the full text here (in German)!

If you want to learn more about all the great members of the Young Academy, check out the summer series of portraits of new YA members. Mine is titled “Kleine Davids gegen Google Goliath“ (“Little Davids against the Google Goliath”). It’s a fine compilation of the interdisciplinary research my young colleagues are doing.


the future is now

That’s the handout of the master course “The future is now. Exploring the role of sociotechnical imaginaries in the making and governing of digital technology” I’m currently teaching at the Department of Science and Technology Studies (University of Vienna). The course is tightly connected to both my research project “Algorithmic Imaginaries” and the special issue “We are on a mission” for New Media & Society, which I’m guest-editing together with Christian Katzenbach. It’s great to go through all these concepts of the imaginary together with my students! Here’s the abstract:

Contents, aims and methods of course

Digital innovations such as artificial intelligence, blockchain technology or the internet of things are driven by imaginaries of future societies. Future imaginaries are enacted to promote digital developments or to legitimize certain modes of internet governance. Software providers, technology companies and legislators dig into the rich pool of cultural norms, visions and values to support (or question) digital tools, rules and regulations. Future prospects seem to be central to making decisions in the present. The future, however, is not only imagined, but also constructed, made and unmade in different constellations and contexts.

This course will focus on the role of sociotechnical imaginaries in the making and governing of digital technology. We will discuss questions such as: How does science fiction contribute to the shaping of future technologies? How do images and metaphors influence public and policy debates on digital technologies? What do sociotechnical imaginaries tell us about the co-production of digital technology and political order? How are cultural norms, visions and values embedded in software design and infrastructure? How can we study sociotechnical design practices and modes of internet governance? To answer these questions we will draw on theories and concepts from science and technology studies (STS) and critical new media studies. Theoretical discussions will be mixed with empirical work (e.g. analysis of a small selection of newspaper articles, online materials, one or two interviews, experiments, etc.), which will lead to a small research project that students will present in class. In their seminar papers, students will individually write an exposé for a research project, which may, but need not, be related to the group work presented in class.