The Changing Landscape of Literacies: Big Data and Algorithms

Carrington, V.

Published Online: September 20, 2018

Abstract

This paper begins with a young British woman – Sophie – and her interpretation of the customized advertising and news she encounters on the social networking and search platforms she accesses via her mobile phone. The paper adopts Sophie as a provocation for identifying and thinking through a range of issues that arise from these new contextual landscapes. To unpack Sophie’s perceptions and experiences, the paper turns to a framing discussion of the impact and reach of data in contemporary culture and the discourses that have grown up around it. The paper then turns to the challenges posed by this new economic and cultural landscape for the ways in which we approach identity, text and being an effective literate citizen-worker.

Keywords: Data, algorithms, text, critical data literacies, identity

Introduction

Sophie [1] was part-way through our interview about her use of mobile phones when the conversation drifted into her description of the increasing personalization she believed she was experiencing. For 21-year-old Sophie, living in a large regional UK city, this sense of a personal connection was becoming one of the key taken-for-granted elements of her life with digital technologies. For Sophie, the Internet had disappeared from view, overshadowed by the device and the apps running on it. From Sophie’s perspective, these apps communicate directly with her:

I think it tries to be really personal to you. It tries to … almost humanize itself in that way ‘cause it’s kind of trying to speak to you. It’s trying to, you know, have a … trying to communicate in a way.

This personalization ensures that Sophie is surrounded by information and advertising that reflects her existing patterns of behaviour:

And the more you look at certain things, the more – I’m assuming – it works to show you certain things as well. So, I think, especially with Facebook the more you kind of … um .. so, my boyfriend has a band, and you can pay to push the advertisement on it. And, um, if someone likes the same genre band as you’re promoting it will come up on the side of your Facebook, so it will advertised for them to click on. So, you know, those kinds of things are in your face all the time.

While obliquely invoking the power of artificial intelligence (AI) as the app ‘works to show you certain things’ – and by definition, not others – Sophie is in fact describing the actions of the proprietary algorithms underwriting her experience of customization and targeted advertising. Sophie accepts that her everyday life is strongly mediated by her personal digital media and the internet, to the point where she states, “I don’t think there’s such a thing as being offline”. She is clear in articulating her understanding of the ways in which users’ online activity is leveraged to target her with selected advertising and content. It was, however, striking that Sophie was unproblematically accepting of this ‘relationship’. As a critical digital literacies researcher, I found Sophie’s easy acquiescence concerning. I also found that, as a researcher, I did not have a readily available or useful framework for interpreting this behaviour.

In response, this paper takes the form of a small provocation. Borrowing from Nanna Verhoeff (2014), the paper engages with Sophie as a ‘theoretical console’, an instance that defies explanation via existing frameworks. Consequently, the paper explores the context in which her engagements with technologies and data take place and considers their implications for those of us interested in literacy research and education. I take the view that understanding big data – the narratives that surround it and the algorithms that power it – is key to understanding the challenges faced by Sophie and others as citizens able to interpret, leverage and produce key cultural texts.

To explore these issues, the paper begins by briefly outlining the exponential growth of digital data and its far-reaching impact. It then turns to the discursive regimes – the powerful narratives emerging around what is often termed ‘big’ data – that act to naturalize these practices and embed them in the everyday. Finally, the paper argues that as literacy educators and researchers, we need to urgently engage with big data as a key cultural text and narrative, opening critical debates around the ways in which it is collected and used as well as around the ways in which these practices potentially impact the capacity of individuals and groups to participate effectively in their social, civic and economic worlds.

Big Data Landscapes

Each time that Sophie navigates to an online site, uploads or comments on a photo, uses a map, views a video, makes a purchase, searches, makes a keystroke, she becomes a set of data points in the algorithmic currents of what is being called ‘big data’. As Shaw notes (2014, p. 1), “data now stream from daily life: from phones and credit cards and televisions and computers; from the infrastructure of cities; from sensor-equipped buildings, trains, buses, planes, bridges, and factories”. Every action Sophie takes using the apps on her phone produces information and becomes data, to be harvested and commercialized. There is a lot of data being created and collected. It is not, however, the volume of data that matters. The focal point of ‘big data’ is that new computational and statistical methods allow diverse databases to be stored, linked and analysed – individually and/or collectively (boyd & Crawford 2012). It is this collection and analysis of large and diverse sets of data that forms the architecture underlying the sense of ‘communication’ and personalization described by Sophie in regional England. These pieces of data can be collected from diverse sources, commensurated and analysed, and then sold on to third parties, all without Sophie’s explicit knowledge, input or control. One of the UK’s largest data companies, Read Group, collects and combines “transactional history, lifestyle choices, behavioural insights and geo-demographics” (https://readgroup.co.uk/services/unrivalled-data) and on-sells this information to advertising companies. Users like Sophie are essentially trading their personal data and individual agency for the social connectivity, entertainment and pleasure provided by ‘free’ apps (as well as paid ones) across the range of digital technologies.
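To make this abstraction concrete, the following illustration (written in Python) shows, in deliberately simplified form, how individual actions might be captured as event records, joined on a shared identifier and reduced to a saleable profile. The field names, sources and linking key are invented for this example and do not describe any actual platform’s systems.

    from collections import defaultdict

    # Each action (a search, a 'like', a purchase) is captured as an event record.
    # All field names and sources here are hypothetical.
    events = [
        {"user_id": "u-1093", "source": "search", "item": "gig tickets"},
        {"user_id": "u-1093", "source": "social", "item": "liked: indie band page"},
        {"user_id": "u-1093", "source": "purchase", "item": "vinyl record"},
    ]

    # 'Commensuration': records from different sources are joined on a shared
    # identifier and reduced to a single profile that can be analysed and sold on.
    profile = defaultdict(list)
    for event in events:
        profile[event["user_id"]].append((event["source"], event["item"]))

    # It is the aggregated profile, not the individual action, that circulates.
    print(dict(profile))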

Sophie makes mention of Facebook – of course. The architecture and operation of Facebook uses the data it collects in conjunction with its proprietary algorithms to personalize the experience of its one billion users, building high levels of trust and encouraging the provision of even more personal information (Peters 2012). Essentially, the Facebook [2] business model, like others, relies on providing a ‘free’ platform that facilitates the generation, capture and analysis of user personal data that is sold on to brokers and advertisers for profit. As with other ostensibly ‘free’ apps (Lanier 2018), Facebook users like Sophie are effectively paying for their use of the app – and then some – with the income generated from their personal information and activities. The fascinating aspect of this is not just the version of ‘free’ that has become widely accepted, but that the data generated by users have become understood as a form of capital owned by corporations rather than the product of labour (Ibarra et al 2017). The profits generated as a result of these labours are not used to compensate users, creating a significant disconnect between labourers and the product of their labour, and between individuals and corporations. In part, this disconnect has been enabled by the discourses created around the notion of big data. The rapid growth of data and the algorithms that process and analyse it has been accompanied by a discursive regime that carries significant cultural power (Kitchin 2014; Puschmann & Burgess 2014). Anderson’s (2008) widely circulated description of the ‘petabyte age’ exemplifies the discourses that have often been wrapped around big data:

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behaviour, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

The assumption here is that big data can and should capture everything, that the emerging patterns have innate meaning and, crucially, predictive capacities. There is an explicit claim that this data is more powerful because it is gathered and interpreted without the bias of theory or human interference and therefore captures an objective ‘truth’. This is not the case. The assumption that all data should be harvested is itself highly problematic. Against these claims, boyd and Crawford (2012) present a multi-layered definition of big data as “a cultural, technological, and scholarly phenomenon” that rests on the interplay of technology, analytic power and a mythology that assigns particular patterns of truth (p. 663). Refuting claims to neutrality and truth, Kitchin (2014) argues that data absorbs bias via the selection of location, income, gender, race, ethnicity and education; the technologies and the protocols used to collect the data; the choices about the metadata and variables being created, collected or ignored; and, finally, the capacity of the data to accurately represent the phenomena they are designed to measure (see also Iliadis & Russo 2016). Doing things with and to data requires processing. The vast volumes of data collected are sorted and analysed using a range of mathematical formulas – algorithms. Algorithms are “computer programs, a set of instructions for carrying out procedures step-by-step, and range from quite simple to very complex” (Tufekci 2017, p. 206)[3]. The more complex algorithms work to make a range of complicated and essentially subjective decisions about the data fed into them, often without human oversight or intervention. These ‘gatekeeper’ algorithms make decisions about which news feeds we see, which of our Facebook status updates are visible and which are not, which advertisements we are shown and when, whether or not we meet the profile of a terrorist and find ourselves denied boarding on a flight, and our likelihood of being categorized as a high-risk customer by a health insurer. As Courtland (2018, p. 3) notes, “computer calculations are increasingly being used to steer potentially life-changing decisions, including which people to detain after they have been charged with a crime; which families to investigate for potential child abuse, and – in a trend called ‘predictive policing’ – which neighbourhoods police should focus on”.

Algorithms may be shrouded in the language of mathematics and computing but they remain cultural products. As such, algorithms are not constructed free from bias. The bias of the individuals and groups writing the algorithms enters in quite simple ways: for example, in the value that is attached to a postcode or to regular purchases in a particular type of store. The operations of these algorithms – to which we do not have access – have consequences (Pasquale 2015). These biases impact on people’s lives and futures. Invisible algorithms are used to determine credit ratings, education, access to housing, health insurance, risk assessments, and suitability for employment (Kirchner 2015). As O’Neil (2016, p. 10) notes, “an algorithm processes a slew of statistics and comes up with a probability that a certain person might be a bad hire, a risky borrower, a terrorist, or a miserable teacher. That probability is distilled into a score, which can turn someone’s life upside down”. These are therefore intensely important life issues for individuals and groups.
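The following worked illustration, again hypothetical, shows how such a valuation might operate in code: the features, weights and threshold are invented purely to show where the choices of an algorithm’s designers – and therefore their biases – enter the calculation, and how a probability is distilled into a score that triggers a decision.

    # Hypothetical features, weights and threshold: every number below is a design
    # decision made by whoever writes the algorithm, and that is where bias enters.
    def risk_score(applicant: dict) -> float:
        score = 0.0
        if applicant.get("postcode") in {"X1 1AA", "X2 2BB"}:  # value attached to a postcode
            score += 0.4
        # weight attached to regular purchases in a particular type of store
        score += 0.1 * applicant.get("discount_store_purchases", 0)
        return score

    def decision(applicant: dict, threshold: float = 0.5) -> str:
        # The probability is distilled into a score; the score triggers a decision.
        return "refer for review" if risk_score(applicant) >= threshold else "approve"

    print(decision({"postcode": "X1 1AA", "discount_store_purchases": 2}))  # refer for review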

As this necessarily brief sketch outlines, we find ourselves in an era characterized by the accumulation and storage of massive volumes of data, much of it collected from individuals going about their everyday lives, without compensation for their labour or accountability from the organizations that profit (Dourish 2016). Additionally, the data collected is analysed using unseen algorithms and used to shape the life opportunities of individuals and groups. Some may find this account polemical and perhaps shrill. In response, I would argue that taking this view is necessary if we are to ensure that our young have the skills and knowledge necessary to thrive in this environment. Sophie is, I would argue, illustrative of the ways in which data flow in and through everyday life and of the deep complexities of our relationship with it. We provide the data, are (re)constituted by that data for a range of different audiences, and feel the impact of valuations we have no access to, all normalized by the strong narratives that now exist around big data and the affordances of individual technologies embedded in our everyday lives. Importantly for this paper, all of these actions require text. Data, whether it is gathered from social networking, online transactions or institutions, forms a powerful cultural text (Ozga 2009) that has already been shaped and shaded by the series of choices and practices underpinning its generation, capture and redeployment. It is important for young people to understand that each interaction with data collection and analysis embeds the rules of a new engagement with institutions, corporations, technology and each other. Young and old alike are active on social networking sites and undertake Google searches, and all of these activities require constructing and/or interpreting text.

Key Challenges from These New Landscapes

The cultural narrative giving tacit permission to users to give away their data and, in the process, to give up control of some of the key texts of their lives is powerful and, as we have seen, deeply problematic. There are a number of challenges arising from this changing landscape that should be of concern to literacy researchers and educators. Amongst other outcomes, the collections, aggregations and dis-aggregations of the data supplied by users create what have been termed ‘algorithmic identities’. Cheney-Lippold (2011, p. 165) calls the new selves being constructed around the representation of individuals as simultaneously data and commodity “algorithmic identities”. These identities are formed by comparison with shifting categories of ideal ‘measurable’ types. As critical literacy researchers, a key principle we share is that texts – printed or multimodal or algorithmic – are not neutral; that they have the power to shape the ways in which individuals are ‘seen’ by their communities and that this power can be harnessed by the individual to allow him/her to participate in a variety of fields. There is, consequently, a strong focus on agency in a critical literacies approach. Algorithms, as they now operate, inhibit individual agency and the power to create our own narrative of the self to share with the world around us. Data now algorithmically creates this narrative; we are increasingly “strategically fictionalized” (Cheney-Lippold 2017, Loc 747). These assigned identities are not about you as a person; they are about the patterns of your activities and their interpretation by an algorithm. And yet, these fictionalized identities have real world consequences, influencing, for example, credit ratings, access to health insurance and at what cost, employment, welfare funding and housing opportunities. Scaltsas (2017) identifies what he calls an ‘agency gap’: the gap between who we think we are and who algorithms construct us to be. We have no frameworks or processes for articulating, bridging or critiquing this increasingly important divide.
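A minimal, invented illustration of how such an algorithmic identity might be assigned: a user’s observed activity is compared with pre-defined ‘measurable’ types and the closest match becomes the identity that advertisers and other systems act upon. The categories, signals and matching rule below are fabricated for this example and are not drawn from any actual system.

    # Invented 'measurable' types and signals: the user is assigned to whichever
    # category their observed activity most resembles, regardless of how they
    # would describe themselves.
    measurable_types = {
        "young_parent": {"nappies", "school run", "cbeebies"},
        "fitness_consumer": {"protein", "running shoes", "gym"},
        "music_fan": {"gig tickets", "vinyl", "band pages"},
    }

    def assign_identity(observed_signals: set) -> str:
        overlaps = {label: len(observed_signals & signals)
                    for label, signals in measurable_types.items()}
        return max(overlaps, key=overlaps.get)

    # The label describes patterns of activity, not the person's own account of
    # themselves, yet it is this label that advertisers and institutions act on.
    print(assign_identity({"gig tickets", "vinyl", "running shoes"}))  # music_fan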

While it may be tempting to believe that young people are not included in these data captures because they are less likely to be actively involved in economic transactions online (a dubious hope) or less likely to have credit cards in their own names, young and old are active on social networking sites, on blogs, and in using search engines. Young people’s data is scooped up and used to profile them for profit and to feed predictive analytics that impact on potential life pathways and opportunities. Some of these young people are opening their own social media accounts and downloading apps by exaggerating their age; others are included, with or without their knowledge, on their parent or family social media accounts. Regardless, their data is being harvested and processed, and profiles are being constructed. These various ‘algorithmic’ or ‘measurable’ identities include the construction of risk assessments about us, our potential for criminality, and our health, education and/or employment trajectories. These quantitative measurements take no account of the notions of gender, race, class or citizenship (Cheney-Lippold 2017) that form the base of our established socio-political system. While gender, for instance, is a deeply contested term, it is nonetheless a touchstone for a range of key debates and action, informed by a range of views and experiences. To have identity categories such as gender – categories that are central to individual development, community and the trajectory of one’s life – circumvented by the action of algorithms based on unknown snippets of data is deeply problematic for the individual in the short term and for the broader society in the medium and long term. Whether positioned as an individualized psychological process or as part of a sociocultural process, the development and use of effective literate practices is recognized as a key aspect of identity development in contemporary culture.

One of the societal roles of literacy has been to provide shared skills and information and, as a consequence, to enable informed public debate. Many of the proprietary algorithms used by large corporations are working to erode the shared arena that has historically served to facilitate conversation (Tufekci 2017). There is no longer a set of agreed knowledge or information shared by all, and this has real world consequences. Algorithms – as demonstrated by Sophie – ensure that every user’s feed is different and tailored to the algorithmically constructed categorizations attached to them. This creates shifting filter bubbles that feed us the news and opinions the algorithm calculates we already agree with. This is impeding our ability, as individuals and as a society, to engage in debate around key social issues and to accept and respect diverse views, and it more broadly threatens to create deep divisions in our societies. The public sphere available for discussion and debate is being fractured (see for example http://www.people-press.org/2017/10/05/1-partisan-divides-over-political-values-widen/1_5-15/). This may seem a polemical stance, but we have only to look to the Cambridge Analytica scandal around the UK Leave campaign and the Trump election campaign to see the way data has been leveraged and the impact of this incursion on our societies and democratic processes.
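The logic of a filter bubble can be expressed in a few lines. In the hypothetical sketch below, candidate stories are simply ranked by a predicted-agreement score, so users with different scores are shown different feeds; the items and scores are invented for illustration and stand in for whatever engagement signals a real platform might use.

    # Invented items and predicted-agreement scores: each user's feed is ordered
    # by how likely the system calculates they are to agree with a story.
    def rank_feed(items, predicted_agreement):
        return sorted(items, key=lambda item: predicted_agreement[item], reverse=True)

    candidate_items = ["story A", "story B", "story C"]
    user_one = {"story A": 0.91, "story B": 0.35, "story C": 0.72}
    user_two = {"story A": 0.20, "story B": 0.88, "story C": 0.40}

    # Two users see two different orderings of the 'same' news, eroding the
    # shared pool of information on which public debate depends.
    print(rank_feed(candidate_items, user_one))  # ['story A', 'story C', 'story B']
    print(rank_feed(candidate_items, user_two))  # ['story B', 'story C', 'story A']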

For critical literacy teachers and researchers, this is concerning not just because of the narratives and issues around data collection and analysis, but because data has become understood as a form of capital detached from the labour that produces it (Ibarra et al 2017). This ‘data as capital’ model “treats data as natural exhaust from consumption to be collected by firms” (Ibarra et al 2017, p. 2). This positioning allows corporations and politicians to present online services as free. In exchange for the provision of this ‘free’ service, corporations are then free to reap the capital gains of the data and of on-going surveillance. The labour of users in producing data is decoupled from the profit that same data returns. It may seem to some that the work associated with using social media, taking part in the quizzes, games or surveys that permeate social media platforms, watching videos or shopping online does not equate to ‘labour’. However, labour is at the heart of capitalism. As Fuchs reminds us, labour is a commodity, “every second of labour costs money”, and this is the “reason why capital has the interest to make workers work as long as possible for as little wages as possible and to make them labour as intensively as possible so that the highest possible profit (which is the outcome of unpaid labour time) can be achieved” (2014, p. 6). Enticing users to labour unpaid is the road to profit. Understanding the generation of data as labour requires acknowledging users of apps and social networking sites as producers and owners of a commodity created by their labour. At the time of the interview, Sophie was providing her labour without compensation and was not receiving a share of the profits generated via the data she produced. Perhaps she was labouring to produce data in exchange for what she perceived to be free services, alongside the customization and the seductive feeling of connection. These, however, are not compelling reasons. The creation of text and its deployment is a form of labour. The creation of data is a form of labour. Young people who are already engaged in this form of digital labour may well find themselves performing labour in the gig economy. These young people need to be critically aware of the labour-capital relationships in which they are involved and to have the knowledge and skills to avoid exploitation. Young people – all people – increasingly need to be able to “manage the trade in and uses of information about us” (Richards & King 2014, p. 412). Partly this control should come from updated regulatory systems that protect the ownership of data by the individuals who create it and from an understanding of new forms of labour, but it should also come from a critical data literacy that is enacted by individuals on their own behalf.

Moving Forward: Critical Engagement with Data

The issues arising from the collection of data from user activity outlined above – the lack of critical engagement by users; the power differential between user and app company/data use, and between the individual and the predictive model created by algorithms; the shift of identity construction and its deployment to a computerized algorithm; the disconnect between labourer and the product of labour, alongside the disconnect between user and his/her data – are all taking place in the shadow of narratives of big data that work to obscure problematic economic, equity and identity practices. At the same time, the gap between how we understand ourselves to be as individuals or groups and how we are constructed by algorithms – the agency gap (Scaltsas 2017) – has the potential to significantly impact our ability to engage effectively in a range of economic, political and social fields. Liberal democracies are premised on the notion of a rational, autonomous individual making informed choices (Harari 2018); in the current context, however, the potential for the disenfranchisement of individuals is high, as is the potential for weakening social theory and its application to complex social challenges. The public sphere and the potential for informed debate of key social, cultural and economic issues are weakened by these activities, as is the base of shared knowledge that allows for shared values and practices.

As classroom literacy teachers and researchers, we must ensure our curriculum and pedagogy create and sustain classrooms where information is shared and subject to scrutiny, where the power of community is demonstrated, where the ethics of data and of social structures and the way they operate are unpacked and discussed. We need to focus on building capacities we might call a critical data literacy. These critical and analytic skills can then be turned towards the ways in which we are constructed by data and the ways in which data are produced and circulated. As teachers we will need to collaborate across disciplines to ensure that the principles of a critical data literacy are shared and enacted consistently across the curriculum. We must model effective ways to engage with and challenge the implementation, interpretation and outcomes of algorithmic processes as they impact our profession and our everyday classroom practices. As researchers and teacher-educators we need to be building the understandings and resources necessary to prepare our pre-service teachers for the challenges of a world shaped by big data and algorithms. They are going to require a critical orientation, skills with technology, a concern for ethics and citizen rights and a range of practical theories for framing changes in key social and educational categorizations.

This brings us back, inevitably, to Sophie, sitting at the crossroad of technology, everyday life and data, describing the happy customization that has seeped into her relationship with her smart phone. Sophie’s interview focused predominantly on her use of her smart phone and the ways in which it was embedded in her life, facilitating her engagement in a range of different social spaces. While her sense of identity was linked closely to her use of digital devices and the internet, she did not appear to have considered her own status as an unpaid labourer with an algorithmic identity shaped by data analytics of which she has no knowledge and over which she has no control. While Sophie, a final year university undergraduate, is successfully literate across a range of useful cultural contexts, she is not effectively knowledgeable or literate in relation to the ways in which she is producing data, or in relation to the ways in which that data is being collected, transformed and used to produce textual forms that construct identities that attach to her as an individual and carry increasing power, but over which she has virtually no control. From a critical data literacies perspective, Sophie is not able to read the narratives or the codes that structure her identity and experiences, she does not have the skills with this form of text to produce and deploy it, and even more importantly, she certainly does not understand ‘how it works or who it works for’ (Galloway 2004). In Freirean terms, she cannot ‘read the world’ (Freire 1974). As a direct consequence, she is incapable of drawing together the resources to ‘change’ the world and, just as significantly, does not have the resources to avoid exploitation as a citizen-worker. Drawing on Cheney-Lippold (2017), Sophie is increasingly designed to be a ‘dividual’ floating within a sea of controlled information that limits her ability to question the world as she finds it. The floating nature of Sophie’s self is symptomatic of the fracturing of a public sphere that would allow information and knowledge to be shared, debated and acted upon. This is not the only issue here. The individualization of information, as well as the way that algorithms create inaccessible algorithmic identities to stand in for individual people, weakens the capacity of citizens to work collaboratively to ensure fair and equitable treatment of themselves and others across a range of social, political and economic spaces.

Given the rise of big data to cultural significance and its power to shape our present and our future, Sophie’s story is problematic. Kitchin (2014, p. 127) argued the need for “conversations about the kind of big data worlds we might want to live in”, noting that these discussions remain few and far between. Every citizen needs the opportunity to develop the set of skills and orientations that will enable them to read a society where big data are shaping how we are constructed as individuals and citizens with controlled and often pre-determined life chances. As this paper has argued, these narratives are already deeply embedded and the practices around labour exploitation have gone largely unchallenged. The need for a critical approach to data, and for the practices that enable young people growing up in this landscape to read their world and act effectively upon it – a critical data literacy – is pressing.

References

Anderson, C. (2008). The end of theory: The data deluge makes the scientific method obsolete. Wired. Available: http://archive.wired.com/science/discoveries/magazine/16-07/pb_theory. Accessed 18th March 2015.

boyd, d. (2016). What world are we building? Points. Data Society. Available https://points.datasociety.net/what-world-are-we-building-9978495dd9ad#.8wrz0bidr. Accessed 26th January 2016.

boyd, d. & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.

Cheney-Lippold, J. (2011). A new algorithmic identity: soft biopolitics and the modulation of control. Theory, Culture & Society, 28(6), 164-181.

Cheney-Lippold, J. (2017). We are Data: Algorithms and the making of our digital selves. New York: NYU Press. Kindle Edition.

Courtland, R. (2018). Bias detectives: The researchers striving to make algorithms fair. Nature. Available: https://www.nature.com/articles/d41586-018-05469-3. Accessed 23rd July 2018.

Crawford, K., Miltner, K. & Gray, M. (2014). Critiquing Big Data: Politics, ethics, epistemology. International Journal of Communication, 8, 1663-1672.

Dourish, P. (2016). Algorithms and their others: Algorithmic culture in context. Big Data & Society 3(2): 1–11.

Executive Office of the President (2014). Big Data: Seizing Opportunities, Preserving Values.

Freire, P. (1974). Education for Critical Consciousness. New York: Crossroad Publishing.

Galloway, A. (2004). Protocol: How control exists after decentralization. London: MIT Press.

Harari, Y. (2018). 21 Lessons for the 21st Century. London: Jonathan Cape.

Ibarra, A., Goff, L., Hernández, D., Lanier, J. & Weyl, E. G. (2017). Should we treat data as labor? Moving beyond “free” (December 27, 2017). American Economic Association Papers & Proceedings, Vol. 1, No. 1, 2018. Available at SSRN: https://ssrn.com/abstract=3093683

Iliadis, A. & Russo, F. (2016) Critical data studies: An introduction. Big Data & Society, July-December 2016, pp. 1-7.

Kirchner, L. (2015). When discrimination is baked into algorithms. The Atlantic, September 6, 2015. Available: http://www.theatlantic.com/business/archive/2015/09/discrimination-algorithms-disparate-impact/403969/ Accessed 24th September 2015.

Kitchin, R. (2014). The Data Revolution. London: Sage.

Lanier, J. (2018). Ten arguments for deleting your social media accounts right now. London: Vintage.

O’Neil, C. (2016). Weapons of Math Destruction: How big data increases inequality and threatens democracy. New York: Penguin.

Ozga, J. (2009). Governing education through data in England: from regulation to self-evaluation. Journal of Education Policy, 24(2), 149-162.

Pasquale, F. (2015). The Black Box Society: The secret algorithms that control money and information. Cambridge: Harvard University Press.

Peters, B. (2012). The age of big data. Forbes. July 12, 2012. Available at http://www.forbes.com/sites/bradpeters/2012/07/12/the-age-of-big-data/. Accessed 3rd August, 2015.

Puschmann, C. & Burgess, J. (2014). Metaphors of big data. International Journal of Communication 8, 1690-1709.

Richards, N. & King, J. (2014). Big data ethics. Wake Forest Law Review, 49, 393-432.

Scaltsas, T. (2017). Valuative intelligence. Medienimpulse, 2017(4), 1-11. Available: http://medienimpulse.at/articles/view/1166

Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems, Big Data & Society, July-December, pp. 1-12.

Shaw, J. (2014). Why “Big Data” is a big deal: Information science promises to change the world. Harvard Magazine, March-April, 2014, pp. 1-13. Available at: http://harvardmagazine.com/2014/03/why-big-data-is-a-big-deal. Accessed 3rd August 2015.

Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads. TED Talk. Available: https://www.youtube.com/watch?v=iFTWM7HV2UI&list=PL1b2rTid6VCJaKlUyNTJbEeJ5sleM9jly

Van Rijmenam, M. (2015). How Amazon is leveraging big data. Datafloq, 24th January, 2015. Available at: https://datafloq.com/read/amazon-leveraging-big-data/517. Accessed 3rd August 2015.

Verhoeff, N. (2014). Mobile screens: The visual regime of navigation. Amsterdam: Amsterdam University Press.

Biographical Information

Professor Victoria Carrington writes extensively in the fields of sociology of literacy and education and has a particular interest in the impact of digital technologies on literacy and identity practices both in and out of school. Her research interests in the field of digital technologies and digital cultures have informed much of her work around early adolescents and youth. Her work has drawn attention to issues of text production, identity and literacy practices within the affordances of digital technologies and new media.  

[1] ‘Sophie’ was interviewed as part of an on-going project seeking to understand the impacts of mobile digital devices on young people and their perceptions/experiences of everyday life.

[2] Facebook and Google, their subsidiaries such as Instagram, and other platforms such as Twitter are heavily invested in a business model that relies on user-generated data. An entire ecology of apps works to support this model.

[3] Seaver’s (2017) ethnographic research tracing the development of algorithms suggests that the term ‘algorithm’ is so widely used that it has almost lost any specific meaning. The term, he argues, is used with little understanding of the contexts in which algorithms are developed.

