Starting to read The Telekommunist Manifesto:

The Manifesto covers the political economy of network topologies and cultural production respectively.

Based on an exploration of class conflict in the age of international telecommunications, global migration, and the emergence of the information economy.

The work of Telekommunisten is very much rooted in the free software and free culture communities.

This text is particularly addressed to politically motivated artists, hackers and activists.

^ I’m sure it will have its flaws, but can’t deny that it sounds pretty up my street.

A video by Paul Frazee about Beaker Browser.

Paul states some of the goals of Beaker:

  • more software freedom (no code hidden away on a server)
  • lowering the barriers to creating and publishing an app or a website
  • more opportunity
  • having fun – keeping the web individual and diverse

It’s very adjacent to IndieWeb to me. Everyone has their own profile drive, which is kind of like your personal website. All the data is yours – it’s attached to your hyperdrive. Own your data. And apps access the data in your hyperdrive, you don’t send anything to them.

One very nice thing with Beaker: you get your Beaker profile just by running the browser – you don’t need to set up and maintain a server. (No Servers! No Admins!) You also get an easy-to-maintain address book, where you can basically follow other people.

I like the idea of being able to fork apps easily, too. It’s as if you were using Facebook, but you wanted to change part of the interface, and you could, because you have immediate access to the source and can just fork it and tweak it.

I read Hello World – How to Be Human in the Age of the Machine by Hannah Fry. It’s about the increasing pervasiveness of algorithmic decision-making in everyday life, and how far we should rely on these algorithms.

Front cover of the book Hello World.

It’s a really good book – very engagingly written and easy to read, on what could potentially be a pretty dense topic. It’s full of real-world stories to ground the more abstract questions, and it also weaves into that a nice basic overview of what algorithms are, and how the latest crop of machine-learning algorithms work.

So, briefly: very broadly, an algorithm is just a set of step-by-step logical instructions that show, from start to finish, how to do something. Generally, though, the word algorithm is used a bit more specifically: still in some sense a set of step-by-step instructions, but a more mathematical and precisely defined series of steps, and usually one run by a computer.
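To make the ‘step-by-step instructions’ idea concrete, here’s a toy sketch in Python (my own illustration, not from the book) – long division written out as an explicit procedure, where every step is mechanical and could just as well be done by hand:

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Long division as an explicit step-by-step procedure:
    bring down one digit at a time, see how many times the
    divisor fits, note that digit, carry the remainder forward."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)       # bring down the next digit
        quotient_digits.append(remainder // divisor)  # how many times does it fit?
        remainder = remainder % divisor               # carry the rest forward
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

print(long_division(1234, 7))  # (176, 2) – same as 1234 // 7 and 1234 % 7
```

Nothing about the procedure itself changes when a computer runs it; that’s the sense in which it’s an algorithm either way.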

And when people talk about whether algorithms are good or bad, they pretty much always mean decision-making algorithms – something that makes a decision that affects a human in some way. Long division is an algorithm, for example, but it’s not really having any decision-making effect on society. We’re talking more about things like putting things in categories, making ordered lists, finding links between things, and filtering stuff out. These might be ‘rule-based’ expert systems, where the creator programs in a set of rules that the system then executes, or – more recently – machine-learning algorithms, where you train a system on a dataset by reinforcing ‘good’ or ‘bad’ behaviour. With the latter, we can’t always be sure how the algorithm has come to its conclusion.
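As a sketch of the ‘rule-based’ end of that spectrum, here’s a hypothetical hand-written decision procedure (the rules and thresholds are entirely invented for illustration, not from the book). The point is that every decision can be traced back to an explicit rule – which is exactly the inspectability a trained machine-learning model often doesn’t give you:

```python
# A toy 'rule-based' decision-making algorithm: the rules are written
# by hand, so we can always explain why it reached a given decision.
# (Hypothetical rules and thresholds, purely for illustration.)
def flag_loan_application(income: float, debts: float, defaults: int) -> str:
    if defaults > 0:
        return "reject (past default)"             # explicit, inspectable rule
    if debts > 0.5 * income:
        return "refer to human (high debt ratio)"  # another explicit rule
    return "approve"

print(flag_loan_application(30_000, 20_000, 0))  # refer to human (high debt ratio)
```

A machine-learning version of this would learn its decision boundary from historical lending data instead, and the ‘why’ behind any individual decision becomes much harder to recover.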

So what the book is really focused on is the effect our increased use of decision-making algorithms like these is having on things like power, advertising, medicine, crime, justice, cars and transport, basically stuff that makes up the fabric of society, and where we’re starting to outsource these decisions to algorithms.

The book does a really good job of explaining some of the problems in outsourcing those decisions.

One big problem is that we have a tendency to trust a decision made by a computer. But we have to be really aware of the biases in these systems. Part of this is the bigger problem endemic in the tech industry – that it’s dominated by white men with a very limited world view and a particular set of biases. The system is often going to be made in the image of its creator, right?

But aside from that, ML systems can also be biased in that if the data that goes into them is biased, so will the outcomes be. Garbage in, garbage out. And there are a lot of biases and garbage statistics in the world. Say policing disproportionately targets a particular group in arrests, and justice treats them differently in sentencing – then that group is more likely to be targeted by an algorithm trained on existing policing and crime stats. You have to really challenge existing biases, not build them into the system.
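That feedback loop can be sketched in a few lines – a toy model (all numbers invented for illustration) where patrols are allocated in proportion to past arrests, so an initially over-policed district keeps generating more arrest data even though the underlying crime rates are identical:

```python
# Garbage in, garbage out: a toy 'predictive policing' feedback loop.
# Patrols are allocated in proportion to past arrests, so a district
# that was over-policed historically keeps attracting more patrols –
# the algorithm perpetuates the bias baked into its training data.
# (All numbers invented for illustration.)
arrests = {"district_a": 80, "district_b": 20}            # biased historical record
true_crime_rate = {"district_a": 0.5, "district_b": 0.5}  # actual rates are equal

for year in range(3):
    total = sum(arrests.values())
    patrols = {d: n / total for d, n in arrests.items()}  # patrol where arrests were
    # more patrols mean more arrests observed, regardless of the true rate
    for d in arrests:
        arrests[d] += round(100 * patrols[d] * true_crime_rate[d])

print(arrests)  # {'district_a': 200, 'district_b': 50} – the 80/20 skew persists
```

Despite equal true crime rates, district A still ends up with four times the arrests, because the model only ever sees what it was sent out to look for.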

The book is very even-handed, and isn’t a polemic against machine learning by any means. There are plenty of positives, like the image classification of tumours, where ML flags, at great speed, the cases that a pathologist should look at in more detail.

I really liked the conclusion that we should not see machine-learning decision-making as an either/or – as if either we hand everything over to machine learning, or we keep everything human. The book gives the great example of ‘centaur chess’, where a human plays with an artificial intelligence against another human with an artificial intelligence. Interestingly, this is something being championed by Garry Kasparov, who was famously beaten at chess by IBM’s Deep Blue a few decades back. It opens up new possibilities where AI is complementary rather than a replacement.

I think my criticism of the book would be that it doesn’t really challenge the framing of the debate around ML. It lets the current arbiters of ML set the agenda to some degree, so the criticism stays in the details rather than at a higher level. There’s a whole chapter on whether we should have driverless cars or not, for instance, but no mention of whether we should instead be endeavouring to take cars off the road completely. And with regards to things like predictive policing, there’s no questioning of the idea of policing as an institution in the first place, just a question of how we should use algorithms within it. And there isn’t a single mention of climate change, which I found pretty amazing.

But still it does a great job of outlining the positives and pitfalls of decision-making algorithms. I’d recommend it, I’d just like the follow up book to be how we can use them for more liberatory purposes!

Facebook VP of Global Affairs and Communications, Nick Clegg:

We don’t benefit from hate speech… we benefit from positive human connection.

Nick Clegg on CNN

OK Cleggy. Not so sure about that. You will only care about positive human connection when it makes you money. I’d suggest that those two things are mutually exclusive.

The architecture of the social network — its algorithmic mandate of engagement over all else, the advantage it gives to divisive and emotionally manipulative content — will always produce more objectionable content at a dizzying scale.

Opinion | Facebook Can’t Be Reformed – The New York Times

Read Data, Compute, Labour

The monopolisation of AI is not just – or even primarily – a data issue. Monopolisation is driven as much by the barriers to entry posed by fixed capital, and the ‘virtuous cycles’ that compute and labour are generating for the AI providers.

Nick Srnicek argues that imbalanced access to fixed capital and labour is as big an issue as access to large datasets when it comes to the big tech monopolies.

economic policy in response to Big Tech must go beyond the fascination with data. If hardware is important too, then opening up data is an ineffective idea at best and a counter-productive idea at worst.

I think the argument being that something like the EU’s data strategy focuses too much on the data itself, and neglects the hardware, capital and labour needed to do useful things with that data.

It could simply mean that the tech giants get access to even more free data – while everyone else trains their open data on Amazon’s servers.

Data, Compute, Labour | Ada Lovelace Institute

Liked Reading on the Nova2 by Ton Zijlstra

I have now read several non-fiction books on my Nova2 reader. This is a marked improvement from before. I dislike reading non-fiction on my Kindle. Part of it is in the slightly bigger screen of the Nova2, and easier flipping back and forth between parts of a book. Part of it is that it’s a separa…