Towards a kind of corporate digital responsibility


We live in divided times.

At the recent CAMSS UK[1] conference in London, we presented our research on the Digital Society, drawing on the survey on Digital Inclusion that we conducted at the end of 2017.

We asked the room: “On a scale of utopian to dystopian, how do you feel about the impact that digital is making on society?”. I was surprised to find that everybody (bar me) placed their mark much closer to the utopian end than the dystopian one.

I should say that we were asking the question of senior IT professionals – people who had spent their careers using technology to enable businesses to perform better. And I should also add that we had earlier had various debates on the “What is Digital?”[2] topic, ranging from the usual “what’s different?” through to the potentially radical changes of the Artificial Intelligence (AI) age.

Just as a reminder: for me, digital is a mindset – “what would the web do?” – not a bucket of technologies. It is NOT cloud, social, mobile and analytics – they are just where it started.

The keynote of CAMSS UK was the incredible Tanmay Bakshi – a 14-year-old AI evangelist, a brilliant, passionate speaker and clearly a sponge for technology capability and expertise. Plenty of others have praised his session. Yet Tanmay and the room were clearly, firmly in the utopian space. AI, and digital, will help us all. Those who fear job losses are wrong. People are happy to be extensively analysed by computers.

I don’t buy it. Alongside the two points that concern me, he made an excellent one – namely that there are always both good and bad uses of technology. Absolutely true. The challenge is to ensure that the good can always mitigate the bad.

So why am I challenging two points? First, job losses. It is undoubtedly true that AI has the power to assist – it will help health professionals make faster and better choices, lawyers retrieve information faster, retailers better understand the effectiveness of their advertising, and more. But by the very nature of this description, AI enables us to save time – to operate faster. And if we are operating faster, we are working at higher productivity; and if we are doing that, then a shareholder business mentality says there are costs to be saved to increase profitability. And that is generally an indicator of job losses.

Let’s go briefly to Brexit. Irrespective of personal position, one might hypothesise, based on current data, that we won’t have enough health professionals or carers to look after an ageing (and growing) population, and we won’t have enough pickers to harvest produce in the fields. In fact, there are skills shortages across a number of areas of industry.

We need AI. We need robotics and automation. I believe that, with big reductions in net migration, they are the only way in which we maintain a functioning society. And in that context, I completely agree with Tanmay that they are a necessity to help us. But let us not pretend there will be no impact on jobs. They will change the nature of the workplace even more than offshoring has – indeed, they will partly replace offshoring, given the decreasing cost differential of onshore versus offshore.

My second challenge to utopia (and AI) returns to how we feel about privacy. Post Facebook/Cambridge Analytica, if we were to repeat our survey now, we would see a downward trend in trust towards big business and its use of personal data. A question was asked at the event: “Do you think that people will be OK with AI-led interpretation of their behaviours, to a degree even greater than perhaps they know themselves?”. The answer given was positive – that people would be fine with it, as it would deliver them personal benefit.

Now this takes us into the whole privacy debate, and the impact that GDPR will make. Our Digital Inclusion survey sits on the fence on AI: it is clearly not yet trusted. Many people expressed concerns about voice assistants listening to them 24/7, or even about simply being given advice on major retail decisions. As society becomes more aware of the insight that organisations gain from your purchases, your posts, your likes, your movements, your browsing history, even the movement of your cursor – will everyone be comfortable with it?

I think we have some guidance from our survey. If a technology clearly gives an individual a personal time benefit, if it improves their personal health, and if their information is trusted to be safe and secure, then on the whole we are happy. So how do we capture this in a way that guides organisations on how best to engage with these technologies for the good of all people, and therefore of society?

We’ve got a proposition.

The final point of our presentation was to introduce the term “Corporate Digital Responsibility” (CDR).

Now, you’ll be more familiar with Corporate Social Responsibility (CSR), which has evolved quickly over the past few years to cover more than community and carbon – the many aspects that show an organisation is doing the right thing for the sustainability of the planet and society.

Corporate Digital Responsibility (CDR) therefore builds on CSR; it is not a replacement for it. How do we encapsulate the right, the ethical, the best outcomes we can create with all the data and technologies that sit under the Digital banner – social media, blockchain, AI and machine learning, and more? How can we commit to a set of principles that demonstrate that our focus is to drive for more positive, more utopian outcomes for society – consciously and deliberately, rather than simply hoping for the best?

Corporate Digital Responsibility is about protecting people’s rights around data (in line with regulation), and about ensuring that trust is maintained because people see that products and services save them personal time, help them with their health, and protect them from less acceptable or threatening uses of those same technologies.

As we presented this, a very valid point came from the audience – in essence, that we were talking about principles ahead of their time. We agree. When we started talking about and exploring the Digital Society 18 months ago, we got lots of quizzical looks and confusion. But that has changed. Bigger names in the industry are saying similar things. The thinking is coalescing and maturing. Our goal is to continue to raise the profile of closing the #DigitalDivide – to ensure that digital inclusion, accessibility, positive outcomes for more of society, increasing trust and increasing personal value are at the forefront of the discussion.

In that regard, we propose to continue measuring, on an annual basis, how society – through people around the world – feels about digital technologies. This will be our guide to how well business and government balance R&D and innovation, regulation and privacy, opportunity and threat over the coming years, during the most technologically disruptive time of our existence.

For those who share similar concerns, whether currently sitting on the utopian or dystopian side of the #digitaldivide, we would welcome your input.





Radix is the radical centre think tank. We welcome all contributions which promote system change, challenge established notions and re-imagine our societies. The views expressed here are those of the individual contributor and not necessarily shared by Radix.
