When will we start to give a damn?

22 May 2023
Author: Finn Raben

Having attended a number of conferences over the past couple of weeks, I have been amazed not only at how ChatGPT has dominated almost all conversations and presentations, but also at the almost total absence of any cautionary note regarding its usage!

There is no doubt about the extensive benefits that careful deployment of ChatGPT can bring to our sector, and there are already numerous initiatives bringing ChatGPT “into” the data and insights process in innovative ways… but what about the “dark side”?

Our profession, our sector, has long positioned itself as the guardian of truth and objectivity.

Yet the industry-wide call that Judith Passingham and Mike Cooke made in Toronto, in September 2022, for a collaborative initiative to set down guidelines for the deployment of AI has gone completely unanswered.

In just the past 3 months, we have witnessed:

  • An open letter from the Future of Life Institute, co-signed by Elon Musk, Steve Wozniak (co-founder of Apple), Stuart Russell (Professor of Computer Science at UC Berkeley) and a number of researchers at DeepMind (more than 1,000 signatures in total), asking that the development of ever more powerful AI systems be paused, as the race to develop AI was now out of control;

  • A public warning from Sam Altman, CEO of OpenAI (the creators of ChatGPT), about the real dangers that generative AI poses in terms of large-scale disinformation, and his call for regulators and society to become involved immediately;

  • The White House meeting with the CEOs of Google, Microsoft, Anthropic and OpenAI to reiterate their “ethical, moral and legal responsibility” to ensure their products are safe and secure, and to stress that the administration would regulate and legislate as necessary;

  • Yuval Noah Harari, writing in The Economist, arguing that if language is the bedrock of all human culture (literature, law, philosophy, economics and so on are all enshrined in language and text), then any system that can manipulate and generate language, without any independent verification, can now “hack” the operating system of our civilisation;

  • Emad Mostaque, founder of Stability AI, questioning in an interview on the BBC: “…if we have agents more capable than us, that we cannot control, that are going across the internet and can achieve a level of automation (hitherto unseen), what does that mean? The worst case scenario is that it proliferates and basically controls humanity?”

The point here is that these warnings are not coming from naysayers or disaffected employees who see their jobs at risk from robots. They are coming from those at the leading edge of AI development, and they share a simple, common theme: if we create computers smarter than ourselves, we cannot predict what will happen next.

The final point is this: even if every initiative that our sector is happily promoting can be shown to be led by “good actors”, we should be in no doubt that an equivalent, if not larger, cohort of “bad actors” is designing ways to use current AI resources for nefarious or criminal ends. What are we doing about that? It is simply not good enough to say it is someone else’s problem.

The data and insights sector has a long-standing tradition and an excellent reputation for leading the ethical debate and for designing the ethical principles behind our work, so as to guarantee citizen and consumer trust in what we do. Our industry also has a role (a responsibility!) to act as the reference point for citizens on what they find acceptable and unacceptable from AI, and yet no one seems interested in this idea either. It is arguably the most important role we could play. To quote Judith from the paper:

“…it is challenging to see how we can position our industry as the voice of the citizen within discussions about the ethical progression of AI, without being demonstrably ethical ourselves.”

Judith Passingham

The call for us to come together and start that work went out last September.

When will we start to give a damn about it?

Finn Raben
Founder at Amplifi Consulting