AI has supercharged nearly everything. We tend to think of it as unleashed and unregulated, yet quite a few regulations already exist. Even though most development and implementation happens in the US, the EU is ahead on the regulatory front. The successful implementation of AI will require a dual effort: self-regulation by the industry and outside regulation from lawmakers and legislators.
ChatGPT is the “productised” version of AI: the public-facing result of something much deeper, more sophisticated, and more abstract. AI nonetheless has the potential to democratise the industry at the individual level and, by allowing all researchers to improve their skills, to raise the general baseline. We are not yet at the point where the top performers are lifted as well.
There is also an intergenerational gap: younger generations will learn to leverage AI far more naturally than the older population, and they will discover new applications. However, they will need to develop the critical thinking to question AI's results; otherwise, no one will be able to check and challenge what AI produces.
We will need to develop appropriate certification models on several fronts. First, for the models themselves: to ensure they do no harm, are used responsibly, and are constructed properly. Beyond that, humans will start generating their own certifications, to prove they are talking to other humans, to validate their persona when responding to questionnaires, or simply when interacting with each other online. Certification models can be hugely influential in steering the development of AI in a thoughtful direction.
The next US campaign is set to be massively influenced by bots, deep fakes, and the like, more so than the last two. This explosion will only exacerbate social division, muddying both the information people receive and their ability to reason about it. Imagine a reality in which a single person can generate millions of synthetic virtual entities that amplify their extreme political views and distort polling. The effects could be disastrous.
We need protection on two fronts: first, protecting the industry from the most harmful potential effects of AI; second, protecting the industry at large from sweeping regulations imposed from above that may damage it.
Yet AI also has the potential to create a better society: to increase output per person, shorten the working week, and generate more leisure time. It is up to us to guide it in that direction.