Jakub Growiec (Photo: private archive)
“We are edging towards the boundaries of our cognitive capability, the barrier being our brain’s cognitive capacity, which cannot grow endlessly. In contrast, long-term trends in total computing capacity show that ‘the sky’s the limit’: it just keeps growing, with no barriers in sight so far,” Jakub Growiec, an SGH Warsaw School of Economics professor and member of NBP staff, tells Obserwator Finansowy.
Obserwator Finansowy: As an economist, you deal with the theory of economic growth. In your research, you employ highly complex mathematical and statistical-econometric analysis tools. In principle, these models serve to describe past and present phenomena, and sometimes to create forecasts looking a few years ahead. Now, we have agreed to talk about a more distant future, which is usually the stuff of literature or film. Do economists have tools to analyse such a distant future and to answer the question of what the world will look like in 20, 30 or 40 years, and what economic growth will be based on then?
Jakub Growiec: Analytical and modelling tools certainly have their limitations. It is also clear that the farther we look into the future, the more speculation there is, and the less hard, concrete data. What we can say with certainty is that if we follow long-term trends, i.e. changes lasting decades, sometimes centuries, we can extrapolate them into the future with a little more confidence than when looking at short-term events that come and go. Short-term developments cannot, of course, be foreseen. No-one foresaw the COVID-19 pandemic; neither did anyone fully predict the previous financial crisis. These are phenomena that appear suddenly. In contrast, long-term trends, e.g. technological ones, are easier to grasp. When making forecasts for 20-30 years ahead, we can only rely on those.
And what is economic growth based on today and how may it differ from the growth we will observe in, for example, the middle of this century?
I think that at the moment we are at a stage when modern technologies have emerged and are developing quickly, especially in highly developed countries. These are in particular digital technologies. Their progress is the key mechanism of growth in the wealthiest countries. But on the other hand, on a global scale, what matters is convergence. It is important for global economic growth that previously less developed countries catch up with the most developed ones. This often happens on the basis of technologies that we have known for some time, but which have not been used on such a large scale before, for example in China or countries of South and South-East Asia. These are the regions that are developing the fastest today, and it is not only due to technological progress or state-of-the-art technologies. Although China is an important player in this area as well.
The digital revolution has not led to a productivity increase, unlike the industrial revolution of the 19th century, which, by mechanising human labour, triggered a quantum leap in productivity. Will such rapid growth finally materialise at the subsequent stages of the digital revolution? What conditions must be met for that to happen?
It is true that the traditionally measured factor productivity growth is not particularly impressive at the moment. This is in quite stark contrast with how fast digital technologies are developing. And indeed, there’s a difference compared to what happened during the industrial revolution.
Economists are divided on that point. On the one hand, there is a camp which claims that the trend we are seeing now, i.e. a slowdown rather than an acceleration in productivity, will stay with us for the next few decades. Then there is the other camp, which claims that the slowdown is temporary and that the digital technologies booming today will finally start to have a positive impact on productivity growth, only at a slightly later stage, because these technologies lead, among other things, to automation and to dramatic changes in what and how we produce. Their performance depends on the computing power of computers, on how quickly information is processed and on how efficient the algorithms employed are. These disruptive changes, which for now can be seen in the achievements in the area of artificial intelligence, are not yet translating into GDP growth, but may well start doing so at a further stage of automation. That is the other camp’s view.
So the breakthrough will happen when automation is maximised?
Such a potential breakthrough could come with the moment of complete automation of a certain process. As long as this process is multi-stage, i.e. it consists, for example, of 10 stages, and we have automated the first, second and third stage, then the pace of production will still be limited by the remaining stages which have not been automated. At those unautomated stages we will be constrained by the way humans work and by how quickly they can pass to the next stage. Automation will make advances, but its impact on productivity will not be as big as it could be if automation were complete.
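The bottleneck logic above can be sketched numerically. The following toy calculation (my own illustration with assumed numbers, not a model from the interview) shows how even a large speedup of most stages barely moves total processing time while one human stage remains:

```python
# Toy illustration (assumed numbers, not data from the interview):
# a 10-stage production process where each automated stage becomes
# ~100x faster. Total time stays bounded by the remaining human
# stages, in the spirit of Amdahl's law.

HUMAN_TIME = 1.0   # hypothetical time units per stage done by a human
SPEEDUP = 100.0    # assumed speedup of an automated stage

def total_time(stages: int, automated: int) -> float:
    """Total processing time when `automated` of `stages` steps are automated."""
    manual = stages - automated
    return manual * HUMAN_TIME + automated * (HUMAN_TIME / SPEEDUP)

# Automating 9 of 10 stages cuts total time only from 10.0 to about 1.09
# units: the single remaining human stage dominates. Full automation
# removes the human bound entirely.
print(total_time(10, 0))             # 10.0
print(round(total_time(10, 9), 2))   # 1.09
print(round(total_time(10, 10), 2))  # 0.1
```

The point is that the productivity gain is roughly 9x with nine stages automated, but jumps to 100x once the last human stage goes, which is why the breakthrough is tied to *complete* automation.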
Does human labour limit productivity?
It seems that when it comes to the digital era and the automation of production processes, it does. In general, it makes sense to distinguish between mechanisation and automation. Mechanisation has been around for a long time, notably since the industrial revolution. It basically involves machines carrying out physical work that was previously performed by people, using energy from the combustion of fossil fuels or other sources. And this is a process that has been going on for over 200 years and which does affect productivity a lot.
Yet now we’re also dealing with automation. This is something different – it is a change in the area of information processing and decision-making. This does not concern the performance of physical actions, but the information layer. The caveat is that this is, in a sense, a different dimension, while GDP is, after all, largely a measure rooted in physical actions. In this sense, fast growth in a different, digital dimension might not necessarily translate one-for-one into growth in the GDP dimension. This is one reservation. The other reservation is that until computers with large computing power arrived, it was always man who was responsible for data analysis and decision-making. Let’s be clear about it – we are gradually moving towards the frontier of our cognitive capacity, i.e. more and more people have higher education, and we are also acquiring this education more and more efficiently. Yet this is a process that has a limit. This limit is simply the cognitive capability of our brain, which cannot expand forever. In contrast, looking at long-term trends in total computing power, the sky’s the limit – this capacity keeps growing, and there is seemingly no ceiling comparable to the one the cognitive capacity of our brain runs up against. So, yes, in the long term, the cognitive barrier may limit the pace of growth, unless automation is complete.
Man versus automation and artificial intelligence… In a scene from Stanley Kubrick’s 2001: A Space Odyssey participants in a mission to Jupiter ask themselves whether the supercomputer steering the spaceship, apart from its massive computing capacity, has feelings. It turns out it does and it wants to use them in order to dominate man. Ultimately man wins the duel with the computer, or rather, artificial intelligence, but only because a supercomputer can in general be simply disconnected from its power supply. The full automation that you are talking about can put man completely on the margins of the whole production process: man will not have to or even be able to supervise the respective stages or the entire production process. Artificial intelligence can organise the production process by itself. Isn’t this gradually becoming dangerous?
OK, now you are asking a question reaching far beyond an economist’s toolkit, so I will stipulate that what I am about to say is more speculation and science fiction than my research findings. As far as I understand, the scenarios you describe are generally not very realistic. If we have an entity with a higher intelligence than another entity, in general that other entity will not win the clash. That is – we will not pull that power plug, because that more intelligent entity will be smart enough to know that we will want to do it and will do its best to thwart it, or to prevent the unplugging from producing the expected effect.
Yes, it is becoming dangerous and I agree that this is a very important debate. By the way, I highly recommend Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies. It’s a brilliant book which analyses all the dilemmas of this kind wisely and with deep insight.
And what about a specific example of automation provided by self-driving cars? We are probably all fascinated by the idea of letting autonomous cars on the roads, but here a moral problem arises – that of decisions sometimes taken by a human individually and in a split second. Such as whether to run over a dog on the left side of the road or hit a child who runs onto the road on its right side. And whether the traffic rules should be broken at a moment like this… Will artificial intelligence cope with such dilemmas?
OK, here we are talking about a much narrower application of artificial intelligence, which is much less of a threat to humanity than the kind of general superintelligence that we mentioned before. It seems to me that such split-second dilemmas will probably be solved better anyway than by the average human, who panics and makes some random moves, I don’t know, closes his eyes perhaps… I wouldn’t worry about that sort of thing. In general, it seems to me that, potentially, if we did eventually succeed in fine-tuning artificial intelligence to do a great job on the roads, that would be an opportunity for us to cut down dramatically on accidents and road casualties.
In what areas of the economy would you see a possibility of full automation, i.e. reaching a stage at which some processes are fully automated?
What we can say at the moment is which applications are developing the fastest. Then we can consider what possibilities there are to go beyond the barriers, when, for example, one algorithm will cope with an increasing number of simultaneous tasks. The applications we have at the moment are, after all, rather narrow, although their progress is impressive. As an example we can take the self-driving car that you mentioned. It is being developed, but is not sufficiently advanced yet. The same goes for simultaneous translation of texts.
Translations are improving, but they can be quite inaccurate at times; we stumble across simplifications and misinterpretations of meaning. This is hardly welcome.
When a human translates a text, we are not always completely happy with it, either. Sometimes I see a translation and it occurs to me I would have translated it differently. I may be wrong there, though, it’s often a matter of opinion. In the past it was evident that machine translations were downright awful, they were hard to make head or tail of. Now things have definitely changed. With regard to interpreting some sentences that are ambiguous, though – it’s an open question, and I am not convinced that man will always have the upper hand in this area. But it seems to me that even in areas such as artistic creation there is a lot of progress and artificial intelligence has a great role to play.
What scope of human work will not give in easily, or will completely resist automation, even in 20, 30, or 40 years?
I think that in such a horizon there is, after all, a high probability that there will be no general artificial intelligence with higher capabilities than humans. We don’t expect it within this timeframe. Given this, there will indeed be some tasks at which humans will compare relatively favourably with algorithms. We can already see the trend that the jobs automated the fastest are ones that are easy to codify and easy to carry out mechanically, while the most difficult ones to automate are either the creative jobs, requiring extensive knowledge, competences and cooperation in teams, or jobs carried out in relation to another human, e.g. care. On the face of it, these are jobs that do not call for very advanced qualifications. But they require a specific approach that humans are simply equipped with, since we have evolved over the millennia, while machines have been around for a relatively short time.
The increasingly comprehensive automation will certainly have a big impact on societies. Will they become more egalitarian, happier? Or, on the contrary, will inequality and dissatisfaction increase?
I think there are two aspects to this issue. On the one hand, we will see the total benefit for everybody resulting from the fact that the pie to be divided has grown, i.e. we will have an increase in productivity, and better products, giving us more satisfaction. But on the other hand, this trend may indeed increase inequalities, and in particular people who have been doing repetitive, easily automated jobs may have a problem retraining. A large number of people will certainly feel the negative effects of these processes.
Such full automation may affect, at least at the beginning, rather small areas around the world. It is difficult to imagine the automation of certain regions of Africa and Asia in the course of several decades.
There will be geographical differences, but it will also vary greatly across sectors of the economy.
So inequalities will generally increase?
I think it may be the same as with the industrial revolution. That is, in the long term, i.e. in many, many decades and centuries, in the end everybody will benefit. But at the beginning of the industrial revolution inequalities increased a lot. Those who were the first on the bandwagon benefited a lot, especially as there were a lot of parallel processes, e.g. colonialism, which was also driven by a disproportion in productivity resulting from employing machines.
The most able and best educated will migrate even more readily to regions which are centres of technology and automation. Will this be an unequivocally negative development for the peripheries deprived of human capital?
Certainly such clusters crop up, such as Silicon Valley, which attracts the most talented people to move there and do work which will be eventually beneficial to the whole world. The effects will be widely observed both in consumption and production. But in the meantime the disproportions will rise, because the fastest-developing clusters will get far ahead of the rest, leaving them behind. Will these differences ever even out? We don’t know. Comparing the present situation with the past we can see strong convergence today. Despite the large distance to cover, African countries are gradually catching up, although this is mainly happening in the area of industrial era technology.
And man’s expansion beyond Earth, seeking new opportunities for humanity in outer space, can it be an additional source of growth in a few decades? Don’t you deal with things like that as an economist, after all?
For now, this is futurology. I don’t give it much thought. Of course, if we had a technology enabling us to colonise other planets and use the resources and convert the whole process into productive work, it would obviously be highly conducive to growth.
Would such an expansion be similar to that of the 19th century? When capitalism of that time, with centres in Europe and the United States, could grow because it gradually absorbed less developed regions, such as Africa and Asia?
The question is: was it in fact able to develop because it absorbed those regions, or did it absorb them because, thanks to the industrial revolution, it had acquired such a capacity? It is true that one of the issues – but certainly not the only one – was that the energy resources and other resources used in production were more often than not plundered. We would definitely have fewer moral problems, because nobody lives on these other planets, so such predatory exploitation would possibly cause less outrage. Another issue for consideration is what Elon Musk said in the context of flights to Mars – that this is a kind of insurance policy, taken out in case something bad happens to Earth as a whole.
Will civilisation based on a global digital revolution and data stored in the cloud be more or less sensitive to potential shocks and busts? If something happens, we will fly to Mars… History tells us that civilisations have sometimes collapsed quite suddenly, all it took was perhaps a longer period of drought or explosion of a volcano, migration of peoples, changes in trade links. The fall of the Western Roman Empire is an instructive case: it was followed by several centuries of decline in technology and also in art.
Yes, it could also be seen in engineering. What do I think about it? I think the world is increasingly connected and the global information flow is instant. We are more and more mutually dependent; the technology is increasingly advanced. From the point of view of humanity, the risk is rising, while from the perspective of local communities, certain risks have been eliminated. For example, drought in one region used to kill people in that region. This is not so today. There are trade links, and food shortages can be supplemented, so the risks of this kind have diminished.
However, from the point of view of humanity as a whole, the first technology capable of killing us all is nuclear weapons. They have been with us since the Second World War, but fortunately since Hiroshima and Nagasaki these weapons have not been used. There are, of course, other global threats – such as the current COVID-19 pandemic, which is spreading because we have achieved historically remarkable ease of travelling around the world.
Is reality as a field of study more interesting for an economist today, when, as is my impression, we are in the middle of great civilisation changes, or will it be more interesting in a few decades? Or perhaps in a few decades many phenomena will stabilise? Become simple?
If they become simple, that will be possibly a bad sign. I think the complexity will probably increase, like it has done in the past. In my opinion things are interesting now and will be interesting in the future.
But if you ask about the complexity of civilisation, this is exactly the sign of its development. Not only is total GDP rising, but so is GDP per capita. In pre-industrial times the growth of civilisation consisted in there being more and more people on the planet. Today individual people are also, on average, gradually becoming wealthier. But GDP per capita is not the only way to measure the degree of development of a civilisation. It seems to me that in the past the total computing power on Earth was proportional to the number of people. Quite simply, everybody had a brain, which was the only effective way to compute. Today these two things have become decoupled, because we observe fast, exponential growth of the computing power of computers.
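The decoupling described above – biological computing capacity scaling with population while machine capacity grows exponentially – can be made concrete with a toy calculation (my own assumed numbers and doubling rate, not data from the interview):

```python
# Toy illustration (assumed numbers, not data from the interview):
# "biological" computing capacity scales linearly with population,
# while machine capacity doubling every ~2 years (a Moore's-law-style
# assumption) grows exponentially and eventually dwarfs it.

def biological_capacity(population: float, per_brain: float = 1.0) -> float:
    """Total brain-based capacity: proportional to headcount."""
    return population * per_brain

def machine_capacity(years: float, initial: float = 1.0,
                     doubling_years: float = 2.0) -> float:
    """Machine capacity after `years`, doubling every `doubling_years`."""
    return initial * 2 ** (years / doubling_years)

# Over 40 years, population might grow perhaps 1.5x, while capacity
# doubling every 2 years grows 2**20-fold.
print(machine_capacity(40))  # 1048576.0
```

However stylised, this is the arithmetic behind the claim: a quantity tied to population grows at a few per cent a year at most, while anything on a steady doubling schedule outruns it by orders of magnitude within decades.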
How can we measure productivity when the economy is getting automated and it takes increasingly fewer people to service a production process? Does the currently used econometric toolkit enable an accurate description of the automated economy, the economy of the future? Does the digital revolution call for an evolution of the economists’ toolkit?
It definitely needs some evolution on the instrument side. When we think about concepts such as national accounts, they evolved, in fact, out of attempts to measure what was happening in the industrial-type economy. Further back in the past, the key measure of productivity was yield per hectare – it simply made no sense to measure other things. This changed after the industrial revolution. Likewise, in the current times, when technology is developing rapidly, our problem is that, while GDP is an excellent measure, it only encompasses part of the economy. Possibly the value accruing to the end customer will gradually be decoupled from GDP, just as GDP diverged from agricultural production in the past. In the future, an increasingly larger chunk of value will be generated in the so-called digital dimensions, which are not considered part of the conventionally understood GDP. In this sense, while I think the GDP indicator is very important and useful, one can also try to implement more and more precise measures, related, among others, to data and their processing – digital measures of this kind.
Besides typical research work and scientific publications, you also have an account on YouTube, posting videos – a kind of a video blog. Why did you choose this way to tell your story about the world, the economy and the future?
Indeed, I have given it a try. Why? I think this came from the fact that I have explored a certain line of thought, and I have prepared a manuscript of a book I have not yet published. These are things which cannot quite be conveyed through scientific articles, and they are potentially interesting also to people outside academia. The topic is largely what we have talked about: a broad look at the history of human civilisation from the perspective of an economist, i.e. taking into account the technological and economic processes which, I feel, were the driver of development and which enabled humanity to go all this way – from small bands of hunter-gatherers to today’s globalised, technological civilisation.
Jakub Growiec is a Professor at the SGH Warsaw School of Economics and an economic advisor at the Economic Analysis and Research Department of NBP. He is the author and co-author of many scientific publications in economics; winner of, among others, the Scientific Prize of the “Polityka” weekly in social sciences, the “Bank i Kredyt” (scientific journal of NBP) prize and a winner of the National Science Centre Award in 2020. His publications deal with economic growth and technological progress in the long term.
Professor Jakub Growiec expresses his own opinions in the interview and not the official position of NBP.