Stanford’s latest report highlights global AI trends: corporate funding leads to a concentration of AI development, and universities lose out.
The AI Index 2023 Report, published by Stanford University, covers a particularly dynamic phase in the development of artificial intelligence: Over the past year, generative large-scale AI systems for synthesizing text, images, sound, programming code, and spoken language have appeared almost monthly. Remarkably, private investment in AI projects fell over the same period for the first time in ten years, and by a significant amount: around 27 percent.
At the same time, it is striking that AI development is increasingly determined by a small group of financially powerful players from the private technology sector, while other societal actors representing the broader population, along with publicly funded research institutions, currently appear to have little say in shaping it. A concentration of AI development in the hands of Big Tech is emerging, as developing new, ever-larger machine learning models is decidedly costly.
Lion’s share of AI venture capital comes from the U.S.
In terms of private investment in AI startups, U.S. investors raise the lion’s share: more than half of all private investment worldwide is attributable to U.S. capital, followed by China at about a quarter of the U.S. figure. U.S. private investors put up roughly twenty times the private capital raised for AI development in Germany. This does not even take into account the imbalance in the public and government-related sectors.
The AI Index Report is a reference work for the critical examination of AI development worldwide. According to the preface, this year’s edition contains more data and source analysis collected by the editors themselves than previous editions. One of the editors’ goals with the AI Index is to provide “a particularly credible source” of AI data and insights. New chapters cover AI in education and public opinion on AI. AI legislation is also weighted more heavily in the current edition: the AI Index has expanded its observation radius from 25 to 127 countries.
The basis of the current evaluation is a database compiled for 2022. An interdisciplinary group from business and research is behind the report; some caveats about its global scope follow at the end of this article. The editors’ stated aim is to use empirical data to enable the public to critically follow AI’s penetration of everyday life and the economy.
Systems such as GPT-4 and Stable Diffusion in focus
The new generation of AI technology, which is in the process of establishing itself in business and everyday life, is seen as having the potential for disruptive change. The sixth edition of the report, published annually since 2018, offers 386 pages of new evaluations on foundation models, their geopolitical significance and training costs, and the impact of AI systems on the environment. “Foundation model” is a term coined by researchers at Stanford University. It refers to large language models (LLMs) and AI models with multimodal capabilities such as ChatGPT, Stable Diffusion, Whisper, and DALL-E 2.
The Stanford report is chock full of interesting information, insights, and data
Overall, the report is divided into eight chapters on research and development, technical performance, technical AI ethics, the economy, education, governance and statecraft, diversity, and public opinion. The public data used are linked in the report, and an appendix provides additional material. The report is packed with information, insights, and data from external surveys as well as surveys conducted by the AI Index group itself. The figures on the training costs of various models, some of which are not readily available on the Internet, are also informative.
Ten key findings of the AI Index 2023
- Private tech companies overtake universities in developing new ML models
- Performance saturation in benchmarks, foundation models need new benchmarks
- Environmental footprint: AI is both environmentally friendly and environmentally harmful
- Science: AI models accelerate scientific progress
- Misuse of AI models, e.g. through deepfakes, is rising sharply (26-fold since 2012)
- Labor market: increasing demand for professional AI skills (U.S.)
- Private investment in AI down nearly 27 percent in 2022
- Companies that already use AI benefit significantly economically
- Politicians show increasing interest in AI, legislation addresses AI
- Public opinion on AI: Chinese most open-minded, French particularly critical
According to the report, the tech industry has now significantly outpaced academic research in the output of machine learning models. For example, the AI Index for 2022 lists a total of 32 “significant ML models” from commercial vendors versus three with academic origins. Until 2014, most significant advances in the field had still come from university research, it said. Building contemporary (state-of-the-art) artificial intelligence systems requires ever-increasing amounts of data, computing power, and funds. Non-profit projects, crowdsourcing, and academia are increasingly losing out.
Funds and hardware determine competitiveness
Stanford University itself, for example, had to take the demo of its LLM Alpaca offline a few days after its presentation due to a lack of resources; the researchers had built the open-source model on a small budget. The proof of concept remained, though it was prone to hallucinations. A positive counterexample is Stable Diffusion, whose nucleus emerged from academic research by the Computer Vision Group at Heidelberg University (later LMU Munich) together with other partners. Which criteria made an artificial intelligence model “significant” in the editors’ eyes would be material for a separate discussion and can be looked up in the AI Index.
The annual improvements are marginal
The new models have apparently reached a performance plateau; the report speaks of performance saturation on the traditional benchmarks. Although new artificial intelligence systems continue to deliver outstanding results, the year-over-year improvements are marginal. New benchmark suites are presented, such as Google’s BIG-bench and HELM (Holistic Evaluation of Language Models), developed at Stanford for the holistic comparison of large language models.
AI both helps and harms the environment
Also central is the insight that AI both helps and harms the environment. For example, the 2022 training run of the open-source model BLOOM emitted as much CO2 as 25 air travelers on a flight from New York to San Francisco. New models like BCOOLER, on the other hand, show that AI can also be used to optimize energy consumption. Techniques such as sparsity, in which the entire artificial neural network is no longer activated for each input, bring more efficiency and reduce energy consumption (see the sketch below); the report names Google’s PaLM in this context, while Nvidia uses reinforcement learning to improve chip design.
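To illustrate the idea behind sparse activation, here is a minimal, hypothetical sketch of mixture-of-experts-style routing in Python. Everything in it, from the layer sizes to the gating scheme, is an illustrative assumption rather than a detail of PaLM or of the report:

```python
import numpy as np

# Hypothetical mixture-of-experts-style layer: a gating function routes each
# input to only top_k of n_experts weight matrices, so most of the layer's
# parameters stay inactive per input. Sizes and the gating scheme are
# illustrative, not taken from the report or any specific model.

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts)) * 0.02

def sparse_forward(x):
    """Compute the layer output using only the top-k experts for this input."""
    scores = x @ gate_w                    # one routing score per expert
    chosen = np.argsort(scores)[-top_k:]   # indices of the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()               # softmax over the chosen experts only
    # Only top_k of n_experts matrix products actually run: roughly
    # top_k/n_experts of the FLOPs of a dense layer with the same
    # total parameter count.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

y = sparse_forward(rng.standard_normal(d_model))
print(y.shape)  # (64,)
```

Because only top_k of the n_experts weight matrices are touched per input, compute per token scales with the number of active experts rather than with the total parameter count, which is the efficiency argument behind sparse architectures.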
AI accelerates progress in the natural sciences
The AI Index report also registers ambivalences in other areas. AI accelerates scientific progress (key finding 4): it has assisted in hydrogen fusion research, made matrix multiplication more efficient, and helps develop new antibodies in vaccine research. In 2022, for example, DeepMind used deep reinforcement learning to discover an algorithm that no human had yet come up with: the AlphaTensor artificial intelligence system is said to significantly speed up matrix multiplication.
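For context on what speeding up matrix multiplication means here: such algorithms save scalar multiplications, which dominate the cost when a scheme is applied recursively to matrix blocks. The Python sketch below shows Strassen’s classic human-discovered scheme (7 multiplications instead of the naive 8 for a 2x2 product), not AlphaTensor’s own decompositions, which are reported to need even fewer multiplications for certain matrix sizes and arithmetic settings:

```python
import numpy as np

def strassen_2x2(A, B):
    """2x2 matrix product using 7 scalar multiplications (Strassen, 1969)
    instead of the naive 8."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
assert np.allclose(strassen_2x2(A, B), A @ B)
```

Applied recursively to block matrices, saving even one multiplication per step compounds into an asymptotic speedup; finding such decompositions is the search problem AlphaTensor tackles with reinforcement learning.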
On the other hand, the misuse of artificial intelligence is increasing rapidly (key finding 5): by a factor of 26 since 2012. The report cites the independent, open-source AIAAIC repository, which documents incidents of ethical misuse of AI. The database lists a wide range of issues, from deepfakes of celebrities created with Midjourney to fatal Tesla accidents, fraudulent dating algorithms, ethical problems with ChatGPT, and failed facial recognition. One of the most critical incidents in 2022 was a deepfake video that spread a fake surrender message from Ukrainian President Zelensky, intended to demotivate the population and supporters and to sow confusion.
Artificial intelligence increasingly a topic for legislation
Policy makers are increasingly interested in artificial intelligence. For example, an analysis of legislative data on AI from 127 countries found that 37 bills dealt with AI in 2022, up from just one bill in 2016. According to the AI Index, an analysis of parliamentary records from 81 countries shows that the topic of artificial intelligence now appears 6.5 times as often as it did in 2016.
Authoritarian-led countries score conspicuously positively
When it comes to public opinion on new technology and AI products and services (key finding 10), authoritarian-led countries score conspicuously positively, which raises follow-up questions about the survey practice: whether a representative part of the population had its say, by which criteria the respondents were selected, and to what extent they were allowed to express themselves freely. For example, 78 percent of Chinese respondents to an IPSOS survey attributed positive attributes to AI products, similar to survey participants from Saudi Arabia (76 percent). Feedback was also more positive in India (71 percent) than in the United States, where only 35 percent of respondents agreed with the statement that products and services using AI have more advantages than disadvantages.

About the AI Index 2023: human-centered approach
The annual AI report of the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University in California is considered one of the most authoritative sources on global AI development. An interdisciplinary group from academic institutions and industry collaborates to produce it; researchers from Anthropic and Hugging Face, for example, are also involved. The AI Index Report tracks, collects, condenses, and visualizes data related to artificial intelligence. The Steering Committee includes renowned artificial intelligence researchers such as Erik Brynjolfsson (Stanford University) and Jack Clark (Anthropic).
The editors are targeting decision-makers in companies, organizations, and politics, for whom the report is intended to provide a data foundation and guidance on the state of artificial intelligence. The AI Index Report is considered an independent source on the state of international artificial intelligence development, but it has a pronounced focus on the U.S. economy and the social and economic conditions there.
“The AI Index Report is considered an independent source on the state of international AI development, but it has a pronounced focus on the U.S.”
According to the editors, the report focuses on responsible, ethical, and human-centered AI development. For the AI Index 2023, they partnered with several organizations, such as the Center for Security and Emerging Technology (Georgetown University), LinkedIn, NetBase Quid, Lightcast, and McKinsey. Google, OpenAI, the non-profit Open Philanthropy, and the National Science Foundation (NSF), based in the U.S. state of Virginia, are also among the partners. The NSF is an independent U.S. agency that has existed since 1950 and whose mission is to provide financial support for research and education “in all fields of science except medicine.”
Data representative for Europe and Africa?
Various surveys interviewed people in different countries and regions, such as China, Saudi Arabia, India, and Europe, and compared data across regions of the world. Looking at the partner organizations and sponsors of the AI Index Report, however, the practical focus is on U.S. institutions and companies. It is important to keep this in mind when going through the report and transferring its findings to European conditions, for example, especially if the report is to serve as a basis for economic decisions. A quick skim revealed that, apart from Stable Diffusion, no European foundation models appear to have been taken into account. The HELM (Holistic Evaluation of Language Models) comparison, also developed at Stanford by the Center for Research on Foundation Models (CRFM), covers a broader spectrum of foundation models in its current edition than the AI Index 2023 took into account, including some from Europe.
Ambiguities in assigning AI models to countries
For example, the assignment of innovations to the countries of their authors seems unclear: Are nationalities meant, or the country of main residence? Would the work of an emigrated German engineer count as a U.S. innovation? What about projects spread across several countries? That the entire continent of Africa should be devoid of any AI models deemed significant is jarring, but it could also indicate a bias in the database, as there are numerous African artificial intelligence initiatives and strategies, and a notable grassroots AI movement in African countries. What counts as “significant” might well be seen differently in other parts of the world than through the U.S. lens of this survey.