
This article is part of Artificial Intelligence: Potential and Challenges for Europe

Generative artificial intelligence (GenAI) has been a catalyst for market and regulatory developments. Since the launch of the chatbot ChatGPT in November 2022, the tech industry has frequently released new products and services, ranging from chips, machine learning models, cloud computing and software to answer engines, showcasing the impact of GenAI in fostering competition, innovation and the creation of new markets and technologies.

Nevertheless, running and deploying GenAI requires developers to access three main components in the value chain: computing resources, machine learning models and data (Carugati, 2023b). Despite the presence of various players in the field, certain sectors raise competition concerns because of their market features and potentially problematic practices, such as tying.

Computing resources, including graphics cards and cloud computing, are pivotal in the development and deployment of machine learning models. However, access to these resources is primarily provided by a limited number of tech firms. To secure access, some model developers enter into partnerships with large cloud computing providers, exchanging access to their models for access to cloud computing resources. Although these collaborations might foster competition, they may also give rise to problematic practices such as tying (Carugati, 2023c).

Machine learning models generate output based on input data, such as text, images, videos or music. Despite claims that models are concentrated in the hands of a few large online platforms (Coyle, 2023), there is no substantive evidence supporting this assertion. On the contrary, thousands of closed-source and open-source models compete on various parameters like task requirements, language specifications and model size.

Data, another essential component, raises concerns about a potential competitive advantage for models provided by large online platforms with access to vast proprietary datasets (Riekeles and von Thun, 2023). However, as in the case of models, there is no substantive evidence supporting this concern. Model developers can leverage thousands of public and proprietary datasets, differentiated based on task requirements, language specifications and domain specificity.

Furthermore, GenAI developments grapple with unsolved regulatory challenges related to the use of copyrighted data (intellectual property rights), personal data (data protection), AI risks (AI governance) and competition.

Competition authorities around the globe, including those in the United Kingdom,1 Portugal,2 India,3 Hungary,4 Europe,5 the United States,6 and soon France,7 are closely monitoring GenAI developments.

In this context, GenAI poses challenges for competition authorities due to emerging markets and technologies coupled with regulatory instabilities. New products and services are shaping new and existing markets, like answer engines and advertising. Regulatory uncertainties are influencing competition in GenAI.

At the current developmental stage, competition authorities should focus on understanding market and regulatory developments through cooperation among themselves and relevant competent authorities. They should exercise formal enforcement powers and potentially update competition tools only when necessary and justified, guided by the insights gained from these market studies.

Emerging markets and technologies

GenAI requires developers to have access to three main components. As mentioned above, this value chain includes computing resources, machine learning models and data (Carugati, 2023b). Then, application developers integrate GenAI into their products and services.

Computing resources

Computing resources enable the development and deployment of machine learning models. The two main components are graphics cards, for computation and AI workloads, and cloud computing, for running and deploying models at scale over the internet.

The graphics card sector is the main driver of running models. However, it faces card shortages due to high demand and low supply of components (Carugati, 2023c). Nvidia is the leading supplier in this sector, especially of graphics processing units (GPUs), which perform many computations simultaneously. Competition authorities in France, Europe, the United States and China are currently investigating Nvidia’s business practices (Nvidia, 2023).

However, new players have entered the graphics card market, challenging Nvidia’s position. Advanced Micro Devices (AMD) and Intel have announced graphics cards dedicated to AI (Intel, 2023; AMD, 2023). Meta, Amazon and Alphabet have also developed in-house chips to improve AI workloads.8 However, the extent to which chips from new entrants and in-house chips exert competitive pressure on Nvidia deserves in-depth scrutiny and is thus beyond the scope of this paper.

The cloud computing sector provides essential infrastructure for deploying models (Carugati, 2023c). Cloud computing providers and model developers nurture a close, interdependent relationship. Model developers need cloud computing providers to run and deploy their models at scale without investing in the infrastructure. In turn, cloud providers see model developers as a business driver. Accordingly, some cloud providers have established partnerships with model developers (see, e.g. Microsoft Corporate Blogs, 2023).

Partnerships take various forms. Some are exclusive, like that between Microsoft and OpenAI, while others are non-exclusive, like that between Amazon and Anthropic (Anthropic, 2023). The partnership generally leads the cloud computing provider to invest in cloud infrastructure. Some providers even develop infrastructure dedicated to the partner’s needs, as Microsoft did by building a specific computer to run OpenAI models (Langston, 2020). In exchange, the cloud provider can host the partner’s models, exclusively or non-exclusively, on its cloud service and use them in related services. For instance, Microsoft exclusively hosts OpenAI models on its Microsoft Azure cloud and uses OpenAI models in its services, including Office 365, Edge, Bing and Windows.

The cloud sector is competitive, with several global, regional and national players, including Microsoft, Amazon, Google, OVH, Orange and Scaleway. However, the sector is under intense scrutiny by competition authorities worldwide, including in South Korea, the Netherlands, Japan, France, the United Kingdom, the United States and Spain (Carugati, 2024). They are concerned about a trend towards concentration in the hands of a few global hyperscalers, namely Amazon, Microsoft and Google, owing to their scale and investment capabilities. Those hyperscalers are also the main partners of model developers. As demand for GenAI increases, these partnerships might intensify the trend towards concentration. Authorities are also concerned about potential competition issues arising from barriers to switching, like data transfer fees, software licensing practices and limited interoperability, which make it more difficult for a customer to change cloud provider. Partnerships with hyperscalers might raise additional competition issues stemming from vertical integration, like tying and self-preferencing.

Machine learning models

Machine learning models derive output from input data, such as text, images, videos or music. Models are either closed-source or open-source. Developers of closed-source models might license the use of their models to third parties, allowing them to develop commercial applications. By contrast, developers of open-source models make them publicly available for free for research and/or commercial use. They might release various model elements, including the model and its training data. This enables third-party developers to modify the model. While some developers might improve the model, others might revise it for malicious use (OECD, 2023).

In addition, model developers compete on various factors, including task requirements, language specifications and model size.

First, models are differentiated by their intended output. A non-exhaustive list includes text-to-text models (e.g. OpenAI GPT, Google PaLM, Anthropic Claude), text-to-image models (e.g. OpenAI DALL-E, Google Imagen, Adobe Firefly, Midjourney), text-to-video models (e.g. Runway Gen-2, Meta Make-A-Video) and text-to-music models (e.g. Google MusicLM, Meta MusicGen, Stability AI Stable Audio). As of January 2024, the community website Hugging Face counts more than 477,000 open-source models in its model repository.9

Second, the language of the training dataset is an important quality parameter of the output. If the training dataset does not contain a given language, the model might produce poor results in that language, given the difficulty of deriving output from little or no input data. To address this issue, model developers created multilingual models (e.g. OpenAI GPT) by training them on input data containing various languages, including English, French, German and Spanish. Others developed monolingual models for a specific language (e.g. Meta CamemBERT for French). Monolingual models sometimes perform better than multilingual ones (OECD, 2023).

Third, models have different sizes. Size refers to the number of parameters adjusted during training so that the model produces appropriate output from input data. The parameters thus encode the knowledge of the model (Competition and Markets Authority, 2023a). The performance of a model and its cost depend on the number of parameters (Rae et al., 2022). The more parameters, the more the model can learn from datasets. The downside is that more parameters require more data and computing power, increasing the model’s cost. Large models with a high number of parameters are called large language models (LLMs). LLMs can perform various tasks even when trained on general domain data, as they are zero-shot reasoners: they can generate output for a task without task-specific examples in the prompt (Kojima et al., 2023). Some models are also fine-tuned on specific datasets to achieve specific tasks better than general-purpose LLMs, like Meta Code Llama, which generates code. Researchers and developers are already proposing smaller models with fewer parameters to reduce financial and environmental costs; these are called small language models (SLMs).10 Some SLMs perform similarly to LLMs (Schick and Schütze, 2021). Finally, some models, like Google Gemini Nano, can even run on a device and are thus called edge language models or on-device models. These models can perform tasks offline and do not require cloud computing resources, reducing financial costs while ensuring greater privacy, as data do not leave the device (Alizadeh et al., 2024).
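
The link between a model’s width and its parameter count can be made concrete with a short sketch. The layer dimensions below are illustrative only, not those of any named model; the point is that parameter counts grow roughly quadratically with model width, which is why larger models cost more to train and run.

```python
# Sketch: counting the parameters of a tiny transformer-style block.
# The dimensions are illustrative; real LLMs use far larger values.

def linear_params(n_in, n_out, bias=True):
    """Parameters in a dense layer: one weight per input-output pair, plus biases."""
    return n_in * n_out + (n_out if bias else 0)

def transformer_block_params(d_model, d_ff):
    """Rough count for one block: four attention projections (Q, K, V, output)
    plus a two-layer feed-forward network."""
    attention = 4 * linear_params(d_model, d_model)
    feed_forward = linear_params(d_model, d_ff) + linear_params(d_ff, d_model)
    return attention + feed_forward

# A 'small' 12-block configuration...
small = 12 * transformer_block_params(d_model=768, d_ff=3072)
# ...versus one with doubled width: parameters roughly quadruple.
large = 12 * transformer_block_params(d_model=1536, d_ff=6144)

print(f"small: {small:,} parameters")   # ~85 million
print(f"large: {large:,} parameters")   # ~340 million
print(f"ratio: {large / small:.1f}x")
```

Doubling the width here quadruples the parameter count, illustrating why model size is such a strong cost driver.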

However, how competition between models with similar characteristics works deserves in-depth scrutiny with quantitative and qualitative data, especially on model performance and user preferences, and is thus beyond the scope of this paper.


Data

Model developers require data to run and deploy their models. Data is thus the indispensable input from which output is derived. As in traditional data-driven markets, the volume (scale), variety (scope), velocity (freshness) and quality of the dataset determine the quality of the generated output (Stucke and Grunes, 2016; Carugati, 2023b).

Model developers train their models on publicly available data from the internet or open-source repositories. As of January 2024, the community website Hugging Face counts more than 99,000 open-source datasets in its dataset repository.11 Developers also use proprietary datasets from their own first-party services or from third parties, such as data brokers, data marketplaces and publishers.

In addition, models compete on various dataset factors, including task requirements, language specifications and domain specificity.

First, the dataset determines the generated output. There are thus various datasets for various task requirements, including text-to-image, text-to-video and text-to-music datasets.

Second, the language of the training dataset determines the output of multilingual or monolingual models. Datasets cover, non-exhaustively, majority languages, such as English, French, German and Spanish, and minority languages, such as Italian, Greek and Dutch.

Third, the domain specificity of the dataset is an important quality factor in specifying the intended task. Thus, datasets contain specific data for various intended tasks, such as code, legal, finance and art.

The training dataset has a time limitation, as it only contains data up to a certain date. Model developers can retrain their models on updated data. However, retraining is costly. To deal with this issue, model developers can instead deploy the model on real-time data. For example, Microsoft Bing generates real-time output from the internet by deploying GPT models on Microsoft Search and Index data (Ribas, 2023).
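
The pattern behind such deployments, grounding a model’s answer in freshly retrieved documents rather than retraining it, can be sketched as follows. The `search_index` function and the prompt format are illustrative stand-ins, not any vendor’s actual implementation.

```python
# Sketch of retrieval-augmented generation: instead of retraining the model on
# new data, fresh documents are fetched at query time and placed in the prompt.

def search_index(query, index):
    """Toy keyword search over an in-memory 'index' of documents."""
    terms = query.lower().split()
    return [doc for doc in index if any(t in doc["text"].lower() for t in terms)]

def build_prompt(query, documents):
    """Assemble a prompt that grounds the model's answer in retrieved sources."""
    sources = "\n".join(
        f"[{i + 1}] {d['title']}: {d['text']}" for i, d in enumerate(documents)
    )
    return (
        "Answer the question using only the sources below, citing them as [n].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )

index = [
    {"title": "Press release", "text": "The merger review was opened on Monday."},
    {"title": "Archive page", "text": "Historical cloud pricing from 2019."},
]

query = "When was the merger review opened?"
hits = search_index(query, index)
prompt = build_prompt(query, hits)
# The prompt (and hence a real model's answer) now reflects data newer than the
# model's training cutoff, without any retraining.
```

The design choice is the economic point: retrieval shifts the cost of freshness from periodic, expensive retraining to a cheap lookup at query time.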

In traditional data-driven markets, data is an important factor of market power (Cabral et al., 2021). Data is a competitive advantage that benefits large online platforms like Alphabet and Meta (Competition and Markets Authority, 2020). Users also benefit from data through data-driven network effects, whereby user utility increases as the service learns from data, creating value for users (Gregory et al., 2021). Some models, such as GPT, improve by learning from user dialogue data, suggesting that the more users a model has, the higher its quality.12

However, the validity of these claims in model markets deserves in-depth scrutiny. No evidence suggests that data is a source of market power or that models developed by large online platforms benefit from data advantages. Available research suggests that some SLMs, like the Koala model trained on high-quality open-source datasets, perform similarly to LLMs trained on much larger volumes of proprietary data (Geng et al., 2023). Besides, there is not yet evidence that models developed by large online platforms, such as Google PaLM, Google Gemini or Meta Llama, outperform models from newcomers, such as OpenAI GPT, Anthropic Claude and Mistral AI’s Mistral 7B. Finally, there is no widely available empirical research on the importance of data-driven network effects for model performance.


Applications

Models enable the development of applications for intended tasks, such as generating text. Model developers either develop their own first-party AI-powered applications (e.g. OpenAI ChatGPT) or enable third-party ones (e.g. Harvey AI).

Then, some applications enable first-party and third-party add-ins that complement the app. For instance, OpenAI ChatGPT allows the development of tailored GPTs dedicated to a specific domain. These complementors are then available in an app store.13

Applications raise several competition issues at both downstream and upstream levels that deserve in-depth scrutiny.

At the upstream level, model and application developers use a cloud provider that hosts the model. They then become customers of the cloud provider. The latter might develop an infrastructure dedicated to the hosted model. The cloud provider might thus have the ability and an incentive to impose technical and commercial conditions to recover the investment cost. For instance, a condition might make it more difficult to move the model and/or application from one cloud provider to another by limiting interoperability between cloud providers. This condition might negatively impact competition in the cloud sector and reinforce the position of the cloud provider hosting the model.

At the downstream level, some model and application developers might provide other services in several markets. They might have the ability and incentive to integrate their own AI-powered applications with those other services. For instance, Google is integrating its AI-powered solutions into Google Search (Reid, 2023), Google Chrome (Tabriz, 2024) and Google Workspace (Voolich Wright, 2023). Microsoft is doing the same with its AI-powered Copilot applications in its search engine Bing, browser Edge, productivity software Office and operating system Windows. These vertical integrations pose potential competition issues related to tying, bundling and self-preferencing, as the firm has an incentive to promote its own services over third-party ones. For instance, Google promotes its AI-powered Search Generative Experience (SGE) in Google Search to generate answers in direct competition with third-party chatbots, like OpenAI ChatGPT. Vertical integration also poses issues related to refusal to deal. This would be the case if a dominant firm prevented third parties from offering competing services in the market it dominates. A hypothetical problematic scenario, for instance, would be Microsoft prohibiting a third party from providing a competing version of Copilot in Microsoft Windows and Microsoft Office.

Moreover, these new AI-powered applications change how competition works in several sectors, including advertising and cloud computing. For instance, search engines are moving from providing search results with links that redirect to publishers’ websites to answer engines that generate answers with citations. When a search engine offers both search and answer results, the generated answer might substitute for or complement the publisher’s content. Whether it leads to substitution or complementarity requires in-depth scrutiny and has different implications for publishers and advertisers. In the case of substitution, users will most likely not click on the publisher’s website, or will click significantly less. In turn, publishers might lose traffic and advertising revenue (Hagey et al., 2023). Advertisers might then direct their spending to the answer engine, as users remain on it (Carugati, 2023a).

Regulatory instabilities

GenAI raises several regulatory concerns that impact how competition works. The four main concerns are intellectual property rights, data protection, AI governance and competition (Carugati, 2023d).

Intellectual property rights

Model developers train and deploy their models on public and proprietary datasets, including copyrighted data. Copyright protection requires the consent of the data owner, who might decide to license the dataset for a fee.

Around the globe, there have been concerns that some model developers use datasets without consent, leading to litigation in several countries, like the ongoing lawsuit by the New York Times against OpenAI and Microsoft in the US (Grynbaum and Mac, 2023). In this case, OpenAI denies any wrongdoing, arguing that it can fairly use the publisher’s content without consent under US copyright law (OpenAI, 2024). To address this issue, some model developers, including OpenAI and Google, offer opt-out mechanisms allowing publishers to proactively block model developers from collecting their content to train their models.14 Some model developers also conclude partnerships with publishers for the use of their content, like that between OpenAI and Axel Springer (2023).
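
In practice, these opt-out mechanisms rely on the web’s long-standing robots.txt convention: OpenAI and Google each document a crawler token that publishers can disallow. A minimal illustration follows; `GPTBot` and `Google-Extended` are the tokens documented by OpenAI and Google respectively, and each developer’s own documentation specifies its token and scope.

```
# robots.txt -- opting a site out of model-training crawlers.
# Blocking OpenAI's training crawler:
User-agent: GPTBot
Disallow: /

# Blocking the use of the site's content for Google's AI models:
User-agent: Google-Extended
Disallow: /
```

Note that this mechanism is proactive and forward-looking: it stops future collection but does not by itself remove content from datasets already gathered.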

In response, legislators and regulators have proposed several initiatives to address copyright concerns, including an obligation to publish summaries of copyrighted data used for training models in the forthcoming European AI Act (European Parliament, 2023), a code of practice on copyright and AI in the UK (UK Government, 2023), workshops with content creators in the US (Federal Trade Commission, 2023a) and a proposal to amend the EU copyright directive by some French politicians (Hartmann, 2024).

These regulatory developments directly impact GenAI, as data access is essential to the development of models. OpenAI has even argued that prohibiting the use of copyrighted data to train models would prevent GenAI development (Titcomb and Warrington, 2024).

Data protection

Training datasets contain personal and non-personal data. The collection and use of personal data raises data protection and privacy concerns, especially regarding the user’s consent.

Data protection authorities worldwide are increasingly looking at how models use personal data and their implications in terms of regulatory requirements, like the ongoing consultation on GenAI and data protection in the UK (Information Commissioner’s Office, 2024). Some regulators even took enforcement actions by preventing the use of AI-powered applications due to alleged data protection infringements, like the temporary ban of ChatGPT in Italy in April 2023 (Bertuzzi, 2023).15

These regulatory developments also impact GenAI, as data protection requirements have important implications for the lawfulness of the training datasets. Non-compliance could potentially result in bans and fines.

AI governance

AI models give rise to new governance issues, such as addressing risks associated with their use, including manipulation and biometric identification. In this context, AI governance is a priority of legislators around the globe. As of January 2024, the OECD counts 646 legislative initiatives worldwide on AI governance alone (OECD.AI, 2021). The forthcoming European AI Act, still in the legislative process after a political agreement in December 2023, is just one of them (Council of the European Union, 2023). However, the AI Act will likely have far-reaching implications in other jurisdictions worldwide, as Europe is often the global rule-setter (Bradford, 2020). The proposed text includes specific provisions concerning GenAI and high-impact general-purpose AI (GPAI) models, identified as posing systemic risks. According to the latest officially available information, developers of such models must disclose to users that content is AI-generated, design the model to prevent the generation of illegal content and publish summaries of copyrighted data used for training. Developers of high-impact GPAI, a category that might currently only cover OpenAI’s GPT-4, are required to conduct impact assessments of risks and report them to the European Commission (European Parliament, 2023). The latter announced in January 2024 a dedicated AI office within the Commission, tasked with coordinating AI policy at EU level and overseeing the AI Act (European Commission, 2024a).

These regulatory developments significantly impact competition. Firstly, certain models classified as high-impact GPAI face more extensive regulatory compliance requirements than models that potentially pose similar risks. Whether this regulatory burden will place the former at a competitive disadvantage compared to the latter merits closer scrutiny. As of January 2024, European legislators have not disclosed any assessment of the impact on competition of the provisions affecting high-impact GPAI models. This is particularly concerning as legislative debates have indicated a desire to promote European models such as France’s Mistral AI and Germany’s Aleph Alpha by excluding them from regulatory burdens (Hartmann, 2023). In other words, the provision might be driven by an industrial policy goal of promoting European firms rather than by the aim of protecting users from all AI risks, irrespective of model size. Secondly, the proliferation of regulatory initiatives may result in regulatory inconsistency, leading to increased compliance costs and regulatory burdens. Consequently, some model developers might encounter challenges in scaling and competing, especially compared to developers with greater compliance resources and the ability to benefit from economies of scale in regulatory compliance.


Competition

As GenAI is an emerging technology, firms compete vigorously along the whole value chain described above, from computing resources and models to data. Yet some competition authorities have already voiced concerns that GenAI might become concentrated in the hands of a few large online platforms with access to computing resources, models and data (for the US, see Federal Trade Commission, 2023b). Moreover, they note that some business practices, such as tying, bundling, exclusive dealing and self-preferencing, might give rise to potential competitive concerns (Bundeskartellamt, 2023a). Competition authorities also closely monitor partnerships between large cloud providers and model developers. In Germany,16 the UK,17 and Europe,18 competition authorities are investigating whether the partnership between Microsoft and OpenAI is subject to review under their national merger control laws. If Microsoft/OpenAI undergoes a merger review, competition authorities could accept or block the transaction, or impose behavioural and structural conditions on how Microsoft operates with OpenAI products and services. In the US, the Federal Trade Commission has even launched a sector inquiry into these partnerships, requesting detailed information from Alphabet, Amazon, Anthropic, Microsoft and OpenAI on the rationale and impact of the partnerships on competition (Federal Trade Commission, 2024a).

Besides, GenAI spurs innovation in several sectors, potentially disrupting current markets and creating new ones, such as answer engines replacing search engines (Carugati, 2023a). As of January 2024, competition authorities have not yet launched market studies into GenAI and its impact on specific markets, such as advertising or cloud computing. However, as noted earlier, they announced sector inquiries into GenAI and competition.

These regulatory initiatives will inform how competition in GenAI works. They are not formal investigations into non-compliance with national competition laws or specific digital markets regulations, like the European Digital Markets Act. However, their findings will likely influence whether GenAI developments deliver positive outcomes for competition. Competition authorities have already warned that they will intervene with formal enforcement powers where necessary (Competition and Markets Authority, 2023a).

Policy recommendations

The paper finds that GenAI leads to emerging markets and technologies in a context of regulatory instability across various jurisdictions and legal regimes. Against this background, competition authorities worldwide should follow the policy recommendations below.

First, competition authorities should cooperate in an international forum to ensure international coherence. They should do joint studies in a forum like the European Competition Network or International Competition Network to foster experience-sharing without resource duplication, given the borderless nature of the issues posed by GenAI.

Second, competition authorities should undertake in-depth studies of critical elements of the value chain and markets. They should prioritise inquiries into the graphics card and cloud computing sectors, as GenAI developments depend on them. Market characteristics and business practices in these sectors might impact competition in the long term. Competition authorities should also closely monitor how GenAI affects competition in several important markets, including search engines and online advertising, given their importance to content creators.

Third, competition authorities should collaborate with relevant competent authorities to examine the impact of various legal regimes on competition. Considering the interactions between competition and other legal frameworks, they should ideally produce joint studies or, at the very least, joint statements addressing data protection, intellectual property rights, AI governance and digital markets regulation. The outcomes of these collaborations should contribute to greater regulatory stability, assuring market actors that GenAI can deliver its full benefits responsibly.

Last but not least, competition authorities should exercise formal enforcement powers and update competition tools only when necessary and justified. They should do so only after market studies find evidence of enforcement gaps. They should resist calls for quick intervention made to avoid criticism of underenforcement in digital markets.


Alizadeh, K., I. Mirzadeh, D. Belenko, K. Khatamifard, M. Cho, C. C. Del Mundo, M. Rastegari and M. Farajtabar (2024), LLM in a flash: Efficient Large Language Model Inference with Limited Memory, arXiv, Cornell University.

AMD (2023), AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs, https://www.amd.com/en/newsroom/press-releases/2023-12-6-amd-showcases-growing-momentum-for-amd-powered-ai-.html (3 January 2024).

Anthropic (2023), Expanding Access to Safer AI With Amazon, https://www.anthropic.com/news/anthropic-amazon (22 January 2024).

Autoridade da Concorrência (2023), Competition and Generative Artificial Intelligence, https://www.concorrencia.pt/sites/default/files/documentos/Issues%20Paper%20-%20Competition%20and%20Generative%20Artificial%20Intelligence.pdf (1 February 2024).

Autorité de la concurrence (2024, 18 January), Benoît Cœuré Delivers his 2024 New Year’s Message, News, https://www.autoritedelaconcurrence.fr/en/article/benoit-coeure-delivers-his-2024-new-years-message (25 January 2024).

AWS (2024), AWS Trainium, Amazon AWS, https://aws.amazon.com/machine-learning/trainium/ (19 January 2024).

Axel Springer (2023, 13 December), Axel Springer and OpenAI Partner to Deepen Beneficial Use of AI in Journalism, https://www.axelspringer.com/en/ax-press-release/axel-springer-and-openai-partner-to-deepen-beneficial-use-of-ai-in-journalism (25 January 2024).

Bertuzzi, L. (2023, 13 April), Italian Data Protection Authority Bans ChatGPT Citing Privacy Violations, Euractiv, https://www.euractiv.com/section/artificial-intelligence/news/italian-data-protection-authority-banschatgpt-citing-privacy-violations/ (25 January 2024).

Bradford, A. (2020), The Brussels Effect: How the European Union Rules the World, Oxford University Press.

Bundeskartellamt (2023a), G7 Competition Authorities and Policymakers’ Summit Digital Competition Communiqué 2023, G7, https://www.bundeskartellamt.de/SharedDocs/Publikation/EN/Others/G7_2023_Communique.pdf?__blob=publicationFile&v=2 (25 January 2024).

Bundeskartellamt (2023b), Cooperation Between Microsoft and OpenAI Currently Not Subject to Merger Control, https://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2023/15_11_2023_Microsoft_OpenAI.html (26 January 2024).

Cabral, L., J. Haucap, G. Parker, G. Petropoulos, T. Valletti and M. Van Alstyne (2021), The EU Digital Markets Act, JRC, 122910, Publications Office of the European Union, Luxembourg.

Carugati, C. (2023a), Antitrust Issues Raised by Answer Engines, Working paper, 7/2023, Bruegel.

Carugati, C. (2023b), Competition in Generative Artificial Intelligence Foundation Models, Working paper, 14/2023, Bruegel.

Carugati, C. (2023c), The Competitive Relationship between Cloud Computing and Generative AI, Working paper, 19/2023, Bruegel.

Carugati, C. (2023d), The Age of Competition in Generative Artificial Intelligence Has Begun, Bruegel, https://www.bruegel.org/first-glance/age-competition-generative-artificial-intelligence-has-begun (25 January 2024).

Carugati, C. (2024, 10 January), Competition Authorities Are Studying Similar Digital Markets, Digital Competition, https://www.digital-competition.com/infographics/competition-authorities-are-studying-similar-digital-markets (19 January 2024).

Competition and Markets Authority (2020), Online Platforms and Digital Advertising Market Study Final Report.

Competition and Markets Authority (2023a), AI Foundation Models: Initial Report, https://www.gov.uk/government/publications/ai-foundation-models-initial-report (1 February 2024).

Competition and Markets Authority (2023b, 8 December), CMA Seeks Views on Microsoft’s Partnership With OpenAI, News, https://www.gov.uk/government/news/cma-seeks-views-on-microsofts-partnership-with-openai (26 January 2024).

Council of the European Union (2023, 9 December), Artificial Intelligence Act: Council and Parliament Strike a Deal on the First Rules for AI in the World, Press release.

Coyle, D. (2023, 2 February), Preempting a Generative AI Monopoly, Project Syndicate, https://www.project-syndicate.org/commentary/preventing-tech-giants-from-monopolizing-artificial-intelligence-chatbots-by-diane-coyle-2023-02 (29 January 2024).

European Commission (2024a, 24 January), Commission Decision Establishing the European AI Office, https://digital-strategy.ec.europa.eu/en/library/commission-decision-establishing-european-ai-office (1 February 2024).

European Commission (2024b, 9 January), Commission Launches Calls for Contributions on Competition in Virtual Worlds and Generative AI, Press release, https://ec.europa.eu/commission/presscorner/detail/en/IP_24_85 (16 January 2024).

European Parliament (2023), EU AI Act: First Regulation on Artificial Intelligence, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (25 January 2024).

Federal Trade Commission (2023a), FTC Staff Report Details Key Takeaways from AI and Creative Fields Panel Discussion, https://www.ftc.gov/news-events/news/press-releases/2023/12/ftc-staff-report-details-key-takeaways-ai-creative-fields-panel-discussion (25 January 2024).

Federal Trade Commission (2023b), Generative AI Raises Competition Concerns, Technology Blog, https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns (25 January 2024).

Federal Trade Commission (2024a), FTC Launches Inquiry into Generative AI Investments and Partnerships, https://www.ftc.gov/news-events/news/press-releases/2024/01/ftc-launches-inquiry-generative-ai-investments-partnerships (26 January 2024).

Federal Trade Commission (2024b), FTC to Host Virtual Summit on Artificial Intelligence, Press release, https://www.ftc.gov/news-events/news/press-releases/2024/01/ftc-host-virtual-summit-artificial-intelligence (16 January 2024).

Geng, X., A. Gudibande, H. Liu, E. Wallace, P. Abbeel, S. Levine and D. Song (2023, 3 April), Koala: A Dialogue Model for Academic Research, Berkeley Artificial Intelligence Research, https://bair.berkeley.edu/blog/2023/04/03/koala/ (24 January 2024).

Google (2024), Introduction to Cloud TPU, https://cloud.google.com/tpu/docs/intro-to-tpu (19 January 2024).

GPDP (2024, 29 January), ChatGPT: Italian DPA Notifies Breaches of Privacy Law to OpenAI, Garante per la protezione dei dati personali, https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9978020 (1 February 2024).

Gregory, R. W., O. Henfridsson, E. Kaganer and H. Kyriakou (2021), The Role of Artificial Intelligence and Data Network Effects for Creating User Value, Academy of Management Review, 46(3), 534-51.

Grynbaum, M. M. and R. Mac (2023, 27 December), The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work, The New York Times, https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html (25 January 2024).

Hagey, K., M. Kruppa and A. Bruell (2023, 14 December), News Publishers See Google’s AI Search Tool as a Traffic-Destroying Nightmare, The Wall Street Journal, https://www.wsj.com/tech/ai/news-publishers-see-googles-ai-search-tool-as-a-traffic-destroying-nightmare-52154074 (25 January 2024).

Hartmann, T. (2023, 29 November), Behind France’s Stance Against Regulating Powerful AI Models, Euractiv, https://www.euractiv.com/section/artificial-intelligence/news/behind-frances-stance-against-regulating-powerful-ai-models/ (25 January 2024).

Hartmann, T. (2024, 20 January), French MPs Want to Amend EU’s Copyright Rules To Cover Generative AI, Euractiv, https://www.euractiv.com/section/artificial-intelligence/news/french-mps-want-to-amend-eus-copyright-rules-to-cover-generative-ai/ (25 January 2024).

Hungarian Competition Authority (2024, 4 January), GVH Launches Market Analysis on the Impact of Artificial Intelligence, Press release, https://www.gvh.hu/en/press_room/press_releases/press-releases-2024/gvh-launches-market-analysis-on-the-impact-of-artificial-intelligence (16 January 2024).

Information Commissioner’s Office (2024, 15 January), ICO Consultation Series on Generative AI and Data Protection, https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-consultation-series-on-generative-ai-and-data-protection/ (25 January 2024).

Intel (2023), Intel Core Ultra Ushers in the Age of the AI PC, News, https://www.intel.com/content/www/us/en/newsroom/news/core-ultra-client-computing-news-1.html (19 January 2024).

Janardhan, S. (2023), Reimagining Our Infrastructure for the AI Age, Meta, https://about.fb.com/news/2023/05/metas-infrastructure-for-ai/ (19 January 2024).

Kojima, T., S. S. Gu, M. Reid, Y. Matsuo and Y. Iwasawa (2023), Large Language Models are Zero-Shot Reasoners, arXiv, Cornell University.

Langston, J. (2020, 19 May), Microsoft Announces New Supercomputer, Lays Out Vision for Future AI Work, Microsoft Blog, https://news.microsoft.com/source/features/ai/openai-azure-supercomputer/ (22 January 2024).

Microsoft Corporate Blogs (2023, 23 January), Microsoft and OpenAI Extend Partnership, Official Microsoft Blog, https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/ (22 January 2024).

Nvidia (2023), 10-Q Quarterly report, https://investor.nvidia.com/financial-info/sec-filings/sec-filings-details/default.aspx?FilingId=17074143 (19 January 2024).

OECD (2023), AI Language Models: Technological, Socio-Economic and Policy Considerations, OECD Digital Economy Papers.

OECD.AI (2021), Database of national AI policies, powered by EC/OECD, https://oecd.ai (25 January 2024).

OpenAI (2024, 8 January), OpenAI and journalism, https://openai.com/blog/openai-and-journalism (25 January 2024).

Press Trust of India (2023, 8 December), CCI Conducting Market Study On Artificial Intelligence To Assess Competition Landscape, Business Outlook India, https://business.outlookindia.com/news/cci-conducting-market-study-on-artificial-intelligence-to-assess-competition-landscape (16 January 2024).

Rae, J. W., S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, F. Song et al. (2022), Scaling Language Models: Methods, Analysis & Insights from Training Gopher, arXiv, Cornell University.

Reid, E. (2023, 10 May), Supercharging Search With Generative AI, Google Blog, https://blog.google/products/search/generative-ai-search/ (24 January 2024).

Ribas, J. (2023, 21 February), Building the New Bing, Microsoft Blog, https://blogs.bing.com/search-quality-insights/february-2023/Building-the-New-Bing (19 January 2024).

Riekeles, G. and M. von Thun (2023, 22 November), AI Won’t be Safe Until We Rein in Big Tech, European Policy Centre, https://www.epc.eu/en/publications/AI-wont-be-safe-until-we-rein-in-Big-Tech~55e63c (29 January 2024).

Schick, T. and H. Schütze (2021), It’s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners, in K. Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tur, I. Beltagy, S. Bethard, R. Cotterell, T. Chakraborty and Y. Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, 2339-52.

Stucke, M. and A. Grunes (2016), Big Data and Competition Policy, Oxford University Press.

Tabriz, P. (2024, 23 January), Chrome Is Getting 3 New Generative AI Features, Google Blog, https://blog.google/products/chrome/google-chrome-generative-ai-features-january-2024/ (24 January 2024).

Titcomb, J. and J. Warrington (2024, 7 January), OpenAI Warns Copyright Crackdown Could Doom ChatGPT, The Telegraph, https://www.telegraph.co.uk/business/2024/01/07/openai-warns-copyright-crackdown-could-doom-chatgpt/ (25 January 2024).

UK Government (2023), The Government’s Code of Practice on Copyright and AI, https://www.gov.uk/guidance/the-governments-code-of-practice-on-copyright-and-ai (25 January 2024).

Voolich Wright, J. (2023, 14 March), A New Era for AI and Google Workspace, Google Blog, https://workspace.google.com/blog/product-announcements/generative-ai?hl=en (24 January 2024).

© The Author(s) 2024

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).

Open Access funding provided by ZBW – Leibniz Information Centre for Economics.

DOI: 10.2478/ie-2024-0005