Impact of AI on the CCSs: Overview

Introduction
Artificial Intelligence (AI) has rapidly evolved, significantly impacting various sectors, including the cultural and creative sectors (CCSs). The integration of AI in these sectors has led to innovative applications that change day-to-day work, inspire creativity, and engage audiences in unconventional ways. Recent reports and literature on AI in CCSs highlight its transformative potential, from automating routine tasks to generating new forms of artistic expression. For instance, AI algorithms can analyse vast datasets to suggest potential trends in music and fashion, create personalised recommendations for media consumption, and even compose music or generate visual art (Moussaoui & Jamet, 2024). These advancements are reshaping traditional roles and practices within the sector, prompting a reconsideration of what it means to be creative, and what roles CCSs may play in the digital age (Caramiaux, 2020).
Policy frameworks and strategies are being developed to harness the benefits of AI while mitigating its risks. The European Commission, for example, has emphasised the importance of human-centric AI that supports cultural diversity and creativity (2022). Policies such as the first comprehensive EU AI Act (Regulation - EU - 2024/1689, 2024) are being crafted to ensure transparency, ethical use, and equitable access to AI technologies. Strategies include fostering collaboration between tech startups and creative sectors (2022), promoting data interoperability, and ensuring that smaller players in the industry can also benefit from AI advancements (EC, 2024). Reports from various organisations, including UNESCO (2024) and the European Parliament (Caramiaux, 2020), provide comprehensive analyses of AI's impact on CCSs, offering recommendations for stakeholders to navigate this evolving landscape.
Despite the potential opportunities, the proliferation of AI in CCSs is not without risks. One significant concern is the potential for AI to displace human creativity, leading to a homogenisation of cultural outputs and a loss of unique artistic voices. Additionally, the use of AI raises ethical issues related to authorship, copyright, and the potential misuse of personal data. There is also the risk of AI systems perpetuating existing biases, which can affect the diversity and inclusivity of cultural content. Addressing these challenges requires a balanced approach that considers both the technological possibilities and the socio-cultural implications of AI.
In conclusion, the integration of AI in cultural and creative sectors presents both exciting opportunities and significant challenges. As AI continues to develop and proliferate, stakeholders must engage in ongoing dialogue and collaboration to ensure that AI technologies are used in ways that enhance, rather than diminish, human creativity and cultural diversity. By developing thoughtful policies, fostering human-centric innovation, and addressing ethical concerns, CCSs may harness the power of AI to create a vibrant and inclusive future. This report provides an overview of the current status of AI policies, strategies, applications, and ethical considerations in the context of CCSs.
AI in Europe – Strategies and Policies
The European Union (EU) is proactively addressing the rapid development of Artificial Intelligence (AI) – particularly in the United States and China – through a comprehensive and multifaceted approach. Recognising the transformative potential of AI, the EU has established a robust policy framework aimed at fostering innovation while ensuring ethical standards and societal benefits. Key initiatives include the European AI Strategy (EC, 2018), which outlines the EU's vision for AI development, and the AI Act, a regulation to ensure trustworthy AI. The EU AI Act (Regulation - EU - 2024/1689, 2024), which entered into force in August 2024, focuses on risk-based regulation, categorising AI applications based on their potential impact on safety and fundamental rights. AI systems posing an “unacceptable risk” – such as cognitive behavioural manipulation of people, social scoring, biometric identification and categorisation of people, and real-time remote biometric identification systems – are banned, while high-risk AI systems, such as those used in critical infrastructure or healthcare, are subject to stringent requirements, including transparency, accountability, and human oversight. Commonly used generative AI, such as ChatGPT, is not classified as high-risk but must comply with transparency requirements and EU copyright law. As of March 2025, the European Commission has approved a draft of guidelines to assist AI system providers and deployers in ensuring compliance with their obligations under the AI Act (EC, 2025b). Additionally, all AI systems in the EU must comply with the General Data Protection Regulation (GDPR) throughout all processes, from data collection to processing.
In CCS, the EU has developed specific strategies to leverage AI's potential while addressing unique challenges. The European Commission's Directorate-General for Communications Networks, Content and Technology (DG Connect) and Directorate-General for Education, Youth, Sport and Culture (DG EAC) have commissioned studies to explore AI's opportunities and challenges in these sectors. One study identifies the need for attention and measures regarding: 1) AI's impact across the entire value chain: creation, production, dissemination, and consumption; 2) authorship, ownership, and copyright infringement; 3) AI's potential and risks for cultural accessibility and discoverability, e.g., preservation of cultural heritage; and 4) its potential and risks for cultural and linguistic diversity, e.g., automated translation for wider audiences versus underrepresented cultures in centralised platforms/systems (Caramiaux, 2020).
The report “Opportunities and challenges of artificial intelligence technologies for the cultural and creative sectors” (2022) assesses use cases for (1) saving costs and increasing efficiency, (2) making decisions, (3) discovering and engaging audiences, and (4) inspiring human creation in CCS, including architecture, book publishing, fashion, film, museums, music, news media, performing arts, visual art, and video games. While there are many potential use cases to foster CCS activities, the study highlights the potential risks arising from AI systems and from the relationships and processes built around AI technology. In other words, AI technology can provide new opportunities, but it can also be harmful depending on how people use it.
To support a meaningful use of AI in CCS, the EU emphasises the importance of data interoperability, access to skills, and fostering collaborative ecosystems (2022). Policies are being crafted to ensure that AI technologies are accessible to small and medium-sized enterprises (SMEs) and individual creators, promoting a diverse and inclusive cultural landscape (EC, 2025a). The EU also encourages knowledge exchange between tech startups and creative industries, aiming to bridge the gap between technology and creativity. Furthermore, ethical considerations are paramount, with a focus on transparency, accountability, and the protection of intellectual property rights.
Overall, the EU's strategies for AI in CCS aim to balance innovation with ethical considerations, ensuring that AI enhances rather than diminishes human creativity and cultural diversity. By fostering a supportive environment for trustworthy AI adoption, the EU seeks to empower creators and cultural organisations to thrive in the digital age.
AI x Cultural Institutions Highlight
Against the backdrop of the opportunities and challenges of using AI in CCS, a number of initiatives have emerged to address these issues in the context of cultural institutions' practices.
Museum associations have been actively engaging with the rapid development of AI, recognising both its potential and the ethical challenges it presents. The Network of European Museum Organisations (NEMO) has been at the forefront, issuing recommendations to guide the integration of AI in museums. In a statement issued in 2024, NEMO emphasises the importance of developing a political vision for museums and cultural heritage in an AI-driven society, advocating for collaboration between governments, regulatory bodies, and museum professionals to ensure that museums play a pivotal role in ethical AI practices. They also stress the need for financial investments in infrastructure, equipment, and training to enhance museums' professional capacities and to ensure high-quality, interoperable data and the resolution of copyright issues. These recommendations also address commitments to the EU's Cultural Heritage Data Space (Europeana) and the European Collaborative Cultural Heritage Cloud. In relation to cultural heritage empowerment, the AI4culture platform provides comprehensive training and resources for individuals and institutions interested in applying AI technologies in the cultural heritage sector. Additionally, NEMO calls for the establishment of a European AI innovation hub for cultural heritage to centralise expertise and foster creativity and collaboration in alignment with the values of human-centred design, privacy, and open-source practices.
In the UK, the action research project “The Museums + AI Network” (2019-2020) was one of the first to examine AI's potential and risks in cultural institutions. The project facilitated industrial workshops and conducted interviews in the UK and the US to open up discussion around the key parameters, methods, and paradigms of AI with academics and museum professionals. The AI applications covered include Computer Vision technologies for digital collections; Natural Language Processing for visitor experience analysis; Predictive Analytics for forecasting visitor numbers, spend, and exhibition naming; Collection and Curation by AI technologies; and AI as an educational exhibition. The project developed a Toolkit consisting of existing AI applications, ethical considerations, case studies, and a self-assessment worksheet.
More recently, the Future Museum project has taken a broader aim: investigating cutting-edge technologies and helping museums develop strategies for innovative solutions. It includes the strategic use of AI and augmented and virtual reality technologies to maintain and enhance the museum's relevance in society: for audience development and engagement, digital and data management, and business and collaboration opportunities. For example, the project examines the potential of new technologies to maintain and enhance audience relationships with museums and science centres, and to optimise audience revenue generation while ensuring accessibility, inclusivity, and sustainability.
To sum up, researchers and practitioners in CCS have responded to the opportunities and challenges of AI use. With the European Union's initiatives, the digitalisation of cultural heritage is proceeding on a large scale. A number of museums are proactively engaged in discussions on the fair and reasonable use of AI, and are motivated to seize opportunities to solve issues and improve their operations.
AI Ethics in CCSs
Alongside the rapid development and deployment of AI across many use cases, a number of issues and questions have already been raised. As AI is already embedded in many areas, often without users noticing, the questions are less about whether to use it than about how to use it correctly and fairly. Numerous scholars and practitioners have proposed “AI ethics” to address these issues. Corrêa et al. (2023) have identified 17 core ethical principles based on over 200 guidelines and recommendations from around the world. The identified core principles are: 1) Accountability/liability, 2) Beneficence/non-maleficence, 3) Children and adolescents rights, 4) Dignity/human rights, 5) Diversity/inclusion/pluralism/accessibility, 6) Freedom/autonomy/democratic values/technological sovereignty, 7) Human formation/education, 8) Human-centeredness/alignment, 9) Intellectual property, 10) Justice/equity/fairness/non-discrimination, 11) Labor rights, 12) Cooperation/fair competition/open source, 13) Privacy, 14) Reliability/safety/security/trustworthiness, 15) Sustainability, 16) Transparency/explainability/auditability, and 17) Truthfulness (Corrêa et al., 2023).
These ethical principles are relevant for many sectors, CCS among them. AI use in CCS ranges from enhancing visitor experiences to collection management and digitalisation, artistic creation and interpretation, and research and historical analysis. Depending on the context of the AI application, the ethical implications differ. Questions to ask are: Who is involved in the situation? What technologies are used? How are the data handled and processed, and by whom?
As the example below shows, AI ethics varies depending on technical specificities, development and deployment processes, and implementation. No single guide covers all cases; instead, assessment must be carried out case by case and revisited regularly.
Hypothetical example: An interactive and personalised guide for museums
If a museum implements an interactive and personalised AI guide with emotion recognition technology, several critical ethical considerations emerge that require careful attention to protect visitor rights and maintain trust. While protecting intellectual property is a prerequisite for AI training, privacy and consent represent the most fundamental concern, as facial recognition systems collect highly sensitive biometric data that uniquely identifies individuals. Unlike traditional visitor tracking methods, facial recognition creates a digital representation of visitors that persists in databases and can be linked to other personal information. Museums must ensure visitors provide explicit, informed consent before any biometric data collection, clearly explaining how their facial data will be used, stored, and eventually deleted, with special attention to minor visitors.
Data protection and security vulnerabilities pose significant risks in these systems. Personal biometric information, once compromised, cannot be changed like a password or credit card number. Museums implementing facial recognition must establish robust cybersecurity measures and transparent data governance policies, including automatic data deletion protocols – as demonstrated by the Sound & Vision Museum in the Netherlands, which deletes all visitor profile data and photos every evening when the museum closes. The institution must also consider who has access to this data and implement strict access controls to prevent unauthorised use or potential misuse by staff members. (Re-)education and training for staff to protect labour rights should be taken into account before the implementation.
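To make the idea of an automatic deletion protocol more concrete, the following minimal Python sketch shows a nightly cleanup routine of the kind the Sound & Vision example implies. The function name and the file-based storage layout are hypothetical assumptions for illustration, not the museum's actual implementation.

```python
from pathlib import Path

# Illustrative sketch only: purge_visitor_data and the directory-based
# storage model are assumptions, not a real museum system's API.

def purge_visitor_data(directories):
    """Delete every file in the given storage directories (e.g. visitor
    profiles and photos) and return the number of files removed."""
    removed = 0
    for directory in directories:
        d = Path(directory)
        if not d.exists():
            continue  # nothing stored in this location
        for item in d.iterdir():
            if item.is_file():
                item.unlink()  # permanently remove the record
                removed += 1
    return removed
```

In practice, such a routine would run as a scheduled task at closing time (e.g. via cron) and would also need to cover database records, logs, and backups, which a file-level sweep alone does not reach.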
Algorithmic bias and cultural representation create another layer of ethical complexity. AI systems often exhibit significant cultural bias, particularly affecting non-Western visitors or those from underrepresented communities. These biases can manifest in facial recognition systems that struggle to accurately identify individuals with darker skin tones or fail to appropriately interpret cultural expressions and behaviours. For example, a personalised guide might provide culturally insensitive recommendations or misinterpret visitor preferences based on biased training data, potentially alienating certain visitor groups and perpetuating existing inequalities in cultural access.
Transparency and accountability concerns arise when visitors cannot understand how the AI system makes personalisation decisions. Museums must address questions about algorithmic opacity – visitors have the right to know why certain content is recommended to them and how their facial data influences these choices. Additionally, there are legitimate concerns about creating "filter bubbles" where personalised experiences limit visitors to content that reinforces their existing preferences rather than encouraging cultural exploration and discovery. Museums should implement opt-out mechanisms, such as QR codes for anonymous participation, ensuring that visitors who prefer not to engage with facial recognition can still access meaningful museum experiences.
Future Remarks
Artificial intelligence is already embedded across the CCS value chain—from curating museum exhibitions and restoring fragile artefacts to generating marketing materials and entire works of art. Museums use computer-vision systems to catalogue collections and forecast visitor flows, while generative models let artists, designers and filmmakers iterate ideas at unprecedented speed. Recommendation engines such as Smartify’s personalised audio tours or Mad Systems’ adaptive guides tailor content to each visitor’s language, time constraints and thematic interests, deepening engagement and widening access for people with disabilities through real-time translation and description services. In conservation, AI-driven “digital twins” may predict structural decay and guide micro-level restorations that would be impossible by hand, helping custodians preserve heritage under tight budgets. Crucially, low-code creative tools lower the cost of production; independent creators and small museums may now prototype immersive experiences or commission bespoke music without large studios, expanding the diversity of voices across the sector.
These benefits arrive with significant risks. Generative systems trained on copyrighted works without consent have already triggered landmark lawsuits that could reshape intellectual property law. Facial-recognition pilots inside galleries raise acute privacy concerns: biometric data are immutable, and their misuse could subject visitors to lifelong surveillance or commercial profiling. Algorithmic bias remains pervasive: vision models under-identify darker-skinned faces and textual models default to Western narratives, potentially distorting collections and alienating communities the sector is meant to serve. Additionally, automation is eroding entry-level creative jobs in particular; AI systems are being integrated into many fields, substituting human work in CCSs, with many fearing falling remuneration and diminished artistic agency. Finally, introducing new technologies often requires organisational and value change in institutions, as well as maintenance of hardware and software. Balancing these opportunities and challenges will demand transparent data practices, participatory design with affected communities, and new compensation models that keep human creativity—and cultural dignity—at the centre of technological change.
References
Caramiaux, B. (2020). The use of artificial intelligence in the cultural and creative sectors: Concomitant expertise for INI report: research for CULT Committee. Publications Office of the European Union. https://data.europa.eu/doi/10.2861/602011
Corrêa, N. K., Galvão, C., Santos, J. W., Del Pino, C., Pinto, E. P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., & de Oliveira, N. (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, 4(10), 100857. https://doi.org/10.1016/j.patter.2023.100857
Directorate-General for Communications Networks, Content and Technology (European Commission), Izsak, K., Terrier, A., Kreutzer, S., Strähle, T., Roche, C., Moretto, M., Sorensen, S. Y., Hartung, M., Knaving, K., Johansson, M. A., Ericsson, M., & Tomchak, D. (2022). Opportunities and challenges of artificial intelligence technologies for the cultural and creative sectors. Publications Office of the European Union. https://data.europa.eu/doi/10.2759/144212
European Commission. (2018). Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Artificial Intelligence for Europe. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2018:237:FIN
European Commission. (2024, January 24). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on boosting startups and innovation in trustworthy artificial intelligence.
European Commission. (2025a, February 13). European approach to artificial intelligence | Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
European Commission. (2025b, April 2). ANNEX to the Communication to the Commission Approval of the content of the draft Communication from the Commission—Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act). https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act
German Commission for UNESCO. (2024, November). Approaches to an ethical development and use of AI in the Cultural and Creative Industries. https://www.unesco.de/assets/dokumente/Deutsche_UNESCO-Kommission/02_Publikationen/DUK_Broschuere_KI_und_Kultur_EN_web_02.pdf
Moussaoui, M. H. E., & Jamet, R. (2024). AI’s influence on the Creative and Cultural Industries. IMAGO: A Journal of the Social Imaginary, 23, 291–312.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA Relevance) (2024). https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng